Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---
<p>I have a Java API which exports data to an Excel file and generates that file on the pod where the request is served.
Now the next request (to download the file) might go to a different pod and the download fails.</p>
<p>How do I get around this?
How do I generate the file on all the pods? Or how do I make sure the subsequent request goes to the same pod where the file was generated?
I can't give the direct pod URL as it will not be accessible to clients.</p>
<p>Thanks.</p>
| Nishith Shah | <p>You need to use persistent volumes to share the same files between your containers. You could use the node storage mounted on containers (the easiest way) or another distributed file system like NFS, EFS (AWS), GlusterFS, etc.</p>
<p>If you need the simplest way to share the file and your pods are on the same node, you can use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> to store the file and share the volume with other containers.</p>
<p>Assuming you have a kubernetes cluster that has only one node, and you want to share the path <code>/mnt/data</code> of your node with your pods:</p>
<p><strong>Create a PersistentVolume:</strong></p>
<blockquote>
<p>A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.</p>
</blockquote>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
</code></pre>
<p><strong>Create a PersistentVolumeClaim:</strong></p>
<blockquote>
<p>Pods use PersistentVolumeClaims to request physical storage</p>
</blockquote>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
</code></pre>
<p>Look at the PersistentVolumeClaim:</p>
<p><code>kubectl get pvc task-pv-claim</code></p>
<p>The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, <code>task-pv-volume</code>.</p>
<pre><code>NAME            STATUS   VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO           manual         30s
</code></pre>
<p><strong>Create a deployment with 2 replicas for example:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/mnt/data"
              name: task-pv-storage
</code></pre>
<p>Now you can check that inside both containers the path <code>/mnt/data</code> has the same files.</p>
<p>If you have a cluster with more than one node, I recommend looking at the other types of <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">persistent volumes</a>.</p>
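<p>For example, a minimal sketch of an NFS-backed PersistentVolume that can be shared across nodes (the server address and export path are placeholders for your own NFS setup):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany   # NFS allows many nodes/pods to mount it read-write
  nfs:
    server: nfs-server.example.com   # hypothetical NFS server
    path: /exports/data
</code></pre>
<p>The matching PersistentVolumeClaim would then request <code>ReadWriteMany</code> instead of <code>ReadWriteOnce</code>.</p>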
<p><strong>References:</strong>
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure persistent volumes</a>
<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent volumes</a>
<a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">Volume Types</a></p>
| Mr.KoopaKiller |
<p>Using the following code:</p>
<pre class="lang-golang prettyprint-override"><code>func GetPods(clientset *kubernetes.Clientset, name, namespace string) ([]corev1.Pod, error) {
list, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{
LabelSelector: fmt.Sprintf("app=%s", name),
})
if err != nil {
return nil, err
}
return list.Items, nil
}
</code></pre>
<p>I then dump the results into YAML using <code>gopkg.in/yaml.v2</code>. Here's the YAML clause that describes the container resources:</p>
<pre><code>resources:
  limits:
    cpu:
      format: DecimalSI
    memory:
      format: BinarySI
  requests:
    cpu:
      format: DecimalSI
    memory:
      format: BinarySI
</code></pre>
<p>This includes none of the actual resource amounts that I'm interested in, which should look like this (as shown by <code>kubectl get pod xxx -o yaml</code>):</p>
<pre><code>resources:
  limits:
    cpu: "4"
    memory: 8Gi
  requests:
    cpu: 200m
    memory: 100Mi
</code></pre>
<p>So how can I properly get the pod spec YAML, including all the resource info, using the Go client library? What did I do wrong in the above process?</p>
<h2>Update</h2>
<p>I noticed the <a href="https://godoc.org/k8s.io/api/core/v1#PodSpec.String" rel="nofollow noreferrer"><code>Pod.String</code></a> and <a href="https://godoc.org/k8s.io/api/core/v1#PodSpec.Marshal" rel="nofollow noreferrer"><code>Pod.Marshal</code></a> methods.</p>
<p>The <code>pod.String()</code> output seems to be a formatted string of the <code>core.v1.Pod</code> instance; this isn't much use to me since it's not serialized.</p>
<p><code>Pod.Marshal()</code> gives a byte array that contains lots of gibberish when printed. The method itself is one of those undocumented methods inside <code>k8s.io/api/core/v1/generated.pb.go</code>, and I really don't know what to do with its output:</p>
<pre class="lang-golang prettyprint-override"><code>func (p *PodResolver) SpecYaml() (string, error) {
bs, err := p.pod.Marshal()
fmt.Println(string(bs))
return string(bs), err
}
// prints a whole lot of gibberish like cpu\x12\x03\n\x014\n\x0f\n\x06memory\x12\x05\n\x038Gi\x12\r\n\x03cpu\x12\x06\n\x04200m\x12\x11\n\x06memory\x12\a\n\x05100MiJ-\n\n
</code></pre>
| timfeirg | <p>Try this out:</p>
<pre><code>func GetPods(clientset *kubernetes.Clientset, name, namespace string) ([]corev1.Pod, error) {
    list, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{
        LabelSelector: fmt.Sprintf("app=%s", name),
    })
    if err != nil {
        return nil, err
    }
    for _, l := range list.Items {
        fmt.Println("Request CPU ==> ", l.Spec.Containers[0].Resources.Requests.Cpu(), " Request Memory ==> ", l.Spec.Containers[0].Resources.Requests.Memory())
        fmt.Println("Limit CPU ==> ", l.Spec.Containers[0].Resources.Limits.Cpu(), " Limit Memory ==> ", l.Spec.Containers[0].Resources.Limits.Memory())
    }
    return list.Items, nil
}
</code></pre>
<blockquote>
<p>Keep in mind that every time something becomes complex it's time to
choose another path.</p>
</blockquote>
<p>The k8s APIs are not as well documented as they could be. In this case my suggestion is to open the debug console and navigate through the component trees, which will indicate which interface to use and its structure.</p>
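<p>If the goal is the full pod spec as YAML with the resource amounts rendered (e.g. <code>cpu: "4"</code>, <code>memory: 8Gi</code>), one option worth trying is <code>sigs.k8s.io/yaml</code>, which marshals via <code>encoding/json</code> and therefore honors the custom JSON marshaler of <code>resource.Quantity</code>. A minimal sketch (not from the original answer; the function name is illustrative):</p>
<pre class="lang-golang prettyprint-override"><code>import (
    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/yaml" // JSON-tag-aware YAML marshaling
)

// podSpecYAML serializes a Pod to YAML, keeping resource quantities readable.
func podSpecYAML(pod corev1.Pod) (string, error) {
    b, err := yaml.Marshal(pod)
    if err != nil {
        return "", err
    }
    return string(b), nil
}
</code></pre>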
| Caue Augusto dos Santos |
<p>I am planning to set up a kubernetes cluster that looks as follows:
<a href="https://i.stack.imgur.com/ELhad.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELhad.png" alt="enter image description here"></a></p>
<p>As you can see in the image, the cluster will consist of 3 Ubuntu 18.04 Virtual Private Servers; one is the master and the other two servers are nodes.
For the kubernetes installation, I am going to choose <a href="https://kubespray.io/#/" rel="nofollow noreferrer">kubespray</a>.
First, I have to make sure that the 3 VPSs can communicate with each other. That is the first question: what do I have to do so that the 3 VPS servers can
communicate with each other?</p>
<p>The second question is: how and where do I have to install kubespray? I would guess on the master server.</p>
| softshipper | <p>I would start by understanding what the setup of a Kubernetes cluster for your use case looks like.
There is a useful <a href="https://hostadvice.com/how-to/how-to-set-up-kubernetes-in-ubuntu/" rel="nofollow noreferrer">guide</a> about this, showing the dependencies, installing the components, and deploying a pod network step by step.</p>
<p>Answering your first question:
When you initialize your master with <code>kubeadm init</code> you can join your nodes to it (<code>kubeadm join</code>).
After that you need to install and configure a pod network. <a href="https://github.com/coreos/flannel#flannel" rel="nofollow noreferrer">Flannel</a> is one of the most used network plugins for Kubernetes.</p>
<p>For your second question:
There is a <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubespray/" rel="nofollow noreferrer">guide</a> from the official Kubernetes documentation about this. Prerequisites must be met on all the servers in order to make Kubespray work. <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Here</a> is the official GitHub link. However, the installation steps there are minimal, so I suggest supplementing them with <a href="https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product" rel="nofollow noreferrer">this</a> and <a href="https://medium.com/@sarangrana/getting-started-with-kubernetes-part-3-kubespray-on-premise-installation-guide-90194f08be4e" rel="nofollow noreferrer">this</a>.</p>
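<p>For orientation, a typical Kubespray run looks roughly like the sketch below. It is executed from any machine with Ansible installed and SSH access to all three VPSs (your workstation or the master both work); the IPs are placeholders and the inventory file name may differ between Kubespray versions:</p>
<pre><code># clone Kubespray and install its Python/Ansible requirements
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

# copy the sample inventory and generate hosts from your three VPS IPs
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.1 10.0.0.2 10.0.0.3)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# run the playbook against all nodes over SSH
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
</code></pre>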
| Wytrzymały Wiktor |
<p>I am currently playing around with an rpi-based k3s cluster and I am observing a weird phenomenon.</p>
<p>I deployed two applications.
The first one is nginx which I can reach on the url <a href="http://external-ip/foo" rel="nofollow noreferrer">http://external-ip/foo</a> based on the following ingress rule:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
  namespace: foo
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"
    traefik.ingress.kubernetes.io/rewrite-target: "/"
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service
          servicePort: 8081
</code></pre>
<p>And the other one is grafana which I cannot reach on the url <a href="http://external-ip/grafana" rel="nofollow noreferrer">http://external-ip/grafana</a> based on the below ingress rule:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"
    traefik.ingress.kubernetes.io/rewrite-target: "/"
spec:
  rules:
  - http:
      paths:
      - path: /grafana
        backend:
          serviceName: grafana-service
          servicePort: 3000
</code></pre>
<p>When I do a port-forward directly on the pod I can reach the grafana app; when I use a port-forward on the grafana service it also works.</p>
<p>However, as soon as I try to reach it through the subpath I get a gateway timeout.</p>
<p>Does anyone have a guess what I am missing?</p>
<p>Here the deployment and service for the grafana deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: grafana
  labels:
    app: grafana
    tier: frontend
    service: monitoring
spec:
  selector:
    matchLabels:
      app: grafana
      tier: frontend
  template:
    metadata:
      labels:
        app: grafana
        tier: frontend
        service: monitoring
    spec:
      containers:
      - image: grafana
        imagePullPolicy: IfNotPresent
        name: grafana
        envFrom:
        - configMapRef:
            name: grafana-config
        ports:
        - name: frontend
          containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: grafana
  labels:
    app: grafana
    tier: frontend
    service: monitoring
spec:
  selector:
    app: grafana
    tier: frontend
  type: NodePort
  ports:
  - name: frontend
    port: 3000
    protocol: TCP
    targetPort: 3000
</code></pre>
<h1>Solution</h1>
<p>I had to add the following two parameters to my configmap to make it work:</p>
<pre><code>GF_SERVER_ROOT_URL=http://localhost:3000/grafana/
GF_SERVER_FROM_SUB_PATH=true
</code></pre>
| tehemaroo | <p>As I mentioned in the comments, grafana is not listening on <code>/</code> like the default nginx.</p>
<p>There is a related <a href="https://github.com/helm/charts/issues/6264" rel="nofollow noreferrer">github issue</a> about this; if you want to make it work you should specify the root_url:</p>
<pre><code>grafana.ini:
  server:
    root_url: https://subdomain.example.com/grafana
</code></pre>
<p>Specifically take a look at <a href="https://github.com/helm/charts/issues/6264#issuecomment-480530680" rel="nofollow noreferrer">this</a> and <a href="https://github.com/helm/charts/issues/6264#issuecomment-463755283" rel="nofollow noreferrer">this</a> comment.</p>
<hr />
<p>@tehemaroo added his own solution, which includes changing the root URL and sub_path in the configmap:</p>
<blockquote>
<p>I had to add the following two parameters to my configmap to make it work:</p>
</blockquote>
<pre><code>GF_SERVER_ROOT_URL=http://localhost:3000/grafana/
GF_SERVER_FROM_SUB_PATH=true
</code></pre>
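<p>Since the Deployment above pulls its environment from the <code>grafana-config</code> ConfigMap via <code>envFrom</code>, those two settings would end up in that ConfigMap; a minimal sketch of what it might look like:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-config
  namespace: grafana
data:
  GF_SERVER_ROOT_URL: http://localhost:3000/grafana/
  GF_SERVER_FROM_SUB_PATH: "true"
</code></pre>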
<p>And the related <a href="https://grafana.com/tutorials/run-grafana-behind-a-proxy/#1" rel="nofollow noreferrer">documentation</a> about that:</p>
<blockquote>
<p>To serve Grafana behind a sub path:</p>
<p>Include the sub path at the end of the root_url.</p>
<p>Set serve_from_sub_path to true.</p>
</blockquote>
<pre><code>[server]
domain = example.com
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
serve_from_sub_path = true
</code></pre>
| Jakub |
<p>I am trying to deploy to kubernetes using Gitlab CICD. No matter what I do, <code>kubectl apply -f helloworld-deployment.yml --record</code> in my <code>.gitlab-ci.yml</code> always returns that the deployment is unchanged:</p>
<pre><code>$ kubectl apply -f helloworld-deployment.yml --record
deployment.apps/helloworld-deployment unchanged
</code></pre>
<p>Even if I change the tag on the image, or if the deployment doesn't exist at all. However, if I run <code>kubectl apply -f helloworld-deployment.yml --record</code> from my own computer, it works fine and updates when a tag changes and creates the deployment when no deployment exist. Below is my <code>.gitlab-ci.yml</code> that I'm testing with:</p>
<pre><code>image: docker:dind
services:
  - docker:dind
stages:
  - deploy
deploy-prod:
  stage: deploy
  image: google/cloud-sdk
  environment: production
  script:
    - kubectl apply -f helloworld-deployment.yml --record
</code></pre>
<p>Below is <code>helloworld-deployment.yml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: registry.gitlab.com/repo/helloworld:test
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regcred
</code></pre>
<p><b>Update:</b></p>
<p>This is what I see if I run <code>kubectl rollout history deployments/helloworld-deployment</code> and there is no existing deployment:</p>
<pre><code>Error from server (NotFound): deployments.apps "helloworld-deployment" not found
</code></pre>
<p>If the deployment already exists, I see this:</p>
<pre><code>REVISION CHANGE-CAUSE
1 kubectl apply --filename=helloworld-deployment.yml --record=true
</code></pre>
<p>With only one revision.</p>
<p>I did notice this time that when I changed the tag, the output from my Gitlab Runner was:</p>
<pre><code>deployment.apps/helloworld-deployment configured
</code></pre>
<p>However, there were no new pods. When I ran it from my PC, then I did see new pods created.</p>
<p><b>Update:</b></p>
<p>Running <code>kubectl get pods</code> shows two different pods in Gitlab runner than I see on my PC.</p>
<p>I definitely only have one kubernetes cluster, but <code>kubectl config view</code> shows <i>some</i> differences (the server url is the same). The output for <code>contexts</code> shows different namespaces. Does this mean I need to set a namespace either in my <code>yml</code> file or pass it in the command? Here is the output from the Gitlab runner:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
  name: gitlab-deploy
contexts:
- context:
    cluster: gitlab-deploy
    namespace: helloworld-16393682-production
    user: gitlab-deploy
  name: gitlab-deploy
current-context: gitlab-deploy
kind: Config
preferences: {}
users:
- name: gitlab-deploy
  user:
    token: [MASKED]
</code></pre>
<p>And here is the output from my PC:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
contexts:
- context:
    cluster: do-nyc3-helloworld
    user: do-nyc3-helloworld-admin
  name: do-nyc3-helloworld
current-context: do-nyc3-helloworld
kind: Config
preferences: {}
users:
- name: do-nyc3-helloworld-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - --context=default
      - VALUE
      command: doctl
      env: null
</code></pre>
<p>It looks like Gitlab adds their own <a href="https://docs.gitlab.com/ee/user/project/clusters/#deployment-variables" rel="nofollow noreferrer">default for namespace</a>:</p>
<pre><code><project_name>-<project_id>-<environment>
</code></pre>
<p>Because of this, I put this in the metadata section of helloworld-deployment.yml:</p>
<pre><code>namespace: helloworld-16393682-production
</code></pre>
<p>And then it worked as expected. It was deploying before, but <code>kubectl get pods</code> didn't show it since that command was using the <code>default</code> namespace.</p>
| srchulo | <p>Since Gitlab uses a custom namespace, you need to add a namespace flag to your command to display your pods:</p>
<p><code>kubectl get pods -n helloworld-16393682-production</code></p>
<p>You can set the default namespace for kubectl commands. See <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>You can permanently save the namespace for all subsequent kubectl commands in that context.</p>
</blockquote>
<p>In your case it could be:</p>
<pre><code>kubectl config set-context --current --namespace=helloworld-16393682-production
</code></pre>
<p>Or, if you are using <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">multiple clusters</a>, you can switch between contexts using:</p>
<pre><code>kubectl config use-context helloworld-16393682-production
</code></pre>
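<p>Alternatively, instead of changing the context, the namespace can be passed explicitly in the CI job itself. Based on the <code>.gitlab-ci.yml</code> from the question, that would look something like:</p>
<pre><code>deploy-prod:
  stage: deploy
  image: google/cloud-sdk
  environment: production
  script:
    - kubectl apply -f helloworld-deployment.yml --record -n helloworld-16393682-production
</code></pre>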
<p>In this <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration" rel="nofollow noreferrer">link</a> you can see a lot of useful commands and configurations.</p>
<p>I hope it helps! =)</p>
| Mr.KoopaKiller |
<p>Kubernetes dashboard is able to show "current running pods / pods capacity" per node. But when I try to get the same info with <code>kubectl</code> I have to run two commands:</p>
<pre><code>kubectl describe node | grep -E (^Name:|^Non-terminated)
</code></pre>
<p>which lists "current running pod on node", and</p>
<pre><code>kubectl get nodes -o=custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.pods
</code></pre>
<p>which shows the node's capacity.</p>
<p>Does anyone know how I can get the output similar to below using <strong>one command</strong> only?</p>
<pre><code>NAME CURRENT CAPACITY
node_1 7 15
node_2 8 15
node_3 15 15
</code></pre>
<p>Thanks in advance!</p>
| alxndr | <p>There is no <strong>one command</strong> for this.</p>
<p>It is possible to write a script that does this by combining those two commands.</p>
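<p>A rough sketch of such a script (it counts running pods per node and joins that with each node's pod capacity; adjust the field selector if you also want to count pending or terminating pods):</p>
<pre><code>#!/usr/bin/env bash
# Current pods vs. pod capacity per node.
join \
  <(kubectl get pods --all-namespaces --field-selector=status.phase=Running \
      -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' \
      | sort | uniq -c | awk '{print $2, $1}') \
  <(kubectl get nodes --no-headers \
      -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.pods | sort) \
  | awk 'BEGIN {printf "%-12s %-8s %s\n", "NAME", "CURRENT", "CAPACITY"}
         {printf "%-12s %-8s %s\n", $1, $2, $3}'
</code></pre>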
<p>Note that using integer-based metrics like the number of pods can be very misleading, as pods vary in how much memory and CPU they consume. You might use up CPU and memory before you reach the node's pod-count capacity.</p>
<p>You can check available resources with command: <code>kubectl top nodes</code></p>
<blockquote>
<h3>Node capacity<a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-capacity" rel="noreferrer"></a></h3>
<p>The capacity of the node (number of cpus and amount of memory) is part
of the node object. Normally, nodes register themselves and report
their capacity when creating the node object. If you are doing
<a href="https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration" rel="noreferrer">manual node
administration</a>,
then you need to set node capacity when adding a node.</p>
<p>The Kubernetes scheduler ensures that there are enough resources for
all the pods on a node. It checks that the sum of the requests of
containers on the node is no greater than the node capacity. It
includes all containers started by the kubelet, but not containers
started directly by the <a href="https://kubernetes.io/docs/concepts/overview/components/#node-components" rel="noreferrer">container
runtime</a>
nor any process running outside of the containers.</p>
</blockquote>
<p>P.S.</p>
<p>On Debian the first command had to be slightly modified to work:</p>
<pre><code>kubectl describe node | grep -E "(^Name:|^Non-terminated)"
</code></pre>
| Piotr Malec |
<p>I have a few developer logs in kubernetes pods; what is the best method to make the logs available for the developers to see?</p>
<p>Any specific tools that we can use?</p>
<p>I have the option of Graylog, but I'm not sure if it can be customized to get the developer logs into it.</p>
| Leo Praveen Chelliah | <p>The most basic method would be to simply use <code>kubectl logs</code> command:</p>
<blockquote>
<p>Print the logs for a container in a pod or specified resource. If the
pod has only one container, the container name is optional.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">Here</a> you can find more details regarding the command and its flags, alongside some useful examples.</p>
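<p>For illustration, a few common invocations (the pod, container, and deployment names are placeholders):</p>
<pre><code>kubectl logs my-pod                        # logs from the pod's only container
kubectl logs my-pod -c my-container        # logs from a specific container
kubectl logs -f my-pod                     # stream (follow) the logs
kubectl logs my-pod --since=1h             # only the last hour of logs
kubectl logs deployment/my-deployment      # logs from a pod of that deployment
</code></pre>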
<p>Also, you may want to use:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/" rel="nofollow noreferrer">Logging Using Elasticsearch and Kibana</a></p></li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/" rel="nofollow noreferrer">Logging Using Stackdriver</a></p></li>
</ul>
<p>Both should do the trick in your case.</p>
<p>Please let me know if that is what you had in mind and if my answer was helpful.</p>
| Wytrzymały Wiktor |
<p>I have set up nginx ingress like this on kubernetes:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: animefanz-ktor-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: my-tls
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /myapp(/|$)(.*)
        backend:
          serviceName: myservice
          servicePort: 8080
</code></pre>
<p>So everything is working fine, but I want to do one thing: whenever <a href="https://mydomian.com/myapp/api/history" rel="nofollow noreferrer">https://mydomian.com/myapp/api/history</a> is called, I want to redirect it to <a href="https://mydomain2.com/myapp2/api/history" rel="nofollow noreferrer">https://mydomain2.com/myapp2/api/history</a>, along with the GET params.</p>
<p>So I just want to forward one API request to another server.</p>
| William | <p>I think you can configure it with nginx <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="nofollow noreferrer">server-snippet</a>/<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="nofollow noreferrer">configuration-snippet</a> annotation.</p>
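<p>Applied to the redirect in the question, a minimal sketch with <code>configuration-snippet</code> might look like the following (the regex and target are illustrative; nginx appends the original query string to the redirect automatically, so the GET params are preserved):</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/myapp/api/history(.*)$ https://mydomain2.com/myapp2/api/history$1 redirect;
</code></pre>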
<hr>
<p>There is related <a href="https://stackoverflow.com/questions/58687909/kubernetes-ingress-domain-redirect">stackoverflow question</a> about that.</p>
<p>Here are the examples provided by @Thanh Nguyen Van:</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite /preview https://test.app.example.com$uri permanent;
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - path: /
        backend:
          serviceName: service-1
          servicePort: 80
  - host: test.app.example.io
    http:
      paths:
      - path: /preview/*
        backend:
          serviceName: service-2
          servicePort: 80
</code></pre>
<p>And the example from @Harsh Manvar:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~ /preview {
        rewrite /preview https://test.app.example.com$uri permanent;
      }
  name: staging-ingress
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - path: /
        backend:
          serviceName: service-1
          servicePort: 80
      - path: /preview/*
        backend:
          serviceName: service-2
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: staging
</code></pre>
<hr>
<p>Additionally there is related <a href="https://github.com/kubernetes/ingress-nginx/issues/605" rel="nofollow noreferrer">github issue</a> about that.</p>
<p>Hope you find this useful.</p>
| Jakub |
<p>I am using kubernetes 1.15.7 version.</p>
<p>I am trying to follow the link <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration</a> to enable 'encryption-provider-config' option on 'kube-apiserver'.</p>
<p>I edited the file '/etc/kubernetes/manifests/kube-apiserver.yaml' and provided the option below:</p>
<pre><code>- --encryption-provider-config=/home/rtonukun/secrets.yaml
</code></pre>
<p>But after that I am getting below error.</p>
<pre><code>The connection to the server 171.69.225.87:6443 was refused - did you specify the right host or port?
</code></pre>
<p>with all kubectl commands like 'kubectl get no'.</p>
<p>Mainly, how do I do the two steps below?</p>
<pre><code>3. Set the --encryption-provider-config flag on the kube-apiserver to point to the location of the config file.
4. Restart your API server.
</code></pre>
| Kalyan Kumar | <p>I've reproduced exactly your scenario, and I'll try to explain how I fixed it.</p>
<h3>Reproducing the same scenario</h3>
<ol>
<li>Create the encryption configuration file at <code>/home/koopakiller/encryption.yaml</code> (the same path referenced by the flag below):</li>
</ol>
<pre><code>apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
    - secrets
    providers:
    - aescbc:
        keys:
        - name: key1
          secret: r48bixfj02BvhhnVktmJJiuxmQZp6c0R60ZQBFE7558=
    - identity: {}
</code></pre>
<p>Edit the file <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> and set the <code>--encryption-provider-config</code> flag:</p>
<pre><code> - --encryption-provider-config=/home/koopakiller/encryption.yaml
</code></pre>
<p>Save the file and exit.</p>
<p>When I checked the pod status, I got the same error:</p>
<pre><code>$ kubectl get pods -A
The connection to the server 10.128.0.62:6443 was refused - did you specify the right host or port?
</code></pre>
<h3>Troubleshooting</h3>
<p>Since kubectl is not working anymore, I looked directly at the running containers using the docker command, and saw that the kube-apiserver container had recently been recreated:</p>
<pre><code>$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
54203ea95e39 k8s.gcr.io/pause:3.1 "/pause" 1 minutes ago Up 1 minutes k8s_POD_kube-apiserver-lab-1_kube-system_015d9709c9881516d6ecf861945f6a10_0
...
</code></pre>
<p>Kubernetes stores the logs of created pods in the <code>/var/log/pods</code> directory. I checked the kube-apiserver log file and found a valuable piece of information:</p>
<blockquote>
<p>{"log":"Error: error opening encryption provider configuration file "/home/koopakiller/encryption.yaml": open <strong>/home/koopakiller/encryption.yaml: no such file or directory</strong>\n","stream":"stderr","time":"2020-01-22T13:28:46.772768108Z"}</p>
</blockquote>
<h3>Explanation</h3>
<p>Taking a look at the manifest file <code>kube-apiserver.yaml</code>, you can see that the <code>kube-apiserver</code> command runs inside a container, so the container needs to have the <code>encryption.yaml</code> file mounted into it.</p>
<p>If you check the <code>volumeMounts</code> in this file, you can see that only the paths below are mounted into the container by default:</p>
<blockquote>
<ul>
<li>/etc/ssl/certs</li>
<li>/etc/ca-certificates</li>
<li>/etc/kubernetes/pki</li>
<li>/usr/local/share/ca-certificates</li>
<li>/usr/share/ca-certificates</li>
</ul>
</blockquote>
<pre><code>...
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
...
</code></pre>
<p>Based on the facts above, we can conclude that the apiserver failed to start because <code>/home/koopakiller/encryption.yaml</code> isn't actually mounted into the container.</p>
<h3>How to solve</h3>
<p>I can see 2 ways to solve this issue:</p>
<p><strong>1st</strong> - Copy the encryption file to <code>/etc/kubernetes/pki</code> (or any of the paths above) and change the path in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>:</p>
<pre><code> - --encryption-provider-config=/etc/kubernetes/encryption.yaml
</code></pre>
<p>Save the file and wait apiserver restart.</p>
<p><strong>2nd</strong> - Create a new <code>volumeMounts</code> entry in the kube-apiserver.yaml manifest to mount a custom directory from the node into the container.</p>
<p>Let's create a new directory at <code>/etc/kubernetes/secret</code> (the home folder isn't a good location to leave config files =)).</p>
<p>Edit <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>:</p>
<pre><code>...
    - --encryption-provider-config=/etc/kubernetes/secret/encryption.yaml
...
    volumeMounts:
    - mountPath: /etc/kubernetes/secret
      name: secret
      readOnly: true
...
  volumes:
  - hostPath:
      path: /etc/kubernetes/secret
      type: DirectoryOrCreate
    name: secret
...
</code></pre>
<p>After you save the file, kubernetes will mount the node path <code>/etc/kubernetes/secret</code> to the same path inside the apiserver container. Wait for it to start up completely and try to list your nodes again.</p>
<p>Please let me know if that helped!</p>
| Mr.KoopaKiller |
<p>I wish to use a Mutating WebHook or Istio to automatically inject a Sidecar container and a shared volume between the existing container and sidecar to a k8s deployment in a remote cluster for log archiving. The issue is that the mount path required for each pod differs and is provided as a user-provided input.</p>
<p>What would be the best way to pass this user-defined information to the webhook?</p>
| Ranika Nisal | <p>The best place to store that kind of data for the mutating webhook to read from are <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/" rel="nofollow noreferrer">annotations</a>.</p>
<p>More useful <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#extending-with-annotations" rel="nofollow noreferrer">information</a> about annotations and webhooks with examples. <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api" rel="nofollow noreferrer">Annotations</a> can also be accessed from containers inside a pod.</p>
<p>Note that if the mount path is different for different pods, then we need to get those paths from somewhere.</p>
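<p>For illustration, a pod could carry that information like this (the annotation key is hypothetical; the webhook would read it when deciding where to mount the shared volume):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    sidecar-injector.example.com/log-path: /var/log/myapp
spec:
  containers:
  - name: app
    image: example/app:latest
</code></pre>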
| Piotr Malec |
<p>I have a kubernetes cluster with serviceA on namespaceA and serviceB on namespaceB.</p>
<p>I want, from serviceA, use kubernetes service discovery to programmatically list serviceB.
I am planning to use <a href="https://cloud.spring.io/spring-cloud-static/spring-cloud-kubernetes/2.1.0.RC1/single/spring-cloud-kubernetes.html#_discoveryclient_for_kubernetes" rel="nofollow noreferrer">spring cloud kubernetes</a> ( @EnableDiscoveryClient ).</p>
<p>However, there is a company wide policy to block the use of the configuration below that should have solved the problem:
<code>spring.cloud.kubernetes.discovery.all-namespaces=true</code></p>
<p>Is there any way to circumvent the problem? Maybe assign serviceB to two different namespaces or some other permission/configuration that I am not aware of?</p>
| guilhermecgs | <p>If you are trying to simply look up a service IP by service name through the Kubernetes API, then it should not really matter whether you're doing it through <code>kubectl</code> or a Java client; the options you pass to the API are the same.</p>
<p>The thing that matters however is whether the service name would be looked up in the same namespace only or in all namespaces. Accessing a service from a different namespace can be done by specifying its name along with the namespace - instead of <code>my-service</code> they would need to write <code>my-service.some-namespace</code>.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Services without selectors</a> are also an option to expose a service from one namespace to another so that the namespace would be specified in Kubernetes objects and not in app code.</p>
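<p>A closely related pattern (a sketch, not from the original answer; names are illustrative) is an <code>ExternalName</code> Service in namespaceA that simply aliases the DNS name of serviceB in namespaceB, so application code can keep using a short local name:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: serviceb
  namespace: namespacea
spec:
  type: ExternalName
  externalName: serviceb.namespaceb.svc.cluster.local
</code></pre>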
<p>Please let me know if that helps.</p>
| Wytrzymały Wiktor |
<p>I am trying to learn about the microservice architecture and different microservices interacting with each other. I have written a simple microservice-based web app and had a doubt regarding it. If a service has multiple versions running, load balancing is easily managed by the Envoy sidecar in Istio. My question is: in case a vulnerability is detected in one of the versions, is there a way to isolate the pod from receiving any more traffic?</p>
<p>We can manually do this with the help of a virtual service and the appropriate routing rule. But can it be dynamically performed based on some trigger event?</p>
<pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: VirtualServiceName
spec:
  hosts:
  - SomeHost
  http:
  - route:
    - destination:
        host: SomeHost
        subset: v1
      weight: 0
    - destination:
        host: SomeHost
        subset: v2
      weight: 100
</code></pre>
<p>Any help is appreciated</p>
| gandelf_ | <p>According to <a href="https://istio.io/docs/reference/config/networking/destination-rule/#LocalityLoadBalancerSetting" rel="nofollow noreferrer">istio documentation</a> you can configure failover with LocalityLoadBalancerSetting.</p>
<blockquote>
<p>If the goal of the operator is not to distribute load across zones and regions but rather to restrict the regionality of failover to meet other operational requirements an operator can set a ‘failover’ policy instead of a ‘distribute’ policy.</p>
<p>The following example sets up a locality failover policy for regions. Assume a service resides in zones within us-east, us-west & eu-west this example specifies that when endpoints within us-east become unhealthy traffic should failover to endpoints in any zone or sub-zone within eu-west and similarly us-west should failover to us-east.</p>
</blockquote>
<pre><code>  failover:
  - from: us-east
    to: eu-west
  - from: us-west
    to: us-east
</code></pre>
<blockquote>
<p>Failover requires <a href="https://istio.io/docs/reference/config/networking/destination-rule/#OutlierDetection" rel="nofollow noreferrer">outlier detection</a> to be in place for it to work.</p>
</blockquote>
<p>But that is rather for regions/zones, not pods.</p>
<hr />
<p>If it's about pods you could take a look at this <a href="https://istio.io/docs/concepts/traffic-management/#working-with-your-applications" rel="nofollow noreferrer">istio documentation</a></p>
<blockquote>
<p>While Istio failure recovery features improve the reliability and availability of services in the mesh, applications must handle the failure or errors and take appropriate fallback actions. For example, when all instances in a load balancing pool have failed, Envoy returns an HTTP 503 code. The application must implement any fallback logic needed to handle the HTTP 503 error code..</p>
</blockquote>
<p>Also take a look at <a href="https://github.com/istio/istio/issues/10537" rel="nofollow noreferrer">this</a> and <a href="https://github.com/istio/istio/issues/18367" rel="nofollow noreferrer">this</a> github issue.</p>
<blockquote>
<p>During HTTP health checking Envoy will send an HTTP request to the upstream host. By default, it expects a 200 response if the host is healthy. Expected response codes are configurable. The upstream host can return 503 if it wants to immediately notify downstream hosts to no longer forward traffic to it.</p>
</blockquote>
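<p>Outlier detection is configured on a DestinationRule. A minimal sketch for the host from the question (the thresholds are illustrative, not a recommendation) that would automatically eject an endpoint returning consecutive errors from the load-balancing pool for a while:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: somehost-outlier
spec:
  host: SomeHost
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 100
</code></pre>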
<hr />
<p>I hope you find this useful.</p>
| Jakub |
<p>I have an app in Kubernetes which is served over https. So now I would like to exclude one URL from that rule and use HTTP to serve it, for performance reasons. I have been struggling with this the whole day and it seems impossible.</p>
<p>These are my ingress YAMLs:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.31.1.11"],"port":443,"protocol":"HTTPS","serviceName":"myservice:myservice","ingressName":"myservice:myservice","hostname":"app.server.test.mycompany.com","path":"/","allNodes":true}]'
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2020-02-17T13:14:19Z"
  generation: 1
  labels:
    app-kubernetes-io/instance: mycompany
    app-kubernetes-io/managed-by: Tiller
    app-kubernetes-io/name: mycompany
    helm.sh/chart: mycompany-1.0.0
    io.cattle.field/appId: mycompany
  name: mycompany
  namespace: mycompany
  resourceVersion: "565608"
  selfLink: /apis/extensions/v1beta1/namespaces/mycompany/ingresses/mycompany
  uid: c6b93108-a28f-4de6-a62b-487708b3f5d1
spec:
  rules:
  - host: app.server.test.mycompany.com
    http:
      paths:
      - backend:
          serviceName: mycompany
          servicePort: 80
        path: /
  tls:
  - hosts:
    - app.server.test.mycompany.com
    secretName: mycompany-tls-secret
status:
  loadBalancer:
    ingress:
    - ip: 172.31.1.11
</code></pre>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.31.1.1"],"port":80,"protocol":"HTTP","serviceName":"mycompany:mycompany","ingressName":"mycompany:mycompany-particular-service","hostname":"app.server.test.mycompany.com","path":"/account_name/particular_service/","allNodes":true}]'
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
  creationTimestamp: "2020-02-17T13:14:19Z"
  generation: 1
  labels:
    app-kubernetes-io/instance: mycompany
    app-kubernetes-io/managed-by: Tiller
    app-kubernetes-io/name: mycompany
    helm.sh/chart: mycompany-1.0.0
    io.cattle.field/appId: mycompany
  name: mycompany-particular-service
  namespace: mycompany
  resourceVersion: "565609"
  selfLink: /apis/extensions/v1beta1/namespaces/mycompany/ingresses/mycompany-particular-service
  uid: 88127a02-e0d1-4b2f-b226-5e8d160c1654
spec:
  rules:
  - host: app.server.test.mycompany.com
    http:
      paths:
      - backend:
          serviceName: mycompany
          servicePort: 80
        path: /account_name/particular_service/
status:
  loadBalancer:
    ingress:
    - ip: 172.31.1.11
</code></pre>
<p>So as you can see from the above, I would like to serve <code>/particular_service/</code> over HTTP. Ingress, however, redirects to HTTPS as TLS is enabled for that host in the first ingress.</p>
<p>Is there any way to disable TLS just for that one specific path when the same host is being used for configuration?</p>
<p>In short summary I would like to have:</p>
<pre><code>https://app.server.test.mycompany.com
but
http://app.server.test.mycompany.com/account_name/particular_service/
</code></pre>
| szaman | <p>I've tested with 2 ingresses for the same domain, the first one with TLS enabled and the second without TLS, and it worked.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: echo-https
spec:
  tls:
  - hosts:
    - myapp.mydomain.com
    secretName: https-myapp.mydomain.com
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - backend:
          serviceName: echo-svc
          servicePort: 80
        path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: echo-http
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - backend:
          serviceName: echo-svc
          servicePort: 80
        path: /insecure
</code></pre>
<p>From the Nginx <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#server-side-https-enforcement-through-redirect" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.</p>
<p>This can be disabled globally using <code>ssl-redirect: "false"</code> in the NGINX config map, or per-Ingress with the <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> annotation in the particular resource.</p>
</blockquote>
<p>Please let me know if that helps.</p>
| Mr.KoopaKiller |
<p>There are multiple identical pods running in one cluster but in different namespaces. This is a web application running in Kubernetes. I have the URL <code><HOSTNAME>:<PORT>/context/abc/def/.....</code>. I want to redirect to a particular service based on the context. Is there a way I can achieve this using an ingress controller? Or is there any way I can achieve it using different ports through ingress?</p>
<p>My web application works fine if the URL is <code><HOSTNAME>:<PORT>/abc/def/.....</code>. Since I have to access the different pods using the same URL, I am adding a context to it. Is there any other way to achieve this use case?</p>
| Sunil Gudivada | <p>You can do that with <code>rewrite-target</code>. In the example below I used a <code><HOSTNAME></code> value of <code>rewrite.bar.com</code> and a <code><PORT></code> value of <code>80</code>.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - backend:
          serviceName: context-service
          servicePort: 80
        path: /context1(/|$)(.*)
      - backend:
          serviceName: context-service2
          servicePort: 80
        path: /context2(/|$)(.*)
</code></pre>
<p>For example, the ingress definition above will result in the following rewrites:</p>
<p><code>rewrite.bar.com/context1</code> rewrites to <code>rewrite.bar.com/</code> for context 1 service.</p>
<p><code>rewrite.bar.com/context2</code> rewrites to <code>rewrite.bar.com/</code> for context 2 service.</p>
<p><code>rewrite.bar.com/context1/new</code> rewrites to <code>rewrite.bar.com/new</code> for context 1 service.</p>
<p><code>rewrite.bar.com/context2/new</code> rewrites to <code>rewrite.bar.com/new</code> for context 2 service.</p>
| Piotr Malec |
<p>I am deploying a Jhipster app to a Kubernetes environment, and am using Istio for the networking. </p>
<p>Below is my VirtualService. Note that when the <code>prefix</code> is set to <code>/</code>, everything works fine. However I have several apps running on this cluster, so I need to map it to <code>/mywebsite</code>. </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ingress
spec:
  hosts:
  - "*"
  gateways:
  - mywebsite-gateway
  http:
  - match:
    - uri:
        prefix: /mywebsite
    route:
    - destination:
        host: mywebsite
        port:
          number: 80
    websocketUpgrade: true
</code></pre>
<p>When I access the app, I get these set of errors:</p>
<pre><code>mywebsite:3 GET http://mywebsite.com:31380/app/vendor.bundle.js net::ERR_ABORTED 404 (Not Found)
manifest.bundle.js:55 Uncaught TypeError: Cannot read property 'call' of undefined
at __webpack_require__ (manifest.bundle.js:55)
at eval (app.main.ts?3881:1)
at Object../src/main/webapp/app/app.main.ts (main.bundle.js:447)
at __webpack_require__ (manifest.bundle.js:55)
at webpackJsonpCallback (manifest.bundle.js:26)
at main.bundle.js:1
__webpack_require__ @ manifest.bundle.js:55
eval @ app.main.ts?3881:1
./src/main/webapp/app/app.main.ts @ main.bundle.js:447
__webpack_require__ @ manifest.bundle.js:55
webpackJsonpCallback @ manifest.bundle.js:26
(anonymous) @ main.bundle.js:1
manifest.bundle.js:55 Uncaught TypeError: Cannot read property 'call' of undefined
at __webpack_require__ (manifest.bundle.js:55)
at eval (global.css?ca77:1)
at Object../node_modules/css-loader/index.js!./src/main/webapp/content/css/global.css (global.bundle.js:6)
at __webpack_require__ (manifest.bundle.js:55)
at eval (global.css?0a39:4)
at Object../src/main/webapp/content/css/global.css (global.bundle.js:13)
at __webpack_require__ (manifest.bundle.js:55)
at webpackJsonpCallback (manifest.bundle.js:26)
at global.bundle.js:1
__webpack_require__ @ manifest.bundle.js:55
eval @ global.css?ca77:1
./node_modules/css-loader/index.js!./src/main/webapp/content/css/global.css @ global.bundle.js:6
__webpack_require__ @ manifest.bundle.js:55
eval @ global.css?0a39:4
./src/main/webapp/content/css/global.css @ global.bundle.js:13
__webpack_require__ @ manifest.bundle.js:55
webpackJsonpCallback @ manifest.bundle.js:26
(anonymous) @ global.bundle.js:1
mywebsite:1 Unchecked runtime.lastError: The message port closed before a response was received.
</code></pre>
<p>I'm not sure why it is trying to go to /app/vendor.bundle.js. I think it should go to /mywebsite/app/vendor.bundle.js. When I go to this page manually, I get a <code>Your request cannot be processed</code> error.</p>
<p>Also, in my <code>index.html</code> I have <code><base href="./" /></code>, which has always been there, as I read that was a possible solution.</p>
| Mike K. | <p>As you mentioned in comments</p>
<blockquote>
<p>I set new HtmlWebpackPlugin({ baseUrl: '/myapp/' }) in the webpack ... and in the index.html. I get some new errors (ie. can't find favicon), but still the site doesn't load as doesn't seem to find the javascript or soemthing – Mike K</p>
</blockquote>
<p>That's how it is supposed to work. It doesn't load the javascript, and probably the css/images, because you didn't add istio uri matches for them.</p>
<p>So the solution here is to:</p>
<ul>
<li>Make a new baseUrl and base href, for example <code>/myapp</code></li>
<li>Create a virtual service with a <code>/myapp</code> prefix (or any prefix) and a <code>/myapp</code> rewrite</li>
<li>Create additional uri matches for your javascript/css/image folder paths</li>
</ul>
<p>Take a look at this <a href="https://rinormaloku.com/istio-practice-routing-virtualservices/" rel="nofollow noreferrer">example</a></p>
<blockquote>
<p>Let’s break down the requests that should be routed to Frontend:</p>
<p><strong>Exact path</strong> / should be routed to Frontend to get the Index.html</p>
<p><strong>Prefix path</strong> /static/* should be routed to Frontend to get any static files needed by the frontend, like <strong>Cascading Style Sheets</strong> and <strong>JavaScript files</strong>.</p>
<p><strong>Paths matching the regex ^.*.(ico|png|jpg)$</strong> should be routed to Frontend as it is an image, that the page needs to show.</p>
</blockquote>
<pre><code>http:
- match:
  - uri:
      exact: /
  - uri:
      exact: /callback
  - uri:
      prefix: /static
  - uri:
      regex: '^.*\.(ico|png|jpg)$'
  route:
  - destination:
      host: frontend
      port:
        number: 80
</code></pre>
| Jakub |
<p><strong>Problem</strong></p>
<p>I would like to host multiple services on a single domain name under different paths. The problem is that I'm unable to get request path rewriting working using <code>nginx-ingress</code>.</p>
<p><strong>What I've tried</strong></p>
<p>I've installed nginx-ingress using <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-on-digitalocean-kubernetes-using-helm" rel="nofollow noreferrer">these instructions</a>:</p>
<pre class="lang-sh prettyprint-override"><code>helm install stable/nginx-ingress --name nginx-ingress --set controller.publishService.enabled=true
</code></pre>
<pre><code>CHART APP VERSION
nginx-ingress-0.3.7 1.5.7
</code></pre>
<p>The example works great with hostname based backends:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
</code></pre>
<p>However, I can't get path rewriting to work. This version redirects requests to the <code>hello-kubernetes-first</code> service, but doesn't do the path rewrite so I get a 404 error from that service because it's looking for the /foo directory within that service (which doesn't exist).</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
        path: /foo
</code></pre>
<p>I've also tried <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">this example</a> for paths / rewriting:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
        path: /foo(/|$)(.*)
</code></pre>
<p>But the requests aren't even directed to the <code>hello-kubernetes-first</code> service.</p>
<p>It appears that my rewrite configuration isn't making it to the <code>/etc/nginx/nginx.conf</code> file. When I run the following, I get no results:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec nginx-ingress-nginx-ingress-XXXXXXXXX-XXXXX cat /etc/nginx/nginx.conf | grep rewrite
</code></pre>
<p>How do I get the path rewriting to work?</p>
<p><strong>Additional information:</strong></p>
<ul>
<li>kubectl / kubernetes version: <code>v1.14.8</code></li>
<li>Hosting on Azure Kubernetes Service (AKS)</li>
</ul>
| RQDQ | <p>This is not likely to be an issue with AKS, as the components you use work on top of the Kubernetes layer. However, if you want to be sure, you can deploy this on top of minikube locally and see if the problem persists.</p>
<p>There are also few other things to consider:</p>
<ol>
<li>There is a <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-tls" rel="nofollow noreferrer">detailed guide</a> about creating ingress controller on AKS. The guide is up to date and confirmed to be working fine. </li>
</ol>
<blockquote>
<p>This article shows you how to deploy the NGINX ingress controller in
an Azure Kubernetes Service (AKS) cluster. The cert-manager project is
used to automatically generate and configure Let's Encrypt
certificates. Finally, two applications are run in the AKS cluster,
each of which is accessible over a single IP address.</p>
</blockquote>
<ol start="2">
<li>You may also want to use alternative like <a href="https://github.com/helm/charts/tree/master/stable/traefik" rel="nofollow noreferrer">Traefik</a>:</li>
</ol>
<blockquote>
<p>Traefik is a modern HTTP reverse proxy and load balancer made to
deploy microservices with ease.</p>
</blockquote>
<ol start="3">
<li>Remember that:</li>
</ol>
<blockquote>
<p>Operators will typically wish to install this component into the
<code>kube-system</code> namespace where that namespace's default service account
will ensure adequate privileges to watch Ingress resources
cluster-wide.</p>
</blockquote>
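<p>In practice that just means adding <code>--namespace kube-system</code> to the install command from the question, for example (a sketch using the same chart and flags you already have):</p>
<pre class="lang-sh prettyprint-override"><code>helm install stable/nginx-ingress --name nginx-ingress --namespace kube-system \
  --set controller.publishService.enabled=true
</code></pre>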
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>I use Kubernetes (<code>Openshift</code>) to deploy many microservices. I wish to utilise the same setup to deploy some of my Flink jobs. <code>Flink</code> jobs are critical - some jobs are stateless and process every record (exactly once), some jobs are stateful and look for patterns in the stream or react to time. No job can tolerate long downtime or frequent shutdowns (due to programming errors, the way Flink quits).</p>
<p>I find the docs mostly lean towards deploying Flink jobs in k8s as a <code>Job Cluster</code>. But how should one take a practical approach to doing it?</p>
<ul>
<li>Though k8s can restart the failed Flink <code>pod</code>, how can Flink restore its state to recover?</li>
<li>Can the Flink <code>pod</code> be replicated more than once? How do the <code>JobManager</code> & <code>TaskManager</code> work when two or more pods exist? If not, why? Any other approaches?</li>
</ul>
| vvra | <blockquote>
<p>Though k8s can restart the failed Flink pod, how can Flink restore its state to recover?</p>
</blockquote>
<p>From Flink Documentation we have:</p>
<blockquote>
<p>Checkpoints allow Flink to recover state and positions in the streams to give the application the same semantics as a failure-free execution.</p>
</blockquote>
<p>It means that you need to have <strong>checkpoint storage</strong> mounted in your pods to be able to recover the state.</p>
<p>In <strong>Kubernetes</strong> you could use <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">Persistent Volumes</a> to share the data across your pods.</p>
<p>Actually there are a lot of supported plugins, see <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">here</a>.</p>
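<p>As a sketch (the storage class and size are illustrative), a shared claim for checkpoints could look like the following; it would then be mounted in the JobManager and TaskManager pods at the path you configure as <code>state.checkpoints.dir</code> (e.g. <code>file:///checkpoints</code>):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flink-checkpoints
spec:
  accessModes:
    - ReadWriteMany              # so the JobManager and all TaskManagers can share it
  storageClassName: nfs-client   # hypothetical class backed by a shared filesystem
  resources:
    requests:
      storage: 50Gi
</code></pre>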
<p>You can have more replicas of the <code>TaskManager</code>, but in Kubernetes you don't need to take care of HA for the <code>JobManager</code> since you can use a Kubernetes <strong>self-healing</strong> deployment.</p>
<p>To use a <em>self-healing</em> deployment in Kubernetes, you just need to create a deployment and set <code>replicas</code> to <code>1</code>, like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
</code></pre>
<p>Finally, you can check these links to help you set up Flink in Kubernetes:</p>
<p><a href="https://jobs.zalando.com/en/tech/blog/running-apache-flink-on-kubernetes/" rel="nofollow noreferrer">running-apache-flink-on-kubernetes</a></p>
<p><a href="https://github.com/apache/flink/tree/master/flink-container/kubernetes" rel="nofollow noreferrer">Flink Job cluster on Kubernetes</a></p>
<p><a href="https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html" rel="nofollow noreferrer">Flink Kubernetes Deployments</a></p>
<p><a href="https://flink.apache.org/news/2019/12/09/flink-kubernetes-kudo.html" rel="nofollow noreferrer">Running Flink on Kubernetes with KUDO</a></p>
| Mr.KoopaKiller |
<p>It's not really Digital Ocean specific; it would be nice to verify whether this is expected behavior or not.</p>
<p>I'm trying to set up an ElasticSearch cluster on a DO managed Kubernetes cluster with the helm chart from ElasticSearch <a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch" rel="nofollow noreferrer">itself</a>.</p>
<p>They say that I need to specify a <code>storageClassName</code> in a <code>volumeClaimTemplate</code> in order to use a volume provided by the managed kubernetes service. For DO it's <code>do-block-storage</code>, according to their <a href="https://www.digitalocean.com/docs/kubernetes/how-to/add-volumes/" rel="nofollow noreferrer">docs</a>. It also seems that it's not necessary to define a PVC; the helm chart should do that itself.</p>
<p>Here's the config I'm using:</p>
<pre><code># Specify node pool
nodeSelector:
  doks.digitalocean.com/node-pool: elasticsearch
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Specify Digital Ocean storage
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi
extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '/usr/share/elasticsearch/data/nodes/']
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    volumeMounts:
    - mountPath: /usr/share/elasticsearch/data
      name: elasticsearch-master
</code></pre>
<p>I'm installing the Helm chart with terraform, but it doesn't matter which way you do it anyway:</p>
<pre><code>resource "helm_release" "elasticsearch" {
name = "elasticsearch"
chart = "elastic/elasticsearch"
namespace = "elasticsearch"
values = [
file("charts/elasticsearch.yaml")
]
}
</code></pre>
<p>Here's what I've got when checking pod logs:</p>
<pre><code>51s Normal Provisioning persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 External provisioner is provisioning volume for claim "elasticsearch/elasticsearch-master-elasticsearch-master-2"
2m28s Normal ExternalProvisioning persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator
</code></pre>
<p>I'm pretty sure the problem is the volume. It should've been automagically provisioned by Kubernetes. Describing the persistent volume claim gives this:</p>
<pre><code>holms@debian ~/D/c/s/b/t/s/post-infra> kubectl describe pvc elasticsearch-master-elasticsearch-master-0 --namespace elasticsearch
Name: elasticsearch-master-elasticsearch-master-0
Namespace: elasticsearch
StorageClass: do-block-storage
Status: Pending
Volume:
Labels: app=elasticsearch-master
Annotations: volume.beta.kubernetes.io/storage-provisioner: dobs.csi.digitalocean.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: elasticsearch-master-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 4m57s (x176 over 14h) dobs.csi.digitalocean.com_master-setupad-eu_04e43747-fafb-11e9-b7dd-e6fd8fbff586 External provisioner is provisioning volume for claim "elasticsearch/elasticsearch-master-elasticsearch-master-0"
Normal ExternalProvisioning 93s (x441 over 111m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator
</code></pre>
<p>I've googled everything already; it seems everything is correct and the volume should come up on the DO side with no problems, but it hangs in the Pending state. Is this expected behavior, or should I ask DO support to check what's going on on their side?</p>
| holms | <p>Yes, this is expected behavior. This chart might not be compatible with Digital Ocean Kubernetes service.</p>
<p><a href="https://www.digitalocean.com/docs/kubernetes/overview/#known-issues" rel="nofollow noreferrer">Digital Ocean</a> documentation has the following information in Known Issues section:</p>
<blockquote>
<ul>
<li><p>Support for resizing DigitalOcean Block Storage Volumes in Kubernetes has not yet been implemented.</p></li>
<li><p>In the DigitalOcean Control Panel, cluster resources (worker nodes, load balancers, and block storage volumes) are listed outside of the Kubernetes page. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with <code>kubectl</code> or from the control panel’s Kubernetes page.</p></li>
</ul>
</blockquote>
<p>In the <a href="https://github.com/helm/charts/tree/master/stable/elasticsearch#prerequisites-details" rel="nofollow noreferrer">charts/stable/elasticsearch</a> there are specific requirements mentioned:</p>
<blockquote>
<h3>Prerequisites Details</h3>
<ul>
<li>Kubernetes 1.10+</li>
<li>PV dynamic provisioning support on the underlying infrastructure</li>
</ul>
</blockquote>
<p>You can ask Digital Ocean support for help or try to deploy ElasticSearch without helm chart.</p>
<p>It is even mentioned on <a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch#usage-notes-and-getting-started" rel="nofollow noreferrer">github</a> that:</p>
<blockquote>
<p>Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine).</p>
</blockquote>
<hr>
<p><strong>Update:</strong></p>
<p>The same issue is present on my kubeadm ha cluster.</p>
<p>However I managed to get it working by manually creating <code>PersistentVolume</code>s for my <code>storageclass</code>.</p>
<p>My storageclass definition: <code>storageclass.yaml</code>:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
type: pd-ssd
</code></pre>
<pre><code>$ kubectl apply -f storageclass.yaml
</code></pre>
<pre><code>$ kubectl get sc
NAME PROVISIONER AGE
ssd local 50m
</code></pre>
<p>My PersistentVolume definition: <code>pv.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: ssd
capacity:
storage: 30Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- <name of the node>
</code></pre>
<pre><code>kubectl apply -f pv.yaml
</code></pre>
<p>After that I ran helm chart:</p>
<pre><code>helm install stable/elasticsearch --name my-release --set data.persistence.storageClass=ssd,data.storage=30Gi --set master.persistence.storageClass=ssd,master.storage=30Gi
</code></pre>
<p>PVC finally got bound.</p>
<pre><code>$ kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default data-my-release-elasticsearch-data-0 Bound task-pv-volume2 30Gi RWO ssd 17m
default data-my-release-elasticsearch-master-0 Pending 17m
</code></pre>
<p>Note that I manually satisfied only a single PVC, and manual volume provisioning for ElasticSearch might be very inefficient.</p>
<p>I suggest contacting DO support for an automated volume provisioning solution.</p>
| Piotr Malec |
<p>How do I copy/back up an existing Kubernetes resource and its related entities, as a backup option?</p>
<p>for example when I run <code>kubectl get deploy my-deployment -n staging > backupdeploy.yaml</code></p>
<p>I get a file named backupdeploy.yaml with all the annotations and creation timestamps. </p>
<p>I need to be able to get a copy of the original my-deployment.yaml and the related resources in separate YAML files.
Is there any shell script available to do this?</p>
<p>I also need the secrets, configmaps, svc, pvc that are tied to the " <strong>my-deployment</strong> "</p>
<p>Please help me out. Thanks.</p>
| Chronograph3r | <p>In order to achieve that you need to use the <code>--export</code> flag:</p>
<blockquote>
<p>If true, use 'export' for the resources. Exported resources are
stripped of cluster-specific information.</p>
</blockquote>
<p>So it would look something like this: <code>kubectl get deploy my-deployment -n staging --export</code></p>
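<p>A minimal sketch of a shell loop that exports the deployment and its related resources into separate YAML files (the configmap, secret, svc and pvc names below are placeholders; replace them with the ones actually referenced by <strong>my-deployment</strong>. Note that <code>--export</code> is deprecated and removed in newer kubectl versions):</p>
<pre><code>#!/bin/sh
# Back up each resource into its own file, stripped of cluster-specific information.
NS=staging
for r in "deploy/my-deployment" "svc/my-service" "cm/my-configmap" "secret/my-secret" "pvc/my-pvc"; do
  # Replace "/" with "-" to build a valid file name, e.g. backup-deploy-my-deployment.yaml
  kubectl get "$r" -n "$NS" --export -o yaml > "backup-$(echo "$r" | tr '/' '-').yaml"
done
</code></pre>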
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>I need to block pods communication to each other but I failed to do it.<br />
I installed <a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/" rel="nofollow noreferrer">weave plug-in</a> on my <strong>Minikube</strong> (v1.21.0), and started two pods in the same namespace:</p>
<pre><code> kubectl run nginx1 --image=nginx -n ns1
kubectl run nginx2 --image=nginx -n ns2
</code></pre>
<p><strong>The pods IPs:</strong><br />
<code>nginx1</code> with IP: <code>172.17.0.3</code><br />
<code>nginx2</code> with IP: <code>172.17.0.4</code></p>
<p>I can ping <code>nginx1</code> from <code>nginx2</code> and vice versa.<br />
I wanted to try to deny this, so I first tried to deny all traffic with this network policy:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netpol1
namespace: earth
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress: []
egress: []
EOF
</code></pre>
<p>I still had ping, so I tried this one too:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netpol1
namespace: earth
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
- from:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 172.17.0.0/16
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 172.17.0.0/16
EOF
</code></pre>
<p>I still can ping each pod from within the other pods in the same namespace.<br />
I verified that <code>weave</code> is installed:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-2r5z8 1/1 Running 0 2d3h
etcd-ip-172-31-37-46 1/1 Running 0 2d3h
kube-apiserver-ip-172-31-37-46 1/1 Running 0 2d3h
kube-controller-manager-ip-172-31-37-46 1/1 Running 0 2d3h
kube-proxy-787pj 1/1 Running 0 2d3h
kube-scheduler-ip-172-31-37-46 1/1 Running 0 2d3h
storage-provisioner 1/1 Running 0 2d3h
weave-net-wd52r 2/2 Running 1 23m
</code></pre>
<p>I also tried to restart kubelet but I still have access from each pod to the other one.<br />
What can be the reason?</p>
| E235 | <p>When you specify the Egress and Ingress resources, you do not specify the network protocol. In the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">kubernetes docs</a> you can see that the protocol can be specified too. Your kubernetes cluster defaults your Egress and Ingress resources to a protocol if you do not specify one.</p>
<p>If you block all TCP or UDP networking, you will find that ping still works just fine. This is because ping uses the ICMP network protocol, not TCP or UDP.</p>
<p>The actual configuration you need depends on your networking plugin. I do not know how to configure Weave to block ICMP.<br />
If you were using Calico, their <a href="https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping" rel="nofollow noreferrer">docs</a> explain how to handle the ICMP protocol.</p>
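<p>For reference, a policy that denies ICMP with Calico looks roughly like the sketch below (based on the Calico docs; it requires Calico as the CNI plugin, so it will not work on the Weave setup as-is):</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: block-icmp
spec:
  # apply to all workloads; deny ICMP in both directions
  selector: all()
  types:
  - Ingress
  - Egress
  ingress:
  - action: Deny
    protocol: ICMP
  egress:
  - action: Deny
    protocol: ICMP
</code></pre>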
| anderio Moga |
<p>Everyone, I have been searching the internet the whole day but can't find a complete and decent example of how to use the Ambassador API gateway as an Istio ingress. The default documentation on the Ambassador site regarding Istio isn't clear enough. So can someone please provide a complete and detailed example of how to use the Ambassador API gateway along with the Istio service mesh?</p>
<pre><code>My platform specs are
OS: Windows10
Container-Platform: Docker-desktop
Kubernetes-version: 1.10.11
</code></pre>
| JayD | <p>This topic is explained in detail in Ambassador <a href="https://www.getambassador.io/user-guide/with-istio/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Ambassador is a Kubernetes-native API gateway for microservices. Ambassador is deployed at the edge of your network, and routes incoming traffic to your internal services (aka "north-south" traffic). <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> is a service mesh for microservices, and is designed to add application-level Layer (L7) observability, routing, and resilience to service-to-service traffic (aka "east-west" traffic). Both Istio and Ambassador are built using <a href="https://www.envoyproxy.io/" rel="nofollow noreferrer">Envoy</a>.</p>
</blockquote>
<p>Follow this <a href="https://www.getambassador.io/user-guide/with-istio/#getting-ambassador-working-with-istio" rel="nofollow noreferrer">link</a> for step-by-step guide how to get Ambassador working with Istio.</p>
<hr>
<p>Additionally You will need to update Your Kubernetes version as Istio requirements are:</p>
<ul>
<li><p>Istio <code>1.4</code> and <code>1.3</code> has been tested with Kubernetes: <code>1.13</code>, <code>1.14</code>, <code>1.15</code>. </p></li>
<li><p>Istio <code>1.2</code> has been tested with Kubernetes: <code>1.12</code>, <code>1.13</code>, <code>1.14</code>.</p></li>
</ul>
<p>I suggest avoiding older versions.</p>
| Piotr Malec |
<p>I am unable to change the password of an existing user in MongoDB deployed on k8s, unless I delete the database and then recreate it with the new password.</p>
<p>How can I change the password using the yaml for the mongo stateful object without deleting the db?</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo-db-statefulset
namespace: development
spec:
serviceName: mongo-svc
replicas: 1
selector:
matchLabels:
component: mongo
template:
metadata:
labels:
component: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo:4.0.4
volumeMounts:
- mountPath: /data/db
name: volume
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: admin
- name: MONGO_INITDB_ROOT_PASSWORD
# from secrets
value: password
- name: MONGO_INITDB_DATABASE
value: admin
volumes:
- name: volume
persistentVolumeClaim:
claimName: database-persistent-volume-claim
</code></pre>
| Mike Me | <p>If I understand your issue correctly:</p>
<ul>
<li>You have secret with your password as environment variable, and pod has access to the secret data through a Volume</li>
<li>You changed the secret password, but it's not getting picked up by a pod without a restart.</li>
</ul>
<p>According to <a href="https://kubernetes.io/docs/concepts/configuration/secret/#environment-variables-are-not-updated-after-a-secret-update" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Environment variables are not updated after a secret update, so if a container already consumes a Secret in an environment variable, a Secret update will not be seen by the container unless it is restarted. There are third party solutions for triggering restarts when secrets change.</p>
</blockquote>
<p>This is a known <a href="https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes">issue</a>. You can read more about it in this <a href="https://github.com/kubernetes/kubernetes/issues/22368" rel="nofollow noreferrer">github issue</a>.</p>
<hr />
<p>So after you change the secret password you have to restart your pod to update this value, you don't have to delete it.</p>
<hr />
<p>As mentioned in documentation there are third party tools for triggering restart when secrets change, one of them is <a href="https://github.com/stakater/Reloader" rel="nofollow noreferrer">Reloader</a>.</p>
<blockquote>
<p>Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.</p>
</blockquote>
<hr />
<p>The quick way to restart deployment would be to use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-restart-em-" rel="nofollow noreferrer">kubectl rollout restart</a>, which performs a step by step shutdown and restarts each container in your deployment or statefulset.</p>
<p>If you change the password in your secret and run <code>kubectl rollout restart</code>,
the new password should be picked up.</p>
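<p>For the StatefulSet from your question that would be:</p>
<pre><code>kubectl rollout restart statefulset mongo-db-statefulset -n development
</code></pre>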
| Jakub |
<p>I am using Traefik as Kubernetes Ingress and I would like to know if I can use an IP address instead of a domain name. Example:</p>
<pre><code>http://ipaddress/service1
</code></pre>
<pre><code>http://ipdadress/service2
</code></pre>
<p>My ingress configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: service1
namespace: staging
annotations:
kubernetes.io/ingress.class: traefik
traefik.frontend.rule.type: PathPrefixStrip
spec:
rules:
- host: mydomain.dev
http:
paths:
- path: /service1
backend:
serviceName: service1
servicePort: 3000
</code></pre>
| cleitond | <p>Since it is a Layer 7 load balancer you can't use an IP address directly. But if you use <a href="https://nip.io/" rel="noreferrer">nip.io</a> and, for example, 192-168-1-1.nip.io as your hostname, it would work, and you can do all the things you can regularly do with normal hostnames, such as routing app1.192-168-1-1.nip.io to app1 and 192-168-1-1.nip.io/app2 to app2, etc.</p>
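<p>A minimal sketch of the rule section, assuming your ingress is reachable on the external IP 192.168.1.1 (adjust to your own IP):</p>
<pre><code>spec:
  rules:
  - host: 192-168-1-1.nip.io   # resolves to 192.168.1.1 via nip.io
    http:
      paths:
      - path: /service1
        backend:
          serviceName: service1
          servicePort: 3000
</code></pre>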
| Akin Ozer |
<p>I am confused about VirtualService and DestinationRule: which one is executed first?
Let's say I have the configs below,</p>
<p>Destinationrule -</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: movies
namespace: aio
spec:
host: movies
subsets:
- labels:
version: v1
name: version-v1
- labels:
version: v2
name: version-v2
---
</code></pre>
<p>VirtualService</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: movies
namespace: aio
spec:
hosts:
- movies
http:
- route:
- destination:
host: movies
subset: version-v1
weight: 10
- destination:
host: movies
subset: version-v2
weight: 90
---
</code></pre>
<p>I read somewhere that,
A VirtualService defines a set of traffic <strong>routing rules</strong> to apply when a host is addressed.
DestinationRule defines policies that apply to traffic intended for a service <strong>after routing has occurred.</strong>
Does this mean DestinationRules are invoked after VirtualServices?</p>
<p>I have a small diagram, is my understanding correct?</p>
<p><a href="https://i.stack.imgur.com/1DlGg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1DlGg.png" alt="enter image description here"></a></p>
| John Seen | <p>Yes,</p>
<p>According to <a href="https://istio.io/docs/reference/config/networking/destination-rule/" rel="nofollow noreferrer">istio</a> documentation about <code>DestinationRule</code>:</p>
<blockquote>
<p>DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.</p>
</blockquote>
<p>And for <a href="https://istio.io/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer"><code>VirtualService</code></a>:</p>
<blockquote>
<p>A VirtualService defines a set of traffic routing rules to apply when a host is addressed.</p>
</blockquote>
<p>There is a YouTube video, <a href="https://www.youtube.com/watch?v=oZrZlx2fmcM" rel="nofollow noreferrer">Life of a Packet through Istio</a>, which explains in detail the order of processes that are applied to a packet going through the Istio mesh.</p>
| Piotr Malec |
<p>I need to know about the kube_state_metrics descriptions below. What I am looking for exactly is what each particular metric does.</p>
<pre><code>Horizontal Pod Autoscaler Metrics:
kube_horizontalpodautoscaler_labels
kube_horizontalpodautoscaler_metadata_generation
kube_horizontalpodautoscaler_spec_max_replicas
kube_horizontalpodautoscaler_spec_min_replicas
kube_horizontalpodautoscaler_spec_target_metric
kube_horizontalpodautoscaler_status_condition
kube_horizontalpodautoscaler_status_current_replicas
kube_horizontalpodautoscaler_status_desired_replicas
Job Metrics:
kube_job_owner
Namespace Metrics:
kube_namespace_status_condition
Node Metrics:
kube_node_role
kube_node_status_capacity
kube_node_status_allocatable
PersistentVolumeClaim Metrics:
kube_persistentvolumeclaim_access_mode
kube_persistentvolumeclaim_status_condition
PersistentVolume Metrics:
kube_persistentvolume_capacity_bytes
Pod Metrics:
kube_pod_restart_policy
kube_pod_init_container_info
kube_pod_init_container_status_waiting_reason
kube_pod_init_container_status_terminated_reason
kube_pod_init_container_status_last_terminated_reason
kube_pod_init_container_resource_limits
ReplicaSet metrics:
kube_replicaset_labels
Service Metrics:
kube_statefulset_status_current_revision
kube_statefulset_status_update_revision
StorageClass Metrics:
kube_storageclass_info
kube_storageclass_labels
kube_storageclass_created
ValidatingWebhookConfiguration Metrics:
kube_validatingwebhookconfiguration_info
kube_validatingwebhookconfiguration_created
kube_validatingwebhookconfiguration_metadata_resource_version
Vertical Pod Autoscaler Metrics:
kube_verticalpodautoscaler_spec_resourcepolicy_container_policies_minallowed
kube_verticalpodautoscaler_spec_resourcepolicy_container_policies_maxallowed
kube_verticalpodautoscaler_status_recommendation_containerrecommendations_lowerbound
kube_verticalpodautoscaler_status_recommendation_containerrecommendations_target
kube_verticalpodautoscaler_status_recommendation_containerrecommendations_uncappedtarget
kube_verticalpodautoscaler_status_recommendation_containerrecommendations_upperbound
kube_verticalpodautoscaler_labels
kube_verticalpodautoscaler_spec_updatepolicy_updatemode
volumeattachment-metrics:
kube_volumeattachment_info
kube_volumeattachment_created
kube_volumeattachment_labels
kube_volumeattachment_spec_source_persistentvolume
kube_volumeattachment_status_attached
kube_volumeattachment_status_attachment_metadata
CertificateSigningRequest Metrics:
kube_certificatesigningrequest_created
kube_certificatesigningrequest_condition
kube_certificatesigningrequest_labels
kube_certificatesigningrequest_cert_length
</code></pre>
| Sibi Prasanth | <p>I suggest you start with <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">the documentation</a> where you can find the list of <a href="https://github.com/kubernetes/kube-state-metrics/tree/master/docs#exposed-metrics" rel="nofollow noreferrer">Exposed Metrics</a>:</p>
<ul>
<li><p>CertificateSigningRequest Metrics</p></li>
<li><p>ConfigMap Metrics</p></li>
<li><p>CronJob Metrics</p></li>
<li><p>DaemonSet Metrics</p></li>
<li><p>Deployment Metrics</p></li>
<li><p>Endpoint Metrics</p></li>
<li><p>Horizontal Pod Autoscaler Metrics</p></li>
<li><p>Ingress Metrics</p></li>
<li><p>Job Metrics</p></li>
<li><p>Lease Metrics</p></li>
<li><p>LimitRange Metrics</p></li>
<li><p>MutatingWebhookConfiguration Metrics</p></li>
<li><p>Namespace Metrics</p></li>
<li><p>NetworkPolicy Metrics</p></li>
<li><p>Node Metrics</p></li>
<li><p>PersistentVolume Metrics</p></li>
<li><p>PersistentVolumeClaim Metrics</p></li>
<li><p>Pod Disruption Budget Metrics</p></li>
<li><p>Pod Metrics</p></li>
<li><p>ReplicaSet Metrics</p></li>
<li><p>ReplicationController Metrics</p></li>
<li><p>ResourceQuota Metrics</p></li>
<li><p>Secret Metrics</p></li>
<li><p>Service Metrics</p></li>
<li><p>StatefulSet Metrics</p></li>
<li><p>StorageClass Metrics</p></li>
<li><p>ValidatingWebhookConfiguration Metrics</p></li>
<li><p>VerticalPodAutoscaler Metrics</p></li>
<li><p>VolumeAttachment Metrics </p></li>
</ul>
<p>There you will find all the necessary info and descriptions you are looking for. Also I recommend reading <a href="https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-6-kube-state-metrics-14f4e7c8710b" rel="nofollow noreferrer">this blog</a> to get a better understanding of how they work.</p>
<p>Please let me know if that helped.</p>
| Wytrzymały Wiktor |
<p>I have create an internal load balancer for my Istio Ingress controller as shown below</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: istio-control-plane
spec:
profile: default #or demo
components:
ingressGateways:
- name: istio-internal-ingressgateway
enabled: true
k8s:
serviceAnnotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
</code></pre>
<p>Due to this, the default external load balancer got removed.</p>
<p>Does it mean that Istio supports only one load balancer? Can't I have 2 or more load balancers, maybe one per Istio Gateway?</p>
| One Developer | <blockquote>
<p>Does it mean that Istio support only one Loadbalancer? Can't I have 2 or more loadbalancer, may be one per Istio Gateway?</p>
</blockquote>
<p>No, Istio supports multiple gateways; you changed the wrong component.</p>
<blockquote>
<p>Gateways are a special type of component, since multiple ingress and egress gateways can be defined. In the IstioOperator API, gateways are defined as a list type.</p>
</blockquote>
<hr />
<p>Take a look at <a href="https://istio.io/latest/docs/setup/install/istioctl/#configure-gateways" rel="nofollow noreferrer">this</a> documentation.</p>
<p>There is an example.</p>
<blockquote>
<p>A new user gateway can be created by adding a new list entry:</p>
</blockquote>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
- namespace: user-ingressgateway-ns
name: ilb-gateway
enabled: true
k8s:
resources:
requests:
cpu: 200m
serviceAnnotations:
cloud.google.com/load-balancer-type: "internal"
service:
ports:
- port: 8060
targetPort: 8060
name: tcp-citadel-grpc-tls
- port: 5353
name: tcp-dns
</code></pre>
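<p>For your Azure case specifically, a minimal sketch that keeps the default external gateway and adds your internal one as a second list entry could look like this:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  profile: default
  components:
    ingressGateways:
    - name: istio-ingressgateway            # keep the default external load balancer
      enabled: true
    - name: istio-internal-ingressgateway   # additional internal load balancer
      enabled: true
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
</code></pre>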
<p>Additional resources:</p>
<ul>
<li><a href="https://medium.com/faun/setup-multiple-ingress-gateways-in-istio-52ad0dc7f99d" rel="nofollow noreferrer">https://medium.com/faun/setup-multiple-ingress-gateways-in-istio-52ad0dc7f99d</a></li>
<li><a href="https://banzaicloud.com/blog/istio-multiple-gateways/" rel="nofollow noreferrer">https://banzaicloud.com/blog/istio-multiple-gateways/</a></li>
</ul>
| Jakub |
<p>I am trying to set configurable host paths in kubernetes, but I am facing issues. I created a config map, which has the path and then I'm trying to replace the placeholder with the config map value. Here is my configuration:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: php
namespace: app
spec:
template:
spec:
containers:
- name: php
env:
- name: PHP_FOLDER
valueFrom:
configMapKeyRef:
name: local-paths
key: CODE_PATH
volumes:
- name: src-code
hostPath:
path: PHP_FOLDER
type: Directory
</code></pre>
<p>I also tried </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: php
namespace: app
spec:
template:
spec:
containers:
- name: php
env:
- name: PHP_FOLDER
valueFrom:
configMapKeyRef:
name: local-paths
key: CODE_PATH
volumes:
- name: src-code
hostPath:
path: $(PHP_FOLDER)
type: Directory
</code></pre>
<p>I either get Error: <code>Error response from daemon:
create $(PHP_FOLDER): "$(PHP_FOLDER)" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path</code> or <code>MountVolume.SetUp failed for volume "src-code" : hostPath type check failed: PHP_FOLDER is not a directory</code></p>
| norbit | <p>You just can't use environment values in YAML files directly. What you did is actually a lot worse: ConfigMaps or Secrets are resolved at runtime, which means they won't be available until the container starts running, while YAML parsing happens before the Pod or Deployment is even created. I think you should get more familiar with this timeline before using Kubernetes in production or even for testing.</p>
<p>In this case the best practice would be to use a bash script to substitute the value in the YAML file just before deploying, and to automate that step.</p>
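<p>A minimal sketch using <code>envsubst</code> (from the gettext package), assuming you keep a template file <code>deployment.tpl.yaml</code> that contains the <code>${PHP_FOLDER}</code> placeholder instead of the final path:</p>
<pre><code># Substitute the host path into the manifest right before applying it.
export PHP_FOLDER=/home/user/code/php   # the real host path
envsubst < deployment.tpl.yaml | kubectl apply -f -
</code></pre>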
| Akin Ozer |
<p>I use minikube v1.6.2, kubectl 1.17.</p>
<p>I start minikube without Virtualbox, with:</p>
<pre><code>sudo minikube start --vm-driver none
</code></pre>
<p>Now, to stop it, I do:</p>
<pre><code>sudo minikube stop
minikube stop # I don't know which one is the good one, but I do both
</code></pre>
<p>but, after that, when I do: </p>
<pre><code>kubectl get po
</code></pre>
<p>I still get the pods listing. The only way to stop it is to actually reboot my machine.</p>
<p>Why is it happening, and how should I fix it ?</p>
| Juliatzin | <p><code>minikube stop</code> when used with <code>--vm-driver=none</code> does not do any cleanup of the pods. As mentioned <a href="https://brokenco.de/2018/09/04/minikube-vmdriver-none.html" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>When minikube starts without a hypervisor, it installs a local kubelet
service on your host machine, which is important to know for later.</p>
<p>Right now it seems that minikube start is the only command aware of
--vm-driver=none. Running minikube stop keeps resulting in errors related to docker-machine, and as luck would have it also results in
none of the Kubernetes containers terminating, nor the kubelet service
stopping.</p>
<p>Of course, if you wish to actually terminate minikube, you will need
to execute service kubelet stop and then ensure the k8s containers are
removed from the output in docker ps.</p>
</blockquote>
<p>If you wish to know the overview of none (bare-metal) driver you can find it <a href="https://minikube.sigs.k8s.io/docs/reference/drivers/none/" rel="nofollow noreferrer">here</a>.</p>
<p>Also as a workaround you can stop and remove all Docker containers that have 'k8s' in their name by executing the following command: <code>docker stop (docker ps -q --filter name=k8s)</code> and <code>docker rm (docker ps -aq --filter name=k8s)</code>.</p>
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>Let's assume that I have a configuration for istio (I use GCP):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-route
namespace: my-service
spec:
hosts:
- "service.cluster.local"
http:
- match:
- headers:
specific-id:
regex: ^(1|2|3|4|5)$
route:
- destination:
host: "service.cluster.local"
subset: s-01
- match:
- headers:
specific-id:
regex: ^(6|7|8|9|10)$
route:
- destination:
host: "service.cluster.local"
subset: s-02
</code></pre>
<p>My destination rules are based on specific header (int value).</p>
<p>Now I want to change this configuration (because I need to do resharding), and I will have something like this:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-route
namespace: my-service
spec:
hosts:
- "service.cluster.local"
http:
- match:
- headers:
specific-id:
regex: ^(1|2|3|9|10)$
route:
- destination:
host: "service.cluster.local"
subset: s-03
- match:
- headers:
specific-id:
regex: ^(4|5|6|7|8|)$
route:
- destination:
host: "service.cluster.local"
subset: s-04
- match:
- headers:
specific-id:
regex: ^(1|3|5|7|9|)$
route:
- destination:
host: "service.cluster.local"
subset: s-05
</code></pre>
<p>My questions are:</p>
<ul>
<li><p>Does istio rules allows to have intersections inside subsets (like to
have for one service regex: <code>^(1|2|3|4|5)$</code> and for another
<code>^(1|3|5|7|9|)$</code>)?</p></li>
<li><p>After deployment of the new schema with new rules, when istio will
apply it? Does istio guarantee that it won't be applied (to not
remove old rules) before all of my new instances will be ready for
traffic? </p></li>
</ul>
| mchernyakov | <blockquote>
<p>Does istio rules allows to have intersections inside subsets (like to have for one service regex: <code>^(1|2|3|4|5)$</code> and for another <code>^(1|3|5|7|9|)$</code>)?</p>
</blockquote>
<p>According to <a href="https://istio.io/docs/concepts/traffic-management/#routing-rule-precedence" rel="nofollow noreferrer">istio</a> documentation:</p>
<blockquote>
<p>Routing rules are <strong>evaluated in sequential order from top to bottom</strong>, with the first rule in the virtual service definition being given highest priority. In this case you want anything that doesn’t match the first routing rule to go to a default destination, specified in the second rule. Because of this, the second rule has no match conditions and just directs traffic to the v3 subset.</p>
<pre><code>- route:
- destination:
host: reviews
subset: v3
</code></pre>
<p>We recommend providing a default “no condition” or weight-based rule (described below) like this as the last rule in each virtual service to ensure that traffic to the virtual service always has at least one matching route.</p>
</blockquote>
<p>The routing rules are evaluated in order. So the first match will always be selected.</p>
<p><code>regex: ^(1|2|3|9|10)$</code> would end up in <code>subset: s-03</code></p>
<p><code>regex: ^(4|5|6|7|8|)$</code> would end up in <code>subset: s-04</code></p>
<p>No matches would end up in <code>subset: s-05</code> as <code>regex: ^(1|3|5|7|9|)$</code> is already covered by <code>subset: s-03</code> and <code>subset: s-04</code>.</p>
<p>Note that You could set the <code>subset: s-05</code> as default match with “no condition”.</p>
<hr>
<p>However You can use "weight" to distribute traffic between matching rules.</p>
<p>And with little bit of creativity (by splitting intersecting groups into unique subsets) We can get the following configuration:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-route
namespace: my-service
spec:
hosts:
- "service.cluster.local"
http:
- match:
- headers:
specific-id:
regex: ^(1|3|9)$
route:
- destination:
host: "service.cluster.local"
subset: s-03
weight: 50
- destination:
host: "service.cluster.local"
subset: s-05
weight: 50
- match:
- headers:
specific-id:
regex: ^(2|10)$
route:
- destination:
host: "service.cluster.local"
subset: s-03
- match:
- headers:
specific-id:
regex: ^(5|7)$
route:
- destination:
host: "service.cluster.local"
subset: s-04
weight: 50
- destination:
host: "service.cluster.local"
subset: s-05
weight: 50
- match:
- headers:
specific-id:
regex: ^(4|6|8|)$
route:
- destination:
host: "service.cluster.local"
subset: s-04
</code></pre>
<p>This way You can have:</p>
<p><code>subset: s-03</code> that matches <code>regex: ^(1|3|9)$ OR ^(2|10)$</code></p>
<p><code>subset: s-04</code> that matches <code>regex: ^(5|7)$ OR ^(4|6|8|)$</code></p>
<p><code>subset: s-05</code> that matches <code>regex: ^(1|3|9)$ OR ^(5|7)$</code></p>
<p>Where traffic for regex: </p>
<p><code>^(1|3|9)$</code> is split evenly between <code>subset: s-03</code> and <code>subset: s-05</code>.</p>
<p><code>^(5|7)$</code> is split evenly between <code>subset: s-04</code> and <code>subset: s-05</code>.</p>
<hr>
<blockquote>
<p>After deployment of the new schema with new rules, when istio will apply it? Does istio guarantee that it won't be applied (to not remove old rules) before all of my new instances will be ready for traffic?</p>
</blockquote>
<p>Istio uses envoy for routing and <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/operations/hot_restart" rel="nofollow noreferrer">Envoy</a> documentation has the following statement:</p>
<blockquote>
<p><strong>Service discovery and dynamic configuration:</strong> Envoy optionally consumes a layered set of <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/operations/dynamic_configuration#arch-overview-dynamic-config" rel="nofollow noreferrer">dynamic configuration APIs</a> for centralized management. The layers provide an Envoy with dynamic updates about: hosts within a backend cluster, the backend clusters themselves, HTTP routing, listening sockets, and cryptographic material. For a simpler deployment, backend host discovery can be <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#arch-overview-service-discovery-types-strict-dns" rel="nofollow noreferrer">done through DNS resolution</a> (or even <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#arch-overview-service-discovery-types-static" rel="nofollow noreferrer">skipped entirely</a>), with the further layers replaced by static config files.</p>
</blockquote>
<p>So as soon as an Istio object modifies the Envoy dynamic configuration, the changes are pushed to the Envoy proxies. Yes, Envoy will make sure that new instances are ready for traffic and will drain the old traffic gracefully before shutting down.</p>
<p>More info: <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/operations/runtime" rel="nofollow noreferrer">Runtime configuration</a>, <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/operations/hot_restart" rel="nofollow noreferrer">Hot restart</a></p>
<p>Hope this helps.</p>
| Piotr Malec |
<p>We are facing an error while deploying an application through a Dockerfile. While installing Kubernetes (kubectl, kubeadm) through the Dockerfile, the client version is installed, but we get an error while installing the server version, i.e. the kubeadm installation. The resulting screenshot is
attached. It would be great if anyone could help us solve the issue.
Kindly provide documentation on how to install Kubernetes via a Dockerfile.</p>
<p><a href="https://i.stack.imgur.com/CqDpp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CqDpp.png" alt="enter image description here"></a></p>
| Abhishek | <p>My guess is that you want to create a Kubernetes cluster in a Docker environment. This is possible, but your approach might not be as good as you think.</p>
<p>There is <a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer">KinD</a> for that. It is actually what you want.</p>
<p>However, if you still want to use your implementation, you should use <a href="https://github.com/jpetazzo/dind" rel="nofollow noreferrer">DinD</a> as the base image, bind a volume to the local Docker daemon, and then run kubeadm in the Dockerfile as "CMD", not as "RUN".</p>
| Akin Ozer |
<p>I have been testing a MongoDB shard installation on Kubernetes using Helm, but I found that those helm charts do not really produce a proper MongoDB shard. These helm charts can correctly create Pods with names like <code>mongos-1</code> <code>mongod-server-1</code> <code>mongod-shard-1</code>, which seems to be a correct shard cluster configuration, but the appropriate mongos and mongod server instances are not created on the corresponding Pods. They just create a normal mongod instance on each pod, and there is no connection between them. Do I need to add scripts to execute commands similar to <code>rs.addShard(config)</code>? I encountered the same problem when installing a MySQL cluster using Helm.</p>
<p>What I want to know is: is it inappropriate to install a MySQL/MongoDB cluster on Kubernetes in general scenarios? Should the database be installed independently or deployed on Kubernetes?</p>
| jiangyongbing24 | <p>Yes, you can deploy MongoDB instances on Kubernetes clusters. </p>
<p>Use <a href="https://docs.opsmanager.mongodb.com/current/tutorial/deploy-standalone/" rel="nofollow noreferrer">standalone instance</a> if you want to test and develop and <a href="https://docs.opsmanager.mongodb.com/current/tutorial/deploy-replica-set/" rel="nofollow noreferrer">replica set</a> for production like deployments.</p>
<p>Also to make things easier you can use <a href="https://github.com/mongodb/mongodb-enterprise-kubernetes#mongodb-enterprise-kubernetes-operator" rel="nofollow noreferrer">MongoDB Enterprise Kubernetes Operator</a>:</p>
<blockquote>
<p>The Operator enables easy deploys of MongoDB into Kubernetes clusters,
using our management, monitoring and backup platforms, Ops Manager and
Cloud Manager. By installing this integration, you will be able to
deploy MongoDB instances with a single simple command.</p>
</blockquote>
<p>This guide has references to the official MongoDB documentation with more necessary details regarding:</p>
<ul>
<li><p>Install Kubernetes Operator</p></li>
<li><p>Deploy Standalone</p></li>
<li><p>Deploy Replica Set</p></li>
<li><p>Deploy Sharded Cluster</p></li>
<li><p>Edit Deployment</p></li>
<li><p>Kubernetes Resource Specification</p></li>
<li><p>Troubleshooting Kubernetes Operator</p></li>
<li><p>Known Issues for Kubernetes Operator</p></li>
</ul>
<p>So basically all you need to know in this topic.</p>
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>I am using prometheus(v2.16.0) to scrape the metrics data,now I want to login prometheus pod to check the config file path:</p>
<pre><code> ~/Library/Mobile Documents/com~apple~CloudDocs/Document/k8s/work/kubernetes/cluster/addons/prometheus ⌚ 20:29:57
$ kubectl exec -it prometheus-0 -n kube-system /bin/bash
Defaulting container name to prometheus-server-configmap-reload.
Use 'kubectl describe pod/prometheus-0 -n kube-system' to see all of the containers in this pod.
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
command terminated with exit code 126
~/Library/Mobile Documents/com~apple~CloudDocs/Document/k8s/work/kubernetes/cluster/addons/prometheus ⌚ 20:30:10
$ kubectl exec -it prometheus-0 -n kube-system /bin/ash
Defaulting container name to prometheus-server-configmap-reload.
Use 'kubectl describe pod/prometheus-0 -n kube-system' to see all of the containers in this pod.
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/ash\": stat /bin/ash: no such file or directory": unknown
command terminated with exit code 126
~/Library/Mobile Documents/com~apple~CloudDocs/Document/k8s/work/kubernetes/cluster/addons/prometheus ⌚ 20:31:30
$ kubectl exec -it prometheus-0 -n kube-system /bash
Defaulting container name to prometheus-server-configmap-reload.
Use 'kubectl describe pod/prometheus-0 -n kube-system' to see all of the containers in this pod.
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bash\": stat /bash: no such file or directory": unknown
command terminated with exit code 126
</code></pre>
<p>obviously I could not login into the prometheus pod, and I am try another way to login into using Web UI:</p>
<p><a href="https://i.stack.imgur.com/pY35H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pY35H.png" alt="enter image description here"></a></p>
<p>is it possible to login pod like this? Why I could not login prometheus pod? </p>
| Dolphin | <p>As it says in the output: </p>
<pre><code>Defaulting container name to prometheus-server-configmap-reload.
</code></pre>
<p>This means that in the pod there are multiple containers and it automatically picks the <code>prometheus-server-configmap-reload</code>. Which is probably not the container we want to access.</p>
<hr>
<p>So the correct way to access prometheus bash command line:</p>
<ol>
<li>List the containers in the pod:</li>
</ol>
<pre><code>kubectl get pods prometheus-0 -n kube-system -o jsonpath='{.spec.containers[*].name}'
</code></pre>
<ol start="2">
<li>Exec into the right container from the list above using:</li>
</ol>
<pre><code>kubectl exec --namespace <namespace> -it <pod_name> -c <container> /bin/ash
</code></pre>
<p>In some cases there needs to be double slash before the command as well:</p>
<pre><code>kubectl exec -it -n kube-system prometheus-0 -c prometheus //bin/bash
</code></pre>
<p>You can also try <code>/bin/sh</code> or <code>//bin/sh</code> if bash is not available in the container image.</p>
<p>Hope it helps.</p>
| Piotr Malec |
<p>Recently I was adding Istio to my kubernetes cluster. When enabling istio to one of the namespaces where MongoDB statefulset were deployed, MongoDB was failed to start up.</p>
<p>The error message was "keyfile permissions too open"</p>
<p>When I analyzed whats going on, keyfile is coming from the /etc/secrets-volume which is mounted to the statefulset from kubernetes secret.</p>
<p>The file permissions was 440 instead of 400. Because of this MongoDB started to complain that "permissions too open" and the pod went to Crashbackloopoff.</p>
<p>When I disable Istio injection in that namespace, MongoDB is starting fine.</p>
<p>What's going on here? Does Istio have anything to do with the container filesystem, especially default permissions?</p>
| karthikeayan | <p>The Istio sidecar injection is not always meant for all kinds of containers, as mentioned in the Istio documentation <a href="https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#more-control-adding-exceptions" rel="nofollow noreferrer">guide</a>. Such containers should be excluded from Istio sidecar injection.</p>
<p>In case of Databases that are deployed using <code>StatefulSets</code> some of the containers might be temporary or used as operators which can end up in crash loop or other problematic states.</p>
<p>There is also an alternative approach: do not inject the databases at all and just add them as external services with <code>ServiceEntry</code> objects, as sketched below. There is an entire <a href="https://istio.io/latest/blog/2018/egress-mongo/" rel="nofollow noreferrer">blog post</a> in the Istio documentation on how to do that specifically with MongoDB. The guide is a little outdated, so be sure to refer to the current <a href="https://istio.io/latest/docs/reference/config/networking/service-entry/" rel="nofollow noreferrer">documentation page</a> for <code>ServiceEntry</code>, which also has examples of using an external MongoDB.</p>
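<p>A minimal sketch of such a <code>ServiceEntry</code> for an external MongoDB (the hostname is a placeholder):</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-mongodb
spec:
  hosts:
  - mongodb.example.com        # placeholder for your external MongoDB host
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 27017
    name: mongo
    protocol: MONGO
</code></pre>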
<p>Hope it helps.</p>
| Piotr Malec |
<p>I have noticed that setting values through <code>istioctl manifest apply</code> will affect other Istio resources. For example, when I set <code>--set values.tracing.enabled=true</code>, Kiali, which was previously installed in the cluster, vanished.</p>
<p>And what is the right way to set values (options) like <code>values.pilot.traceSampling</code>?</p>
<p>Thanks</p>
| RMNull | <p><code>istioctl install</code> was introduced in Istio 1.6; however, the <code>--set</code> options work the same as in <code>istioctl manifest apply</code>, which it replaces. I suspect it was made for better
clarity and accessibility, as <code>istioctl manifest</code> has lots of other uses, like <code>istioctl manifest generate</code>, which allows you to create the manifest YAML and save it to a file.</p>
<p>According to istio <a href="https://istio.io/docs/setup/install/istioctl/#generate-a-manifest-before-installation" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>While istioctl install will automatically detect environment specific settings from your Kubernetes context, manifest generate cannot as it runs offline, which may lead to unexpected results. In particular, you must ensure that you follow these steps if your Kubernetes environment does not support third party service account tokens.</p>
</blockquote>
<p>As for Kiali You need to install it separately like in this <a href="https://istio.io/docs/tasks/observability/gateways/" rel="nofollow noreferrer">guide</a>.</p>
<p>To set values like <code>values.pilot.traceSampling</code> I suggest using the Istio <a href="https://istio.io/docs/setup/install/standalone-operator/" rel="nofollow noreferrer">Operator</a>, for example as sketched below.</p>
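<p>A minimal sketch of an <code>IstioOperator</code> resource that sets it (the sampling value here is just an example):</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      traceSampling: 1.0   # percentage of requests to sample for tracing
</code></pre>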
<p>Hope it helps.</p>
| Piotr Malec |
<p>I'm setting a bare-metal kubernetes cluster for a web application in a google cloud instance, I am connecting to microservices through an ingress controller. How do I access the ingress controller from all incoming hosts?</p>
<p>There is a pod running angular web application and another pod running a node api microservice. Angular Web Application has been exposed globally. When accessing the microservice externally and passing the header with the hostname I was able to get the expected response. On removing the host in the ingress yaml I am not able to access the ingress.</p>
<pre><code>kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: nginx
creationTimestamp: "2019-08-12T07:41:37Z"
generation: 7
name: test
namespace: default
resourceVersion: "546400"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/test
uid: 374836d2-34c3-4053-b0e3-9fe3f63167cc
spec:
rules:
- host: bar.com
http:
paths:
- backend:
serviceName: login-service
servicePort: 3000
path: /login-service
- backend:
serviceName: organization-service
servicePort: 3000
path: /organization-service
status:
loadBalancer:
ingress:
- ip: 10.128.0.16
- ip: 203.0.113.2
</code></pre>
<p>I except the ingress to be accessed from all the hosts other than the specified host(bar.com) in ingress.</p>
<p>Any other way to access the API microservice from the outside cluster(globally)?</p>
| Jagan Karan | <p>In order to access the API service from outside the cluster (globally):</p>
<p>Create an nginx proxy server and expose the port of the nginx proxy server. From the web application, send the request to the proxy server through the external IP and the exposed port. The proxy server will pass the request to the respective API microservice and return the expected response.</p>
<p>Edit the nginx.conf file.</p>
<pre><code>location /<your_requested_URL> {
proxy_pass http://service_name:port;
}
</code></pre>
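<p>For example, a fuller sketch of such a proxy configuration for the two services from the question (service names, paths and ports taken from the ingress above) might be:</p>
<pre><code>server {
    listen 80;

    # the trailing slash in proxy_pass strips the /login-service prefix,
    # similar to PathPrefixStrip in the ingress
    location /login-service/ {
        proxy_pass http://login-service:3000/;
    }

    location /organization-service/ {
        proxy_pass http://organization-service:3000/;
    }
}
</code></pre>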
| Jagan Karan |
<p>I've added some scaling policies to my <strong>HorizontalPodAutoscaler</strong> but they are not being applied. The <em>scaleUp</em> and <em>scaleDown</em> behaviours are being ignored. I need a way to stop pods scaling up and down every few minutes in response to small CPU spikes. Ideally the HPA would scale up quickly in response to more traffic but scale down slowly after about 30 minutes of reduced traffic.</p>
<p>I'm running this on an AWS EKS cluster and I have setup the policies according to <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior</a>.</p>
<p>Could this be a limitation of EKS or of my K8s version, which is 1.14? I have run <code>kubectl api-versions</code> and my cluster does support <em>autoscaling/v2beta2</em>.</p>
<p>My Helm spec is:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "app.fullname" . }}
labels:
app: {{ template "app.name" . }}
chart: {{ template "app.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
scaleTargetRef:
apiVersion: apps/v1beta2
kind: Deployment
name: "{{ template "app.fullname" . }}-server"
minReplicas: {{ .Values.hpa.minReplicas }}
maxReplicas: {{ .Values.hpa.maxReplicas }}
metrics:
- type: Resource
resource:
name: cpu
target:
type: AverageValue
averageValue: 200m
behavior:
scaleUp:
stabilizationWindowSeconds: 300
policies:
- type: Pods
value: 1
periodSeconds: 300
scaleDown:
stabilizationWindowSeconds: 1200
policies:
- type: Pods
value: 1
periodSeconds: 300
</code></pre>
| Mikhail Janowski | <p>As already discussed in the comments, even with <code>autoscaling/v2beta2</code> enabled this function will not work on version 1.14.</p>
<blockquote>
<p>Starting from <strong>v1.18</strong> the v2beta2 API allows scaling behavior to be
configured through the HPA behavior field.</p>
</blockquote>
<p>The easiest way out of it would be to upgrade to 1.18.</p>
| Wytrzymały Wiktor |
<p>I’m a newbie to istio and k8s, recently I’ve been exploring istio outlier detection and I’m kinda confused for couple of things.Please correct me if I’m wrong:</p>
<p>1. Outlier detection is based on pods and readiness probes are based on containers? But actually both will remove unhealthy pods from the “lb” (svc or subnets’ connection pool).</p>
<p>2. The best scenario I can think of is that we might configure our readiness probe with, let's say, a 30s interval, but outlier detection will take the unhealthy pod out of the pool as soon as it returns 5xx responses.</p>
<p>3. Outlier detection will add the pod back after BaseEjectionTime. I presume the case is like this: one pod gets picked out of the pool, then the liveness probe shows it as unhealthy and restarts the container. After all this the pod is healthy again and added back to the pool?</p>
<p>4. Ideally, if the readiness probe ran every second with no false alarms, would that work the same as outlier detection? Or is Istio more efficient because the probes need to talk to the apiserver and there might be network latency, scheduler issues, and so on?</p>
<p>5. Just curious how the two work together in production; any best practices?</p>
<p>Any comments/thoughts are appreciated , thank you everyone !</p>
| Ray Gao | <p>The best explanation on how the istio outlier detection works is covered by this <a href="https://banzaicloud.com/blog/istio-circuit-breaking/" rel="nofollow noreferrer">article</a>. I recommend reading it.</p>
<p>The health checking probes allow to detect when pod is ready or responds according to specific configuration. Outlier detection on the other hand controls the number of errors before a service is ejected from the connection pool.</p>
<p>When the k8s health checks are failed the pod is restarted. In case of outlier detection, the endpoint that triggered outlier detection is suspended on envoy level and is given time to recover.</p>
<p>There can be a scenario where outlier detection triggers without any changes in k8s heath checks.</p>
<p>Also note that the interval and base ejection times of istio outlier detection are dynamic and can be longer each time they are triggered and not very precise.</p>
| Piotr Malec |
<p>While trying to configure Replicated Control Planes as described in this guide:
<a href="https://istio.io/latest/docs/setup/install/multicluster/gateways/" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/install/multicluster/gateways/</a></p>
<p>After doing all the configuration, the "sleep" application is unable to communicate with the "httpbin" application as described in the documentation. The result of the test is always the same 503 Service Unavailable error:</p>
<pre><code>kubectl exec --context=kontiki $SLEEP_POD -n multi-test -c sleep -- curl -k -I httpbin.multi-test-bar.global:8000/headers
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 91 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 503 Service Unavailable
content-length: 91
content-type: text/plain
date: Wed, 07 Oct 2020 13:55:19 GMT
server: envoy
</code></pre>
<p>The relevant logs found are:</p>
<ol>
<li>istio-proxy container in sleep pod on origin cluster</li>
</ol>
<pre><code>[2020-10-07T13:58:21.775Z] "HEAD /headers HTTP/1.1" 503 UF,URX "-" "-" 0 0 137 - "-" "curl/7.69.1" "5ccf05b7-d0e3-9e38-a581-8c0bdabc98b3" "httpbin.multi-t0" "10.14.10.99:31383" outbound|8000||httpbin.multi-test-bar.global - 240.0.0.2:8000 172.17.141.20:59704 - default
</code></pre>
<ol start="2">
<li>ingress pods on destination cluster</li>
</ol>
<pre><code>[2020-10-07T13:58:21.900Z] "- - -" 0 NR "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - 172.17.184.60:15443 172.31.4.248:62395 - -
[2020-10-07T13:58:21.814Z] "- - -" 0 NR "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - 172.17.133.59:15443 172.31.4.209:38326 - -
</code></pre>
<p>Istio 1.7.3 is deployed with Istio-operator on vanilla k8s clusters with version 1.17. Certificates are configured as described in the referenced guide, ServiceEntry created for httpbin is the following:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: httpbin-multi-test-bar
spec:
hosts:
- httpbin.multi-test-bar.global
location: MESH_INTERNAL
ports:
- name: http1
number: 8000
protocol: http
resolution: DNS
addresses:
- 240.0.0.2
endpoints:
- address: cluster-2
network: external
ports:
http1: 31383 # nodePort tls exposed port via our proxy
- address: istio-egressgateway.istio-system.svc.cluster.local
ports:
http1: 15443
</code></pre>
<p>This error is also referenced in this <a href="https://github.com/istio/istio/issues/25077" rel="nofollow noreferrer">#Issue</a>, where the very same thing was happening for other people, and the way they fixed it was moving to a previous Istio version like 1.6.8.
Edit: I can confirm that my configuration works with version 1.6.8, but it fails with 1.7.3.</p>
<p>Can you please help me understand what's happening or how could it be fixed?</p>
| carrotcakeslayer | <p>There are more issues reported with that specific issue:</p>
<ul>
<li><a href="https://github.com/istio/istio/issues/26990" rel="nofollow noreferrer">https://github.com/istio/istio/issues/26990</a></li>
<li><a href="https://discuss.istio.io/t/getting-503-for-multi-cluster-replicated-control-plane/7336" rel="nofollow noreferrer">https://discuss.istio.io/t/getting-503-for-multi-cluster-replicated-control-plane/7336</a></li>
</ul>
<p>and there is no answer so far about how to fix it.</p>
<p>I would suggest waiting for the answer from the Istio devs in the <a href="https://github.com/istio/istio/issues/25077" rel="nofollow noreferrer">issue</a> you mentioned and using 1.6.8 until that is solved.</p>
<hr />
<p>The issue itself might be related to <a href="https://preliminary.istio.io/latest/docs/ops/deployment/deployment-models/#dns-with-multiple-clusters" rel="nofollow noreferrer">dns changes</a> in 1.8, but these are just my thoughts.</p>
<blockquote>
<p>Starting with Istio 1.8, the Istio agent on the sidecar will ship with a caching DNS proxy, programmed dynamically by Istiod.</p>
</blockquote>
<p>There are more informations about dns changes:</p>
<ul>
<li><a href="https://preliminary.istio.io/latest/blog/2020/dns-proxy/" rel="nofollow noreferrer">https://preliminary.istio.io/latest/blog/2020/dns-proxy/</a></li>
<li><a href="https://preliminary.istio.io/latest/docs/ops/deployment/deployment-models/#dns-with-multiple-clusters" rel="nofollow noreferrer">https://preliminary.istio.io/latest/docs/ops/deployment/deployment-models/#dns-with-multiple-clusters</a></li>
<li><a href="https://github.com/istio-ecosystem/istio-coredns-plugin" rel="nofollow noreferrer">https://github.com/istio-ecosystem/istio-coredns-plugin</a></li>
</ul>
<p>And there are preliminary <a href="https://preliminary.istio.io/latest/docs/setup/install/multicluster/" rel="nofollow noreferrer">docs</a> for 1.8 multi cluster installation.</p>
| Jakub |
<p>I have ALB on AWS running on EKS cluster. I'm trying to apply change in Ingress resource on routing so it points to different backend. </p>
<p>The only difference in Ingresses below is spec for backend.</p>
<p>Why is update not working? How to update routing on ALB?</p>
<p>Original ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
labels:
app: api
type: ingress
spec:
backend:
serviceName: api-service
servicePort: 80
</code></pre>
<p><em>Update ingress:</em></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
labels:
app: api
type: ingress
spec:
backend:
serviceName: offline-service
servicePort: 9001
</code></pre>
<p>Controller:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: alb-ingress-controller
name: alb-ingress-controller
namespace: kube-system
spec:
selector:
matchLabels:
app.kubernetes.io/name: alb-ingress-controller
template:
metadata:
labels:
app.kubernetes.io/name: alb-ingress-controller
spec:
containers:
- name: alb-ingress-controller
args:
- --ingress-class=alb
- --cluster-name=cluster-22
env:
- name: AWS_ACCESS_KEY_ID
value: key
- name: AWS_SECRET_ACCESS_KEY
value: key
image: docker.io/amazon/aws-alb-ingress-controller:v1.1.3
serviceAccountName: alb-ingress-controller
</code></pre>
| Andrija | <p>Posting info from the comments as an answer (community wiki):</p>
<blockquote>
<p>What often happens is that one of the services defined in the ingress
is unreachable, at which point the ALB-ingress controller decides that
it will not update any of the rules in the AWS ALB. </p>
<p>You have to deploy an offline-service.</p>
</blockquote>
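<p>For reference, a minimal sketch of such an <code>offline-service</code> (the name and port are taken from the updated Ingress in the question; the selector label and the pods behind it are hypothetical placeholders that must match whatever actually serves the offline page):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: offline-service
  namespace: default
spec:
  selector:
    app: offline   # hypothetical label; must match the pods serving the offline page
  ports:
    - port: 9001
      targetPort: 9001
</code></pre>
<p>Once the service resolves to at least one ready endpoint, the ALB ingress controller should be able to reconcile the listener rules again.</p>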
| Wytrzymały Wiktor |
<p>I am using Kubernetes on Minikube.
I created a user context for a new user with a role and a rolebinding, using<br>
<code>kubectl config set-context user1-context --cluster=minikibe --namespace=default --user=user1</code><br>
When I try to see the running pods, the target machine refuses the connection. But in the minikube context I can see the pods, although both contexts are supposed to point at the same cluster.</p>
<pre><code>>kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
docker-desktop docker-desktop docker-desktop
minikube minikube minikube
* user1-context minikibe user1
>kubectl get pods
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
>kubectl config use-context minikube
Switched to context "minikube".
>kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-test 1/1 Running 0 44m
</code></pre>
<p>Config for rolebinding:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
subjects:
- kind: User
name: user1
apiGroup: rbac.authorization.k8s.io
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-reader
</code></pre>
<p>Role config</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get","watch","list"]
</code></pre>
<p>I also verified the creation of the roles and rolebindings:</p>
<pre><code>>kubectl get roles
NAME CREATED AT
pod-reader 2020-10-20T05:59:43Z
>kubectl get rolebindings
NAME ROLE AGE
read-pods Role/pod-reader 65m
</code></pre>
| Anwesh Budhathoki | <h2>Issue</h2>
<p>There was a typo in a <code>kubectl config set-context</code> command.</p>
<h2>Solution</h2>
<p>Use this command, with the <code>--cluster=minikibe</code> typo fixed to <code>--cluster=minikube</code>:</p>
<pre><code>kubectl config set-context user1-context --cluster=minikube --namespace=default --user=user1
</code></pre>
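<p>As an optional sanity check, you can also verify the RBAC setup from the admin (minikube) context before switching, for example:</p>
<pre><code>kubectl auth can-i list pods --as=user1 --namespace=default
kubectl config use-context user1-context
kubectl get pods
</code></pre>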
| Jakub |
<p>I used a volume with this configuration in Kubernetes:</p>
<pre><code>emptyDir:
medium: Memory
</code></pre>
<p>How do I dynamically/programmatically figure out the host path on the nodes?</p>
| SOWMITHRA KUMAR G M | <p>Basing on the official <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>By default, emptyDir volumes are stored on whatever medium is backing
the node - that might be disk or SSD or network storage, depending on
your environment. However, you can set the emptyDir.medium field to
"Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
for you instead. While tmpfs is very fast, be aware that unlike disks,
tmpfs is cleared on node reboot and any files you write will count
against your Container’s memory limit.</p>
</blockquote>
<p>If I understand you correctly (and by <code>host path</code> you don't mean <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a>) you can simply execute <code># df -h</code>, which will display how much disk space is available in a human-readable form, also showing:</p>
<ul>
<li>Filesystem (in your case <code>tmpfs</code>)</li>
<li>Size</li>
<li>Used</li>
<li>Available</li>
<li>Use%</li>
<li>Mounted on</li>
</ul>
<p>It's worth noting that the default size of a RAM-based <code>emptyDir</code> is half the RAM of the node it runs on.</p>
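<p>If you want the tmpfs to be smaller than that default, you can cap it explicitly with <code>sizeLimit</code>. A minimal sketch (the pod name, image, mount path and the 128Mi limit are only illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache
      mountPath: /cache   # run `df -h /cache` inside the container to inspect it
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
      sizeLimit: 128Mi
</code></pre>
<p>Since it is tmpfs, <code>df -h</code> inside the container will show it as a <code>tmpfs</code> filesystem rather than a path on the node's disk.</p>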
<p>Please let me know if that helps. </p>
| Wytrzymały Wiktor |
<p>I'm running a rails service inside a minikube cluster on my local machine. I like to throw breakpoints into my code in order to interact with the process. This doesn't work while inside Minikube. I can attach to the pod running my rails container and hit the <code>binding.pry</code> statement in my code, but instead of getting an interactive breakpoint, I simply see pry attempt to create a breakpoint and then ultimately move right past it. Anyone figure out how to get this working? I'm guessing the deployed pod itself isn't interactive. </p>
| E.E.33 | <p>You are trying to get interactive access to your application.</p>
<p>Your problem is caused by the fact that Kubernetes does not allocate a TTY
and stdin buffer for the container by default.</p>
<p>I have replicated your issue and found a solution.</p>
<p>To get an interactive breakpoint you have to add 2 fields to your Deployment YAML to indicate that you need an interactive session:</p>
<pre><code> stdin: true
tty: true
</code></pre>
<p>Here is an example of a deployment:</p>
<pre><code> apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: test
name: test
spec:
selector:
matchLabels:
run: test
template:
metadata:
labels:
run: test
spec:
containers:
- image: test
name: test
stdin: true
tty: true
</code></pre>
<p>You can find more info about it <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#container-v1-core" rel="noreferrer">here</a>.</p>
<p>Remember to use the <code>-it</code> option when attaching to the pod, as shown below:</p>
<pre><code> kubectl attach -it <pod_name>
</code></pre>
<p>Let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>Newbie to kubernetes so might be a silly question, bear with me -</p>
<p>I created a cluster with one node, applied a sample deployment like below </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: coffedep
spec:
selector:
matchLabels:
app: coffedepapp
template:
metadata:
labels:
app: coffedepapp
spec:
containers:
- name: coffepod
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80'
</code></pre>
<p>Now I want to ping/connect to an external website/entity from this pod, so I was expecting my ping to fail, since I thought one needs services like NodePort/LoadBalancer to connect to the outside world. But surprisingly, the ping passed? I know I am horribly wrong somewhere, please correct my understanding here.</p>
<p><strong>Pod's interfaces and trace route -</strong> </p>
<pre><code>/ # traceroute google.com
traceroute to google.com (172.217.194.138), 30 hops max, 46 byte packets
1 * * *
2 10.244.0.1 (10.244.0.1) 0.013 ms 0.006 ms 0.004 ms
3 178.128.80.254 (178.128.80.254) 1.904 ms 178.128.80.253 (178.128.80.253) 0.720 ms 178.128.80.254 (178.128.80.254) 5.185 ms
4 138.197.250.254 (138.197.250.254) 0.995 ms 138.197.250.248 (138.197.250.248) 0.634 ms 138.197.250.252 (138.197.250.252) 0.523 ms
5 138.197.245.12 (138.197.245.12) 5.295 ms 138.197.245.14 (138.197.245.14) 0.956 ms 138.197.245.0 (138.197.245.0) 1.160 ms
6 103.253.144.255 (103.253.144.255) 1.396 ms 0.857 ms 0.763 ms
7 108.170.254.226 (108.170.254.226) 1.391 ms 74.125.242.35 (74.125.242.35) 0.963 ms 108.170.240.164 (108.170.240.164) 1.679 ms
8 66.249.95.248 (66.249.95.248) 2.136 ms 72.14.235.152 (72.14.235.152) 1.727 ms 66.249.95.248 (66.249.95.248) 1.821 ms
9 209.85.243.180 (209.85.243.180) 2.813 ms 108.170.230.73 (108.170.230.73) 1.831 ms 74.125.252.254 (74.125.252.254) 2.293 ms
10 209.85.246.17 (209.85.246.17) 2.758 ms 209.85.245.135 (209.85.245.135) 2.448 ms 66.249.95.23 (66.249.95.23) 4.538 ms
11^Z[3]+ Stopped traceroute google.com
/ #
/ #
/ #
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether ee:97:21:eb:98:bc brd ff:ff:ff:ff:ff:ff
inet 10.244.0.183/32 brd 10.244.0.183 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::ec97:21ff:feeb:98bc/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<p><strong>Node's interfaces -</strong></p>
<pre><code>root@pool-3mqi2tbi6-b3dc:~# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 3a:c1:6f:8d:0f:45 brd ff:ff:ff:ff:ff:ff
inet 178.128.82.251/20 brd 178.128.95.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.15.0.5/16 brd 10.15.255.255 scope global eth0:1
valid_lft forever preferred_lft forever
inet6 fe80::38c1:6fff:fe8d:f45/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 06:88:c4:23:4b:cc brd ff:ff:ff:ff:ff:ff
inet 10.130.227.173/16 brd 10.130.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::488:c4ff:fe23:4bcc/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:61:08:39:8a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9a:3c:d3:35:b3:35 brd ff:ff:ff:ff:ff:ff
inet6 fe80::983c:d3ff:fe35:b335/64 scope link
valid_lft forever preferred_lft forever
6: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:13:c5:6e:52:bf brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/32 scope link cilium_host
valid_lft forever preferred_lft forever
inet6 fe80::5013:c5ff:fe6e:52bf/64 scope link
valid_lft forever preferred_lft forever
7: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 4a:ab:3b:3b:0d:b5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::48ab:3bff:fe3b:db5/64 scope link
valid_lft forever preferred_lft forever
9: cilium_health@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b6:2f:45:83:e0:44 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::b42f:45ff:fe83:e044/64 scope link
valid_lft forever preferred_lft forever
11: lxc1408c930131e@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 8e:45:4d:7b:94:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::8c45:4dff:fe7b:94e5/64 scope link
valid_lft forever preferred_lft forever
13: lxc0cef46c3977c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 16:eb:36:8b:fb:45 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::14eb:36ff:fe8b:fb45/64 scope link
valid_lft forever preferred_lft forever
15: lxca02c5de95d1c@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 62:9d:0c:34:0f:11 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::609d:cff:fe34:f11/64 scope link
valid_lft forever preferred_lft forever
17: lxc32eddb70fa07@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether da:1a:08:95:fb:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::d81a:8ff:fe95:fbf2/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
| pa1 | <p>You don't need services, NodePorts, or LoadBalancers to connect to the outside world. If your network policies allow pods to talk to the outside, you can.</p>
<p>You need services to access your pods from within your cluster. You need LoadBalancers or NodePorts to connect to your cluster from outside.</p>
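<p>If you ever do want to block that egress traffic, that is what a NetworkPolicy is for (your CNI, Cilium here, can enforce it). A minimal sketch that would limit the question's pods to DNS-only egress; the label is taken from the question's Deployment, while allowing only UDP 53 is an assumption about your DNS setup:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: coffedepapp      # matches the pods from the question's Deployment
  policyTypes:
    - Egress
  egress:
    - ports:                # only DNS is allowed out; everything else is dropped
        - protocol: UDP
          port: 53
</code></pre>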
| Burak Serdar |
<p>How do I list Kubernetes pods based on a particular exitCode value? For example, I need to list all the pods which have exitCode value = 255.</p>
<p>I have tried the command below, but it gives all pods along with all exit codes.</p>
<pre><code>kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{..exitCode}{'\n'}{end}"
</code></pre>
| Ankit | <p>If I understand you correctly, you may want to check out the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">Field Selectors</a>.</p>
<blockquote>
<p>Field selectors let you select Kubernetes resources based on the value
of one or more resource fields. Here are some example field selector
queries:</p>
<ul>
<li>metadata.name=my-service </li>
<li>metadata.namespace!=default</li>
<li>status.phase=Pending </li>
</ul>
<p>This kubectl command selects all Pods for which
the value of the status.phase field is Running:</p>
<p><code>kubectl get pods --field-selector status.phase=Running</code></p>
</blockquote>
<p>Here is some more <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get" rel="nofollow noreferrer">documentation</a> regarding this.</p>
<blockquote>
<p>Selector (field query) to filter on, supports '=',
'==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The
server only supports a limited number of field queries per type.</p>
</blockquote>
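<p>Note that, as far as I know, container exit codes are not among the pod fields the API server accepts in a field selector, so for that particular case you may need to filter client-side. A possible sketch using <code>jq</code> (assuming it is installed; adjust <code>.state</code> vs <code>.lastState</code> to your case):</p>
<pre><code>kubectl get pods -o json \
  | jq -r '.items[]
      | select(any(.status.containerStatuses[]?;
                   .state.terminated.exitCode == 255
                   or .lastState.terminated.exitCode == 255))
      | .metadata.name'
</code></pre>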
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>Kubernetes has the concept of ephemeral-storage which can be applied by the deployment to a container like this:</p>
<pre><code>limits:
cpu: 500m
memory: 512Mi
ephemeral-storage: 100Mi
requests:
cpu: 50m
memory: 256Mi
ephemeral-storage: 50Mi
</code></pre>
<p>Now, when applying this to a k8s 1.18 cluster (IBM Cloud managed k8s), I cannot see any changes when I look at a running container:</p>
<pre><code>kubectl exec -it <pod> -n <namespace> -c nginx -- /bin/df
</code></pre>
<p>I would expect to see changes there. Am I wrong?</p>
| Matthias Rich | <p>You can see the allocated resources by using <code>kubectl describe node <insert-node-name-here></code> on the node that is running the pod of the deployment.</p>
<p>You should see something like this:</p>
<pre><code>Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1130m (59%) 3750m (197%)
memory 4836Mi (90%) 7988Mi (148%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-azure-disk 0 0
</code></pre>
<p>Since you requested 50Mi of ephemeral-storage, it should show up under <code>Requests</code>.
When your pod tries to use more than the limit (100Mi), the pod will be evicted and restarted.</p>
<p>On the node side, any pod that uses more than its requested resources is subject to eviction when the node runs out of resources. In other words, Kubernetes never provides any guarantees of availability of resources beyond a Pod's requests.</p>
<p>In kubernetes documentation you can find more details how Ephemeral storage consumption management works <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-emphemeralstorage-consumption" rel="noreferrer">here</a>.</p>
<p>Note that using <code>kubectl exec</code> with the <code>df</code> command might not show the actual storage use.</p>
<p>According to kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#configurations-for-local-ephemeral-storage" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>The kubelet can measure how much local storage it is using. It does this provided that:</p>
<ul>
<li>the <code>LocalStorageCapacityIsolation</code> <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="noreferrer">feature gate</a> is enabled (the feature is on by default), and</li>
<li>you have set up the node using one of the supported configurations for local ephemeral storage.</li>
</ul>
<p>If you have a different configuration, then the kubelet does not apply resource limits for ephemeral local storage.</p>
<p><em><strong>Note:</strong> The kubelet tracks <code>tmpfs</code> emptyDir volumes as container memory use, rather than as local ephemeral storage.</em></p>
</blockquote>
| Piotr Malec |
<p>I am trying to enable communication between Kubernetes (K8s) pods and a Docker container but I cannot find how to do so on Windows. Help would be greatly appreciated.</p>
<p>Scenario description:</p>
<p>The n K8s pods consist of one (and only one) container each: a Go microservice, accessed by Postman at this time, which is itself supposed to access a database.</p>
<p>To enable load-balanced access to the pods as one, a Kubernetes service is put on top, using a NodePort. So far everything is working.</p>
<p>But the K8s-Pod/Go-microservice <> Docker-container/db parts do not see each other. The Go microservice obviously says the "db" cannot be resolved. The db container is itself in a user-defined docker network (called "mynw"); its type is a bridge.</p>
<p>Would anyone know how to achieve this in the simplest way? (I mean without third-party tools, heavy proxy configuration, etc.)</p>
<p>Thanks!</p>
| willemavjc | <p>You can create a headless service in k8s, with an endpoint pointing to your database container. Then you have to direct your Go service to use that headless service as the db connection.</p>
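<p>A minimal sketch of that idea, assuming the database is reachable from the cluster nodes at some IP; the IP, the port (5432) and the service name <code>db</code> below are hypothetical placeholders:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None        # headless, no selector: endpoints are managed manually
  ports:
  - port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: db               # must match the Service name
subsets:
- addresses:
  - ip: 192.168.65.2     # hypothetical address where the Docker container is reachable
  ports:
  - port: 5432
</code></pre>
<p>The Go microservice can then use <code>db:5432</code> (or <code>db.default.svc.cluster.local</code>) as its connection string instead of the Docker network alias.</p>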
| Burak Serdar |
<p>Spark needs lots of resources to do its job. Kubernetes is a great environment for resource management. How many Spark pods do you run per node to get the best resource utilization? </p>
<p>I am trying to run a Spark cluster on a Kubernetes cluster.</p>
| hnajafi | <p>It depends on many factors. We need to know how many resources you have and how much is being consumed by the pods. To do so you need to <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">set up a Metrics Server</a>.</p>
<p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/" rel="nofollow noreferrer">Metrics Server</a> is a cluster-wide aggregator of resource usage data. </p>
<p>Next step is to setup HPA.</p>
<p>The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization or other custom metrics. HPA normally fetches metrics from a series of aggregated APIs:</p>
<ul>
<li>metrics.k8s.io</li>
<li>custom.metrics.k8s.io</li>
<li>external.metrics.k8s.io</li>
</ul>
<p>How to make it work?</p>
<p>HPA is supported by kubectl by default: </p>
<ul>
<li><code>kubectl create</code> - creates a new autoscaler</li>
<li><code>kubectl get hpa</code> - lists your autoscalers</li>
<li><code>kubectl describe hpa</code> - gets a detailed description of autoscalers</li>
<li><code>kubectl delete</code> - deletes an autoscaler</li>
</ul>
<p>Example:
<code>kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80</code> creates an autoscaler for the replica set foo, with target CPU utilization set to 80% and the number of replicas between 2 and 5. You can and should adjust all values to your needs.</p>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#autoscale" rel="nofollow noreferrer">Here</a> is a detailed documentation of how to use kubectl autoscale command.</p>
<p>Please let me know if you find that useful.</p>
| Wytrzymały Wiktor |
<p>I have used the following configuration to set up Istio:</p>
<pre><code>cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: istio-control-plane
spec:
# Use the default profile as the base
# More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
profile: default
# Enable the addons that we will want to use
addonComponents:
grafana:
enabled: true
prometheus:
enabled: true
tracing:
enabled: true
kiali:
enabled: true
values:
global:
# Ensure that the Istio pods are only scheduled to run on Linux nodes
defaultNodeSelector:
beta.kubernetes.io/os: linux
kiali:
dashboard:
auth:
strategy: anonymous
components:
egressGateways:
- name: istio-egressgateway
enabled: true
EOF
</code></pre>
<p>I could see that the istio services</p>
<p><a href="https://i.stack.imgur.com/2WTvd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2WTvd.png" alt="enter image description here" /></a></p>
<blockquote>
<p>kubectl get svc -n istio-system</p>
</blockquote>
<p>I have deployed the sleep app</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/sleep/sleep.yaml
-n akv2k8s-test
</code></pre>
<p>and have deployed the ServiceEntry</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: httpbin-ext
namespace: akv2k8s-test
spec:
hosts:
- httpbin.org
ports:
- number: 80
name: http
protocol: HTTP
resolution: DNS
location: MESH_EXTERNAL
EOF
</code></pre>
<p>and tried accessing the external URL</p>
<pre><code>export SOURCE_POD=$(kubectl get -n akv2k8s-test pod -l app=sleep -o jsonpath='{.items..metadata.name}')
kubectl exec "$SOURCE_POD" -n akv2k8s-test -c sleep -- curl -sI http://httpbin.org/headers | grep "HTTP/";
</code></pre>
<p>However, I could not see any logs reported on the proxy:</p>
<pre><code>kubectl logs "$SOURCE_POD" -n akv2k8s-test -c istio-proxy | tail
</code></pre>
<p><a href="https://i.stack.imgur.com/PPxV3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PPxV3.png" alt="enter image description here" /></a></p>
<p>As per the documentation, I should see this:</p>
<p><a href="https://i.stack.imgur.com/JBPbY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JBPbY.png" alt="enter image description here" /></a></p>
<p>However, I don't see the header:</p>
<p><a href="https://i.stack.imgur.com/x1MfC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x1MfC.png" alt="enter image description here" /></a></p>
<p>Am I missing something here?</p>
| One Developer | <p>Just to clarify what Karthikeyan Vijayakumar did to make this work.</p>
<p>In theory the Istio Egress Gateway won't work here because you haven't actually used it; you have just used an Istio ServiceEntry to access the publicly accessible service edition.cnn.com from within your Istio cluster.</p>
<p>Take a look at the documentation <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-http-traffic" rel="nofollow noreferrer">here</a>; you need a few more components to actually use it.</p>
<p>You're missing the Egress Gateway, DestinationRule and VirtualService described in points 3 and 4 of that documentation.</p>
<p>A ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. It's not the case that you enable the egress gateway and with just that all traffic goes through the Egress Gateway.</p>
<p>When you add the dependencies below, the traffic flows as follows (I assume you are using the pod inside the cluster to communicate with the external service):</p>
<p>[sleep pod - envoy sidecar] -> mesh gateway -> egress gateway -> external service</p>
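<p>A sketch of those missing pieces, modeled on the linked documentation (the HTTP variant it shows for edition.cnn.com); adjust the hosts and ports to match your own ServiceEntry:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - edition.cnn.com
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-cnn
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: cnn
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - edition.cnn.com
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: cnn
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 80
      weight: 100
</code></pre>
<p>With that in place, the calls from the sleep pod should also show up in the egress gateway logs.</p>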
| Jakub |
<p>I've port-forwarded an external API to localhost:3000 using kubectl, and am able to make requests and receive responses through Postman, browser, etc. However, when I build and deploy a Spring application configured to use the port-forwarded API address and port (i.e. localhost:3000), checking the logs of the local Kubernetes pod in which it is deployed shows that the connection is being refused:</p>
<pre><code>java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
</code></pre>
<p>I'm failing to understand why the locally deployed pod can't access the URL provided, when it seemingly otherwise works fine.</p>
| jvbb1995 | <p>It sounds like you are deploying this Spring app to a Kubernetes cluster. When the Spring app deploys, it will run inside a container, and localhost:3000 is port 3000 of the pod the app is running in, not port 3000 of the host. If you cannot access this external service using something like an ingress, maybe you can try running a port forward in a separate container deployed in the same pod as your Spring app.</p>
| Burak Serdar |
<p>I have Istio 1.1.3 with mTLS and service-registry on a kubespray k8s cluster. I want to secure all outgoing traffic, hence I create service entries for each external service that my services want to talk to.</p>
<p>I would like to use istio-proxy logs to see the blocked communication attempts for all sort of traffic.</p>
<p>If I curl from within a container to a (blocked) <a href="http://google.com" rel="nofollow noreferrer">http://google.com</a>, I see 404 NR in the istio-proxy logs. curl also receives 404, as expected.</p>
<p>If I change the call to use https and curl the (still blocked) <a href="https://google.com" rel="nofollow noreferrer">https://google.com</a>, I see the following curl error:
(35) Unknown SSL protocol error in connection to google.com:443
and nothing shows up in the istio-proxy logs (why nothing?)</p>
<p>How can I see all connection attempts in istio-proxy? I have a pretty convoluted bunch of services that make covert-ops outgoing calls and I need to figure out which hostnames/IPs/ports they are trying to hit.</p>
| strzelecki.maciek | <p>If I understand you correctly you can try to set different logging levels with:</p>
<pre><code>--log_output_level <string>
</code></pre>
<blockquote>
<p>Comma-separated minimum per-scope logging level of messages to output,
in the form of <scope>:<level>,<scope>:<level>,... where scope can be
one of [all, default, model, rbac] and level can be one of [debug,
info, warn, error, fatal, none] (default <code>default:info</code>)</p>
</blockquote>
<p>More info can be found <a href="https://istio.io/docs/reference/commands/pilot-agent/" rel="nofollow noreferrer">here</a></p>
<p>Please let me know if that helped. </p>
| Wytrzymały Wiktor |
<p>I have the following docker file:</p>
<pre><code>FROM openjdk:8-jdk-alpine
ENV PORT 8094
EXPOSE 8094
RUN mkdir -p /app/
COPY build/libs/fqdn-cache-service.jar /app/fqdn-cache-service.jar
WORKDIR /build
ENTRYPOINT [ "sh" "-c", "java -jar /app/fqdn-cache-service.jar" ]
</code></pre>
<p>docker-compose.yaml file:</p>
<pre><code>version: '3'
services:
app:
build:
context: .
dockerfile: Dockerfile
image: fqdn-cache-service
ports:
- "8094:8094"
links:
- "db:redis"
db:
image: "redis:alpine"
#hostname: redis
ports:
- "6378:6378"
</code></pre>
<p>deployment.yaml file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: fqdn-cache-service
spec:
selector:
matchLabels:
run: spike
replicas: 1
template:
metadata:
labels:
app: redis
run: spike
spec:
containers:
- name: fqdn-cache-service
imagePullPolicy: Never
image: fqdn-cache-service:latest
ports:
- containerPort: 8094
protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
spec:
selector:
matchLabels:
run: spike
replicas: 1
template:
metadata:
labels:
run: spike
spec:
hostname: redis
containers:
- name: redis
image: redis:alpine
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: fqdn-cache-service
labels:
run: spike
spec:
type: NodePort
ports:
- port: 8094
nodePort: 30001
selector:
run: spike
---
apiVersion: v1
kind: Service
metadata:
name: redis
labels:
run: spike
app: redis
spec:
type: NodePort
ports:
- port: 6379
nodePort: 30002
selector:
run: spike
</code></pre>
<p>And the cluster info IP is 127.0.0.1.
I'm using microk8s on Ubuntu.
If I make a GET request by ID (127.0.0.1/webapi/users/1) I get the error:</p>
<pre><code>Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
</code></pre>
<p>Although with a regular Java application with Redis, or a dockerized Spring Boot app with Redis, it works.</p>
<p>Any idea why this is happening?</p>
<p>This is the configuration of the Spring Boot app:</p>
<pre><code>@Configuration
public class ApplicationConfig {
@Bean
JedisConnectionFactory jedisConnectionFactory() {
JedisConnectionFactory factory = new JedisConnectionFactory();
factory.setHostName("127.0.0.1");
factory.setPort(30001);
factory.setUsePool(true);
return factory;
}
@Bean
RedisTemplate redisTemplate() {
RedisTemplate<String, FqdnMapping> redisTemplate = new RedisTemplate<String, FqdnMapping>();
redisTemplate.setConnectionFactory(jedisConnectionFactory());
return redisTemplate;
}
}
</code></pre>
<p>The issue also happens if the host name is localhost and/or the port is 6379...</p>
<p>Thanks!</p>
| Shay Zambrovski | <p>When you're running in a container, 127.0.0.1 usually refers to the container itself, not to the host the container is running on. If you're trying to connect to a service, try using its name and port: "redis" on port 6379 and "fqdn-cache-service" on 8094.</p>
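<p>With the manifests from the question, that would look roughly like this for the Jedis factory (a sketch; only the host and port change):</p>
<pre><code>@Bean
JedisConnectionFactory jedisConnectionFactory() {
    JedisConnectionFactory factory = new JedisConnectionFactory();
    // use the redis Service name and its service port, not 127.0.0.1 / a NodePort
    factory.setHostName("redis");
    factory.setPort(6379);
    factory.setUsePool(true);
    return factory;
}
</code></pre>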
| Burak Serdar |
<p>I ran minikube start on Windows 10 and got this error. I have installed Minikube, VirtualBox, and kubectl.</p>
<pre><code>-->>minikube start
* minikube v1.2.0 on windows (amd64)
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Re-using the currently running virtualbox VM for "minikube" ...
* Waiting for SSH access ...
* Found network options:
- NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.1/24,192.168.39.0/24
* Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
- env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.1/24,192.168.39.0/24
* Relaunching Kubernetes v1.15.0 using kubeadm ...
X Error restarting cluster: waiting for apiserver: timed out waiting for the condition
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
- https://github.com/kubernetes/minikube/issues/new
</code></pre>
<pre><code>-->minikube status
host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.101
</code></pre>
<p>If there is a way to handle this problem, please let me know.</p>
| Sajjad Hadafi | <p>There are a few things you should try:</p>
<ol>
<li><p>You might not be waiting long enough for the apiserver to become healthy. Increase apiserver wait time. </p></li>
<li><p>Use a different version of Minikube. Remember to run <code>minikube delete</code> to remove the previous cluster state.</p></li>
<li><p>If your environment is behind a proxy, then set up a correct <code>NO_PROXY</code> env. More about this can be found <a href="https://github.com/kubernetes/minikube/blob/master/docs/http_proxy.md" rel="nofollow noreferrer">here</a>.</p></li>
<li><p>Use <code>minikube delete</code> and then <code>minikube start</code>.</p></li>
</ol>
<p>Please let me know if that helped.</p>
| Wytrzymały Wiktor |
<p>I deployed <strong>Istio</strong> in my <strong>AKS</strong> cluster using <a href="https://istio.io/latest/docs/setup/getting-started/" rel="nofollow noreferrer">this guide</a> and exposed the Istio sample applications (product_page, etc.) through the <strong>Istio gateway service</strong>. That worked fine as expected, but when I exposed my own service, it shows a <strong>404</strong> error.</p>
<p>Here is my gateway.yaml</p>
<p><a href="https://i.stack.imgur.com/kjdlq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kjdlq.png" alt="enter image description here" /></a></p>
<p>Here is my virtual-service.yaml</p>
<p><a href="https://i.stack.imgur.com/80I2o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/80I2o.png" alt="enter image description here" /></a></p>
| harish hari | <p>As mentioned in the comments setting regex value to <code>regex: /*</code> solved the issue in this case.</p>
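<p>Since the original YAML is only shown as screenshots, here is a hypothetical reconstruction of such a match block (the service name, gateway name and port are placeholders):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        regex: /*           # the regex value mentioned in the comments
    route:
    - destination:
        host: my-service
        port:
          number: 80
</code></pre>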
<p>It is possible to see and inspect what is exposed and what is accessible in the istio ingress by generating the mesh graph using kiali as mentioned on <a href="https://istio.io/latest/docs/tasks/observability/kiali/" rel="nofollow noreferrer">this documentation page</a>.</p>
<p>This is especially useful when working with regex uri match method as it is very easy to mess up the scope of a regex value.</p>
| Piotr Malec |
<p>I am sure I am missing something clear as day, but here goes. I have a frontend and backend being deployed behind an Nginx ingress. The requests to the backend were timing out, so I tested with a <code>curl</code> pod. I found that I was able to hit the pods directly, but not the service. This led me to run:</p>
<pre class="lang-sh prettyprint-override"><code>> kubectl get endpoints
NAME ENDPOINTS AGE
backend-api <none> 10m
kubernetes 167.99.101.163:443 121d
nginx-ingress-ingress-nginx-controller 10.244.0.17:80,10.244.0.17:443 96m
nginx-ingress-ingress-nginx-controller-admission 10.244.0.17:8443 96m
vue-frontend 10.244.0.24:80 84m
</code></pre>
<p>No endpoints... I recall (from when I initially set this deployment up) that this usually has to do with selectors. I have checked and checked and checked, but I swear I have it set up correctly. Clearly not though.</p>
<pre class="lang-yaml prettyprint-override"><code>## api-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-api
labels:
app: backend-api
spec:
replicas: 2
selector:
matchLabels:
app: backend-api
template:
metadata:
labels:
app: backend-api
spec:
containers:
- name: backend-api
image: us.gcr.io/container-registry-276104/backend-api:0.88
ports:
- containerPort: 8080
env:
...
</code></pre>
<pre class="lang-sh prettyprint-override"><code>> kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
backend-api-76d6d4f4c9-drbmn 1/1 Running 0 85m app=backend-api,pod-template-hash=76d6d4f4c9
backend-api-76d6d4f4c9-qpbhk 1/1 Running 0 85m app=backend-api,pod-template-hash=76d6d4f4c9
curl-curlpod-7b46d7776f-jkt6f 1/1 Running 0 47m pod-template-hash=7b46d7776f,run=curl-curlpod
nginx-ingress-ingress-nginx-controller-5475c95bbf-psmzr 1/1 Running 0 97m app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5475c95bbf
vue-frontend-6dbf68446f-pzv5h 1/1 Running 0 85m app=vue-frontend,pod-template-hash=6dbf68446f
</code></pre>
<p>and the service:</p>
<pre class="lang-yaml prettyprint-override"><code>## api-service.yaml
---
apiVersion: v1
kind: Service
metadata:
name: backend-api
spec:
selector:
app: backend-api
ports:
- port: 8080
protocol: TCP
targetPort: http
publishNotReadyAddresses: true
</code></pre>
<p>Help is, of course, appreciated :) I normally beat my head against the wall for hours, but I'm learning to work smarter, not harder!</p>
<p>Happy to answer any questions - thanks in advance!</p>
| createchange | <p>It looks like you got the service ports mixed up. <code>targetPort</code> is set to <code>http</code>, which is port 80. This is the port that the service forwards to on the pod. The <code>port</code> value of the service is the port service exposes. So if you want your service to forward traffic to port 8080 of your pod, set <code>targetPort</code> to 8080.</p>
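<p>In other words, something like this for the Service (only <code>targetPort</code> changes relative to the question):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  selector:
    app: backend-api
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080   # must match the containerPort of the backend-api pods
  publishNotReadyAddresses: true
</code></pre>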
| Burak Serdar |
<p>I have installed ISTIO with the below configuration</p>
<pre><code>cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: istio-control-plane
spec:
# Use the default profile as the base
# More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
profile: default
# Enable the addons that we will want to use
addonComponents:
grafana:
enabled: true
prometheus:
enabled: true
tracing:
enabled: true
kiali:
enabled: true
values:
global:
# Ensure that the Istio pods are only scheduled to run on Linux nodes
defaultNodeSelector:
beta.kubernetes.io/os: linux
kiali:
dashboard:
auth:
strategy: anonymous
components:
egressGateways:
- name: istio-egressgateway
enabled: true
meshConfig:
accessLogFile: /dev/stdout
outboundTrafficPolicy:
mode: REGISTRY_ONLY
EOF
</code></pre>
<p>and have configured the Egress Gateway, Destination Rule & Virtual Service as shown below</p>
<pre><code>cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
name: akv2k8s-test
labels:
istio-injection: enabled
azure-key-vault-env-injection: enabled
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: edition-cnn-com
namespace: akv2k8s-test
spec:
hosts:
- edition.cnn.com
ports:
- number: 443
name: https-port
protocol: HTTPS
resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: edition-cnn-com
namespace: akv2k8s-test
spec:
hosts:
- edition.cnn.com
tls:
- match:
- port: 443
sniHosts:
- edition.cnn.com
route:
- destination:
host: edition.cnn.com
port:
number: 443
weight: 100
EOF
</code></pre>
<p>While trying to access it, the following error is thrown:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.7/samples/sleep/sleep.yaml -n akv2k8s-test
export SOURCE_POD=$(kubectl get pod -l app=sleep -n akv2k8s-test -o jsonpath={.items..metadata.name})
kubectl exec "$SOURCE_POD" -n akv2k8s-test -c sleep -- curl -sL -o /dev/null -D - https://edition.cnn.com/politics
kubectl logs -l istio=egressgateway -c istio-proxy -n istio-system | tail
</code></pre>
<p><a href="https://i.stack.imgur.com/tBPsQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tBPsQ.png" alt="enter image description here" /></a></p>
<p>How do I fix this?</p>
<p><strong>Update</strong>: I have also tried the below, but still the same result</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: svc-entry
namespace: akv2k8s-test
spec:
hosts:
- google.com
ports:
- number: 443
name: https
protocol: HTTPS
location: MESH_EXTERNAL
resolution: DNS
EOF
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ext-res-dr
namespace: akv2k8s-test
spec:
host: google.com
EOF
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: ext-res-gw
namespace: akv2k8s-test
spec:
selector:
istio: egressgateway
servers:
- port:
number: 443
name: tls
protocol: TLS
hosts:
- google.com
tls:
mode: PASSTHROUGH
EOF
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ext-res-vs
namespace: akv2k8s-test
spec:
hosts:
- google.com
gateways:
- mesh
- ext-res-gw
tls:
- match:
- gateways:
- mesh
port: 443
sniHosts:
- google.com
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: google
port:
number: 443
- match:
- gateways:
- ext-res-gw
port: 443
sniHosts:
- google.com
route:
- destination:
host: google.com
port:
number: 443
weight: 100
EOF
</code></pre>
| One Developer | <p>I'm not sure what's wrong with the first example, as not all dependencies are shown; regarding the update, there was an issue with your DestinationRule.</p>
<p>It should be</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ext-res-dr
namespace: akv2k8s-test
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: google
</code></pre>
<p>Instead of</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ext-res-dr
namespace: akv2k8s-test
spec:
host: google.com
</code></pre>
<p>and hosts/sniHosts</p>
<p>It should be</p>
<pre><code>www.google.com
</code></pre>
<p>Instead of</p>
<pre><code>google.com
</code></pre>
<hr />
<p>There is a working example for <code>https://www.google.com</code>.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: svc-entry
namespace: akv2k8s-test
spec:
hosts:
- www.google.com
ports:
- number: 443
name: https
protocol: HTTPS
location: MESH_EXTERNAL
resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ext-res-dr
namespace: akv2k8s-test
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: google
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: ext-res-gw
namespace: akv2k8s-test
spec:
selector:
istio: egressgateway
servers:
- port:
number: 443
name: tls
protocol: TLS
hosts:
- www.google.com
tls:
mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ext-res-vs
namespace: akv2k8s-test
spec:
hosts:
- www.google.com
gateways:
- mesh
- ext-res-gw
tls:
- match:
- gateways:
- mesh
port: 443
sniHosts:
- www.google.com
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
subset: google
port:
number: 443
- match:
- gateways:
- ext-res-gw
port: 443
sniHosts:
- www.google.com
route:
- destination:
host: www.google.com
port:
number: 443
weight: 100
</code></pre>
<p>And here are the registry-only mode check, the curl output and the egress gateway logs.</p>
<pre><code>kubectl get istiooperator istio-control-plane -n istio-system -o jsonpath='{.spec.meshConfig.outboundTrafficPolicy.mode}'
REGISTRY_ONLY
kubectl exec "$SOURCE_POD" -n akv2k8s-test -c sleep -- curl -sL -o /dev/null -D - https://www.google.com
HTTP/2 200
kubectl logs -l istio=egressgateway -c istio-proxy -n istio-system | tail
[2020-10-27T14:16:37.735Z] "- - -" 0 - "-" "-" 844 17705 45 - "-" "-" "-" "-" "xxx.xxx.xxx.xxx:443" outbound|443||www.google.com xx.xx.xx.xx:59814 xx.xx.xx.xx:8443 1xx.xx.xx.xx:33112 www.google.com -
[2020-10-27T14:18:45.896Z] "- - -" 0 - "-" "-" 883 17647 38 - "-" "-" "-" "-" "xxx.xxx.xxx.xxx:443" outbound|443||www.google.com xx.xx.xx.xx:56834 xx.xx.xx.xx:8443 xx.xx.xx.xx:33964 www.google.com -
</code></pre>
<p>Please refer to this <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-https-traffic" rel="nofollow noreferrer">documentation</a>.</p>
| Jakub |
<p>I’m running Airflow on Kubernetes and using the <strong>“Kubernetes Operator”</strong>.
When I run a <strong>BashOperator</strong> or <strong>PythonOperator</strong> it works <strong>fine</strong>,
using:</p>
<pre><code>executor_config = {
"KubernetesExecutor": {
"image": "image_with_airflow_and_my_code:latest"
}
}
</code></pre>
<p><strong>When I try to run a DAG with KubernetesPodOperator it fails</strong></p>
<p>for example:</p>
<pre><code>k = KubernetesPodOperator(namespace='default',
image="ubuntu:18.04",
cmds=["bash", "-cx"],
arguments=["echo", "10"],
name="test",
task_id="task",
is_delete_operator_pod=False,
dag=dag
)
</code></pre>
<p>I see that the docker image that was created is not the image that I specified above (ubuntu:18.04) but the default image from the configuration (AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY)</p>
<p><strong>in the scheduler log I see:</strong></p>
<blockquote>
<p>[2019-10-06 12:59:56,279] {{scheduler_job.py:921}} INFO - 1 tasks up for execution:
[2019-10-06 12:59:56,325] {{scheduler_job.py:953}} INFO - Figuring out tasks to run in Pool(name=default_pool) with 128 open slots and 1 task instances ready to be queued
[2019-10-06 12:59:56,326] {{scheduler_job.py:981}} INFO - DAG koperator has 0/16 running and queued tasks
[2019-10-06 12:59:56,361] {{scheduler_job.py:1031}} INFO - Setting the following tasks to queued state:
[2019-10-06 12:59:56,398] {{scheduler_job.py:1107}} INFO - Setting the following 1 tasks to queued state:
[2019-10-06 12:59:56,401] {{scheduler_job.py:1143}} INFO - Sending ('koperator', 'task', datetime.datetime(2019, 10, 6, 12, 59, 50, 146375, tzinfo=), 1) to executor with priority 1 and queue default
[2019-10-06 12:59:56,403] {{base_executor.py:59}} INFO - Adding to queue: ['airflow', 'run', 'koperator', 'task', '2019-10-06T12:59:50.146375+00:00', '--local', '--pool', 'default_pool', '-sd', '/usr/local/airflow/dags/KubernetesPodOperator.py']
[2019-10-06 12:59:56,405] {{kubernetes_executor.py:764}} INFO - Add task ('koperator', 'task', datetime.datetime(2019, 10, 6, 12, 59, 50, 146375, tzinfo=), 1) with command ['airflow', 'run', 'koperator', 'task', '2019-10-06T12:59:50.146375+00:00', '--local', '--pool', 'default_pool', '-sd', '/usr/local/airflow/dags/KubernetesPodOperator.py'] with executor_config {}
[2019-10-06 12:59:56,417] {{kubernetes_executor.py:441}} INFO - Kubernetes job is (('koperator', 'task', datetime.datetime(2019, 10, 6, 12, 59, 50, 146375, tzinfo=), 1), ['airflow', 'run', 'koperator', 'task', '2019-10-06T12:59:50.146375+00:00', '--local', '--pool', 'default_pool', '-sd', '/usr/local/airflow/dags/KubernetesPodOperator.py'], KubernetesExecutorConfig(image=None, image_pull_policy=None, request_memory=None, request_cpu=None, limit_memory=None, limit_cpu=None, limit_gpu=None, gcp_service_account_key=None, node_selectors=None, affinity=None, annotations={}, volumes=[], volume_mounts=[], tolerations=None, labels={}))
[2019-10-06 12:59:56,498] {{kubernetes_executor.py:353}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d had an event of type ADDED
[2019-10-06 12:59:56,509] {{kubernetes_executor.py:385}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d Pending
[2019-10-06 12:59:56,528] {{kubernetes_executor.py:353}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d had an event of type MODIFIED
[2019-10-06 12:59:56,529] {{kubernetes_executor.py:385}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d Pending
[2019-10-06 12:59:56,543] {{kubernetes_executor.py:353}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d had an event of type MODIFIED
[2019-10-06 12:59:56,544] {{kubernetes_executor.py:385}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d Pending
[2019-10-06 12:59:59,492] {{kubernetes_executor.py:353}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d had an event of type MODIFIED
[2019-10-06 12:59:59,492] {{kubernetes_executor.py:393}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d is Running
[2019-10-06 13:00:10,873] {{kubernetes_executor.py:353}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d had an event of type MODIFIED
[2019-10-06 13:00:10,874] {{kubernetes_executor.py:390}} INFO - Event: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d Succeeded
[2019-10-06 13:00:12,236] {{kubernetes_executor.py:493}} INFO - Attempting to finish pod; pod_id: koperatortask-2f35f3b347a149bcb2133ef58cf9e77d; state: None; labels: {'airflow-worker': 'b46fd37e-959c-4844-81e1-dff9df2e98e2', 'dag_id': 'koperator', 'execution_date': '2019-10-06T12_59_50.146375_plus_00_00', 'task_id': 'task', 'try_number': '1'}
[2019-10-06 13:00:12,245] {{kubernetes_executor.py:616}} INFO - Checking 1 task instances.
[2019-10-06 13:00:12,247] {{kubernetes_executor.py:626}} INFO - Found matching task koperator-task (2019-10-06 12:59:50.146375+00:00) with current state of up_for_retry
[2019-10-06 13:00:12,253] {{kubernetes_executor.py:783}} INFO - Changing state of (('koperator', 'task', datetime.datetime(2019, 10, 6, 12, 59, 50, 146375, tzinfo=tzlocal()), 1), None, 'koperatortask-2f35f3b347a149bcb2133ef58cf9e77d', '34894988') to None
[2019-10-06 13:00:12,273] {{scheduler_job.py:1283}} INFO - Executor reports execution of koperator.task execution_date=2019-10-06 12:59:50.146375+00:00 exited with status None for</p>
</blockquote>
<p><strong>the log of the raised pod:</strong></p>
<blockquote>
<p>[2019-10-06 12:02:11,961] {{<strong>init</strong>.py:51}} INFO - Using executor LocalExecutor
[2019-10-06 12:02:12,844] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags/KubernetesPodOperator.py
[2019-10-06 12:02:13,571] {{cli.py:516}} INFO - Running on host koperatortask-bd0c81d6039c4b329ae8dd2292c0c566</p>
</blockquote>
<p>What am I doing wrong?</p>
<p>How can I run a DAG on Kubernetes with KubernetesPodOperator?</p>
<p>Thanks, Aviad</p>
| Aviad | <p>Unfortunately I do not see enough information to determine what is wrong.
Add the parameter</p>
<p><code>"get_logs": True</code></p>
<p>to the KubernetesPodOperator.</p>
<p>That way, the run will combine the stdouts from both the KubernetesExecutor and KubernetesPodOperator pods into an Airflow task log.</p>
<p>It should give you a much clearer idea of what is going on.</p>
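<p>Applied to the operator from the question, that would look roughly like this (only <code>get_logs</code> is new):</p>
<pre><code>k = KubernetesPodOperator(namespace='default',
                          image="ubuntu:18.04",
                          cmds=["bash", "-cx"],
                          arguments=["echo", "10"],
                          name="test",
                          task_id="task",
                          get_logs=True,
                          is_delete_operator_pod=False,
                          dag=dag
                          )
</code></pre>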
| Grant T |
<p>I used Calico as the CNI in my k8s cluster, and I'm trying to deploy a single-master cluster across 3 servers. I'm using <code>kubeadm</code>, following the official <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">setup guide</a>. But an error occurred: <code>kube-controller-manager</code> and <code>kube-scheduler</code> go into CrashLoopBackOff and cannot run properly.</p>
<p>I have tried <code>kubeadm reset</code> on every server, and also restarting the servers and downgrading Docker.</p>
<p>I use <code>kubeadm init --apiserver-advertise-address=192.168.213.128 --pod-network-cidr=192.168.0.0/16</code> to init the master, and run <code>kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml</code> and <code>kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml</code> to start calico.</p>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# docker info
Containers: 20
Running: 18
Paused: 0
Stopped: 2
Images: 10
Server Version: 18.09.6
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-957.12.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 972.6MiB
Name: k8s-master
ID: RN6I:PP52:4WTU:UP7E:T3LF:MXVZ:EDBX:RSII:BIRW:36O2:CYJ3:FRV2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://i70c3eqq.mirror.aliyuncs.com/
https://docker.mirrors.ustc.edu.cn/
Live Restore Enabled: false
Product License: Community Engine
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# kubelet --version
Kubernetes v1.14.1
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl get no -A
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 49m v1.14.1
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-xmc5t 2/2 Running 0 27m
kube-system coredns-6765558d84-945mt 1/1 Running 0 28m
kube-system coredns-6765558d84-xz7lw 1/1 Running 0 28m
kube-system coredns-fb8b8dccf-z87sl 1/1 Running 0 31m
kube-system etcd-k8s-master 1/1 Running 0 30m
kube-system kube-apiserver-k8s-master 1/1 Running 0 29m
kube-system kube-controller-manager-k8s-master 0/1 CrashLoopBackOff 8 30m
kube-system kube-proxy-wp7n9 1/1 Running 0 31m
kube-system kube-scheduler-k8s-master 1/1 Running 7 29m
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl logs -n kube-system kube-controller-manager-k8s-master
I0513 13:49:51.836448 1 serving.go:319] Generated self-signed cert in-memory
I0513 13:49:52.988794 1 controllermanager.go:155] Version: v1.14.1
I0513 13:49:53.003873 1 secure_serving.go:116] Serving securely on 127.0.0.1:10257
I0513 13:49:53.005146 1 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
I0513 13:49:53.008661 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-controller-manager...
I0513 13:50:12.687383 1 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
I0513 13:50:12.700344 1 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"39adc911-7582-11e9-a70e-000c2908c796", APIVersion:"v1", ResourceVersion:"1706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' k8s-master_fbfa0502-7585-11e9-9939-000c2908c796 became leader
I0513 13:50:13.131264 1 plugins.go:103] No cloud provider specified.
I0513 13:50:13.166088 1 controller_utils.go:1027] Waiting for caches to sync for tokens controller
I0513 13:50:13.368381 1 controllermanager.go:497] Started "podgc"
I0513 13:50:13.368666 1 gc_controller.go:76] Starting GC controller
I0513 13:50:13.368697 1 controller_utils.go:1027] Waiting for caches to sync for GC controller
I0513 13:50:13.368717 1 controller_utils.go:1034] Caches are synced for tokens controller
I0513 13:50:13.453276 1 controllermanager.go:497] Started "attachdetach"
I0513 13:50:13.453534 1 attach_detach_controller.go:323] Starting attach detach controller
I0513 13:50:13.453545 1 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
I0513 13:50:13.461756 1 controllermanager.go:497] Started "clusterrole-aggregation"
I0513 13:50:13.461833 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0513 13:50:13.461849 1 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
I0513 13:50:13.517257 1 controllermanager.go:497] Started "endpoint"
I0513 13:50:13.525394 1 endpoints_controller.go:166] Starting endpoint controller
I0513 13:50:13.525425 1 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
I0513 13:50:14.151371 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0513 13:50:14.151463 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0513 13:50:14.151489 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0513 13:50:14.163632 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0513 13:50:14.163695 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0513 13:50:14.163721 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0513 13:50:14.163742 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0513 13:50:14.163757 1 shared_informer.go:311] resyncPeriod 67689210101997 is smaller than resyncCheckPeriod 86008177281797 and the informer has already started. Changing it to 86008177281797
I0513 13:50:14.163840 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0513 13:50:14.163848 1 shared_informer.go:311] resyncPeriod 64017623179979 is smaller than resyncCheckPeriod 86008177281797 and the informer has already started. Changing it to 86008177281797
I0513 13:50:14.163867 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0513 13:50:14.163885 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
I0513 13:50:14.163911 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
I0513 13:50:14.163925 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0513 13:50:14.163942 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0513 13:50:14.163965 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0513 13:50:14.163994 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0513 13:50:14.164004 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0513 13:50:14.164019 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
I0513 13:50:14.164030 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0513 13:50:14.164039 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0513 13:50:14.164054 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0513 13:50:14.164079 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0513 13:50:14.164097 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0513 13:50:14.164115 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
E0513 13:50:14.164139 1 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "crd.projectcalico.org/v1, Resource=networkpolicies": unable to monitor quota for resource "crd.projectcalico.org/v1, Resource=networkpolicies"]
I0513 13:50:14.164154 1 controllermanager.go:497] Started "resourcequota"
I0513 13:50:14.171002 1 resource_quota_controller.go:276] Starting resource quota controller
I0513 13:50:14.171096 1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
I0513 13:50:14.171138 1 resource_quota_monitor.go:301] QuotaMonitor running
I0513 13:50:15.776814 1 controllermanager.go:497] Started "job"
I0513 13:50:15.771658 1 job_controller.go:143] Starting job controller
I0513 13:50:15.807719 1 controller_utils.go:1027] Waiting for caches to sync for job controller
I0513 13:50:23.065972 1 controllermanager.go:497] Started "csrcleaner"
I0513 13:50:23.047495 1 cleaner.go:81] Starting CSR cleaner controller
I0513 13:50:25.019036 1 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"39adc911-7582-11e9-a70e-000c2908c796", APIVersion:"v1", ResourceVersion:"1706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' k8s-master_fbfa0502-7585-11e9-9939-000c2908c796 stopped leading
I0513 13:50:25.125784 1 leaderelection.go:263] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded
F0513 13:50:25.189307 1 controllermanager.go:260] leaderelection lost
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl logs -n kube-system kube-scheduler-k8s-master
I0513 14:16:04.350818 1 serving.go:319] Generated self-signed cert in-memory
W0513 14:16:06.203477 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0513 14:16:06.215933 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0513 14:16:06.215947 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0513 14:16:06.218951 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0513 14:16:06.218983 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0513 14:16:06.961417 1 server.go:142] Version: v1.14.1
I0513 14:16:06.974064 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0513 14:16:06.997875 1 authorization.go:47] Authorization is disabled
W0513 14:16:06.997889 1 authentication.go:55] Authentication is disabled
I0513 14:16:06.997908 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0513 14:16:06.998196 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
I0513 14:16:08.872649 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0513 14:16:08.973148 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0513 14:16:09.003227 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0513 14:16:25.814160 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
</code></pre>
<p>What is the reason for <code>kube-controller-manager</code> and <code>kube-scheduler</code> going into CrashLoopBackOff? And how can I make <code>kube-controller-manager</code> and <code>kube-scheduler</code> run well?</p>
| mio leon | <p>I have reproduced the steps you listed on a cloud VM and managed to make it work fine.</p>
<p>I've got a few ideas that might help:</p>
<ol>
<li><p>Be sure to meet <strong>all</strong> the prerequisites listed <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">here</a> </p></li>
<li><p>Install the most recent version of Docker following the guide from <a href="https://docs.docker.com/install/linux/docker-ce/debian/" rel="nofollow noreferrer">here</a>
(choose the page for the OS that you use)</p></li>
<li><p>Install kubeadm using the commands below:</p>
<pre><code> apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
</code></pre></li>
<li><p>Make sure you got the latest version of kubeadm by executing: <code>apt-get update && apt-get upgrade</code></p></li>
<li><p>Make sure you use the proper arguments alongside <code>kubeadm init</code></p></li>
<li><p>Don't forget to run:</p>
<ul>
<li><p><code>mkdir -p $HOME/.kube</code></p></li>
<li><p><code>sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config</code></p></li>
<li><p><code>sudo chown $(id -u):$(id -g) $HOME/.kube/config</code></p></li>
</ul></li>
</ol>
<p>after <code>kubeadm init</code> finishes (these commands are also part of the kubeadm init output).</p>
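<p>For illustration, a minimal sketch of the init step from point 5 above (the pod-network CIDR below is only an assumption and has to match the CNI manifest you apply afterwards, e.g. Calico's default):</p>
<pre class="lang-sh prettyprint-override"><code>sudo kubeadm init --pod-network-cidr=192.168.0.0/16
</code></pre>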
<ol start="7">
<li>Finally, apply the .yaml files you listed in your question.</li>
</ol>
<p>Notice that by following the above steps you will have <code>kubectl version</code>, <code>kubelet --version</code> and <code>kubectl get no -A</code> all at v1.14.3 and not v1.14.1 like you showed, which might be the cause of your issue.</p>
<p>I hope it helps.</p>
| Wytrzymały Wiktor |
<p>Ref:
<a href="https://github.com/SeldonIO/seldon-core/blob/master/examples/models/sklearn_iris/sklearn_iris.ipynb" rel="nofollow noreferrer">https://github.com/SeldonIO/seldon-core/blob/master/examples/models/sklearn_iris/sklearn_iris.ipynb</a>
<a href="https://github.com/SeldonIO/seldon-core/tree/master/examples/models/sklearn_spacy_text" rel="nofollow noreferrer">https://github.com/SeldonIO/seldon-core/tree/master/examples/models/sklearn_spacy_text</a></p>
<p><strong>Steps Done</strong></p>
<pre><code>1. kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80
2.kubectl create namespace john
3.kubectl config set-context $(kubectl config current-context) --namespace=john
4.kubectl create -f sklearn_iris_deployment.yaml
</code></pre>
<pre><code>cat sklearn_iris_deployment.yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
name: seldon-deployment-example
namespace: john
spec:
name: sklearn-iris-deployment
predictors:
- componentSpecs:
- spec:
containers:
- image: seldonio/sklearn-iris:0.1
imagePullPolicy: IfNotPresent
name: sklearn-iris-classifier
graph:
children: []
endpoint:
type: REST
name: sklearn-iris-classifier
type: MODEL
name: sklearn-iris-predictor
replicas: 1
</code></pre>
<p>kubectl get sdep -n john seldon-deployment-example -o json | jq .status</p>
<pre><code> "deploymentStatus": {
"sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c": {
"availableReplicas": 1,
"replicas": 1
}
},
"serviceStatus": {
"seldon-635d389a05411932517447289ce51cde": {
"httpEndpoint": "seldon-635d389a05411932517447289ce51cde.john:9000",
"svcName": "seldon-635d389a05411932517447289ce51cde"
},
"seldon-bb8b177b8ec556810898594b27b5ec16": {
"grpcEndpoint": "seldon-bb8b177b8ec556810898594b27b5ec16.john:5001",
"httpEndpoint": "seldon-bb8b177b8ec556810898594b27b5ec16.john:8000",
"svcName": "seldon-bb8b177b8ec556810898594b27b5ec16"
}
},
"state": "Available"
}
</code></pre>
<p>5. Here I am using Istio, and as per this doc <a href="https://docs.seldon.io/projects/seldon-core/en/v1.1.0/workflow/serving.html" rel="nofollow noreferrer">https://docs.seldon.io/projects/seldon-core/en/v1.1.0/workflow/serving.html</a> I did the same:</p>
<pre><code>Istio
Istio REST
Assuming the istio gateway is at <istioGateway> and with a Seldon deployment name <deploymentName> in namespace <namespace>:
A REST endpoint will be exposed at : http://<istioGateway>/seldon/<namespace>/<deploymentName>/api/v1.0/predictions
</code></pre>
<ol start="6">
<li>Then I sent the prediction request below (output follows):</li>
</ol>
<p><code>curl -s http://localhost:8003/seldon/john/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}' -v</code></p>
<pre><code>* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8003 (#0)
> POST /seldon/johnson-az-videspan/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions HTTP/1.1
> Host: localhost:8003
> User-Agent: curl/7.58.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 48
>
* upload completely sent off: 48 out of 48 bytes
< HTTP/1.1 301 Moved Permanently
< location: https://localhost:8003/seldon/john/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions
< date: Fri, 23 Oct 2020 13:09:46 GMT
< server: istio-envoy
< connection: close
< content-length: 0
<
* Closing connection 0
</code></pre>
<p>The same thing happens with the <strong>sklearn_spacy_text</strong> model too, but I wonder why, since the same models work perfectly while running on Docker.</p>
<p>Please find a sample response from Docker below:</p>
<pre><code>curl -s http://localhost:5000/predict -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}' -v
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 5000 (#0)
> POST /predict HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.61.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 48
>
* upload completely sent off: 48 out of 48 bytes
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: application/json
< Content-Length: 125
< Access-Control-Allow-Origin: *
< Server: Werkzeug/1.0.0 Python/3.7.4
< Date: Fri, 23 Oct 2020 11:18:31 GMT
<
{"data":{"names":["t:0","t:1","t:2"],"ndarray":[[0.9548873249364169,0.04505474761561406,5.7927447968952436e-05]]},"meta":{}}
* Closing connection 0
</code></pre>
<pre><code>curl -s http://localhost:5001/predict -H "Content-Type: application/json" -d '{"data": {"names": ["text"], "ndarray": ["Hello world this is a test"]}}'
{"data":{"names":["t:0","t:1"],"ndarray":[[0.6811839197596743,0.3188160802403257]]},"meta":{}}
</code></pre>
<p>Can anyone help to resolve this issue?</p>
| Antony Johnson | <h2>Issue</h2>
<p>It appears that the server redirects your plain-HTTP request to HTTPS (the <code>301 Moved Permanently</code> in your output), and curl does not follow that redirect by default.</p>
<h2>Solution</h2>
<p><strong>Use</strong> https instead of http</p>
<pre><code>curl -s https://localhost:8003/seldon/john/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}' -v
</code></pre>
<hr />
<p><strong>Use</strong> curl with -L flag, which instructs curl to follow redirects. In this case, the server returned a redirect response (301 Moved Permanently) for the HTTP request to <code>http://localhost:8003</code>. The redirect response instructs the client to send an additional request, this time using HTTPS, to <code>https://localhost:8003</code>.</p>
<p>More about it <a href="https://curl.haxx.se/docs/manpage.html" rel="nofollow noreferrer">here</a>.</p>
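<p>For completeness, the same request from your question with redirect-following enabled would look like this:</p>
<pre class="lang-sh prettyprint-override"><code>curl -sL http://localhost:8003/seldon/john/sklearn-iris-deployment-sklearn-iris-predictor-0e43a2c/api/v0.1/predictions -H "Content-Type: application/json" -d '{"data":{"ndarray":[[5.964,4.006,2.081,1.031]]}}'
</code></pre>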
| Jakub |
<p>I am looking for some help/suggestions with kubernetes. I am trying to mount an NFS shared drive to the docker container running inside a kubernetes pod so that I can access the files inside that drive from code that runs inside the docker container. I was able to successfully mount the drive and I am able to see that based on the kubectl logs for the running pod.</p>
<p>But my problem is that only a service account has access to the contents of the folder I need, and I am not sure how to use the service account to access the files. I am totally new to the Kubernetes world and trying to understand and learn how this kind of scenario is handled. I used Helm charts for configuring the NFS mount.</p>
| Sravan Kumar | <p>So generally mounting directly inside a Docker container is not the greatest solution. </p>
<p>What you should rather look at when mounting data is creating a PersistentVolume and PersistentVolumeClaim in Kubernetes and mounting that to the pod. This abstracts storage specifics away from your application specifics.</p>
<p>You can find more information on the Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">blog</a>.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#lifecycle-of-a-volume-and-claim" rel="nofollow noreferrer">Here</a> they explain how to use NFS.</p>
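<p>As a minimal sketch, an NFS-backed PersistentVolume and matching claim could look like this (the server address, export path, size and names are assumptions you need to replace with your own):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10        # your NFS server address
    path: /exports/shared    # the exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>Your pod would then reference <code>nfs-pvc</code> via a <code>persistentVolumeClaim</code> volume and mount it at the path your application expects, instead of mounting the share inside the container directly.</p>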
| Christiaan Vermeulen |
<p>I'm having trouble understanding <code>helm</code>'s use of kube contexts. <code>helm --kube-context=microk8s install ...</code> should install into the context <code>microk8s</code> and thus into my local microk8s cluster, rather than into the remote GKE cluster which I once connected to.</p>
<p>This however fails due to <code>Error: could not get Kubernetes config for context "microk8s": context "microk8s" does not exist</code> if I run e.g. <code>helm --kube-context=microk8s install --name mereet-kafka</code> after successfully running <code>helm init</code> and adding necessary repositories.</p>
<p>The context <code>microk8s</code> is present and enabled according to <code>kubectl config current-context</code>. I can even reproduce this by running <code>helm --kube-context=$(kubectl config current-context) install --name mereet-kafka</code> in order to avoid any typos.</p>
<p>Why can't <code>helm</code> use obviously present contexts?</p>
| Kalle Richter | <p>This looks like a kubernetes configuration problem more than an issue with helm itself.</p>
<p>There are few things that might help:</p>
<ol>
<li><p>Check the config file in <code>~/.kube/config</code></p>
<ul>
<li><code>kubectl config view</code></li>
</ul></li>
</ol>
<p>Is <code>current-context</code> set to: microk8s?</p>
<ol start="2">
<li><p>Try to use: </p>
<ul>
<li><p><code>kubectl config get-contexts</code></p></li>
<li><p><code>kubectl config set-context</code> </p></li>
<li><p><code>kubectl config use-context</code></p></li>
</ul></li>
</ol>
<p>with proper arguments <code>--server</code> <code>--user</code> <code>--cluster</code></p>
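<p>For example, a context for a local microk8s cluster could be created and selected like this (the cluster name, user name, server address and token are assumptions; adjust them to your own setup):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl config set-cluster microk8s-cluster --server=https://127.0.0.1:16443 --insecure-skip-tls-verify=true
kubectl config set-credentials microk8s-admin --token=<your-token>
kubectl config set-context microk8s --cluster=microk8s-cluster --user=microk8s-admin
kubectl config use-context microk8s
</code></pre>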
<ol start="3">
<li><p>Check if you are refering to the config from <code>~/.kube/config</code> and not your own private config from somewhere else. </p></li>
<li><p>Check if you have a <code>KUBECONFIG</code> environment variable (<code>echo $KUBECONFIG</code>)</p></li>
</ol>
<p>I hope it helps.</p>
| Wytrzymały Wiktor |
<p>Currently I am trying to set up Nextcloud on Azure Kubernetes Service as an exercise. Basically the application seems to be running, but after connecting the database, Nextcloud ends with something like...</p>
<blockquote>
<p>Please change the permissions of your storage to 0770 to prevent other people from accessing your data</p>
</blockquote>
<p>I guess it is because I used an <code>azurefile</code> share as the persistent volume. My PVC deployment looks like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nextcloud-shared-storage-claim
labels:
app: nextcloud
spec:
accessModes:
- ReadWriteOnce
storageClassName: azurefile
resources:
requests:
storage: 5Gi
</code></pre>
<p>I've already researched the topic and found ways to apply permissions to pods with <code>securityContext</code>. Because I've only just started with Kubernetes on Azure, I struggle a bit with wiring the permissions into the pod spec of my Nextcloud Deployment.</p>
<p><em>To complete the post, here is the deployment file for the Nextcloud instance I used:</em></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nextcloud-server
labels:
app: nextcloud
spec:
replicas: 1
selector:
matchLabels:
pod-label: nextcloud-server-pod
template:
metadata:
labels:
pod-label: nextcloud-server-pod
spec:
containers:
- name: nextcloud
image: nextcloud:20-apache
volumeMounts:
- name: server-storage
mountPath: /var/www/html
subPath: server-data
volumes:
- name: server-storage
persistentVolumeClaim:
claimName: nextcloud-shared-storage-claim
---
apiVersion: v1
kind: Service
metadata:
name: nextcloud-server
labels:
app: nextcloud
spec:
selector:
pod-label: nextcloud-server-pod
ports:
- protocol: TCP
port: 80
</code></pre>
<p>I guess/hope that it's totally simple.</p>
| elludorado | <p>Posting this answer as community wiki since it might be helpful for the community. Feel free to expand.</p>
<p>As mentioned by @Nick Graham in the comments</p>
<blockquote>
<p>To modify the permissions on a mounted volume you’ll need to execute a script after the container starts up. Some images give you the option to copy scripts into a particular folder that are then executed at start up; check the docs to see if the image you're using provides that functionality</p>
</blockquote>
<p>There are few <a href="https://stackoverflow.com/questions/43544370">examples</a>.</p>
<hr />
<p>Additionally, according to this <a href="https://github.com/Azure/AKS/issues/225#issuecomment-371007021" rel="nofollow noreferrer">comment</a>, you can try to specify these permissions in your storage class.</p>
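<p>As a sketch of that storage-class approach for azurefile, the mount options can set the directory and file mode (treat the whole manifest as an assumption to adapt; the uid/gid of 33 assumes the www-data user of the Nextcloud apache image):</p>
<pre class="lang-yaml prettyprint-override"><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-nextcloud
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0770
  - file_mode=0770
  - uid=33
  - gid=33
parameters:
  skuName: Standard_LRS
</code></pre>
<p>The PersistentVolumeClaim would then reference <code>storageClassName: azurefile-nextcloud</code> instead of the default <code>azurefile</code> class.</p>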
| Jakub |
<p>I am using Airflow with the Kubernetes executor and testing it out locally (using minikube). While I was able to get it up and running, I can't seem to store my logs in S3. I have tried all the solutions that are described and I am still getting the following error:</p>
<pre><code>*** Log file does not exist: /usr/local/airflow/logs/example_python_operator/print_the_context/2020-03-30T16:02:41.521194+00:00/1.log
*** Fetching from: http://examplepythonoperatorprintthecontext-5b01d602e9d2482193d933e7d2:8793/log/example_python_operator/print_the_context/2020-03-30T16:02:41.521194+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='examplepythonoperatorprintthecontext-5b01d602e9d2482193d933e7d2', port=8793): Max retries exceeded with url: /log/example_python_operator/print_the_context/2020-03-30T16:02:41.521194+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd00688a650>: Failed to establish a new connection: [Errno -2] Name or service not known'))
</code></pre>
<p>I implemented a custom Logging class as mentioned in this <a href="https://stackoverflow.com/questions/50222860/airflow-wont-write-logs-to-s3">answer</a> and still no luck.</p>
<ul>
<li>I use Puckel airflow 1.10.9</li>
<li>Stable Helm chart for airflow from <a href="https://github.com/helm/charts" rel="nofollow noreferrer">charts/stable/airflow/</a></li>
</ul>
<p>My <code>airflow.yaml</code> looks like this</p>
<pre><code>airflow:
image:
repository: airflow-docker-local
tag: 1
executor: Kubernetes
service:
type: LoadBalancer
config:
AIRFLOW__CORE__EXECUTOR: KubernetesExecutor
AIRFLOW__CORE__TASK_LOG_READER: s3.task
AIRFLOW__CORE__LOAD_EXAMPLES: True
AIRFLOW__CORE__FERNET_KEY: ${MASKED_FERNET_KEY}
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://postgres:airflow@airflow-postgresql:5432/airflow
AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://postgres:airflow@airflow-postgresql:5432/airflow
AIRFLOW__CELERY__BROKER_URL: redis://:airflow@airflow-redis-master:6379/0
# S3 Logging
AIRFLOW__CORE__REMOTE_LOGGING: True
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: s3://${AWS_ACCESS_KEY_ID}:${AWS_ACCESS_SECRET_KEY}@S3
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://${BUCKET_NAME}/logs
AIRFLOW__CORE__S3_LOG_FOLDER: s3://${BUCKET_NAME}/logs
AIRFLOW__CORE__LOGGING_LEVEL: INFO
AIRFLOW__CORE__LOGGING_CONFIG_CLASS: log_config.LOGGING_CONFIG
AIRFLOW__CORE__ENCRYPT_S3_LOGS: False
# End of S3 Logging
AIRFLOW__WEBSERVER__EXPOSE_CONFIG: True
AIRFLOW__WEBSERVER__LOG_FETCH_TIMEOUT_SEC: 30
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: airflow-docker-local
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: 1
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: Never
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: airflow
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM: airflow
AIRFLOW__KUBERNETES__NAMESPACE: airflow
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: True
AIRFLOW__KUBERNETES__KUBE_CLIENT_REQUEST_ARGS: '{\"_request_timeout\":[60,60]}'
persistence:
enabled: true
existingClaim: ''
accessMode: 'ReadWriteMany'
size: 5Gi
logsPersistence:
enabled: false
workers:
enabled: true
postgresql:
enabled: true
redis:
enabled: true
</code></pre>
<p>I have tried setting up the connection via the UI and creating the connection via <code>airflow.yaml</code>, and nothing seems to work. I have been trying this for 3 days now with no luck; any help would be much appreciated.</p>
<p>I have attached the screenshots for reference:</p>
<p><a href="https://i.stack.imgur.com/onHc3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/onHc3.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/bnePj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnePj.png" alt="enter image description here"></a></p>
 | midNight | <p>I am pretty certain this issue occurs because the S3 logging configuration has not been set on the worker pods. The worker pods are not given the configuration you set via environment variables such as <code>AIRFLOW__CORE__REMOTE_LOGGING: True</code>. If you wish to set such a variable in the worker pods, you must copy it and prefix the copied environment variable name with <code>AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__</code>: <code>AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOGGING: True</code>.</p>
<p>In this case you would need to duplicate all of the variables specifying config for S3 logging and prefix the copies with <code>AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__</code>.</p>
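<p>As a sketch, the relevant part of the chart's <code>config:</code> block would then contain both the original variables and the prefixed copies (the bucket name and connection id below are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>config:
  AIRFLOW__CORE__REMOTE_LOGGING: "True"
  AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://my-bucket/logs
  AIRFLOW__CORE__REMOTE_LOG_CONN_ID: MyS3Conn
  # copies passed through to the worker pods spawned by the KubernetesExecutor
  AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOGGING: "True"
  AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: s3://my-bucket/logs
  AIRFLOW__KUBERNETES_ENVIRONMENT_VARIABLES__AIRFLOW__CORE__REMOTE_LOG_CONN_ID: MyS3Conn
</code></pre>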
| Jacob Ward |
<p>I have a handful of Dockerized microservices, each listening for HTTP requests on a certain port, and I have these deployments formalized as Kubernetes YAML files.</p>
<p>However, I can't figure out a working strategy to expose my deployments on the internet (in terms of Kubernetes services).</p>
<p>Each deployment has multiple replicas, and so I assume each deployment should have a matching load balancer service to expose it to the outside.</p>
<p>Now I can't figure out a strategy to sanely expose these microservices to the internet... here's what I'm thinking:</p>
<ol>
<li><p><strong>The whole cluster is exposed on a domain name, and services are subdomains</strong></p>

<ul>
<li>Say the cluster is available at <code>k8s.mydomain.com</code></li>
<li>Each load balancer service (which exposes a corresponding microservice) should be accessible by a subdomain
<ul>
<li><code>auth-server.k8s.mydomain.com</code></li>
<li><code>profile-server.k8s.mydomain.com</code></li>
<li><code>questions-board.k8s.mydomain.com</code></li>
<li>So requests to each subdomain would be load balanced to the replicas of the matching deployment</li>
</ul></li>
<li>So how do I actually achieve this setup? Is this desirable?
<ul>
<li>Can I expose each load balancer as a subdomain? Is this done automatically?</li>
<li>Or do I need an ingress controller?</li>
<li>Am I barking up the wrong tree?</li>
<li>I'm looking for general advice on how to expose a single app which is a mosaic of microservices</li>
</ul></li>
</ul></li>
<li><p><strong>Each service is exposed on the same IP/domain, but each gets its own port</strong></p>

<ul>
<li>Perhaps the whole cluster is accessible at <code>k8s.mydomain.com</code> again</li>
<li>Can I map each port to a different load balancer?
<ul>
<li><code>k8s.mydomain.com:8000</code> maps to <code>auth-server-loadbalancer</code></li>
<li><code>k8s.mydomain.com:8001</code> maps to <code>profile-server-loadbalancer</code></li>
</ul></li>
<li>Is this possible? It seems less robust and less desirable than strategy 1 above</li>
</ul></li>
<li><p><strong>Each service is exposed on its own IP/domain?</strong></p>

<ul>
<li>Perhaps each service specifies a static IP, and my domain has A records pointing each subdomain at each of these IPs in a manual way?</li>
<li>How do I know which static IPs to use? In production? In local dev?</li>
</ul></li>
</ol>
<p>Maybe I'm conceptualizing this wrong? Can a whole Kubernetes cluster map to one IP/domain?</p>
<p>What's the simplest way to expose a bunch of microservices in Kubernetes? On the other hand, what's the most robust/ideal way to expose microservices in production? Do I need a different strategy for local development in minikube? (I was just going to edit <code>/etc/hosts</code> a lot.)</p>
<p>Thanks for any advice, cheers.</p>
| ChaseMoskal | <p>Use an ingress:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress</a></p>
<p>With an ingress, you can assign subdomains to different services, or you can serve all the services under different context roots with some url rewriting.</p>
<p>I don't suggest exposing services using different ports. Nonstandard ports have other problems.</p>
| Burak Serdar |
<p>I am trying to understand the Node-Controller in Kubernetes. Kubernetes <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#heartbeats" rel="nofollow noreferrer">documentation</a> mentions that node heartbeats are done using NodeStatus and LeaseObject updates. Someone, please explain why both mechanisms are needed for monitoring node health.
Does the Kubernetes master internally use a Job/CronJob for node health check processing?</p>
| Kiran | <p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#lease-v1-coordination-k8s-io" rel="nofollow noreferrer">Lease</a> is a lightweight resource, which improves the performance of the node heartbeats as the cluster scales.</p>
<p>The Lease objects are tracked as a way of helping the heartbeats to continue functioning efficiently as a cluster scales up. According to the docs, this would be their primary function relating to heartbeats.</p>
<p>While <code>NodeStatus</code> is used by the kubelet for heartbeats, it is also an important signal for other controllers in k8s.</p>
<p>For example: the k8s scheduler is responsible for scheduling pods on nodes. It tries to find the best fit for a node to optimize memory, CPU, and other usage on the node. It would not, however, want to schedule a pod on a node whose <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-status" rel="nofollow noreferrer">node status</a> condition is set to <code>NetworkUnavailable: true</code>, or that has some other condition which would make the pod unsuitable to run on that node.</p>
<p>If there is a signal or signals that you don't know or understand, there is a good chance there is a controller that uses that field or signal to accomplish its logic.</p>
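<p>If you want to see both signals on a running cluster, a quick way to inspect them is (the node name is a placeholder):</p>
<pre class="lang-sh prettyprint-override"><code># one Lease object per node, renewed by its kubelet
kubectl get leases -n kube-node-lease
# the NodeStatus conditions (Ready, NetworkUnavailable, MemoryPressure, ...)
kubectl get node <node-name> -o jsonpath='{.status.conditions}'
</code></pre>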
<p><strong>EDIT:</strong></p>
<p>The node-controller is a part of the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">kube-controller-manager</a>:</p>
<blockquote>
<p>The Kubernetes controller manager is a daemon that embeds the core
control loops shipped with Kubernetes. In applications of robotics and
automation, a control loop is a non-terminating loop that regulates
the state of the system. In Kubernetes, a controller is a control loop
that watches the shared state of the cluster through the apiserver and
makes changes attempting to move the current state towards the desired
state. Examples of controllers that ship with Kubernetes today are the
replication controller, endpoints controller, namespace controller,
and serviceaccounts controller.</p>
</blockquote>
<p>Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.</p>
<p><strong>EDIT_2</strong>:</p>
<p>Based on your latest comments, we have 2 additional points to address:</p>
<blockquote>
<ol>
<li>"how the node-controller processes the node health check"</li>
</ol>
</blockquote>
<p>While implementing k8s, you probably don't need to know this level of detail. All the details which should be useful for you are already in the <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager" rel="nofollow noreferrer">linked public docs</a>. There is no need to worry about that but I understand that it brought the more practical question:</p>
<blockquote>
<ol start="2">
<li>I am not sure how much load a big cluster can generate.</li>
</ol>
</blockquote>
<p>This is where the <a href="https://kubernetes.io/docs/setup/best-practices/cluster-large/" rel="nofollow noreferrer">Considerations for large clusters</a> comes to help. It will show you how to handle big clusters and which tools are there at your disposal when it comes to managing them.</p>
| Wytrzymały Wiktor |
<p>I have been trying to debug a very odd delay in my K8S deployments. I have tracked it down to the simple reproduction below. What it appears is that if I set an initialDelaySeconds on a startup probe, or leave it at 0 and have a single failure, then the probe doesn't get run again for a while and the pod ends up with at least a 1-1.5 minute delay before getting into the Ready:true state.</p>
<p>I am running locally with Ubuntu 18.04 and microk8s v1.19.3 with the following versions:</p>
<ul>
<li>kubelet: v1.19.3-34+a56971609ff35a</li>
<li>kube-proxy: v1.19.3-34+a56971609ff35a</li>
<li>containerd://1.3.7</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: microbot
name: microbot
spec:
replicas: 1
selector:
matchLabels:
app: microbot
strategy: {}
template:
metadata:
labels:
app: microbot
spec:
containers:
- image: cdkbot/microbot-amd64
name: microbot
command: ["/bin/sh"]
args: ["-c", "sleep 3; /start_nginx.sh"]
#args: ["-c", "/start_nginx.sh"]
ports:
- containerPort: 80
startupProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 0 # 5 also has same issue
periodSeconds: 1
failureThreshold: 10
successThreshold: 1
##livenessProbe:
## httpGet:
## path: /
## port: 80
## initialDelaySeconds: 0
## periodSeconds: 10
## failureThreshold: 1
resources: {}
restartPolicy: Always
serviceAccountName: ""
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: microbot
labels:
app: microbot
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: microbot
</code></pre>
<p>The issue is that if I have any delay in the startupProbe, or if there is an initial failure, the pod gets into the Initialized:true state but has Ready:False and ContainersReady:False. It will not change from this state for 1-1.5 minutes. I haven't found a pattern in the settings.</p>
<p>I left in the comment out settings as well so you can see what I am trying to get to here. What I have is a container starting up that has a service that will take a few seconds to get started. I want to tell the startupProbe to wait a little bit and then check every second to see if we are ready to go. The configuration seems to work, but there is a baked in delay that I can't track down. Even after the startup probe is passing, it does not transition the pod to Ready for more than a minute.</p>
<p>Is there some setting elsewhere in k8s that is delaying the amount of time before a Pod can move into Ready if it isn't Ready initially?</p>
<p>Any ideas are greatly appreciated.</p>
| Allen | <p>Actually I made a mistake in comments, you can use <code>initialDelaySeconds</code> in startupProbe, but you should rather use <code>failureThreshold</code> and <code>periodSeconds</code> instead.</p>
<hr />
<p>As mentioned <a href="https://medium.com/dev-genius/understanding-kubernetes-probes-5daaff67599a" rel="noreferrer">here</a></p>
<h2>Kubernetes Probes</h2>
<blockquote>
<p>Kubernetes supports readiness and liveness probes for versions ≤ 1.15. Startup probes were added in 1.16 as an alpha feature and graduated to beta in 1.18 (WARNING: 1.16 deprecated several Kubernetes APIs. Use this migration guide to check for compatibility).
All the probe have the following parameters:</p>
<ul>
<li>initialDelaySeconds : number of seconds to wait before initiating
liveness or readiness probes</li>
<li>periodSeconds: how often to check the probe</li>
<li>timeoutSeconds: number of seconds before marking the probe as timing
out (failing the health check)</li>
<li>successThreshold : minimum number of consecutive successful checks
for the probe to pass</li>
<li>failureThreshold : number of retries before marking the probe as
failed. For liveness probes, this will lead to the pod restarting.
For readiness probes, this will mark the pod as unready.</li>
</ul>
</blockquote>
<p>So why should you use <code>failureThreshold</code> and <code>periodSeconds</code>?</p>
<blockquote>
<p>consider an application where it occasionally needs to download large amounts of data or do an expensive operation at the start of the process. Since <strong>initialDelaySeconds</strong> is a static number, we are forced to always take the worst-case scenario (or extend the failureThreshold that may affect long-running behavior) and wait for a long time even when that application does not need to carry out long-running initialization steps. With startup probes, we can instead configure <strong>failureThreshold</strong> and <strong>periodSeconds</strong> to model this uncertainty better. For example, setting <strong>failureThreshold</strong> to 15 and <strong>periodSeconds</strong> to 5 means the application will get 15 (fifteen) x 5 (five) = 75s to startup before it fails.</p>
</blockquote>
<p>Additionally if you need more informations take a look at this <a href="https://medium.com/swlh/fantastic-probes-and-how-to-configure-them-fef7e030bd2f" rel="noreferrer">article</a> on medium.</p>
<hr />
<p>Quoted from kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/?fbclid=IwAR2oddWIe4ElP_R8MFw5YIqdNABmQCYZfwhG57pEfP_KAt9kgPILm12hMI0#define-startup-probes" rel="noreferrer">documentation</a> about <strong>Protect slow starting containers with startup probes</strong></p>
<blockquote>
<p>Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe. The trick is to set up a startup probe with the same command, HTTP or TCP check, with a failureThreshold * periodSeconds long enough to cover the worse case startup time.</p>
<p>So, the previous example would become:</p>
</blockquote>
<pre><code>ports:
- name: liveness-port
containerPort: 8080
hostPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 1
periodSeconds: 10
startupProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 30
periodSeconds: 10
</code></pre>
<blockquote>
<p>Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provide a fast response to container deadlocks. If the startup probe never succeeds, the container is killed after 300s and subject to the pod's restartPolicy.</p>
</blockquote>
| Jakub |
<p>I have a 3-node Kubernetes cluster with 2 Linux virtual machines (1 master & 1 worker) and 1 Windows Server 2019 Core virtual machine. I tried to deploy a Windows application on it, but it gives the error <code>network: read /run/flannel/subnet.env: The handle is invalid.</code></p>
<p>I tried by doing this: </p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>I can deploy Linux applications; the only problem is with Windows applications. Pods are scheduled to the Windows worker node but none of them are running; all are stuck in the ContainerCreating state. When I check the logs, the above is the error for every pod.</p>
<p>Below is the output of <code>.\flanneld.exe</code> on the Windows VM:</p>
<pre><code>I0410 15:07:01.699217 11704 main.go:514] Determining IP address of default interface
I0410 15:07:02.023840 11704 main.go:527] Using interface with name vEthernet (Ethernet0) and address <IP Address>
I0410 15:07:02.023840 11704 main.go:544] Defaulting external address to interface address (<IP Address>)
E0410 15:07:02.026800 11704 main.go:605] Couldn't fetch previous FLANNEL_SUBNET from subnet file at /run/flannel/subnet.env: read /run/flannel/subnet.env: The handle is invalid.
I0410 15:07:02.026800 11704 main.go:244] Created subnet manager: Etcd Local Manager with Previous Subnet: None
I0410 15:07:02.027804 11704 main.go:247] Installing signal handlers
E0410 15:07:04.034674 11704 main.go:382] Couldn't fetch network config: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:2379: connectex: No connection could be made because the target machine actively refused it. ; error #1: dial tcp 127.0.0.1:4001: i/o timeout timed out
E0410 15:08:14.027848 11704 main.go:382] Couldn't fetch network config: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:2379: i/o timeout ; error #1: dial tcp 127.0.0.1:4001: i/o timeout timed out
E0410 15:08:17.053635 11704 main.go:382] Couldn't fetch network config: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:2379: i/o timeout ; error #1: dial tcp 127.0.0.1:4001: i/o timeout
</code></pre>
<p>Contents of <code>C:\run\flannel\subnet.env</code>:</p>
<pre><code>FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
</code></pre>
 | Priyanka | <p>I resolved it by redeploying flannel on Windows. When I first deployed it, /run/flannel/subnet.env was not there, so I manually created /run/flannel/subnet.env and put in the info I mentioned above. Later, when I tried to deploy the Windows application on the cluster again, I got an error that the flanneld service was misconfigured, so I redeployed flannel.</p>
| Priyanka |
<p>I am new to Kubernetes and experimenting with volumes. I have a Docker image which declares 2 volumes, as in:</p>
<pre><code>VOLUME ["/db/mongo/data" , "/db/mongo/log"]
</code></pre>
<p>I am using a StatefulSet, wherein I have 2 volume mounts, as in:</p>
<pre><code>volumeMounts:
- name: mongo-vol
mountPath: << path1 >>
subPath: data
- name: mongo-vol
mountPath: << path2 >>
subPath: log
</code></pre>
<p>My question is: i) should path1 & path2 be set to "/db/mongo/data" and "/db/mongo/log" respectively?</p>
<p>ii) Or can they be any paths where the volumes would be mounted inside the container, with the "/db/mongo/data" & "/db/mongo/log" container paths automatically mapped to those mount points?</p>
<p>I tried reading up the documentation and tried both options but some confusion still remains. Appreciate some help here.</p>
 | Rajesh | <p>Both of your volume mounts reference the same volume <code>mongo-vol</code>. That tells me this is a single volume containing the <code>data</code> and <code>log</code> directories. You should use <code>/db/mongo/log</code> and <code>/db/mongo/data</code> as your <code>mountPath</code>s, and specify the <code>subPath</code> as <code>log</code> and <code>data</code> respectively. That will mount the volume referenced by <code>mongo-vol</code>, and map the <code>data</code> and <code>log</code> directories of that volume onto those container paths.</p>
<p>If you had two separate volumes, a <code>mongo-data</code> and a <code>mongo-log</code>, then you would mount them the same way, without the <code>subPath</code>, because you would not be referencing sub-directories inside a single volume.</p>
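<p>Concretely, a sketch of the mounts for your StatefulSet, using the volume paths declared in the image, would be:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeMounts:
- name: mongo-vol
  mountPath: /db/mongo/data   # path the image expects for data
  subPath: data               # sub-directory inside the shared volume
- name: mongo-vol
  mountPath: /db/mongo/log    # path the image expects for logs
  subPath: log
</code></pre>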
| Burak Serdar |
<p>I would like to add a vault to my Kubernetes cluster to store JWTs, database passwords, etc.</p>
<p>I'm using Vault from HashiCorp and I followed this documentation: <a href="https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-secret-store-driver" rel="nofollow noreferrer">https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-secret-store-driver</a></p>
<p>My Secret Provider Class and mu ServiceAccount look like :</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ServiceAccount
apiVersion: v1
metadata:
name: application-sa
namespace: application-dev
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: application-vault-database
namespace: application-dev
spec:
provider: vault
secretObjects:
- data:
- key: password
objectName: db-password
secretName: dbpass
type: Opaque
parameters:
vaultAddress: "https://127.0.0.1:8200"
roleName: "database"
objects: |
- objectName: "db-password"
secretPath: "secret/data/db-pass"
secretKey: "password"
</code></pre>
<p>and my postgresql database deployment looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: application-postgresql-pvc
namespace: application-dev
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: application-postgresql
namespace: application-dev
spec:
replicas: 1
selector:
matchLabels:
app: application-postgresql
template:
metadata:
labels:
app: application-postgresql
spec:
serviceAccountName: application-sa
volumes:
- name: data
persistentVolumeClaim:
claimName: application-postgresql-pvc
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "application-vault-database"
containers:
- name: postgres
image: postgres:14.5
env:
- name: POSTGRES_USER
value: application
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dbpass
key: password
ports:
- containerPort: 5432
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets-store"
readOnly: true
- name: data
mountPath: /var/lib/postgresql/data
subPath: postgres
resources:
requests:
memory: '512Mi'
cpu: '500m'
limits:
memory: '1Gi'
cpu: '1'
---
apiVersion: v1
kind: Service
metadata:
name: application-postgresql
namespace: application-dev
spec:
selector:
app: application-postgresql
ports:
- port: 5432
</code></pre>
<p>But I'm getting the following error when I start my database pod:</p>
<pre><code>MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod application-dev/application-postgresql-7db74cf6b-8b2q4, err: rpc error: code = Unknown desc = error making mount request: couldn't read secret "db-password": failed to login: Post "https://127.0.0.1:8200/v1/auth/kubernetes/login": dial tcp 127.0.0.1:8200: connect: connection refused
</code></pre>
<p><strong>What I tried:</strong></p>
<p>Regarding my Kubernetes config I have:</p>
<p><code>kubectl config view</code></p>
<pre class="lang-bash prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://xxx.api.k8s.fr-par.scw.cloud:6443
name: k8s-application
contexts:
- context:
cluster: k8s-application
user: k8s-application-admin
name: admin@k8s-application
current-context: admin@k8s-application
kind: Config
preferences: {}
users:
- name: k8s-application-admin
user:
token: REDACTED
</code></pre>
<p>the server is: <a href="https://xxx.api.k8s.fr-par.scw.cloud:6443" rel="nofollow noreferrer">https://xxx.api.k8s.fr-par.scw.cloud:6443</a></p>
<p>So I assumed that I had to change my vault kubernetes config to :</p>
<pre class="lang-bash prettyprint-override"><code>vault write auth/kubernetes/config \
> kubernetes_host="https://xxx.api.k8s.fr-par.scw.cloud:6443"
</code></pre>
<p>instead of <code>$KUBERNETES_PORT_443_TCP_ADDR</code></p>
<p>For info <code>$KUBERNETES_PORT_443_TCP_ADDR</code> is <code>10.32.0.1</code></p>
<p>I also tried setting the <code>vaultAddress</code> in the SPC to "http://vault.default:8200" like in the documentation.</p>
<p>Then I got: <code>Post "http://vault.default:8200/v1/auth/kubernetes/login": dial tcp: lookup vault.default on 10.32.0.10:53: no such host</code></p>
<p>So I guess the connection refused with the original config means that the host "https://127.0.0.1:8200" is correct, but that something is wrong with the Kubernetes auth?</p>
<p>What do you think?</p>
<p>Regards</p>
 | Benjamin Barbé | <p>Thanks to @Srishti Khandelwal.</p>
<p>I needed to run <code>kubectl get service -n namespace</code></p>
<p>and use the service name in my config:</p>
<p><code>http://vault-service-name.namespace:port</code></p>
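<p>For example, if the Helm release created a service called <code>vault</code> in the <code>vault</code> namespace (an assumption; check the output of the command above), the SecretProviderClass would point at:</p>
<pre class="lang-yaml prettyprint-override"><code># in the SecretProviderClass
parameters:
  vaultAddress: "http://vault.vault:8200"   # <service>.<namespace>:<port>
  roleName: "database"
</code></pre>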
| Benjamin Barbé |
<p>I have multiple applications that run using Node.js and pg (node-postgres).</p>
<p>The issue I have is that every app is getting the error "Connection terminated unexpectedly" every hour. Here is the error:</p>
<pre><code>> node ./dist/app.js
App Started
events.js:174
throw er; // Unhandled 'error' event
^
Error: Connection terminated unexpectedly
at Connection.con.once (/app/node_modules/pg/lib/client.js:255:9)
at Object.onceWrapper (events.js:286:20)
at Connection.emit (events.js:198:13)
at Socket.<anonymous> (/app/node_modules/pg/lib/connection.js:139:10)
at Socket.emit (events.js:203:15)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
Emitted 'error' event at:
at connectedErrorHandler (/app/node_modules/pg/lib/client.js:202:10)
at Connection.con.once (/app/node_modules/pg/lib/client.js:272:9)
at Object.onceWrapper (events.js:286:20)
[... lines matching original stack trace ...]
at process._tickCallback (internal/process/next_tick.js:63:19)
</code></pre>
<p>Here is how i connect my Client to the database :</p>
<p>Database.ts:</p>
<pre><code>import { Client, QueryResult } from 'pg';
export default class DatabaseModule {
private client: Client;
constructor() {
this.client = new Client({
connectionString: process.env.DATABASE_URL
});
}
public init(): Promise<any> {
return this.client.connect();
}
}
</code></pre>
<p>app.ts:</p>
<pre><code>Promise.all([
express.init(),
database.init()
])
.then(() => {
console.log("App Started");
[load routes...];
})
.catch((error) => {
console.error(error);
process.exit(1);
});
</code></pre>
<p>All works fine locally but not in production.</p>
<p>In production we are running every app as a microservice in Google Kubernetes Engine. Is there any config in K8s that may cause this connection loss every hour? (Whether the Client is idle or not, this error happens.)</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-service-57c9f99767-wnm47 2/2 Running 96 4d
</code></pre>
<p>As you can see, my app has 96 restarts: 4 days * 24 hours = 96 => an error every hour that crashes the pod.</p>
<p>We are using a PostgreSQL server hosted by Google Cloud SQL, and every app in K8s has access to it via a local address.</p>
<p>EDIT:</p>
<p>I just found this in the Google Cloud SQL documentation :
<code>WebSockets are always available to your application without any additional setup. Once a WebSockets connection is established, it will time out after one hour.</code></p>
<p>So the error was generated by the usage of pg.Client with a persistent connection to the SQL server. I will try to use pg.Pool().
Here is the explanation of why I should use a pool instead of a client: <a href="https://stackoverflow.com/a/48751665/12052533">https://stackoverflow.com/a/48751665/12052533</a></p>
| mmoussa | <p>I found the problem :</p>
<p>In the Google Cloud SQL documentation : <code>WebSockets are always available to your application without any additional setup. Once a WebSockets connection is established, it will time out after one hour.</code></p>
<p>The error was generated by the usage of pg.Client() because I had a persistent connection to my database, which is a bad practice. A client should connect to the database and then end its connection after it has finished executing a query.</p>
<p>I will use pg.Pool() as it hands out clients on demand and is better suited for multiple requests.
After acquiring a client I just have to release it again.</p>
<p>I removed the database.init() call and modified the database.query() function as follows:</p>
<pre class="lang-js prettyprint-override"><code>    // Acquires a client from the pool, runs the query and returns the result.
    // The acquired client is a PoolClient (from 'pg'), which exposes release().
    public query(command: string, args?: Array<any>): Promise<QueryResult> {
        if (args === undefined)
            args = [];
        return this.pool.connect()
            .then((client: PoolClient) => {
                return this.queryClient(client, command, args);
            });
    }

    // Runs the query on the acquired client and always releases it back to the
    // pool, whether the query succeeds or fails.
    private queryClient(client: PoolClient, command: string, args?: Array<any>): Promise<QueryResult> {
        return client.query(command, args)
            .then((result: QueryResult) => {
                client.release();
                return result;
            }).catch((error) => {
                client.release();
                throw error;
            });
    }
</code></pre>
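<p>For context, a minimal sketch of how the pool itself could replace the old client in the same module (same <code>DATABASE_URL</code> environment variable as before; this is an illustration, not the exact original code):</p>
<pre class="lang-js prettyprint-override"><code>import { Pool, PoolClient, QueryResult } from 'pg';

export default class DatabaseModule {
    private pool: Pool;

    constructor() {
        // The pool opens connections lazily and hands them back after each query,
        // so no single connection stays open long enough to hit the one hour timeout.
        this.pool = new Pool({
            connectionString: process.env.DATABASE_URL
        });
    }
}
</code></pre>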
| mmoussa |
<p>I am trying to rate limit the number of gRPC connections based on a token included in the Authorization header. I tried the following settings in the Nginx ConfigMap and Ingress annotations, but Nginx rate limiting is not working.</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-ingress-controller
namespace: default
data:
http-snippet: |
limit_req_zone $http_authorization zone=zone-1:20m rate=10r/m;
limit_req_zone $http_token zone=zone-2:20m rate=10r/m;
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: GRPC
nginx.ingress.kubernetes.io/configuration-snippet: |
limit_req zone=zone-1;
limit_req_log_level notice;
limit_req_status 429;
</code></pre>
<p>I am trying to have the Nginx Ingress Controller rate limit the gRPC/HTTP2 stream connections based on the value of the $http_authorization variable. I have modified the Nginx log_format to log the $http_authorization value and can observe that Nginx receives the value. The problem I am facing is that, for some reason, the rate limiting rule doesn't get triggered.</p>
<p>Is this the correct approach?</p>
<p>Any help and feedback would be much appreciated!</p>
<p>Thanks</p>
| Bobby H | <p>Hello Bobby_H and welcome to Stack Overflow!</p>
<p>When using Nginx Ingress on Kubernetes you can set up your rate limits with <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting" rel="nofollow noreferrer">these annotations</a>:</p>
<blockquote>
<ul>
<li><p><code>nginx.ingress.kubernetes.io/limit-connections:</code> number of concurrent connections allowed from a single IP address. A 503 error
is returned when exceeding this limit.</p>
</li>
<li><p><code>nginx.ingress.kubernetes.io/limit-rps:</code> number of requests accepted from a given IP each second. The burst limit is set to this limit
multiplied by the burst multiplier, the default multiplier is 5. When
clients exceed this limit, limit-req-status-code default: 503 is
returned.</p>
</li>
<li><p><code>nginx.ingress.kubernetes.io/limit-rpm:</code> number of requests accepted from a given IP each minute. The burst limit is set to this limit
multiplied by the burst multiplier, the default multiplier is 5. When
clients exceed this limit, limit-req-status-code default: 503 is
returned.</p>
</li>
<li><p><code>nginx.ingress.kubernetes.io/limit-burst-multiplier:</code> multiplier of the limit rate for burst size. The default burst multiplier is 5, this
annotation override the default multiplier. When clients exceed this
limit, limit-req-status-code default: 503 is returned.</p>
</li>
<li><p><code>nginx.ingress.kubernetes.io/limit-rate-after:</code> initial number of kilobytes after which the further transmission of a response to a
given connection will be rate limited. This feature must be used with
proxy-buffering enabled.</p>
</li>
<li><p><code>nginx.ingress.kubernetes.io/limit-rate:</code> number of kilobytes per second allowed to send to a given connection. The zero value disables
rate limiting. This feature must be used with proxy-buffering enabled.</p>
</li>
<li><p><code>nginx.ingress.kubernetes.io/limit-whitelist:</code> client IP source ranges to be excluded from rate-limiting. The value is a comma
separated list of CIDRs.</p>
</li>
</ul>
</blockquote>
<p>Nginx implements the <a href="https://en.wikipedia.org/wiki/Leaky_bucket" rel="nofollow noreferrer">leaky bucket</a> algorithm, where incoming requests are buffered in a FIFO queue, and then consumed at a limited rate. The burst value defines the size of the queue, which allows an exceeding number of requests to be served beyond the base limit. When the queue becomes full, the following requests will be rejected with an error code returned.</p>
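<p>As a sketch, the annotation-based approach applied to your gRPC Ingress could look like this (names and limit values are assumptions; note that these annotations key on the client IP, not on the <code>Authorization</code> header, so they only approximate what you are after):</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
  name: grpc-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/limit-rpm: "10"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "1"
</code></pre>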
<p><a href="http://nginx.org/en/docs/http/ngx_http_limit_req_module.html" rel="nofollow noreferrer">Here</a> you will find all important parameters to configure your rate limiting.</p>
<p>The number of expected successful requests can be calculated like this:</p>
<pre><code>successful requests = (period * rate + burst) * nginx replica
</code></pre>
<p>so it is important to notice that the number of nginx replicas will also multiply the number of successful requests. Also, notice that Nginx ingress controller sets burst value at 5 times the limit. You can check those parameters at <code>nginx.conf</code> after setting up your desired annotations. For example:</p>
<pre><code>limit_req_zone $limit_cmRfaW5ncmVzcy1yZC1oZWxsby1sZWdhY3k zone=ingress-hello-world_rps:5m rate=5r/s;
limit_req zone=ingress-hello-world_rps burst=25 nodelay;
limit_req_zone $limit_cmRfaW5ncmVzcy1yZC1oZWxsby1sZWdhY3k zone=ingress-hello-world_rpm:5m rate=300r/m;
limit_req zone=ingress-hello-world_rpm burst=1500 nodelay;
</code></pre>
<p>There are two limitations that I would also like to underline:</p>
<ul>
<li><p>Requests are counted by client IP, which might not be accurate, or not fit your business needs such as rate-limiting by user identity.</p>
</li>
<li><p>Options like burst and delay are not configurable.</p>
</li>
</ul>
<p>I strongly recommend going through the sources below for a more in-depth explanation of this topic:</p>
<ul>
<li><p><a href="https://www.freecodecamp.org/news/nginx-rate-limiting-in-a-nutshell-128fe9e0126c/" rel="nofollow noreferrer">NGINX rate-limiting in a nutshell</a></p>
</li>
<li><p><a href="https://www.nginx.com/blog/rate-limiting-nginx/" rel="nofollow noreferrer">Rate Limiting with NGINX and NGINX Plus</a></p>
</li>
</ul>
| Wytrzymały Wiktor |
<p>How can I set a fixed port for a NodePort service? I need to validate the incoming port inside the pod, so the same port has to be preserved when the traffic is forwarded to the pod. Currently a random port and node IP are picked when the packet is sent from the service into the pod.</p>
| tuban | <p>You can specify a fixed port for the nodeport when you define it:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a></p>
<blockquote>
<p>If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that’s inside the range configured for NodePort use.</p>
</blockquote>
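<p>A minimal sketch of such a Service (names and ports are placeholders; the chosen nodePort must fall inside the configured range, 30000-32767 by default):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # cluster-internal port of the Service
    targetPort: 8080  # container port in the pod
    nodePort: 30007   # fixed port opened on every node
</code></pre>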
| Burak Serdar |
<p>I have a newly created AWS EKS 1.18 cluster with applications deployed on it. Everything is fine: tests and load tests are successful, and my HPA and metrics-server are working fine.</p>
<p>But when I deploy a new version of a service, metrics-server gives
<code>unable to fetch pod metrics for pod xxx: no metrics known for pod</code> for the newly deployed pod; after a short while the problem resolves itself and everything is fine again.</p>
<p>My question is, is this an expected behaviour for metrics-server? Or should I check my configs again?</p>
<p>Thank you very much.</p>
| Oguzhan Aygun | <p>There is a <a href="https://github.com/kubernetes-sigs/metrics-server/issues/299#issuecomment-550030920" rel="nofollow noreferrer">comment</a> about that on github:</p>
<blockquote>
<p>Metrics Server is expected to report "no metrics known for pod" until cache will be populated. Cache can be empty on freshly deployed metrics-server or can miss values for newly deployed pods.</p>
</blockquote>
<p>So if I understand correctly it's working as <strong>expected</strong>. I assume this problem is being solved after 60s as by <a href="https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-often-metrics-are-scraped" rel="nofollow noreferrer">default</a> metrics are scraped every 60s.</p>
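<p>If you want to confirm or tune that interval, the scrape period is controlled by the <code>--metric-resolution</code> flag of metrics-server. A quick way to inspect the currently configured arguments (the deployment name and namespace below are the usual defaults and may differ in your setup):</p>
<pre><code>kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
</code></pre>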
| Jakub |
<p>I was trying to uninstall and reinstall Istio from k8s cluster following the steps:
<img src="https://user-images.githubusercontent.com/10824322/101385666-b85d7000-3881-11eb-932f-da65955eb0dd.png" alt="image" /></p>
<p>But I made the mistake of deleting the namespace before deleting the istio-control-plane with <code>kubectl delete istiooperator istio-control-plane -n istio-system</code>. Then, when I tried to delete the <code>istio-control-plane</code> again, it froze.</p>
<p>I tried to remove the finalizer using the following steps but it said <code>Error from server (NotFound): istiooperators.install.istio.io "istio-control-plane" not found</code></p>
<pre><code>kubectl get istiooperator -n istio-system -o json > output.json
nano output.json # and remove finalizer
kubectl replace --raw "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane/finalize" -f output.json
</code></pre>
<p>Here is the content of <code>kubectl get istiooperator -n istio-system -o json</code>:</p>
<pre><code>{
"apiVersion": "v1",
"items": [
{
"apiVersion": "install.istio.io/v1alpha1",
"kind": "IstioOperator",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"install.istio.io/v1alpha1\",\"kind\":\"IstioOperator\",\"metadata\":{\"annotations\":{},\"name\":\"istio-control-plane\",\"namespace\":\"istio-system\"},\"spec\":{\"addonComponents\":{\"prometheus\":{\"enabled\":false},\"tracing\":{\"enabled\":false}},\"hub\":\"hub.docker.prod.walmart.com/istio\",\"profile\":\"default\",\"values\":{\"global\":{\"defaultNodeSelector\":{\"beta.kubernetes.io/os\":\"linux\"}}}}}\n"
},
"creationTimestamp": "2020-12-05T23:39:34Z",
"deletionGracePeriodSeconds": 0,
"deletionTimestamp": "2020-12-07T16:41:41Z",
"finalizers": [
],
"generation": 2,
"name": "istio-control-plane",
"namespace": "istio-system",
"resourceVersion": "11750055",
"selfLink": "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane",
"uid": "fda8ee4f-54e7-45e8-91ec-c328fad1a86f"
},
"spec": {
"addonComponents": {
"prometheus": {
"enabled": false
},
"tracing": {
"enabled": false
}
},
"hub": "hub.docker.prod.walmart.com/istio",
"profile": "default",
"values": {
"global": {
"defaultNodeSelector": {
"beta.kubernetes.io/os": "linux"
}
}
}
},
"status": {
"componentStatus": {
"Base": {
"status": "HEALTHY"
},
"IngressGateways": {
"status": "HEALTHY"
},
"Pilot": {
"status": "HEALTHY"
}
},
"status": "HEALTHY"
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
</code></pre>
<p>Any ideas on how I can uninstall <code>istio-control-plane</code> manually?</p>
| snowneji | <p>You can use below command to change istio operator finalizer and delete it, it's a <code>jq/kubectl</code> oneliner made by @Rico <a href="https://stackoverflow.com/a/52825601/11977760">here</a>. I have tried also with <code>kubectl patch</code> but it didn't work.</p>
<pre><code>kubectl get istiooperator -n istio-system istio-control-plane -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
</code></pre>
<p>Additionally I have used <code>istioctl operator remove</code></p>
<pre><code>istioctl operator remove
Removing Istio operator...
Removed Deployment:istio-operator:istio-operator.
Removed Service:istio-operator:istio-operator.
Removed ServiceAccount:istio-operator:istio-operator.
Removed ClusterRole::istio-operator.
Removed ClusterRoleBinding::istio-operator.
✔ Removal complete
</code></pre>
<p>Results from <code>kubectl get</code></p>
<pre><code>kubectl get istiooperator istio-control-plane -n istio-system
Error from server (NotFound): namespaces "istio-system" not found
</code></pre>
| Jakub |
<p>When I am not connected to the VPN, minikube is starting as expected:</p>
<pre><code>PS C:\Windows\system32> minikube start
* minikube v1.9.2 on Microsoft Windows 10 Enterprise 10.0.18363 Build 18363
* Using the hyperv driver based on existing profile
* Starting control plane node m01 in cluster minikube
* Updating the running hyperv "minikube" VM ...
* Preparing Kubernetes v1.18.0 on Docker 19.03.8 ...
E0408 01:00:31.223159 17528 kubeadm.go:331] Overriding stale ClientConfig host https://192.168.137.249:8443 with https://172.17.118.34:8443
* Enabling addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
</code></pre>
<p>Once I connect to the VPN and try to start minikube, it fails with an error:</p>
<pre><code>PS C:\Windows\system32> minikube start
* minikube v1.9.2 on Microsoft Windows 10 Enterprise 10.0.18363 Build 18363
* Using the hyperv driver based on existing profile
* Starting control plane node m01 in cluster minikube
* Updating the running hyperv "minikube" VM ...
! StartHost failed, but will try again: provision: IP not found
* Updating the running hyperv "minikube" VM ...
*
X Failed to start hyperv VM. "minikube start" may fix it.: provision: IP not found
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
- https://github.com/kubernetes/minikube/issues/new/choose
</code></pre>
| Andrei Maimas | <p>I was facing a similar issue while running the following command:</p>
<pre><code>minikube start --vm-driver="hyperv" --hyperv-virtual-switch="minikube"
</code></pre>
<p>Then I went through some github and stackoverflow threads and was able to resolve my issue by running the following commands:</p>
<pre><code>minikube delete
minikube start --vm-driver="hyperv"
</code></pre>
<p>In my case, passing the 'hyperv-virtual-switch' argument was causing the issue.</p>
<p>I would suggest you check whether your virtual switch is configured with your VPN network.</p>
<p>If not, then you can perform the following steps to do so:</p>
<ol>
<li>Go to network and sharing center settings of windows:</li>
</ol>
<p><a href="https://i.stack.imgur.com/v3Ivz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v3Ivz.png" alt="network and sharing center" /></a></p>
<ol start="2">
<li>Select your network and open the properties window for it. Go to the "Sharing" tab and allow your configured virtual switch over there.</li>
</ol>
<p><a href="https://i.stack.imgur.com/n6OA5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n6OA5.png" alt="Network properties" /></a></p>
| Manan Prajapati |
<p>My Kubernetes user is not an admin in the cluster, so I cannot create a ClusterRoleBinding for the filebeat service account. I am using autodiscover in filebeat. How can I achieve this without a ClusterRole?</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: logging
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
data:
filebeat.yml: |-
setup.dashboards.enabled: false
setup.template.enabled: true
setup.template.settings:
index.number_of_shards: 1
filebeat.modules:
- module: system
syslog:
enabled: true
#var.paths: ["/var/log/syslog"]
auth:
enabled: true
#var.paths: ["/var/log/authlog"]
filebeat.autodiscover:
providers:
- type: kubernetes
templates:
- condition:
equals:
kubernetes.namespace: microsrv-test
config:
- type: docker
json.keys_under_root: true
json.add_error_key: true
json.message_key: log
containers:
ids:
- "${data.kubernetes.container.id}"
processors:
- drop_event:
when.or:
- and:
- regexp:
message: '^\d+\.\d+\.\d+\.\d+ '
- equals:
fileset.name: error
- and:
- not:
regexp:
message: '^\d+\.\d+\.\d+\.\d+ '
- equals:
fileset.name: access
- add_cloud_metadata:
- add_kubernetes_metadata:
- add_docker_metadata:
output.elasticsearch:
hosts: ["elasticsearch:9200"]
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: logging
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.3.2
imagePullPolicy: Always
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: KIBANA_HOST
value: kibana
- name: KIBANA_PORT
value: "5601"
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlog
mountPath: /var/log
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: dockersock
mountPath: /var/run/docker.sock
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: data
emptyDir: {}
---
</code></pre>
<p>Cluster Roles and role bindings</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: logging
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: logging
labels:
k8s-app: filebeat
</code></pre>
<p>I have tried creating a non-cluster Role and RoleBinding as below,</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: logging
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: filebeat
namespace: logging
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: filebeat
namespace: logging
subjects:
- kind: ServiceAccount
name: filebeat
namespace: logging
roleRef:
kind: Role
name: filebeat
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>but I am getting the following error:</p>
<blockquote>
<p>Performing a resource sync err kubernetes api: Failure 403 pods is
forbidden: User "system:serviceaccount:xxxxx:filebeat" cannot list
resource "pods" in API group "" at the cluster scope for *v1.PodList|</p>
</blockquote>
| riteshmaurya | <p>Unfortunately, it will not work the way you want it and the error you are getting indicates it perfectly:</p>
<pre><code>Performing a resource sync err kubernetes api: Failure 403 pods is forbidden: User "system:serviceaccount:xxxxx:filebeat" cannot list resource "pods" in API group "" at the cluster scope for *v1.PodList|
</code></pre>
<p>Notice the most important part which is: <code>at the cluster scope</code>. You could also check whether an action is allowed by executing the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">kubectl auth can-i</a> command. More about that can be found in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">Authorization Overview</a>.</p>
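<p>For example, you could verify what the filebeat service account is (and is not) allowed to do with something along these lines (the namespace and service account names are taken from your manifests; impersonating a service account requires that your own user has impersonation rights):</p>
<pre><code># allowed: listing pods in the namespace the Role/RoleBinding live in
kubectl auth can-i list pods --namespace logging \
  --as=system:serviceaccount:logging:filebeat

# denied: listing pods across all namespaces (cluster scope)
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:logging:filebeat
</code></pre>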
<p>This brings us to the differences between <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">Role and ClusterRole</a>:</p>
<blockquote>
<p>An RBAC <code>Role</code> or <code>ClusterRole</code> contains rules that represent a set of
permissions. Permissions are purely additive (there are no "deny"
rules).</p>
<p>A <code>Role</code> always sets permissions <strong>within a particular namespace</strong>;
when you create a <code>Role</code>, you have to specify the namespace it belongs
in.</p>
<p><code>ClusterRole</code>, by contrast, <strong>is a non-namespaced resource</strong>. The
resources have different names (<code>Role</code> and <code>ClusterRole</code>) because a
Kubernetes object always has to be either namespaced or not
namespaced; it can't be both.</p>
<p><code>ClusterRoles</code> have several uses. You can use a <code>ClusterRole</code> to:</p>
<ul>
<li><p>define permissions on namespaced resources and be granted within individual namespace(s)</p>
</li>
<li><p>define permissions on namespaced resources and be granted across all namespaces</p>
</li>
<li><p>define permissions on cluster-scoped resources</p>
</li>
</ul>
<p>If you want to define a role within a namespace, use a Role; if you
want to define a role cluster-wide, use a <code>ClusterRole</code>.</p>
</blockquote>
<p>And between <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">RoleBinding and ClusterRoleBinding</a>:</p>
<blockquote>
<p>A role binding grants the permissions defined in a role to a user or
set of users. It holds a list of subjects (users, groups, or service
accounts), and a reference to the role being granted. A <code>RoleBinding</code>
grants permissions within a specific namespace whereas a
<code>ClusterRoleBinding</code> grants that access cluster-wide.</p>
<p>A <code>RoleBinding</code> may reference any Role in the same namespace.
Alternatively, a <code>RoleBinding</code> can reference a <code>ClusterRole</code> and bind that
<code>ClusterRole</code> to the namespace of the <code>RoleBinding</code>. If you want to bind a
<code>ClusterRole</code> to all the namespaces in your cluster, you use a
<code>ClusterRoleBinding</code>.</p>
</blockquote>
<p>So it is impossible to get the cluster scope permissions by using <code>Role</code> and <code>RoleBinding</code>.</p>
<p>You will most likely have to ask your Admin to help you solve this issue.</p>
| Wytrzymały Wiktor |
<p>In my firm our Kubernetes cluster was recently updated to 1.22+ and we are using AKS. So I had to change the manifest of our ingress yaml file, which was using networking.k8s.io/v1beta1, to be compliant with the new apiVersion networking.k8s.io/v1.</p>
<p>This is the earlier manifest for the ingress file :</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "amroingress.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
#{{- range .paths }}
#- path: {{ . }}
# backend:
# serviceName: {{ $fullName }}
# servicePort: {{ $svcPort }}
#{{- end }}
- path: /callista/?(.*)
backend:
serviceName: amro-amroingress
servicePort: 8080
{{- end }}
{{- end }}
</code></pre>
<p>and after my changes it looks like this:</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
apiVersion: networking.k8s.io/v1
{{- end }}
kind: Ingress
metadata:
name: {{ include "amroingress.fullname" . }}
labels:
{{- include "amroingress.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: /callista/?(.*)
pathType: Prefix
backend:
service:
name: amro-amroingres
port:
number: 8080
{{- end }}
{{- end }}
</code></pre>
<p>But, after I made the changes and tried to deploy using helm, I receive this error:
<code>Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"</code></p>
<p>I am not sure why this error occurs even though the ingress manifest has changed and I have been stuck at this for a few days now. I am new to kubernetes and ingress in general, any help will be massively appreciated.</p>
| Saurav Saha | <p>The API resources on the control plane are upgraded, but the ones in the Helm-stored release manifest (kept within a Secret resource) are old.</p>
<p>Here is the resolution:</p>
<pre><code>$ helm plugin install https://github.com/helm/helm-mapkubeapis
$ helm mapkubeapis my-release-name --namespace ns
</code></pre>
<p>After this run a <code>helm upgrade</code> again.</p>
| shariqmaws |
<p>We have a service in our cluster that we reach via ssh (test environment etc.). In this container we see different environment variables depending on whether we connect with ssh or with kubectl.</p>
<p>Can someone explain to me what else is set with the kubectl exec command?</p>
<p>As an example a small excerpt from both environments.</p>
<p><strong>kubectl exec: (printenv | grep KU)</strong></p>
<pre><code>KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.4.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.4.0.1
KUBERNETES_SERVICE_HOST=10.4.0.1
KUBERNETES_PORT=tcp://10.4.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
</code></pre>
<p><strong>ssh into the same container: (printenv | grep KU)</strong></p>
<pre><code>dev-xxxxx:~$ printenv | grep KU
dev-xxxxx:~$
</code></pre>
| alexohneander | <p>The <code>kubectl exec</code> command allows you to remotely run arbitrary commands inside an existing container of a pod. <code>kubectl exec</code> isn’t much different from using <code>SSH</code> to execute commands on a remote system. <code>SSH</code> and <code>kubectl</code> should both work well with 99% of CLI applications. The only difference I could find when it comes to environment variables is that:</p>
<ul>
<li><p><code>kubectl</code> will always set the environment variables provided to the container at startup</p>
</li>
<li><p><code>SSH</code> relies mostly on the system login shell configuration (but can also accept user’s environment via <a href="https://superuser.com/questions/48783/how-can-i-pass-an-environment-variable-through-an-ssh-command">PermitUserEnvironment or SendEnv/AcceptEnv</a>)</p>
</li>
</ul>
<p>Answering your question:</p>
<blockquote>
<p>Can someone explain to me what else is set with the kubectl exec
command?</p>
</blockquote>
<p>They should produce the same output (assuming that you have typed both commands correctly and executed them in the same container).</p>
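<p>If in doubt, you can double-check what the kubelet actually injected, independent of how the shell was opened, by reading the environment of the container's main process (PID 1) from either session — a quick sketch:</p>
<pre><code># run this inside the container, via kubectl exec or via your ssh session
cat /proc/1/environ | tr '\0' '\n' | grep KU
</code></pre>
<p>Whatever shows up there was set at container startup; anything present here but missing from your ssh shell is being dropped by the login-shell configuration, not by Kubernetes.</p>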
<p>Below you will find some useful resources regarding the <code>kubectl exec</code> command:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">Get a Shell to a Running Container</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer">kubectl-commands#exec docs</a></p>
</li>
<li><p><a href="https://erkanerol.github.io/post/how-kubectl-exec-works/" rel="nofollow noreferrer">How does 'kubectl exec' work?</a></p>
</li>
</ul>
<p><strong>EDIT:</strong></p>
<p>If you wish to learn some more regarding the differences between <code>kubectl exec</code> and <code>SSH</code> I recommend <a href="https://goteleport.com/blog/ssh-vs-kubectl/" rel="nofollow noreferrer">this article</a>. It covers the topics of:</p>
<ul>
<li><p>Authn/z</p>
</li>
<li><p>Shell UX</p>
</li>
<li><p>Non-shell features, and</p>
</li>
<li><p>Performance</p>
</li>
</ul>
| Wytrzymały Wiktor |
<p>I have an example istio cluster on AKS with the default ingress gateway. Everything works as expected; I'm just trying to understand how. The Gateway is defined like so:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: my-gateway
namespace: some-config-namespace
spec:
selector:
app: istio-ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- uk.bookinfo.com
- eu.bookinfo.com
tls:
httpsRedirect: true # sends 301 redirect for http requests
- port:
number: 443
name: https-443
protocol: HTTPS
hosts:
- uk.bookinfo.com
- eu.bookinfo.com
tls:
mode: SIMPLE # enables HTTPS on this port
serverCertificate: /etc/certs/servercert.pem
privateKey: /etc/certs/privatekey.pem
</code></pre>
<p>Reaching the site on <a href="https://uk.bookinfo.com" rel="nofollow noreferrer">https://uk.bookinfo.com</a> works fine. However when I look at the LB and Service that goes to the ingressgateway pods I see this:</p>
<pre><code>LB-IP:443 -> CLUSTER-IP:443 -> istio-ingressgateway:8443
</code></pre>
<pre><code>kind: Service
spec:
ports:
- name: http2
protocol: TCP
port: 80
targetPort: 8080
nodePort: 30804
- name: https
protocol: TCP
port: 443
targetPort: 8443
nodePort: 31843
selector:
app: istio-ingressgateway
istio: ingressgateway
clusterIP: 10.2.138.74
type: LoadBalancer
</code></pre>
<p>Since the targetPort for the istio-ingressgateway pods is <strong>8443</strong>, how does the Gateway definition work when it defines the port number as <strong>443</strong>?</p>
| Chaos | <p>As mentioned <a href="https://stackoverflow.com/a/61452441/11977760">here</a></p>
<blockquote>
<p>port: The port of this service</p>
<p>targetPort: The target port on the pod(s) to forward traffic to</p>
</blockquote>
<p>As far as I know <code>targetPort: 8443</code> points to the envoy sidecar, so if I understand correctly envoy listens on 8080 for http and 8443 for https.</p>
<p>There is an example in envoy <a href="https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy?fbclid=IwAR3CnWf8HpHko-434T_n69pnZDFXQMvjUnVa38MHf9hjU-JVmuTtlmwsZNE" rel="nofollow noreferrer">documentation</a>.</p>
<p>So it goes like this:</p>
<pre><code>LB-IP:443 -> CLUSTER-IP:443 -> istio-ingressgateway:443 -> envoy-sidecar:8443
LB-IP:80 -> CLUSTER-IP:80 -> istio-ingressgateway:80 -> envoy-sidecar:8080
</code></pre>
<hr />
<p>For example, for http if you check your ingress-gateway pod with netstat without any gateway configured there isn't anything listening on port 8080:</p>
<pre><code>kubectl exec -ti istio-ingressgateway-86f88b6f6-r8mjt -n istio-system -c istio-proxy -- /bin/bash
istio-proxy@istio-ingressgateway-86f88b6f6-r8mjt:/$ netstat -lnt | grep 8080
</code></pre>
<p>Let's create a http gateway now with below yaml.</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: istio-gw
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
</code></pre>
<p>And check with netstat again:</p>
<pre><code>kubectl exec -ti istio-ingressgateway-86f88b6f6-r8mjt -n istio-system -c istio-proxy -- /bin/bash
istio-proxy@istio-ingressgateway-86f88b6f6-r8mjt:/$ netstat -lnt | grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
</code></pre>
<p>As you can see we have configured gateway on port 80, but inside the ingressgateway we can see that it's listening on port 8080.</p>
| Jakub |
<p>I'm looking for a CA solution that I can use with the webapps running in my private domain. I'm using the nginx ingress controller to route to different applications based on path, and I'm using self-signed certs to secure the apps with https. I want to start using a CA, something that I can run directly on the cluster, that'll handle the signing so that I don't have to distribute the certs manually. Any ideas? What's the go-to solution for this scenario?</p>
| craftytech | <p>There are probably multiple solutions for this, but one is the cert-manager:</p>
<p><a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager</a></p>
<p>You can install it and create a CA issuer with your CA. Then you can create certificates using k8s yaml manifests, and the cert-manager takes care of creating the secrets.</p>
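<p>As a rough sketch of what that looks like once cert-manager is installed (the resource names, namespace and DNS name below are placeholders, and the exact apiVersion depends on your cert-manager release):</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: internal-ca-issuer
  namespace: my-apps
spec:
  ca:
    secretName: internal-ca-key-pair   # Secret containing your CA's tls.crt/tls.key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: webapp-tls
  namespace: my-apps
spec:
  secretName: webapp-tls               # cert-manager writes the signed cert/key here
  dnsNames:
  - webapp.internal.example.com
  issuerRef:
    name: internal-ca-issuer
    kind: Issuer
</code></pre>
<p>The resulting Secret can then be referenced from the ingress TLS section like any other certificate.</p>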
| Burak Serdar |
<p>I have a Stacked master K8s cluster (<code>etcd</code> is also local/internal) with three master and 9 worker nodes.
My cluster version is currently <code>1.12.3</code>. While going through <code>etcd</code> commands, I tried listing the <code>etcd</code> members by executing</p>
<pre class="lang-sh prettyprint-override"><code>ETCDCTL_API=3 etcdctl member list
</code></pre>
<p>and found that the client URLs of master2 and master3 are wrong.</p>
<p>Below is the image,
<img src="https://i.stack.imgur.com/9C5Kx.png" alt=""></p>
<p>As per my understanding the peer and client IPs should be the same, but as I can see the client IP is <code>127.0.0.1</code> in the case of <code>master2</code> and <code>master3</code>.</p>
<p>When I check the endpoint status I get the below error:</p>
<blockquote>
<p><strong>Failed to get the status of endpoint :2379 (context deadline exceeded)</strong></p>
</blockquote>
<p>while I am successfully getting the status for <code>master1</code>,</p>
<p><img src="https://i.stack.imgur.com/0ou8a.png" alt=""></p>
<p>Could anyone please help me out in solving this. </p>
<p>Things I tried:</p>
<p>1) Edited the manifest file; the etcd pods got restarted, but still nothing changed when I listed the members.</p>
<p>2) I also successfully removed and re-added master3 in the etcd cluster, and this worked (the IPs got corrected and I am getting the status of master3), but when I did the same for master2 I got the error </p>
<blockquote>
<p><strong>"error validating peerURLs {{ID: xyz, PeerUrls:xyz, clienturl:xyz},{&ID:xyz......}}: member count is unequal"</strong></p>
</blockquote>
| Utkarsh Singh | <p>Editing the etcd manifest file and correcting the IP worked for me.
Previously it wasn't working because there was an etcd.yml.bkp file present in the manifest folder (I had probably taken a backup of the etcd manifest there before upgrading) and the etcd pods were referring to that yml file; removing that yml file from the manifest folder resolved the issue.</p>
<p>I also found that the IP mentioned in the kube-apiserver.yml file was incorrect. To correct it I tried the two methods below; both worked:</p>
<ul>
<li>Manually edit the file and correct the IP</li>
<li>Or generate a new manifest file for the kube-apiserver by executing <code>kubeadm init phase control-plane apiserver --kubernetes-version 1.14.5</code></li>
</ul>
| Utkarsh Singh |
<p>I'm running Traefik on a Kubernetes cluster to manage Ingress, which has been running ok for a long time.
I recently implemented <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">Cluster-Autoscaling</a>, which works fine except that on one Node (newly created by the Autoscaler) Traefik won't start. It sits in CrashLoopBackoff, and when I log the Pod I get: <code>[date] [time] command traefik error: field not found, node: redirect</code>.
Google found no relevant results, and the error itself is not very descriptive, so I'm not sure where to look.
My best guess is that it has something to do with the <a href="https://docs.traefik.io/middlewares/redirectregex/" rel="nofollow noreferrer">RedirectRegex</a> Middleware configured in Traefik's config file:</p>
<pre><code> [entryPoints.http.redirect]
regex = "^http://(.+)(:80)?/(.*)"
replacement = "https://$1/$3"
</code></pre>
<p>Traefik actually works still - I can still access all of my apps from their urls in my browser, even those which are on the node with the dead Traefik Pod.
The other Traefik Pods on other Nodes still run happily, and the Nodes are (at least in theory) identical.</p>
| Conagh | <p>After further googling, I found <a href="https://www.reddit.com/r/selfhosted/comments/d5mbd6/traefik_issue_field_not_found/" rel="noreferrer">this</a> on Reddit. It turns out Traefik updated a few days ago to v2.0, which is not backwards compatible.
Only this pod had the issue, because it was the only one for which a new (v2.0) image was pulled (it being on the only recently created Node).
I reverted to v1.7 until I have time to fix it properly: I had to update the DaemonSet to use v1.7, then kill the Pod so it could be recreated from the old image.</p>
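<p>For reference, the revert boiled down to something like this (the DaemonSet name, container name and label below are only illustrative — use whatever your Traefik manifests actually define):</p>
<pre><code>kubectl -n kube-system set image daemonset/traefik-ingress-controller \
  traefik-ingress-lb=traefik:1.7
kubectl -n kube-system delete pod -l k8s-app=traefik-ingress-lb
</code></pre>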
| Conagh |
<p>I'm creating a new EKS Kubernetes Cluster on AWS.</p>
<p>When I deploy my workloads (migrating from an existing cluster), kubelet stops posting node status and all worker nodes become "NotReady" within a minute.</p>
<p>I was assuming that a misconfiguration within my cluster should not make the nodes crash - but apperently it does.</p>
<p>Can a misconfiguration within my cluster really make the AWS EKS worker nodes "NotReady"? Are there some rules of thumb for the circumstances under which this can happen? CPU load too high? Pods in kube-system crashing?</p>
| stackoverflowjakob | <p>This is a community wiki answer based on the solution from comments and posted for better visibility. Feel free to expand it.</p>
<p>As suggested by @gusto2 the problem was with the kubelet pod that was unable to call the API server. @stackoverflowjakob late confirmed that the connection between worker and master node was broken due to <a href="https://aws.amazon.com/vpc/?vpc-blogs.sort-by=item.additionalFields.createdDate&vpc-blogs.sort-order=desc" rel="nofollow noreferrer">VPC</a> misconfiguration and it was discovered by checking <a href="https://aws.amazon.com/console/" rel="nofollow noreferrer">AWS Console</a> -> EKS status.</p>
| Wytrzymały Wiktor |
<p>I have a problem with the proper configuration of communication between my services on Kubernetes(minikube) with Istio installed.</p>
<p>I'm trying to send a <code>POST</code> request from my service to elasticsearch, but every time I receive:</p>
<pre><code>POST /_bulk?timeout=1m HTTP/1.1" 503 UH "-" "-" 0 19 0 - "-" "Apache-HttpAsyncClient/4.1.4 (Java/11.0.9.1)" "1a290357-7b18-9692-9392-d0298ed3276c" "elasticsearch:9200" "-" - - 10.102.10.19:9200 172.18.0.12:39194 - default
</code></pre>
<p><code>Istioctl analyze</code> doesn't show any problems. I also disabled mTLS.</p>
<p>Do you have any idea what could be wrong? I don't understand why there is UH (unhealthy), because elasticsearch works and the Kiali dashboard also displays it as healthy.</p>
<p>My deployments + services:</p>
<p><strong>Elasticsearch</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
app: elasticsearch
tier: database
spec:
selector:
app: elasticsearch
ports:
- name: "http-9200"
port: 9200
targetPort: 9200
- name: "tcp-9300"
port: 9300
targetPort: 9300
selector:
app: elasticsearch
tier: database
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
labels:
service: elasticsearch
spec:
serviceName: elasticsearch
replicas: 1
selector:
matchLabels:
service: elasticsearch
template:
metadata:
labels:
service: elasticsearch
spec:
terminationGracePeriodSeconds: 300
initContainers:
- name: fix-the-volume-permission
image: busybox
command:
- sh
- -c
- chown -R 1000:1000 /usr/share/elasticsearch/data
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-the-vm-max-map-count
image: busybox
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
- name: increase-the-ulimit
image: busybox
command:
- sh
- -c
- ulimit -n 65536
securityContext:
privileged: true
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
ports:
- containerPort: 9200
name: "http-9200"
- containerPort: 9300
name: "tcp-9300"
env:
- name: cluster.name
value: elasticsearch-cluster
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: ES_JAVA_OPTS
value: -Xms4g -Xmx4g
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: data
annotations:
volume.beta.kubernetes.io/storage-class: "standard"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p><strong>My-Service</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: scrappers-service
labels:
name: scrappers-service
spec:
ports:
- nodePort: 30164
name: "http-8080"
port: 8080
targetPort: 8080
selector:
app: scrappers-service
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: scrappers-service
labels:
name: scrappers-service
spec:
selector:
matchLabels:
app: scrappers-service
replicas: 1
template:
metadata:
labels:
app: scrappers-service
spec:
containers:
- image: example/scrappers:master
imagePullPolicy: Never
name: scrappers-service
ports:
- containerPort: 8080
</code></pre>
| Ice | <p>As mentioned <a href="https://stackoverflow.com/a/64965037/11977760">here</a></p>
<blockquote>
<p>I decided to use the solution described by elasticsearch. I mean the elasticsearch-operator. I applied all the steps and it just works without any bigger problems.</p>
</blockquote>
<p>So the solution would be to follow the elasticsearch <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-service-mesh-istio.html" rel="nofollow noreferrer">documentation</a>, which uses the annotations below to make it work.</p>
<pre><code>annotations:
traffic.sidecar.istio.io/excludeOutboundPorts: ""
traffic.sidecar.istio.io/excludeInboundPorts: ""
</code></pre>
<hr />
<blockquote>
<p>To make the validating webhook work under Istio, you need to exclude the inbound port 9443 from being proxied. This can be done by editing the template definition of the elastic-operator StatefulSet to add the following annotations to the operator Pod:</p>
</blockquote>
<pre><code>[...]
spec:
template:
metadata:
annotations:
traffic.sidecar.istio.io/excludeInboundPorts: "9443"
traffic.sidecar.istio.io/includeInboundPorts: '*'
[...]
</code></pre>
<blockquote>
<p>If you have configured Istio in <strong>permissive mode</strong>, examples defined elsewhere in the ECK documentation will continue to work without requiring any modifications. However, if you have enabled <strong>strict mutual TLS</strong> authentication between services either via global (MeshPolicy) or namespace-level (Policy) configuration, the following modifications to the resource manifests are necessary for correct operation.</p>
</blockquote>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elastic-istio
spec:
version: 7.10.0
http:
tls:
selfSignedCertificate:
disabled: true
nodeSets:
- name: default
count: 3
podTemplate:
metadata:
annotations:
traffic.sidecar.istio.io/includeInboundPorts: "*"
traffic.sidecar.istio.io/excludeOutboundPorts: "9300"
traffic.sidecar.istio.io/excludeInboundPorts: "9300"
spec:
automountServiceAccountToken: true
</code></pre>
<blockquote>
<p>If you <strong>do not have automatic mutual TLS</strong> enabled, you may need to create a Destination Rule to allow the operator to communicate with the Elasticsearch cluster. A communication issue between the operator and the managed Elasticsearch cluster can be detected by looking at the operator logs to see if there are any errors reported with the text 503 Service Unavailable.</p>
</blockquote>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: elastic-istio
spec:
host: "elastic-istio-es-http.default.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
</code></pre>
<hr />
<p>There are related github issues:</p>
<ul>
<li><a href="https://github.com/istio/istio/issues/14662" rel="nofollow noreferrer">https://github.com/istio/istio/issues/14662</a></li>
<li><a href="https://github.com/elastic/cloud-on-k8s/issues/2770" rel="nofollow noreferrer">https://github.com/elastic/cloud-on-k8s/issues/2770</a></li>
</ul>
| Jakub |
<p>I am trying to deploy this kubernetes deployment; however, whenever I do <code>kubectl apply -f es-deployment.yaml</code> it throws the error: <code>Error: `selector` does not match template `labels</code><br>
I have already tried adding the selector and matchLabels under the spec section but it seems that did not work. Below is my yaml file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
kompose.version: 1.19.0 (f63a961c)
creationTimestamp: null
labels:
io.kompose.service: elasticsearchconnector
name: elasticsearchconnector
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
kompose.version: 1.19.0 (f63a961c)
creationTimestamp: null
labels:
io.kompose.service: elasticsearchconnector
spec:
selector:
matchLabels:
app: elasticsearchconnector
containers:
- env:
- [env stuff]
image: confluentinc/cp-kafka-connect:latest
name: elasticsearchconnector
ports:
- containerPort: 28082
resources: {}
volumeMounts:
- mountPath: /etc/kafka-connect
name: elasticsearchconnector-hostpath0
- mountPath: /etc/kafka-elasticsearch
name: elasticsearchconnector-hostpath1
- mountPath: /etc/kafka
name: elasticsearchconnector-hostpath2
restartPolicy: Always
volumes:
- hostPath:
path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect
name: elasticsearchconnector-hostpath0
- hostPath:
path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch
name: elasticsearchconnector-hostpath1
- hostPath:
path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak
name: elasticsearchconnector-hostpath2
status: {}
</code></pre>
| James Ukilin | <p>Your labels and selectors are misplaced.</p>
<p>First, you need to specify which pods the deployment will control:</p>
<pre><code>spec:
replicas: 1
selector:
matchLabels:
app: elasticsearchconnector
</code></pre>
<p>Then you need to label the pod properly:</p>
<pre><code> template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
kompose.version: 1.19.0 (f63a961c)
creationTimestamp: null
labels:
io.kompose.service: elasticsearchconnector
app: elasticsearchconnector
spec:
containers:
</code></pre>
| Burak Serdar |
<p>While installing Kubernetes with kubeadm, I'm stuck at the CNI plugin installation and configuration part. I have installed Flannel but I see an error in the kubelet logs, due to which the coredns pods are in a pending state.</p>
<p>OS: Centos7
k8s version: 1.16
Kubeadm is being used to setup the cluster.</p>
<p>I had installed the plugin using: kubectl apply -f <a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="noreferrer">https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</a></p>
<p>This is error I can see in Kubelet logs:</p>
<pre><code>Sep 21 04:47:29 peteelizalde2c kubelet: W0921 04:47:29.897759 17817 cni.go:202] Error validating CNI config &{cbr0 false [0xc000fb3ee0 0xc000fb3f60] [123 10 32 32 34 110 97 109 101 34 58 32 34 99 98 114 48 34 44 10 32 32 34 112 108 117 103 105 110 115 34 58 32 91 10 32 32 32 32 123 10 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 102 108 97 110 110 101 108 34 44 10 32 32 32 32 32 32 34 100 101 108 101 103 97 116 101 34 58 32 123 10 32 32 32 32 32 32 32 32 34 104 97 105 114 112 105 110 77 111 100 101 34 58 32 116 114 117 101 44 10 32 32 32 32 32 32 32 32 34 105 115 68 101 102 97 117 108 116 71 97 116 101 119 97 121 34 58 32 116 114 117 101 10 32 32 32 32 32 32 125 10 32 32 32 32 125 44 10 32 32 32 32 123 10 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 112 111 114 116 109 97 112 34 44 10 32 32 32 32 32 32 34 99 97 112 97 98 105 108 105 116 105 101 115 34 58 32 123 10 32 32 32 32 32 32 32 32 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 32 116 114 117 101 10 32 32 32 32 32 32 125 10 32 32 32 32 125 10 32 32 93 10 125 10]}: [plugin flannel does not support config version ""]
Sep 21 04:47:29 peteelizalde2c kubelet: W0921 04:47:29.897824 17817 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
Sep 21 04:47:32 peteelizalde2c kubelet: E0921 04:47:32.007379 17817 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
</code></pre>
<p>Here is the pods:</p>
<pre><code>kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-n4h5x 0/1 Pending 0 114m
kube-system coredns-5644d7b6d9-t2q54 0/1 Pending 0 114m
kube-system etcd-ip-10-29-89-124 1/1 Running 0 113m
kube-system kube-apiserver-ip-10-29-89-124 1/1 Running 0 113m
kube-system kube-controller-manager-ip-10-29-89-124 1/1 Running 0 113m
kube-system kube-flannel-ds-amd64-dqpzj 1/1 Running 0 110m
kube-system kube-proxy-vzlqb 1/1 Running 0 114m
kube-system kube-scheduler-ip-10-29-89-124 1/1 Running 0 113m
</code></pre>
<p>There is a file in <code>/etc/cni/net.d</code> named <code>10-flannel.conflist</code>.
Its contents are:</p>
<pre><code>{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
</code></pre>
| Himanshu C | <p>The accepted solution is incomplete and will cause headache down the line.</p>
<p>The proper solution to make this change permanent is to edit the ConfigMap created by flannel in your Kubernetes cluster. Otherwise, the file will be recreated the next time the flannel pod volumes are populated with the ConfigMap (e.g. on node reboot).</p>
<p>Use <code>kubectl edit cm -n kube-system kube-flannel-cfg</code> to edit the ConfigMap provided by flannel, and add the missing line:</p>
<pre><code>apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.2.0",
      "plugins": [
</code></pre>
<p>Reboot the node, or alternatively make the change manually in <code>/etc/cni/net.d/10-flannel.conflist</code> and do <code>systemctl restart kubelet</code> afterwards to skip the reboot.</p>
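<p>To confirm the edit stuck, you can check that the ConfigMap now carries the version field — for example:</p>
<pre><code>kubectl -n kube-system get cm kube-flannel-cfg -o yaml | grep cniVersion
</code></pre>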
| eirikrye |
<p>None of the pods resolve public domains or any internal pods. The resolv.conf points to an IP that doesn't belong to coredns.</p>
<pre><code>IP of coredns: 192.168.208.7
</code></pre>
<pre><code>#cat etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5
</code></pre>
<p>What should I change to fix this DNS issue?</p>
| Jeel | <p>There are several steps you should take when <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">Debugging DNS Resolution</a>:</p>
<blockquote>
<p>This page provides hints on diagnosing DNS problems.</p>
</blockquote>
<p>Try in that order:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#check-the-local-dns-configuration-first" rel="nofollow noreferrer">Check the local DNS configuration</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#check-if-the-dns-pod-is-running" rel="nofollow noreferrer">Check if the DNS pod is running</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#check-for-errors-in-the-dns-pod" rel="nofollow noreferrer">Check for errors in the DNS pod</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#is-dns-service-up" rel="nofollow noreferrer">Check if DNS service is up</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#are-dns-endpoints-exposed" rel="nofollow noreferrer">Are DNS endpoints exposed?</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#are-dns-queries-being-received-processed" rel="nofollow noreferrer">Are DNS queries being received/processed?</a></p>
</li>
<li><p>and the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues" rel="nofollow noreferrer">Known issues</a></p>
</li>
</ul>
<p>Sometimes it's just a matter of restarting the coredns deployment:</p>
<pre><code>kubectl -n kube-system rollout restart deployment coredns
</code></pre>
<p>Also, there is always an option to <a href="https://kubernetes.io/docs/tasks/administer-cluster/coredns/" rel="nofollow noreferrer">install</a> it fresh.</p>
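<p>Either way, a quick end-to-end test from inside the cluster will tell you whether resolution is actually working — a simple sketch (busybox:1.28 is pinned on purpose, since nslookup in newer busybox images is known to misbehave):</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
</code></pre>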
| Wytrzymały Wiktor |
<p>I am running Istio 1.6.0.
I wanted to add some custom headers to all the outbound responses originating from my service. So I was trying to use lua <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">envoyfilter</a> to achieve that. However, I don't see my proxy getting properly configured.</p>
<p>The envoy filter config that I'm trying to use is</p>
<pre><code>kind: EnvoyFilter
metadata:
name: lua-filter
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
subFilter:
name: "envoy.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.lua
typed_config:
"@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
inlineCode: |
function envoy_on_response(response_handle)
response_handle:logInfo(" ========= XXXXX ========== ")
response_handle:headers():add("X-User-Header", "worked")
end
</code></pre>
<p>I do have my ingress-gateway pods running in the <code>istio-system</code> namespace</p>
<pre><code>❯ kgp -l istio=ingressgateway -n istio-system
NAME READY STATUS RESTARTS AGE
ingress-gateway-b4b5cffc9-wz75r 1/1 Running 0 3d12h
ingress-gateway-b4b5cffc9-znx9b 1/1 Running 0 28h
</code></pre>
<p>I was hoping that I would see <code>X-User-Header</code> when I curl for my service.
Unfortunately, I'm not seeing any custom headers.</p>
<p>I tried checking the <code>proxy-configs</code> of the ingress-gateway pod in the istio-system, and I don't see the <code>envoy.lua</code> configured at all. I'm not sure whether I'm debugging it correctly.</p>
<pre><code> istioctl proxy-config listener ingress-gateway-b4b5cffc9-wz75r.istio-system -n istio-system --port 443 -o json | grep "name"
"name": "0.0.0.0_443",
"name": "istio.stats",
"name": "envoy.tcp_proxy",
"name": "istio.stats",
"name": "envoy.tcp_proxy",
"name": "envoy.listener.tls_inspector",
</code></pre>
<p>Please let me know what it is that I'm missing or have configured incorrectly.
Any advice on how to debug further would also be really helpful.</p>
<p>Thank you so much.</p>
| PDP | <p>I checked this on my istio clusters with versions 1.6.3 and 1.6.4, and your example works just fine. Take a look at the code below from my cluster.</p>
<p>I checked it with curl</p>
<pre><code>$ curl -s -I -X HEAD x.x.x.x/
HTTP/1.1 200 OK
server: istio-envoy
date: Mon, 06 Jul 2020 08:35:37 GMT
content-type: text/html
content-length: 13
last-modified: Thu, 02 Jul 2020 12:11:16 GMT
etag: "5efdcee4-d"
accept-ranges: bytes
x-envoy-upstream-service-time: 2
x-user-header: worked
</code></pre>
<p><strong>AND</strong></p>
<p>I checked it with config_dump in istio ingress-gateway pod.</p>
<p>I exec there with</p>
<pre><code>kubectl exec -ti istio-ingressgateway-78db9f457d-xfhl7 -n istio-system -- /bin/bash
</code></pre>
<p>Results from config_dump</p>
<pre><code>curl 0:15000/config_dump | grep X-User-Header
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 128k 0 128k 0 0 9162k 0 --:--:-- --:--:-- --:--:-- 9162k
"inline_code": "function envoy_on_response(response_handle)\n response_handle:logInfo(\" ========= XXXXX ========== \")\n response_handle:headers():add(\"X-User-Header\", \"worked\")\nend\n"
</code></pre>
<p>So as you can see it works: the header is added to the request and the function is active in the istio ingress gateway.</p>
<hr />
<p>Could you try to check it again with the above curl, check the istio ingress-gateway config_dump, and let me know if it works for you?</p>
| Jakub |
<p>Please see my images below:</p>
<p><a href="https://i.stack.imgur.com/K03a6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K03a6.png" alt="enter image description here"></a></p>
<p>I then run this:</p>
<pre><code>kubectl run my-app --image=iansimage:latest --port=5000
</code></pre>
<p>and this:</p>
<pre><code>kubectl expose deployment my-app --type=LoadBalancer --port=8080 --target-port=5000
</code></pre>
<p>However, I then see this:</p>
<p><a href="https://i.stack.imgur.com/q94e9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q94e9.png" alt="enter image description here"></a></p>
<p>Notice the warning in the above screenshot: "Error response from daemon: pull access denied for iansimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied". </p>
<p>Why is Kubectl trying to find iansimage:latest on the internet? iansimage:latest is a local image I created as per my last question: <a href="https://stackoverflow.com/questions/60932424/create-an-image-from-a-dockerfile/60932772?noredirect=1#comment107806770_60932772">Create an image from a Dockerfile</a></p>
<p>Please note that I am new to Kubernetes so this may be simple?</p>
<p><strong>Update</strong></p>
<p>Following on from Burak Serdar's comment: say I have a command like this, which would normally build an image: <code>docker build -t "app:latest" .</code></p>
<p>How would I build this image inside a Kubernetes pod?</p>
| w0051977 | <p>"Latest" is a special tag, it means that Docker always check if the downloaded image is the latest available searching the registry.
Retag your image with other tag than latest, like this :</p>
<blockquote>
<p>docker tag iansimage:latest iansimage:v1</p>
</blockquote>
<p>Then change your YAML to use iansimage:v1.<br>
That solves your problem.</p>
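<p>For example (just a sketch — keeping the pull policy explicit is useful if the image only exists in the local Docker daemon of the node/minikube):</p>
<pre><code>docker tag iansimage:latest iansimage:v1
kubectl run my-app --image=iansimage:v1 --image-pull-policy=Never --port=5000
</code></pre>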
| Enzo |
<p>I have minikube installed on Windows10, and I'm trying to work with <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="noreferrer">Ingress Controller</a></p>
<p>I'm doing:</p>
<blockquote>
<p>$ minikube addons enable ingress</p>
</blockquote>
<pre><code>* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
- Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
* Verifying ingress addon...
* The 'ingress' addon is enabled
</code></pre>
<blockquote>
<p>minikube addons list</p>
</blockquote>
<pre><code> minikube addons list
|-----------------------------|----------|--------------|
| ADDON NAME | PROFILE | STATUS |
|-----------------------------|----------|--------------|
| ambassador | minikube | disabled |
| auto-pause | minikube | disabled |
| csi-hostpath-driver | minikube | disabled |
| dashboard | minikube | disabled |
| default-storageclass | minikube | enabled ✅ |
| efk | minikube | disabled |
| freshpod | minikube | disabled |
| gcp-auth | minikube | disabled |
| gvisor | minikube | disabled |
| helm-tiller | minikube | disabled |
| ingress | minikube | enabled ✅ |
| ingress-dns | minikube | disabled |
| istio | minikube | disabled |
| istio-provisioner | minikube | disabled |
| kubevirt | minikube | disabled |
| logviewer | minikube | disabled |
| metallb | minikube | disabled |
| metrics-server | minikube | disabled |
| nvidia-driver-installer | minikube | disabled |
| nvidia-gpu-device-plugin | minikube | disabled |
| olm | minikube | disabled |
| pod-security-policy | minikube | disabled |
| registry | minikube | disabled |
| registry-aliases | minikube | disabled |
| registry-creds | minikube | disabled |
| storage-provisioner | minikube | enabled ✅ |
| storage-provisioner-gluster | minikube | disabled |
| volumesnapshots | minikube | disabled |
|-----------------------------|----------|--------------|
</code></pre>
<p><strong>Note</strong>:
I ran <code>minikube tunnel</code> after the addon was enabled</p>
<p>But can't see the nginx controller anywhere:</p>
<blockquote>
<p>$ kubectl get pods -n kube-system</p>
</blockquote>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-8gkwj 1/1 Running 0 2m35s
etcd-minikube 1/1 Running 0 2m48s
kube-apiserver-minikube 1/1 Running 0 2m48s
kube-controller-manager-minikube 1/1 Running 0 2m48s
kube-proxy-jq4wm 1/1 Running 0 2m35s
kube-scheduler-minikube 1/1 Running 0 2m48s
storage-provisioner 1/1 Running 2 2m47s
</code></pre>
<blockquote>
<p>$ kubectl get pods</p>
</blockquote>
<pre><code>No resources found in default namespace.
</code></pre>
| zbeedatm | <p>As already discussed in the comments the Ingress Controller will be created in the <code>ingress-nginx</code> namespace instead of the <code>kube-system</code> namespace. Other than that the rest of the tutorial should work as expected.</p>
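<p>You can confirm the controller is up by listing the pods in that namespace:</p>
<pre><code>kubectl get pods -n ingress-nginx
</code></pre>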
| Wytrzymały Wiktor |
<p>Currently, we are building Docker images with an entrypoint and passing this image to a Kubernetes deployment.</p>
<p>Is there any way to pass the entrypoint directly to Kubernetes dynamically so that it starts spring boot applications?</p>
<p>What are the different ways of passing this entrypoint directly in Kubernetes?</p>
<pre><code>### Runtime image ###
FROM openjdk:8-jre-alpine
#Set working dir
WORKDIR /test
# Copy the executable JAR
COPY /*.jar /test/
# Run the app
ENTRYPOINT java -Djava.security.egd=file:/dev/./urandom -Djsversion=1.0 -D<variable>=<service-1> -D<variable>=<service-2> -jar *.jar
</code></pre>
| magic | <p>You can use <code>command</code> and <code>args</code> in the Kubernetes deployment manifest:</p>
<pre><code>containers:
  - name: mycontainer
    env:
    - name: NAME
      value: VALUE
    command: ["java"]
    args: ["-D...", "-jar", "<your-app>.jar"]
</code></pre>
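<p>A fuller sketch based on the Dockerfile in the question (the image name and <code>some.property</code> are placeholders; <code>$NAME</code> refers to the env var from the snippet above). Note that JVM <code>-D</code> flags must come before <code>-jar</code>, and shell wildcards like <code>*.jar</code> are only expanded if you wrap the command in a shell:</p>
<pre><code>containers:
  - name: mycontainer
    image: <your-registry>/<your-image>:<tag>
    env:
    - name: NAME
      value: VALUE
    command: ["/bin/sh", "-c"]
    args:
      - java -Djava.security.egd=file:/dev/./urandom -Djsversion=1.0 -Dsome.property=$NAME -jar /test/*.jar
</code></pre>
<p>This way the JVM flags live in the manifest instead of the image, so they can be changed per deployment without rebuilding the image.</p>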
| Burak Serdar |
<p>I'm trying to set up SSO between our (regular, not AKS) kubernetes clusters and Azure AD.
Since I don't know how to forward the token to the kube-dashboard, I'm just currently trying with kubectl binary installed on my computer.
It works when no groups are involved, but we want to filter by security group (accounts on AAD are synced from our onprem Active Directory), no kube RBAC involved.</p>
<p>Setup is inspired by <a href="https://medium.com/@olemarkus/using-azure-ad-to-authenticate-to-kubernetes-eb143d3cce10" rel="nofollow noreferrer">https://medium.com/@olemarkus/using-azure-ad-to-authenticate-to-kubernetes-eb143d3cce10</a> and <a href="https://learn.microsoft.com/fr-fr/azure/aks/azure-ad-integration" rel="nofollow noreferrer">https://learn.microsoft.com/fr-fr/azure/aks/azure-ad-integration</a> :</p>
<ul>
<li>web app for kube api server configured to expose its API (add scope etc...) with app ID : <em>abc123</em></li>
<li>native app for client kubectl configured with addition of api permission from the web app, with app ID : <em>xyz456</em></li>
<li>kube api server yaml manifest , I add :</li>
</ul>
<p><code>- --oidc-client-id=spn:abc123</code></p>
<p><code>- --oidc-issuer-url=https://sts.windows.net/OurAADTenantID</code></p>
<ul>
<li>config kubectl binary : </li>
</ul>
<pre><code>kubectl config set-cluster test-legacy-2 --server=https://192.168.x.y:4443 --certificate-authority=/somelocation/ca.pem
</code></pre>
<pre><code>kubectl config set-credentials [email protected] --auth-provider=azure --auth-provider-arg=environment=AzurePublicCloud --auth-provider-arg=client-id=xyz456 --auth-provider-arg=tenant-id=OurAADTenantID --auth-provider-arg=apiserver-id=abc123
</code></pre>
<p>Also in the Azure client app manifest, had to specify :</p>
<p><code>"allowPublicClient":true,</code></p>
<p><code>"oauth2AllowIdTokenImplicitFlow":true</code></p>
<p>Otherwise had error "<em>Failed to acquire a token: acquiring a new fresh token: waiting for device code authentication to complete: autorest/adal/devicetoken: Error while retrieving OAuth token: Unknown Error"</em>.
Found on <a href="https://github.com/MicrosoftDocs/azure-docs/issues/10326" rel="nofollow noreferrer">https://github.com/MicrosoftDocs/azure-docs/issues/10326</a></p>
<p>Issues start when trying to filter on some security group that I find in the JWT as per <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens</a></p>
<p>I am receiving a format error even though the JWT Azure sends me does contain the groups in the right format (json array of strings)</p>
<p>Config :</p>
<ul>
<li>In azure web app manifest to have the groups in my JWT : </li>
</ul>
<p><code>"groupMembershipClaims": "SecurityGroup",</code></p>
<ul>
<li>kube api server yaml manifest :</li>
</ul>
<p><code>- --oidc-groups-claim=groups</code></p>
<p><code>- --oidc-required-claim=groups=bbc2eedf-79cd-4505-9fb4-39856ed3790e</code></p>
<p>with the string here being the GUID of my target security group.</p>
<p>I am receiving <code>error: You must be logged in to the server (Unauthorized)</code> on output of kubectl and the kube api server logs provide me this <code>authentication.go:62] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, oidc: parse claim groups: json: cannot unmarshal array into Go value of type string]]</code></p>
<p>But I don't understand why it is not happy cause when I decode the JWT I do have</p>
<pre><code>"groups": [
"00530f35-0013-4237-8947-6e3f6a7895ca",
"bbc2eedf-79cd-4505-9fb4-39856ed3790e",
"17dff614-fd68-4a38-906c-69561daec8b7"
],
</code></pre>
<p>which to my knowledge is a well-formatted json array of strings...</p>
<p>Why does the api server complain about the JWT ?</p>
| GuiFP | <p><a href="https://github.com/kubernetes/kubernetes/blob/27cf50d85edb91357d56fd762271974e7a7254bc/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc/oidc.go#L620-L634" rel="nofollow noreferrer">Ok so, Required claims must be a string, not an array of strings</a></p>
<p>But I found a workaround.</p>
<ul>
<li>Don't use oidc-groups-claim and oidc-required-claim</li>
<li>In Azure, go to the Properties of the API server App.</li>
<li>Select Yes in "User assignment required"</li>
<li>In "Users and groups" add the specific Security Group you want to filter on</li>
<li>To test : Remove yourself from the Security Group</li>
<li>Wait for the token to expire (in my case it was 1 hour)</li>
<li>You can't log in anymore</li>
</ul>
| GuiFP |
<p>I install with this</p>
<pre><code>istioctl install --set profile=demo
</code></pre>
<p>and I got this error</p>
<pre><code>2020-06-23T06:53:12.111697Zerrorinstallerfailed to create "PeerAuthentication/istio-s
ystem/grafana-ports-mtls-disabled": Timeout: request did not complete within requested timeout 30s
✘ Addons encountered an error: failed to create "PeerAuthentication/istio-system/grafana-ports-mtls-
disabled": Timeout: request did not complete within requested timeout 30s
- Pruning removed resources
Error: failed to apply manifests: errors occurred during operation
</code></pre>
| Possathon Chitpidakorn | <p>I assume there is something wrong either with</p>
<ul>
<li>istioctl install and aws</li>
<li>your cluster</li>
</ul>
<p>You could try to create a new EKS cluster and check if it works; if it doesn't, I would suggest opening a new issue on <a href="https://github.com/istio/istio/issues" rel="nofollow noreferrer">istio github</a>.</p>
<hr />
<p>If you have the same problem as @Possathon Chitpidakorn, you can use the istio operator as a <strong>workaround</strong> to install istio; more about it below.</p>
<h2><a href="https://istio.io/latest/blog/2019/introducing-istio-operator/#the-operator-api" rel="nofollow noreferrer">istio operator</a></h2>
<blockquote>
<p>Every operator implementation requires a custom resource definition (CRD) to define its custom resource, that is, its API. Istio’s operator API is defined by the IstioControlPlane CRD, which is generated from an IstioControlPlane proto. The API supports all of Istio’s current configuration profiles using a single field to select the profile. For example, the following IstioControlPlane resource configures Istio using the demo profile:</p>
</blockquote>
<pre><code>apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
namespace: istio-operator
name: example-istiocontrolplane
spec:
profile: demo
</code></pre>
<blockquote>
<p>You can then customize the configuration with additional settings. For example, to disable telemetry:</p>
</blockquote>
<pre><code>apiVersion: install.istio.io/v1alpha2
kind: IstioControlPlane
metadata:
namespace: istio-operator
name: example-istiocontrolplane
spec:
profile: demo
telemetry:
enabled: false
</code></pre>
<h2>How to <a href="https://istio.io/latest/docs/setup/install/standalone-operator/" rel="nofollow noreferrer">install istio with istio operator</a></h2>
<blockquote>
<p>Prerequisites</p>
</blockquote>
<ul>
<li>Perform any necessary <a href="https://istio.io/latest/docs/setup/platform-setup/" rel="nofollow noreferrer">platform-specific setup</a>.</li>
<li>Check the <a href="https://istio.io/latest/docs/ops/deployment/requirements/" rel="nofollow noreferrer">Requirements for Pods and Services</a>.</li>
<li>Install the <a href="https://istio.io/latest/docs/ops/diagnostic-tools/istioctl/" rel="nofollow noreferrer">istioctl command</a>.</li>
</ul>
<blockquote>
<p>Deploy the <strong>Istio operator:</strong></p>
</blockquote>
<pre><code>istioctl operator init
</code></pre>
<p>This command runs the operator by creating the following resources in the istio-operator namespace:</p>
<ul>
<li>The operator custom resource definition</li>
<li>The operator controller deployment</li>
<li>A service to access operator metrics</li>
<li>Necessary Istio operator RBAC rules</li>
</ul>
<blockquote>
<p>See the available istioctl operator init flags to control which namespaces the controller and Istio are installed into and the installed Istio image sources and versions.</p>
</blockquote>
<hr />
<p>You can alternatively deploy the operator using Helm:</p>
<pre><code>$ helm template manifests/charts/istio-operator/ \
--set hub=docker.io/istio \
--set tag=1.6.3 \
--set operatorNamespace=istio-operator \
--set istioNamespace=istio-system | kubectl apply -f -
</code></pre>
<p>Note that you need to download <a href="https://istio.io/latest/docs/setup/getting-started/#download" rel="nofollow noreferrer">the Istio release</a> to run the above command.</p>
<hr />
<blockquote>
<p>To install the Istio demo configuration profile using the operator, run the following command:</p>
</blockquote>
<pre><code>$ kubectl create ns istio-system
$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: example-istiocontrolplane
spec:
profile: demo
EOF
</code></pre>
<blockquote>
<p>The controller will detect the IstioOperator resource and then install the Istio components corresponding to the specified (demo) configuration.</p>
</blockquote>
| Jakub |
<p>I am new to K8s autoscaling. I have a stateful application and I am trying to find out which autoscaling method works for me. According to the documentation:</p>
<blockquote>
<p>if pods don't have the correct resources set, the Updater component
of VPA kills them so that they can be recreated by their controllers
with the updated requests.</p>
</blockquote>
<p>I want to know the downtime involved in killing the existing pod and creating the new ones. Or at least, how can I measure it for my application?</p>
<p>I am comparing the HPA and VPA approaches for my application.</p>
<p>The follow-up question is: how long does it take HPA to create a new pod when scaling up?</p>
| SamiraM | <p>There are a few things to clear up here:</p>
<ul>
<li><p>VPA does not create nodes, <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#cluster-autoscaler" rel="nofollow noreferrer">Cluster Autoscaler</a> is used for that. Vertical Pod Autoscaler allocates more (or less) CPUs and memory to existing pods and CA scales your node clusters based on the number of pending pods.</p>
</li>
<li><p>Whether to use HPA, VPA, CA, or some combination, depends on the needs of your application. Experimentation is the most reliable way to find which option works best for you, so it might take a few tries to find the right setup. HPA and VPA depend on metrics and some historic data. CA is recommended if you have a good understanding of your pods and containers needs.</p>
</li>
<li><p>HPA and VPA should not be used together to evaluate CPU/Memory. However, VPA can be used to evaluate CPU or Memory whereas HPA can be used to evaluate external metrics (like the number of HTTP requests or the number of active users, etc); a minimal sketch of this split follows the list below. Also, you can use VPA together with CA.</p>
</li>
<li><p>It's hard to evaluate the exact time needed for VPA to adjust and restart pods, as well as for HPA to scale up. The difference between the best case scenario and the worst case one depends on many factors and can make a significant gap in time. You need to rely on metrics and observations in order to evaluate that.</p>
</li>
<li><p><a href="https://github.com/kubernetes-sigs/metrics-server#kubernetes-metrics-server" rel="nofollow noreferrer">Kubernetes Metrics Server</a> collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler.</p>
</li>
</ul>
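<p>To make the VPA/HPA split above concrete, here is a minimal sketch. The deployment name and the external metric name are placeholders, the external metric assumes you have a metrics adapter exposing it, and depending on your cluster version the HPA API may be <code>autoscaling/v2beta2</code> instead of <code>autoscaling/v2</code>:</p>
<pre><code># VPA adjusts the CPU/memory requests of the pods
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Auto"
---
# HPA scales the replica count on an external metric instead of CPU/memory
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
</code></pre>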
<p>Below are some useful sources that would help you understand and choose the right solution for you:</p>
<ul>
<li><p><a href="https://medium.com/nerd-for-tech/autoscaling-in-kubernetes-hpa-vpa-ab61a2177950" rel="nofollow noreferrer">AutoScaling in Kubernetes ( HPA / VPA )</a></p>
</li>
<li><p><a href="https://www.replex.io/blog/kubernetes-in-production-best-practices-for-cluster-autoscaler-hpa-and-vpa" rel="nofollow noreferrer">Kubernetes Autoscaling in Production: Best Practices for Cluster Autoscaler, HPA and VPA</a></p>
</li>
<li><p><a href="https://platform9.com/blog/kubernetes-autoscaling-options-horizontal-pod-autoscaler-vertical-pod-autoscaler-and-cluster-autoscaler/" rel="nofollow noreferrer">Kubernetes Autoscaling Options: Horizontal Pod Autoscaler, Vertical Pod Autoscaler and Cluster Autoscaler</a></p>
</li>
</ul>
<p><strong>EDIT:</strong></p>
<p>Scaling up is a time sensitive operation. You should consider the average time it can take your pods to scale up. Two example scenarios:</p>
<ol>
<li>Best case scenario - 4 minutes:</li>
</ol>
<ul>
<li>30 seconds : Target metrics values updated</li>
<li>30 seconds : HPA checks on metrics values</li>
<li>< 2 seconds : pods created and go into the pending state</li>
<li>< 2 seconds : CA sees the pending pods and fires up the calls to provision nodes</li>
<li>3 minutes : Cloud provider provisions the nodes & K8s waits for them till they are ready</li>
</ul>
<ol start="2">
<li>(Reasonable) Worst case scenario - 12 minutes:</li>
</ol>
<ul>
<li>60 seconds : Target metrics values updated</li>
<li>30 seconds : HPA checks on metrics values</li>
<li>< 2 seconds : pods created and go into the pending state</li>
<li>< 2 seconds : CA sees the pending pods and fires up the calls to provision nodes</li>
<li>10 minutes : Cloud provider provisions the nodes & K8s waits for them till they are ready (depends on multiple factors, such as provider latency, OS latency, bootstrapping tools, etc.)</li>
</ul>
<p>Again, it is hard to estimate the exact time it would take so observation and metrics are the key here.</p>
| Wytrzymały Wiktor |
<p>I have installed on my K8S <a href="https://cert-manager.io" rel="nofollow noreferrer">https://cert-manager.io</a> and have created cluster issuer:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: digitalocean-dns
namespace: cert-manager
data:
# insert your DO access token here
access-token: secret
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
email: [email protected]
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: secret
solvers:
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: access-token
selector:
dnsNames:
- "*.tool.databaker.io"
#- "*.service.databaker.io"
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: [email protected]
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: secret
solvers:
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: access-token
selector:
dnsNames:
- "*.tool.databaker.io"
</code></pre>
<p>also have created a certificate: </p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: hello-cert
spec:
secretName: hello-cert-prod
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: "*.tool.databaker.io"
dnsNames:
- "*.tool.databaker.io"
</code></pre>
<p>and it was successfully created:</p>
<pre><code>Normal Requested 8m31s cert-manager Created new CertificateRequest resource "hello-cert-2824719253"
Normal Issued 7m22s cert-manager Certificate issued successfully
</code></pre>
<p>To figure out, if the certificate is working, I have deployed a service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-kubernetes-first
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes-first
spec:
replicas: 3
selector:
matchLabels:
app: hello-kubernetes-first
template:
metadata:
labels:
app: hello-kubernetes-first
spec:
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.7
ports:
- containerPort: 8080
env:
- name: MESSAGE
value: Hello from the first deployment!
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: hello-kubernetes-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
rules:
- host: hello.tool.databaker.io
http:
paths:
- backend:
serviceName: hello-kubernetes-first
servicePort: 80
---
</code></pre>
<p>But it does not work properly.</p>
<p><a href="https://i.stack.imgur.com/plF2L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/plF2L.png" alt="enter image description here"></a></p>
<p>What am I doing wrong?</p>
| softshipper | <p>You haven't specified the secret containing your certificate:</p>
<pre><code>spec:
tls:
- hosts:
- hello.tool.databaker.io
secretName: <secret containing the certificate>
rules:
...
</code></pre>
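<p>With the Certificate from your question, that secret would be <code>hello-cert-prod</code>, so the Ingress spec would look roughly like this:</p>
<pre><code>spec:
  tls:
  - hosts:
    - hello.tool.databaker.io
    secretName: hello-cert-prod
  rules:
  - host: hello.tool.databaker.io
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
</code></pre>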
| Burak Serdar |
<p>I have deployed mysql in kubernetes. The pods are up and running. But when I tried to create a db, a table, and insert data, all these operations seemed very slow. Here are the yaml files I used for deployment. Can you look into the yaml and suggest what could be the reason for it?</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: "mysql"
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:8.0.20
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumeClaimTemplates:
- metadata:
name: mysql-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: rbd-default
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
type: NodePort
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql
</code></pre>
<p>I tried creating a database after I exec'd into the pod; the operation took 40 sec to complete. When I tried connecting it to Visual Studio and performing the same operation, it took more than 4 minutes. I think 40 sec itself is too long. However, fetching data took just 300 ms from Visual Studio.
I connected it to Visual Studio using the IP and node port.</p>
| Gill Varghese Sajan | <p>Thank you all for taking the time to answer the question. I think I solved it. It was basically the storage class I used that was causing the issue. Once I updated it to rbd-fast, the response got much faster.</p>
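<p>For anyone hitting the same issue, the change amounts to swapping the storage class in the <code>volumeClaimTemplates</code> of the StatefulSet above (sketch below; note that volume claim templates are immutable, so the StatefulSet has to be recreated for this to take effect):</p>
<pre><code>volumeClaimTemplates:
- metadata:
    name: mysql-persistent-storage
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: rbd-fast   # was rbd-default
    resources:
      requests:
        storage: 10Gi
</code></pre>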
| Gill Varghese Sajan |
<p>I have a Kubernetes Cluster where the same application is running a few times but with different namespaces. Imagine</p>
<pre><code>ns=app1 name=app1
ns=app2 name=app2
ns=app3 name=app3
[...]
ns=app99 name=app99
</code></pre>
<p>Now I need to execute a cronjob every 10 minutes in all of those Pods.
The path is the same everytime.</p>
<p>Is there a 'best way' to achieve this?</p>
<p>I was thinking of a kubectl image running as 'CronJob' kind and something like this:</p>
<pre><code>kubectl get pods -A -o=custom-columns='DATA:spec.containers[?(@.image=="registry.local/app-v1")].image' | xargs -i kubectl exec {} /usr/bin/scrub.sh
</code></pre>
<p>but I am pretty sure this is not the right way to go about this.</p>
| LucidEx | <p>As mentioned by me and @Argha Sadhu, one of the options would be to create cronjobs for all the pods, but that would generate 100 pods every 10 minutes. As @LucidEx mentioned, that would be fine with storage in the cloud, but not so much in his environment.</p>
<blockquote>
<p>Concerning the storage it would be fine if it was some storage in a cloud I don't have to care about, but since it's a shared ceph storage with all it's overheads (especially ram and cpu) when you claim a volume and the need to have them zero'd on delete creating/deleting 100 storage claims every 10 minutes just isn't viable in my environment. – LucidEx</p>
</blockquote>
<hr />
<p>Other options can be found in this older <a href="https://stackoverflow.com/questions/41192053/cron-jobs-in-kubernetes-connect-to-existing-pod-execute-script">stackoverflow question</a>, where a similar question was asked.</p>
<p>As @LucidEx mentioned</p>
<blockquote>
<p>I'll probably roll with a bash loop/routine instead of that python code snippet but will go with that approach.</p>
</blockquote>
<p>This python code snippet is <a href="https://stackoverflow.com/a/56760319/11977760">here</a>.</p>
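<p>If you go the bash loop route, a minimal sketch could look like the following. The script path comes from the question; the <code>app=app-v1</code> label selector is only an assumption about how the app pods are labelled, so adjust it (or filter by image as in the question's one-liner):</p>
<pre><code>#!/bin/bash
# Run the scrub script once in one app pod per namespace.
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  pod=$(kubectl get pods -n "$ns" -l app=app-v1 -o name 2>/dev/null | head -n 1)
  if [ -n "$pod" ]; then
    kubectl exec -n "$ns" "$pod" -- /usr/bin/scrub.sh
  fi
done
</code></pre>
<p>Run from a single CronJob (or any scheduler) every 10 minutes, this keeps one control pod instead of one CronJob per namespace.</p>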
| Jakub |
<p>Is it possible in Kubernetes to mount a file from a ConfigMap into an directory that already has other files? E.g.</p>
<p>Base image filesystem:</p>
<pre><code>/app/
main/
main.py
test.py
</code></pre>
<p>ConfigMap contains one file, mounted.py, which should be mounted in /app/main/ alongside main.py and test.py.</p>
<p>Desired filesystem after deployment:</p>
<pre><code>/app/
main/
main.py
test.py
mounted.py
</code></pre>
<p>What I have seen so far is that the ConfigMap is mounted to a new directory, so the closest I have come is like this:</p>
<pre><code>/app/
main/
main.py
test.py
mounted/
mounted.py
</code></pre>
<p>If I mount the ConfigMap to /app/main, then it clobbers the existing files from the base image. E.g.</p>
<pre><code>/app/
main/
mounted.py
</code></pre>
<p>Is there a good way to inject a single file like that? (Actually multiple files in my real use case.) I would prefer not to have separate directories for everything that will be injected by the ConfigMaps, since it deviates from the standard program architecture.</p>
| Rusty Lemur | <p>Use <code>subPath</code>:</p>
<pre><code>volumeMounts:
- name: config
mountPath: /app/main/mounted.py
subPath: mounted.py
</code></pre>
<p>The <code>mountPath</code> shows where it should be mounted, and <code>subPath</code> points to an entry inside the volume.</p>
<p>Please be aware of the limitation: when using subPath the configMap updates won't be reflected in your file. See the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#configmap" rel="nofollow noreferrer">official documentation</a> for details.</p>
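<p>A fuller sketch with the volume definition included (the ConfigMap name and key are assumptions, adjust them to your ConfigMap):</p>
<pre><code>containers:
- name: app
  image: your-image
  volumeMounts:
  - name: config
    mountPath: /app/main/mounted.py
    subPath: mounted.py
volumes:
- name: config
  configMap:
    name: app-config        # assumed ConfigMap name
    items:
    - key: mounted.py       # assumed key in the ConfigMap
      path: mounted.py
</code></pre>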
| Burak Serdar |
<p>When I run <code>kubectl delete raycluster <raycluster-name></code>, sometimes this command hangs. It looks like this is because Kubernetes finalizers for the raycluster are preventing deletion of the resource until some condition is met. Indeed, I see the raycluster gets marked with a deletion timestamp like below:</p>
<pre><code> creationTimestamp: "2022-02-17T06:06:40Z"
deletionGracePeriodSeconds: 0
deletionTimestamp: "2022-02-17T18:51:16Z"
finalizers:
- kopf.zalando.org/KopfFinalizerMarker
</code></pre>
<p>Looking at the logs, if termination happens correctly, I should see termination requests on the operator logs:</p>
<pre><code>2022-02-16 16:57:26,326 VINFO scripts.py:853 -- Send termination request to `"/home/ray/anaconda3/lib/python3.9/site-packages/ray/core/src/ray/thirdparty/redis/src/redis-server *:50343" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ""` (via SIGTERM)
2022-02-16 16:57:26,328 VINFO scripts.py:853 -- Send termination request to `/home/ray/anaconda3/lib/python3.9/site-packages/ray/core/src/ray/raylet/raylet --raylet_socket_name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/raylet --store_socket_name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/plasma_store --object_manager_port=0 --min_worker_port=10002 --max_worker_port=19999 --node_manager_port=0 --node_ip_address=10.1.0.34 --redis_address=10.1.0.34 --redis_port=6379 --maximum_startup_concurrency=1 --static_resource_list=node:10.1.0.34,1.0,memory,367001600,object_store_memory,137668608 "--python_worker_command=/home/ray/anaconda3/bin/python /home/ray/anaconda3/lib/python3.9/site-packages/ray/workers/setup_worker.py /home/ray/anaconda3/lib/python3.9/site-packages/ray/workers/default_worker.py --node-ip-address=10.1.0.34 --node-manager-port=RAY_NODE_MANAGER_PORT_PLACEHOLDER --object-store-name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/plasma_store --raylet-name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/raylet --redis-address=10.1.0.34:6379 --temp-dir=/tmp/ray --metrics-agent-port=45522 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 RAY_WORKER_DYNAMIC_OPTION_PLACEHOLDER --redis-password=5241590000000000" --java_worker_command= "--cpp_worker_command=/home/ray/anaconda3/lib/python3.9/site-packages/ray/cpp/default_worker --ray_plasma_store_socket_name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/plasma_store --ray_raylet_socket_name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/raylet --ray_node_manager_port=RAY_NODE_MANAGER_PORT_PLACEHOLDER --ray_address=10.1.0.34:6379 --ray_redis_password=5241590000000000 --ray_session_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116 --ray_logs_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/logs --ray_node_ip_address=10.1.0.34 RAY_WORKER_DYNAMIC_OPTION_PLACEHOLDER" --native_library_path=/home/ray/anaconda3/lib/python3.9/site-packages/ray/cpp/lib --redis_password=5241590000000000 --temp_dir=/tmp/ray --session_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116 --log_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/logs --resource_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/runtime_resources --metrics-agent-port=45522 --metrics_export_port=43650 --object_store_memory=137668608 --plasma_directory=/dev/shm --ray-debugger-external=0 "--agent_command=/home/ray/anaconda3/bin/python -u /home/ray/anaconda3/lib/python3.9/site-packages/ray/dashboard/agent.py --node-ip-address=10.1.0.34 --redis-address=10.1.0.34:6379 --metrics-export-port=43650 --dashboard-agent-port=45522 --listen-port=0 --node-manager-port=RAY_NODE_MANAGER_PORT_PLACEHOLDER --object-store-name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/plasma_store --raylet-name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/raylet --temp-dir=/tmp/ray --session-dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116 --runtime-env-dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/runtime_resources --log-dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/logs --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --redis-password=5241590000000000"` (via SIGTERM)
</code></pre>
<p>However, in the case above where the finalizer condition is not met, I don't see the termination requests in the logs:</p>
<pre><code>Demands:
(no resource demands)
ray,ray:2022-02-16 17:21:10,145 DEBUG gcs_utils.py:253 -- internal_kv_put b'__autoscaling_status_legacy' b"Cluster status: 2 nodes\n - MostDelayedHeartbeats: {'10.244.0.11': 0.17503762245178223, '10.244.1.33': 0.17499160766601562, '10.244.0.12': 0.17495203018188477}\n - NodeIdleSeconds: Min=3926 Mean=3930 Max=3937\n - ResourceUsage: 0.0/3.0 CPU, 0.0 GiB/1.05 GiB memory, 0.0 GiB/0.38 GiB object_store_memory\n - TimeSinceLastHeartbeat: Min=0 Mean=0 Max=0\nWorker node types:\n - rayWorkerType: 2" True None
ray,ray:2022-02-16 17:21:10,145 DEBUG legacy_info_string.py:24 -- Cluster status: 2 nodes
- MostDelayedHeartbeats: {'10.244.0.11': 0.17503762245178223, '10.244.1.33': 0.17499160766601562, '10.244.0.12': 0.17495203018188477}
- NodeIdleSeconds: Min=3926 Mean=3930 Max=3937
- ResourceUsage: 0.0/3.0 CPU, 0.0 GiB/1.05 GiB memory, 0.0 GiB/0.38 GiB object_store_memory
- TimeSinceLastHeartbeat: Min=0 Mean=0 Max=0
Worker node types:
- rayWorkerType: 2
ray,ray:2022-02-16 17:21:10,220 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-f5gsr is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:10,245 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-fwkp7 is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:10,268 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-f5gsr is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:10,285 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-fwkp7 is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:189 -- Cluster resources: [{'CPU': 1.0, 'node:10.244.0.11': 1.0, 'object_store_memory': 135078297.0, 'memory': 375809638.0}, {'node:10.244.1.33': 1.0, 'memory': 375809638.0, 'object_store_memory': 137100902.0, 'CPU': 1.0}, {'object_store_memory': 134204620.0, 'CPU': 1.0, 'node:10.244.0.12': 1.0, 'memory': 375809638.0}]
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:190 -- Node counts: defaultdict(<class 'int'>, {'rayHeadType': 1, 'rayWorkerType': 2})
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:201 -- Placement group demands: []
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:247 -- Resource demands: []
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:248 -- Unfulfilled demands: []
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:252 -- Final unfulfilled: []
ray,ray:2022-02-16 17:21:10,440 DEBUG resource_demand_scheduler.py:271 -- Node requests: {}
ray,ray:2022-02-16 17:21:10,488 DEBUG gcs_utils.py:253 -- internal_kv_put b'__autoscaling_status' b'{"load_metrics_report": {"usage": {"object_store_memory": [0.0, 406383819.0], "memory": [0.0, 1127428914.0], "node:10.244.0.11": [0.0, 1.0], "CPU": [0.0, 3.0], "node:10.244.1.33": [0.0, 1.0], "node:10.244.0.12": [0.0, 1.0]}, "resource_demand": [], "pg_demand": [], "request_demand": [], "node_types": [[{"memory": 375809638.0, "CPU": 1.0, "node:10.244.0.11": 1.0, "object_store_memory": 135078297.0}, 1], [{"object_store_memory": 137100902.0, "node:10.244.1.33": 1.0, "memory": 375809638.0, "CPU": 1.0}, 1], [{"object_store_memory": 134204620.0, "memory": 375809638.0, "node:10.244.0.12": 1.0, "CPU": 1.0}, 1]], "head_ip": null}, "time": 1645060869.937817, "monitor_pid": 68, "autoscaler_report": {"active_nodes": {"rayHeadType": 1, "rayWorkerType": 2}, "pending_nodes": [], "pending_launches": {}, "failed_nodes": []}}' True None
ray,ray:2022-02-16 17:21:15,493 DEBUG gcs_utils.py:238 -- internal_kv_get b'autoscaler_resource_request' None
ray,ray:2022-02-16 17:21:15,640 INFO autoscaler.py:304 --
======== Autoscaler status: 2022-02-16 17:21:15.640853 ========
Node status
---------------------------------------------------------------
Healthy:
1 rayHeadType
2 rayWorkerType
Pending:
(no pending nodes)
Recent failures:
(no failures)
Resources
---------------------------------------------------------------
Usage:
0.0/3.0 CPU
0.00/1.050 GiB memory
0.00/0.378 GiB object_store_memory
Demands:
(no resource demands)
ray,ray:2022-02-16 17:21:15,683 DEBUG gcs_utils.py:253 -- internal_kv_put b'__autoscaling_status_legacy' b"Cluster status: 2 nodes\n - MostDelayedHeartbeats: {'10.244.0.11': 0.14760899543762207, '10.244.1.33': 0.14756131172180176, '10.244.0.12': 0.1475226879119873}\n - NodeIdleSeconds: Min=3932 Mean=3936 Max=3943\n - ResourceUsage: 0.0/3.0 CPU, 0.0 GiB/1.05 GiB memory, 0.0 GiB/0.38 GiB object_store_memory\n - TimeSinceLastHeartbeat: Min=0 Mean=0 Max=0\nWorker node types:\n - rayWorkerType: 2" True None
ray,ray:2022-02-16 17:21:15,684 DEBUG legacy_info_string.py:24 -- Cluster status: 2 nodes
- MostDelayedHeartbeats: {'10.244.0.11': 0.14760899543762207, '10.244.1.33': 0.14756131172180176, '10.244.0.12': 0.1475226879119873}
- NodeIdleSeconds: Min=3932 Mean=3936 Max=3943
- ResourceUsage: 0.0/3.0 CPU, 0.0 GiB/1.05 GiB memory, 0.0 GiB/0.38 GiB object_store_memory
- TimeSinceLastHeartbeat: Min=0 Mean=0 Max=0
Worker node types:
- rayWorkerType: 2
ray,ray:2022-02-16 17:21:15,775 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-f5gsr is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:15,799 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-fwkp7 is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:15,833 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-f5gsr is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:15,850 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-fwkp7 is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:15,962 DEBUG resource_demand_scheduler.py:189 -- Cluster resources: [{'memory': 375809638.0, 'node:10.244.0.11': 1.0, 'CPU': 1.0, 'object_store_memory': 135078297.0}, {'CPU': 1.0, 'node:10.244.1.33': 1.0, 'object_store_memory': 137100902.0, 'memory': 375809638.0}, {'memory': 375809638.0, 'node:10.244.0.12': 1.0, 'CPU': 1.0, 'object_store_memory': 134204620.0}]
ray,ray:2022-02-16 17:21:15,962 DEBUG resource_demand_scheduler.py:190 -- Node counts: defaultdict(<class 'int'>, {'rayHeadType': 1, 'rayWorkerType': 2})
ray,ray:2022-02-16 17:21:15,963 DEBUG resource_demand_scheduler.py:201 -- Placement group demands: []
ray,ray:2022-02-16 17:21:15,963 DEBUG resource_demand_scheduler.py:247 -- Resource demands: []
ray,ray:2022-02-16 17:21:15,963 DEBUG resource_demand_scheduler.py:248 -- Unfulfilled demands: []
ray,ray:2022-02-16 17:21:15,963 DEBUG resource_demand_scheduler.py:252 -- Final unfulfilled: []
ray,ray:2022-02-16 17:21:16,032 DEBUG resource_demand_scheduler.py:271 -- Node requests: {}
ray,ray:2022-02-16 17:21:16,081 DEBUG gcs_utils.py:253 -- internal_kv_put b'__autoscaling_status' b'{"load_metrics_report": {"usage": {"memory": [0.0, 1127428914.0], "object_store_memory": [0.0, 406383819.0], "CPU": [0.0, 3.0], "node:10.244.0.11": [0.0, 1.0], "node:10.244.1.33": [0.0, 1.0], "node:10.244.0.12": [0.0, 1.0]}, "resource_demand": [], "pg_demand": [], "request_demand": [], "node_types": [[{"node:10.244.0.11": 1.0, "object_store_memory": 135078297.0, "CPU": 1.0, "memory": 375809638.0}, 1], [{"object_store_memory": 137100902.0, "node:10.244.1.33": 1.0, "CPU": 1.0, "memory": 375809638.0}, 1], [{"object_store_memory": 134204620.0, "node:10.244.0.12": 1.0, "CPU": 1.0, "memory": 375809638.0}, 1]], "head_ip": null}, "time": 1645060875.4946475, "monitor_pid": 68, "autoscaler_report": {"active_nodes": {"rayHeadType": 1, "rayWorkerType": 2}, "pending_nodes": [], "pending_launches": {}, "failed_nodes": []}}' True None
</code></pre>
<p>Reading through documentations, I found 2 workarounds:</p>
<pre><code>1. Use kubectl patch to remove the finalizer
2. Kill and restart the operator, this lifts the finalizer condition
</code></pre>
<p>However, I am not sure if either method is sustainable because:</p>
<pre><code>1. After I run kubectl patch, I can't seem to create new rayclusters with the same name.
This requires me to kill and restart the operator.
2. If I restart the operator to bring down a raycluster,
I am afraid this will affect other rayclusters that are currently running.
</code></pre>
<p>I am looking to understand the following:</p>
<ol>
<li>What happens if I restart the ray operator while other rayclusters are active?</li>
<li>What is the finalizer condition here and can I disable it?</li>
<li>Some suitable workarounds</li>
</ol>
| Kun Hwi Ko | <p>Assuming the operator is running when you try to delete the resource, the hanging behavior is a bug.
Would you mind filing a <a href="https://github.com/ray-project/ray/issues/new?assignees=&labels=bug%2Ctriage&template=bug-report.yml&title=%5BBug%5D%20" rel="nofollow noreferrer">bug report</a> on the Ray GitHub with reproduction details?</p>
<p>Edit: I'm one of the maintainers of Ray's Kubernetes support.
Feel free to tag me with "@DmitriGekhtman" in the bug report.</p>
| Dmitri Gekhtman |
<p>I have an Azure AKS cluster running in the Azure cloud. It is accessed by frontend and mobile clients via Azure API Management. My front-end app is outside of AKS.</p>
<p>Is it possible to use Azure Dev Spaces in this setup to test my changes in the isolated environment?</p>
<p>I've created a new namespace in AKS and created a separate deployment slot for the testing environment on the frontend app, but I can't figure out how to create isolated routing in Azure API Management.</p>
<p>As a result I'd like to have an isolated environment which shares most of the containers on AKS, but uses my local machine to host one service which is under testing at the moment.</p>
| Kuba Matjanowski | <p>I assume you intend to use Dev Spaces routing through a <code>space.s.</code> prefix on your domain name. For this to work, you ultimately need a <code>Host</code> header that includes such a prefix as part of the request to the Dev Spaces ingress controller running in your AKS cluster.</p>
<p>It sounds like in your case, you are running your frontend as an Azure Web App and backend services in AKS. Therefore your frontend would need to include the necessary logic to do one of two things:</p>
<ul>
<li>Allow the slot instance to customize the space name to use (e.g. it might call the AKS backend services using something like <code>testing.s.default.myservice.azds.io</code>)</li>
<li>Read the <code>Host</code> header from the frontend request and propagate it to the backend request.</li>
</ul>
<p>In either case, you will probably need to configure Azure API Management to correctly propagate appropriate requests to the testing slot you have created. I don't know enough about how API Management configures routing rules to help on this part, but hopefully I've been able to shed some light on the Dev Spaces part.</p>
| Stephen P. |
<p>I'm having a problem migrating my pure Kubernetes app to an Istio managed. I'm using Google Cloud Platform (GCP), Istio 1.4, Google Kubernetes Engine (GKE), Spring Boot and JAVA 11.</p>
<p>I had the containers running in a pure GKE environment without a problem. Now I started the migration of my Kubernetes cluster to use Istio. Since then I'm getting the following message when I try to access the exposed service.</p>
<p><strong>upstream connect error or disconnect/reset before headers. reset reason: connection failure</strong></p>
<p>This error message looks really generic. I found a lot of different problems with the same error message, but none of them were related to my problem.</p>
<p>Below is the version of Istio:</p>
<pre><code>client version: 1.4.10
control plane version: 1.4.10-gke.5
data plane version: 1.4.10-gke.5 (2 proxies)
</code></pre>
<p>Below are my yaml files:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
account: tree-guest
name: tree-guest-service-account
---
apiVersion: v1
kind: Service
metadata:
labels:
app: tree-guest
service: tree-guest
name: tree-guest
spec:
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: tree-guest
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: tree-guest
version: v1
name: tree-guest-v1
spec:
replicas: 1
selector:
matchLabels:
app: tree-guest
version: v1
template:
metadata:
labels:
app: tree-guestaz
version: v1
spec:
containers:
- image: registry.hub.docker.com/victorsens/tree-quest:circle_ci_build_00923285-3c44-4955-8de1-ed578e23c5cf
imagePullPolicy: IfNotPresent
name: tree-guest
ports:
- containerPort: 8080
serviceAccount: tree-guest-service-account
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: tree-guest-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: tree-guest-virtual-service
spec:
hosts:
- "*"
gateways:
- tree-guest-gateway
http:
- match:
- uri:
prefix: /v1
route:
- destination:
host: tree-guest
port:
number: 8080
</code></pre>
<p>To apply the yaml file I used the following command:</p>
<pre><code>kubectl apply -f <(istioctl kube-inject -f ./tree-guest.yaml)
</code></pre>
<p>Below is the Istio proxy status output after deploying the application:</p>
<pre><code>istio-ingressgateway-6674cc989b-vwzqg.istio-system SYNCED SYNCED SYNCED SYNCED
istio-pilot-ff4489db8-2hx5f 1.4.10-gke.5 tree-guest-v1-774bf84ddd-jkhsh.default SYNCED SYNCED SYNCED SYNCED istio-pilot-ff4489db8-2hx5f 1.4.10-gke.5
</code></pre>
<p>If someone has a tip about what is going wrong, please let me know. I've been stuck on this problem for a couple of days.</p>
<p>Thanks.</p>
| Victor | <p>As @Victor mentioned the problem here was the wrong yaml file.</p>
<blockquote>
<p>I solve it. In my case the yaml file was wrong. I reviewed it and the problem now is solved. Thank you guys., – Victor</p>
</blockquote>
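<p>For anyone comparing against the manifests in the question: one thing that stands out there is that the pod template label (<code>app: tree-guestaz</code>) does not match the Deployment selector and the Service selector (<code>app: tree-guest</code>). A mismatch like that leaves the Service without endpoints, which is exactly the kind of wrong yaml that produces this 503. Whether or not that was the exact fix here, the template labels should read:</p>
<pre><code>template:
  metadata:
    labels:
      app: tree-guest
      version: v1
</code></pre>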
<p>If you're looking for yaml samples I would suggest taking a look at the <a href="https://github.com/istio/istio/tree/master/samples" rel="noreferrer">istio github samples</a>.</p>
<hr />
<p>As <code>503 upstream connect error or disconnect/reset before headers. reset reason: connection failure</code> occurs very often, I've put together a little troubleshooting answer: other questions with the 503 error that I've encountered over the past several months (with answers), useful information from the istio documentation, and things I would check.</p>
<p>Examples with 503 error:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/58509666/istio-503s-between-public-gateway-and-service">Istio 503:s between (Public) Gateway and Service</a></li>
<li><a href="https://stackoverflow.com/questions/59174478/istio-egress-gateway-gives-http-503-error">IstIO egress gateway gives HTTP 503 error</a></li>
<li><a href="https://stackoverflow.com/questions/60074732/istio-ingress-gateway-with-tls-termination-returning-503-service-unavailable">Istio Ingress Gateway with TLS termination returning 503 service unavailable</a></li>
<li><a href="https://stackoverflow.com/questions/59560394/how-to-terminate-ssl-at-ingress-gateway-in-istio">how to terminate ssl at ingress-gateway in istio?</a></li>
<li><a href="https://stackoverflow.com/questions/54160215/accessing-service-using-istio-ingress-gives-503-error-when-mtls-is-enabled?rq=1">Accessing service using istio ingress gives 503 error when mTLS is enabled</a></li>
</ul>
<p>Common causes of 503 errors from the istio documentation:</p>
<ul>
<li><a href="https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes" rel="noreferrer">https://istio.io/docs/ops/best-practices/traffic-management/#avoid-503-errors-while-reconfiguring-service-routes</a></li>
<li><a href="https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule" rel="noreferrer">https://istio.io/docs/ops/common-problems/network-issues/#503-errors-after-setting-destination-rule</a></li>
<li><a href="https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications" rel="noreferrer">https://istio.io/latest/docs/concepts/traffic-management/#working-with-your-applications</a></li>
</ul>
<p>A few things I would check first:</p>
<ul>
<li>Check the Service port names; Istio can route traffic correctly only if it knows the protocol. Port names should follow the <code><protocol>[-<suffix>]</code> form, as mentioned in the <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#manual-protocol-selection" rel="noreferrer">istio
documentation</a> (see the example after this list).</li>
<li>Check mTLS, if there are any problems caused by mTLS, usually those problems would result in error 503.</li>
<li>Check if istio works, I would recommend to apply <a href="https://istio.io/latest/docs/examples/bookinfo/" rel="noreferrer">bookinfo application</a> example and check if it works as expected.</li>
<li>Check if your namespace is <a href="https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/" rel="noreferrer">injected</a> with <code>kubectl get namespace -L istio-injection</code></li>
<li>If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.</li>
</ul>
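<p>As an example of the port naming convention from the first point, a Service for an HTTP backend would name its ports like this (your service already uses <code>name: http</code>, which is valid; <code>http-web</code> just illustrates the optional suffix):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: tree-guest
spec:
  selector:
    app: tree-guest
  ports:
  - name: http-web      # <protocol>[-<suffix>], e.g. http, http-web, grpc, tcp-db
    port: 8080
    targetPort: 8080
</code></pre>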
| Jakub |