prompt | response
---|---|
<p>I'm having some trouble getting the Nginx ingress controller working in my Kubernetes cluster. I have created the nginx-ingress deployments, services, roles, etc., according to <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></p>
<p>I also deployed a simple <code>hello-world</code> app which listens on port <code>8080</code></p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hello-world
  namespace: default
spec:
  selector:
    matchLabels:
      name: hello-world
  template:
    metadata:
      labels:
        name: hello-world
    spec:
      containers:
      - name: hello-world
        image: myrepo/hello-world
        resources:
          requests:
            memory: 200Mi
            cpu: 150m
          limits:
            cpu: 300m
        ports:
        - name: http
          containerPort: 8080
</code></pre>
<p>And created a service for it</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - name: server
    port: 8080
</code></pre>
<p>Finally, I created a TLS secret (<code>my-tls-secret</code>) and deployed the nginx ingress per the instructions. For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-world
  namespace: default
spec:
  rules:
  - host: hello-world.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: server
  tls:
  - hosts:
    - hello-world.mydomain.com
    secretName: my-tls-cert
</code></pre>
<p>However, I am unable to ever reach my application, and in the logs I see</p>
<pre><code>W0103 19:11:15.712062 6 controller.go:826] Service "default/hello-world" does not have any active Endpoint.
I0103 19:11:15.712254 6 controller.go:172] Configuration changes detected, backend reload required.
I0103 19:11:15.864774 6 controller.go:190] Backend successfully reloaded.
</code></pre>
<p>I am not sure why it says <code>Service "default/hello-world" does not have any active Endpoint</code>. I have used a similar service definition for the traefik ingress controller without any issues.</p>
<p>I'm hoping I'm missing something obvious with the nginx ingress. Any help you can provide would be appreciated! </p>
| <p>I discovered what I was doing wrong. In my application definition I was using <code>name</code> as my selector</p>
<pre><code>  selector:
    matchLabels:
      name: hello-world
  template:
    metadata:
      labels:
        name: hello-world
</code></pre>
<p>Whereas in my service I was using <code>app</code></p>
<pre><code>  selector:
    app: hello-world
</code></pre>
<p>After updating my application definition to use <code>app</code> as the label, so that it matches the service's selector, it worked</p>
<pre><code>  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
</code></pre>
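<p>A quick way to confirm that the selectors line up (using the names from this question as an example) is to compare the service's endpoints with the pods that actually match its selector; an empty endpoints list means the selector matches no pods:</p>
<pre><code>kubectl get endpoints hello-world
kubectl get pods -l app=hello-world   # the label selector used by the service
</code></pre>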
|
<p>I've created an Elasticsearch service to use as the backing storage for Jaeger tracing, following this <a href="https://github.com/jaegertracing/jaeger-kubernetes#backing-storage" rel="nofollow noreferrer">guide</a>, on a Kubernetes cluster on GCP.</p>
<p>I have the elasticsearch service:</p>
<pre><code>~/w/jaeger-elasticsearch ❯❯❯ kubectl get service elasticsearch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 8m
~/w/jaeger-elasticsearch ❯❯❯
</code></pre>
<p>And its respective pod, called <code>elasticsearch-0</code>:</p>
<pre><code>~/w/jaeger-elasticsearch ❯❯❯ kubectl get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-0 1/1 Running 0 37m
jaeger-agent-cnw9m 1/1 Running 0 2h
jaeger-agent-dl5n9 1/1 Running 0 2h
jaeger-agent-zzljk 1/1 Running 0 2h
jaeger-collector-9879cd76-fvpz4 1/1 Running 0 2h
jaeger-query-5584576487-dzqkd 1/1 Running 0 2h
~/w/jaeger-elasticsearch ❯❯❯ kubectl get pod elasticsearch-0
NAME READY STATUS RESTARTS AGE
elasticsearch-0 1/1 Running 0 38m
~/w/jaeger-elasticsearch ❯❯❯
</code></pre>
<p>I've looked at my pod configuration on GCP, and I can see that my <code>elasticsearch-0</code> pod has limited resources:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      elasticsearch'
  creationTimestamp: 2019-01-03T09:11:10Z
  generateName: elasticsearch-
</code></pre>
<p>I now want to assign it a specific CPU request and CPU limit <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit" rel="nofollow noreferrer">according to the documentation</a>, so I proceed to modify the pod manifest, adding the following directives:</p>
<p><code>-cpus "2"</code> in the <code>args</code> section:</p>
<pre><code>args:
- -cpus
- "2"
</code></pre>
<p>And I am including a <code>resources:requests</code> field in the container spec in order to specify a request of 0.5 CPU, and a <code>resources:limits</code> field in order to specify a CPU limit of 1, this way:</p>
<pre><code>  limits:
    cpu: "1"
  requests:
    cpu: "0.5"
</code></pre>
<p>My complete pod manifest is this (see items 1, 2, 3, 4 and 5, commented with the <code>#</code> symbol):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      elasticsearch'
  creationTimestamp: 2019-01-03T09:11:10Z
  generateName: elasticsearch-
  labels:
    app: jaeger-elasticsearch
    controller-revision-hash: elasticsearch-8684f69799
    jaeger-infra: elasticsearch-replica
    statefulset.kubernetes.io/pod-name: elasticsearch-0
  name: elasticsearch-0
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: elasticsearch
    uid: 86578784-0f36-11e9-b8b1-42010aa60019
  resourceVersion: "2778"
  selfLink: /api/v1/namespaces/default/pods/elasticsearch-0
  uid: 82d3be2f-0f37-11e9-b8b1-42010aa60019
spec:
  containers:
  - args:
    - -Ehttp.host=0.0.0.0
    - -Etransport.host=127.0.0.1
    - -cpus        # 1
    - "2"          # 2
    command:
    - bin/elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imagePullPolicy: Always
    name: elasticsearch
    readinessProbe:
      exec:
        command:
        - curl
        - --fail
        - --silent
        - --output
        - /dev/null
        - --user
        - elastic:changeme
        - localhost:9200
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 4
    resources:     # 3
      limits:
        cpu: "1"   # 4
      requests:
        cpu: "0.5" # 5
        # container has a request of 0.5 CPU
        #cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-96vwj
      readOnly: true
  dnsPolicy: ClusterFirst
  hostname: elasticsearch-0
  nodeName: gke-jaeger-persistent-st-default-pool-81004235-h8xt
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: elasticsearch
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: data
  - name: default-token-96vwj
    secret:
      defaultMode: 420
      secretName: default-token-96vwj
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imageID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7
    lastState: {}
    name: elasticsearch
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-01-03T09:11:13Z
  hostIP: 10.166.0.2
  phase: Running
  podIP: 10.36.0.10
  qosClass: Burstable
  startTime: 2019-01-03T09:11:10Z
</code></pre>
<p>But when I apply my pod manifest file, I get the following output:</p>
<pre><code>Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
.
.
.
for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
~/w/jaeger-elasticsearch ❯❯❯
</code></pre>
<p>The complete output of my <code>kubectl apply</code> command is this:</p>
<pre><code>~/w/jaeger-elasticsearch ❯❯❯ kubectl apply -f elasticsearch-0.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{\"kubernetes.io/limit-ranger\":\"LimitRanger plugin set: cpu request for container elasticsearch\"},\"creationTimestamp\":\"2019-01-03T09:11:10Z\",\"generateName\":\"elasticsearch-\",\"labels\":{\"app\":\"jaeger-elasticsearch\",\"controller-revision-hash\":\"elasticsearch-8684f69799\",\"jaeger-infra\":\"elasticsearch-replica\",\"statefulset.kubernetes.io/pod-name\":\"elasticsearch-0\"},\"name\":\"elasticsearch-0\",\"namespace\":\"default\",\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"blockOwnerDeletion\":true,\"controller\":true,\"kind\":\"StatefulSet\",\"name\":\"elasticsearch\",\"uid\":\"86578784-0f36-11e9-b8b1-42010aa60019\"}],\"resourceVersion\":\"2778\",\"selfLink\":\"/api/v1/namespaces/default/pods/elasticsearch-0\",\"uid\":\"82d3be2f-0f37-11e9-b8b1-42010aa60019\"},\"spec\":{\"containers\":[{\"args\":[\"-Ehttp.host=0.0.0.0\",\"-Etransport.host=127.0.0.1\",\"-cpus\",\"2\"],\"command\":[\"bin/elasticsearch\"],\"image\":\"docker.elastic.co/elasticsearch/elasticsearch:5.6.0\",\"imagePullPolicy\":\"Always\",\"name\":\"elasticsearch\",\"readinessProbe\":{\"exec\":{\"command\":[\"curl\",\"--fail\",\"--silent\",\"--output\",\"/dev/null\",\"--user\",\"elastic:changeme\",\"localhost:9200\"]},\"failureThreshold\":3,\"initialDelaySeconds\":5,\"periodSeconds\":5,\"successThreshold\":1,\"timeoutSeconds\":4},\"resources\":{\"limits\":{\"cpu\":\"1\"},\"requests\":{\"cpu\":\"0.5\"}},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\",\"volumeMounts\":[{\"mountPath\":\"/data\",\"name\":\"data\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"default-token-96vwj\",\"readOnly\":true}]}],\"dnsPolicy\":\"ClusterFirst\",\"hostname\":\"elasticsearch-0\",\"nodeName\":\"gke-jaeger-persistent-st-default-pool-81004235-h8xt\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"serviceAccount\":\"default\",\"serviceAccountName\":\"default\",\"subdomain\":\"elasticsearch\",\"terminationGracePeriodSeconds\":30,\"tolerations\":[{\"effect\":\"NoExecute\",\"key\":\"node.kubernetes.io/not-ready\",\"operator\":\"Exists\",\"tolerationSeconds\":300},{\"effect\":\"NoExecute\",\"key\":\"node.kubernetes.io/unreachable\",\"operator\":\"Exists\",\"tolerationSeconds\":300}],\"volumes\":[{\"emptyDir\":{},\"name\":\"data\"},{\"name\":\"default-token-96vwj\",\"secret\":{\"defaultMode\":420,\"secretName\":\"default-token-96vwj\"}}]},\"status\":{\"conditions\":[{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-01-03T09:11:10Z\",\"status\":\"True\",\"type\":\"Initialized\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-01-03T09:11:40Z\",\"status\":\"True\",\"type\":\"Ready\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-01-03T09:11:10Z\",\"status\":\"True\",\"type\":\"PodScheduled\"}],\"containerStatuses\":[{\"containerID\":\"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030\",\"image\":\"docker.elastic.co/elasticsearch/elasticsearch:5.6.0\",\"imageID\":\"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7\",\"lastState\":{},\"name\":\"elasticsearch\",\"ready\":true,\"restartCount\":0,\"state\":{\"running\":{\"startedAt\":\"2019-01-03T09:11:13Z\"}}}],\"hostIP\":\"10.166.0.2\",\"phase\":\"Running\",\"podIP\":\"10.36.0.10\",\"qosClass\":\"Burstable\
",\"startTime\":\"2019-01-03T09:11:10Z\"}}\n"},"creationTimestamp":"2019-01-03T09:11:10Z","resourceVersion":"2778","uid":"82d3be2f-0f37-11e9-b8b1-42010aa60019"},"spec":{"$setElementOrder/containers":[{"name":"elasticsearch"}],"containers":[{"args":["-Ehttp.host=0.0.0.0","-Etransport.host=127.0.0.1","-cpus","2"],"name":"elasticsearch","resources":{"limits":{"cpu":"1"},"requests":{"cpu":"0.5"}}}]},"status":{"$setElementOrder/conditions":[{"type":"Initialized"},{"type":"Ready"},{"type":"PodScheduled"}],"conditions":[{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"Initialized"},{"lastTransitionTime":"2019-01-03T09:11:40Z","type":"Ready"},{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030","image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7","lastState":{},"name":"elasticsearch","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-01-03T09:11:13Z"}}}],"podIP":"10.36.0.10","startTime":"2019-01-03T09:11:10Z"}}
to:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "elasticsearch-0", Namespace: "default"
Object: &{map["kind":"Pod" "apiVersion":"v1" "metadata":map["selfLink":"/api/v1/namespaces/default/pods/elasticsearch-0" "generateName":"elasticsearch-" "namespace":"default" "resourceVersion":"11515" "creationTimestamp":"2019-01-03T10:29:53Z""labels":map["controller-revision-hash":"elasticsearch-8684f69799" "jaeger-infra":"elasticsearch-replica" "statefulset.kubernetes.io/pod-name":"elasticsearch-0" "app":"jaeger-elasticsearch"] "annotations":map["kubernetes.io/limit-ranger":"LimitRanger plugin set: cpu request for container elasticsearch"] "ownerReferences":[map["controller":%!q(bool=true) "blockOwnerDeletion":%!q(bool=true) "apiVersion":"apps/v1" "kind":"StatefulSet" "name":"elasticsearch" "uid":"86578784-0f36-11e9-b8b1-42010aa60019"]] "name":"elasticsearch-0" "uid":"81cba2ad-0f42-11e9-b8b1-42010aa60019"] "spec":map["restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "serviceAccountName":"default" "securityContext":map[] "subdomain":"elasticsearch" "schedulerName":"default-scheduler" "tolerations":[map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'\u012c' "key":"node.kubernetes.io/not-ready"] map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'\u012c' "key":"node.kubernetes.io/unreachable"]] "volumes":[map["name":"data" "emptyDir":map[]] map["name":"default-token-96vwj" "secret":map["secretName":"default-token-96vwj" "defaultMode":'\u01a4']]] "dnsPolicy":"ClusterFirst" "serviceAccount":"default" "nodeName":"gke-jaeger-persistent-st-default-pool-81004235-h8xt" "hostname":"elasticsearch-0" "containers":[map["name":"elasticsearch" "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "readinessProbe":map["exec":map["command":["curl" "--fail" "--silent" "--output" "/dev/null" "--user" "elastic:changeme" "localhost:9200"]] "initialDelaySeconds":'\x05' "timeoutSeconds":'\x04' "periodSeconds":'\x05' "successThreshold":'\x01' "failureThreshold":'\x03'] "terminationMessagePath":"/dev/termination-log" "imagePullPolicy":"Always" "command":["bin/elasticsearch"] "args":["-Ehttp.host=0.0.0.0" "-Etransport.host=127.0.0.1"] "resources":map["requests":map["cpu":"100m"]] "volumeMounts":[map["name":"data" "mountPath":"/data"] map["name":"default-token-96vwj" "readOnly":%!q(bool=true) "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"]] "terminationMessagePolicy":"File"]]] "status":map["qosClass":"Burstable" "phase":"Running" "conditions":[map["type":"Initialized" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"] map["type":"Ready" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:30:17Z"] map["type":"PodScheduled" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"]] "hostIP":"10.166.0.2" "podIP":"10.36.0.11" "startTime":"2019-01-03T10:29:53Z" "containerStatuses":[map["name":"elasticsearch" "state":map["running":map["startedAt":"2019-01-03T10:29:55Z"]] "lastState":map[] "ready":%!q(bool=true) "restartCount":'\x00' "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7" "containerID":"docker://e7f629b79da33b482b38fdb990717b3d61d114503961302e2e8feccb213bbd4b"]]]]}
for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
~/w/jaeger-elasticsearch ❯❯❯
</code></pre>
<p>How can I modify my pod YAML file in order to assign it more resources and resolve the <code>kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container elasticsearch'</code> message?</p>
| <p>Here's an article/guide on how to work with the limit-ranger and its default values [1]</p>
<p>[1]<a href="https://medium.com/@betz.mark/understanding-resource-limits-in-kubernetes-cpu-time-9eff74d3161b" rel="nofollow noreferrer">https://medium.com/@betz.mark/understanding-resource-limits-in-kubernetes-cpu-time-9eff74d3161b</a></p>
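<p>One thing worth noting from the manifest in the question: the pod's <code>ownerReferences</code> show it is owned by an <code>elasticsearch</code> StatefulSet, and a running pod's resources generally cannot be changed in place. A hedged sketch of setting the request/limit on the owning StatefulSet instead, and letting it recreate the pod (the CPU values below are only examples):</p>
<pre><code># Hypothetical JSON patch applied to the owning StatefulSet, not the pod itself;
# the StatefulSet controller then recreates elasticsearch-0 with the new resources.
kubectl patch statefulset elasticsearch --type='json' -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/resources",
   "value": {"requests": {"cpu": "0.5"}, "limits": {"cpu": "1"}}}
]'
</code></pre>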
|
<p>I am following along <a href="https://bani.com.br/2018/09/istio-sidecar-injection-enabling-automatic-injection-adding-exceptions-and-debugging/" rel="nofollow noreferrer">this article</a> and trying it on GKE. After adding the argument <code>- --log_output_level=default:debug</code> the change seems accepted, as I get <code>deployment.extensions/istio-sidecar-injector edited</code>, but how do I know for sure?</p>
<p>The output of <code>pod=$(kubectl -n istio-system get pods -l istio=sidecar-injector -o jsonpath='{.items[0].metadata.name}')</code> followed by <code>kubectl -n istio-system logs -f $pod</code> is the same as before, and when I run <code>kubectl -n istio-system edit deployment istio-sidecar-injector</code> again, the added argument is not there...</p>
| <p>It depends on how you installed Istio on GKE. There are multiple ways to install Istio on GKE.</p>
<p>If you're installing from <a href="http://cloud.google.com/istio" rel="nofollow noreferrer">http://cloud.google.com/istio</a> which installs a Google-managed version of istio to your cluster, editing like <code>kubectl -n istio-system edit deployment istio-sidecar-injector</code> is a really bad idea, because Google will either revert it or the next version will wipe your modifications (so don't do it).</p>
<p>If you're installing it yourself from an Istio open source release, Istio is distributed as a Helm chart and has a bunch of Kubernetes .yaml manifests. You can edit those YAML manifests, or update the Helm values.yaml files, to add that argument. Then you can perform the Istio installation with the updated values.</p>
<p>If you're interested in getting help debugging Istio, please go to a contributor community forum like Istio's Rocket Chat: <a href="https://istio.rocket.chat/" rel="nofollow noreferrer">https://istio.rocket.chat/</a>.</p>
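<p>Regardless of how Istio was installed, one way to confirm whether the argument actually persisted is to read the deployment back, for example:</p>
<pre><code>kubectl -n istio-system get deployment istio-sidecar-injector \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
</code></pre>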
|
<p>I have a gce airflow (composer) cluster with a bunch of workers:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
airflow-redis-0 1/1 Running 0 7h
airflow-scheduler 2/2 Running 0 7h
airflow-sqlproxy 1/1 Running 0 8h
airflow-worker 50/50 Running 0 7h
composer-fluentd-daemon 1/1 Running 0 7h
composer-fluentd-daemon 1/1 Running 0 7h
</code></pre>
<p>I also have a bunch of unique persistent NFS volumes that have data that needs processing. Is there a way to dynamically mount a different NFS volume to each of the respective workers?</p>
<p>Alternatively, is it possible for the DockerOperator called within the worker to mount the NFS volume pertaining to its specific workload?</p>
<p>In theory the workflow would be: <code>Spin up 1x worker per Dataset</code> > <code>Get Dataset</code> > <code>Run Dataset through Model</code> > <code>Dump results</code></p>
<p>One way to accomplish this would be to download the Dataset to the given pod that is processing it; however, these Datasets are several hundred GB each and will need to be processed many times against different models.</p>
<p>Eventually we plan on putting all of this data in BigTable, but I need to show a proof of concept using volumes with a few hundred GB of data before we get the green light to spin up a BigTable cluster with multiple TB of data in it.</p>
<p>Input appreciated. Telling me I'm doing it wrong and pointing to a better solution is also a viable answer.</p>
| <p>A Deployment, by definition, uses a set of identical replicas as pods (i.e. a ReplicaSet). Therefore all pods of a deployment will have the same PodSpec, pointing to the same volume.</p>
<p>Sounds like you need to write some custom logic yourself to orchestrate spinning up new workloads (i.e. Jobs) with different volumes.</p>
<p>You can do this by simply deploying a bash script that calls kubectl in a loop (by default, kubectl inside a pod can talk to its own cluster directly), as sketched below. Or you can write something that uses the Kubernetes API and makes some API calls to discover the new volumes, create workloads to process them (and then maybe clean up the volumes).</p>
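<p>A minimal sketch of that loop, assuming the NFS volumes are exposed as PersistentVolumeClaims named <code>dataset-1</code>, <code>dataset-2</code>, ... (the claim names and the image are hypothetical):</p>
<pre><code>#!/bin/bash
# For each dataset PVC, create a one-off Job that mounts it and processes it.
for pvc in dataset-1 dataset-2 dataset-3; do
  kubectl create -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: process-${pvc}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: myrepo/model-runner   # hypothetical worker image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: ${pvc}
EOF
done
</code></pre>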
|
<p>Suppose you have deployed a service that uses certificates in order to set up TLS/HTTPS communication.</p>
<p>So, I need to deploy Java client containers which have to trust these certificates.</p>
<p>However, Java looks in truststores in order to check whether a certificate is valid.</p>
<p>As you can see, I'm not able to create an image using these certificates, since they are unknown at build time.</p>
<p>I mean, I'm not able to create this kind of <code>Dockerfile</code> snippet, because <code>/var/run/secrets/kubernetes.io/certs/tls.crt</code> does not exist at build time.</p>
<pre><code>RUN keytool -import -alias vault -storepass changeit -keystore truststore.jks -noprompt -trustcacerts -file /var/run/secrets/kubernetes.io/certs/tls.crt
</code></pre>
<p>So, how can I populate these truststores with these certificates when the containers/pods are deployed/started?</p>
<p>I hope I've explained it well.</p>
| <p>RedHat has a tutorial on how to do this on OpenShift:</p>
<p><a href="https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/" rel="noreferrer">https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/</a></p>
<p>It uses OpenShifts built in CA to actually generate and supply the certificate, so if using vanilla k8s you'll need to do that yourself, but once you have the certificate in a file on the pod, the method is exactly the same.</p>
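<p>On vanilla Kubernetes, one hedged sketch is to build the truststore when the container starts, after the secret has been mounted, instead of at image build time. The secret name, mount path, image and jar path below are assumptions for illustration only:</p>
<pre><code>    spec:
      containers:
      - name: java-client
        image: myrepo/java-client        # hypothetical image
        command: ["sh", "-c"]
        args:
        - |
          # Import the mounted certificate into a truststore at startup,
          # then start the application pointing at that truststore.
          keytool -import -alias vault -storepass changeit -keystore /tmp/truststore.jks \
            -noprompt -trustcacerts -file /var/run/secrets/kubernetes.io/certs/tls.crt
          exec java -Djavax.net.ssl.trustStore=/tmp/truststore.jks \
            -Djavax.net.ssl.trustStorePassword=changeit -jar /app/app.jar
        volumeMounts:
        - name: certs
          mountPath: /var/run/secrets/kubernetes.io/certs
      volumes:
      - name: certs
        secret:
          secretName: my-tls-secret      # hypothetical secret holding tls.crt
</code></pre>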
|
<p>I have a gce airflow (composer) cluster with a bunch of workers:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
airflow-redis-0 1/1 Running 0 7h
airflow-scheduler 2/2 Running 0 7h
airflow-sqlproxy 1/1 Running 0 8h
airflow-worker 50/50 Running 0 7h
composer-fluentd-daemon 1/1 Running 0 7h
composer-fluentd-daemon 1/1 Running 0 7h
</code></pre>
<p>I also have a bunch of unique persistent NFS volumes that have data that needs processing. Is there a way to dynamically mount a different NFS volume to each of the respective workers?</p>
<p>Alternatively, is it possible for the DockerOperator called within the worker to mount the NFS volume pertaining to its specific workload?</p>
<p>In theory the workflow would be: <code>Spin up 1x worker per Dataset</code> > <code>Get Dataset</code> > <code>Run Dataset through Model</code> > <code>Dump results</code></p>
<p>One way to accomplish this would be to download the Dataset to the given pod that is processing it; however, these Datasets are several hundred GB each and will need to be processed many times against different models.</p>
<p>Eventually we plan on putting all of this data in BigTable, but I need to show a proof of concept using volumes with a few hundred GB of data before we get the green light to spin up a BigTable cluster with multiple TB of data in it.</p>
<p>Input appreciated. Telling me I'm doing it wrong and pointing to a better solution is also a viable answer.</p>
| <p>The workflow you describe better matches the model of a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a> than a plain (long-running) Pod. You would need to create a separate job spec for each task pointing at its respective data, and if your cluster is doing other work, you would need to take care that your bulk-data-processing pods don't overwhelm the available compute resources.</p>
<p>Do you actually have distinct NFS volumes (server names/exported directories), or just many file trees in a single NFS volume? If the latter, another path that could work well for you is to set up a queueing system like <a href="http://www.rabbitmq.com" rel="nofollow noreferrer">RabbitMQ</a> and load all of the paths into a queue there. You would then write a long-running process that serially reads a single item off the queue, does whatever required work on it, writes its result, commits the work item, and repeats (in a single thread). Then you'd scale that up using a Deployment to the desired amount of parallel processing.</p>
<p>In any case, your output suggests you're working directly with bare pods and have tried to scale the workers by having many parallel worker containers in a single pod. It's better to use one of the higher-level controllers (most often a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>) and use its <code>replicas:</code> control to launch multiple copies of the described pod. Among other things, this will let you spread load across multiple nodes, and will let you do a rolling update where pods get incrementally restarted, avoiding an outage if you change the underlying image or other details of the pod.</p>
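<p>For the distinct-NFS-volume case, a minimal sketch of one such Job pointed at its own export (the NFS server, path and image are placeholders) might look like:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: process-dataset-1
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: myrepo/model-runner     # hypothetical worker image
        volumeMounts:
        - name: dataset
          mountPath: /data
      volumes:
      - name: dataset
        nfs:
          server: nfs-1.example.com    # placeholder NFS server
          path: /exports/dataset-1     # placeholder export for this Job
</code></pre>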
|
<p>I am using nginx to proxy requests to multiple Headless services of StatefulSets in a kubernetes cluster. The problem I am having now is that whenever the service IP changes, the nginx does not resolve the service endpoint to updated IP address but still using the outdated cached IP address. I have tried to use the variable in proxy_pass in nginx configuration but it's to no avail at all. Both on my local cluster as well as deployed on AWS EKS. Here is a snippet of my nginx configuration:</p>
<pre><code>upstream svc-foo {
  server svc-foo:8080;
  keepalive 1024;
}
server {
  resolver 127.0.0.1 [::1]:5353 valid=10s;
  set $foo http://svc-foo;
  location /foo/ {
    proxy_pass $foo;
    proxy_http_version 1.1;
  }
}
</code></pre>
<p>I expect no downtime when I update the service which causes the service IP to change. Any insight and advice is appreciated.</p>
| <p>The best way is to use a DNS sidecar in your nginx pod, as below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: issue-795
  name: nginx-config
data:
  nginx.conf: |-
    user nginx;
    worker_processes 1;
    events {
      worker_connections 4096; ## Default: 1024
    }
    http {
      server { # php/fastcgi
        listen 80;
        resolver 127.0.0.1:53 ipv6=off valid=10s;
        set $upstream http://backend:8080;
        location / {
          proxy_pass $upstream;
          proxy_http_version 1.1;
        }
      }
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: issue-795
  name: proxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      - name: dnsmasq
        image: "janeczku/go-dnsmasq:release-1.0.7"
        args:
        - --listen
        - "127.0.0.1:53"
        - --default-resolver
        - --append-search-domains
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  namespace: issue-795
  name: backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  clusterIP: None
  selector:
    app: backend
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: backend
  namespace: issue-795
spec:
  serviceName: "backend"
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: echoserver
        image: gcr.io/google_containers/echoserver:1.4
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
</code></pre>
|
<p>Let's say I have a pod with a configMap (or secret) volume.
The ConfigMap (or Secret) object is present during the pod's creation, but I delete the ConfigMap (or Secret) object on the master while the pod is running.
What is the expected behavior? Is it documented anywhere?</p>
<p>Is the running pod terminated?
Are the configMap (or secret) files deleted and pod continues to run?</p>
<p>This is the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically" rel="noreferrer">documentation</a> I could find about updates, doesn't mention anything about deletions.</p>
<blockquote>
<p>When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet
sync period + ttl of ConfigMaps cache in kubelet.</p>
</blockquote>
| <p>Nothing happens to your workloads running. Once they get scheduled by the kube-scheduler on the master(s) and then by the kubelet on the node(s), <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noreferrer">ConfigMaps</a>, <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Secrets</a>, etc get stored on the local filesystem of the node. The default is something like this:</p>
<pre><code># ConfigMaps
/var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~configmap/configmapname/
# Secret
/var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~secret/secret-token/
</code></pre>
<p>These actually end up being mounted somewhere in the container on a path that you specify on the pod spec.</p>
<p>When you delete the object in Kubernetes, it actually gets deleted from its data store (etcd). Suppose that your pods need to be restarted for whatever reason: they will not be able to restart.</p>
<p>Short answer: nothing happens to your running workloads, but if your pods need to be restarted they won't be able to.</p>
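<p>One way to see this behaviour for yourself (the ConfigMap name, pod name and mount path below are placeholders):</p>
<pre><code>kubectl delete configmap my-config
kubectl exec my-pod -- ls /etc/config   # the projected files are still present in the running container
kubectl delete pod my-pod               # a recreated pod would fail to mount the missing ConfigMap
</code></pre>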
|
<p>Has anyone been able to consistently replicate SIGSEGVs on the JRE using different hardware and different JRE versions? Note (potentially a big note): I am running the process in a Docker container deployed on Kubernetes.</p>
<p>Sample error:</p>
<pre><code># A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fea64dd9d01, pid=21, tid=0x00007fe8dfbfb700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_191-b12) (build 1.8.0_191-b12)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.191-b12 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# J 8706 C2 com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextFieldName()Ljava/lang/String; (493 bytes) @ 0x00007fea64dd9d01 [0x00007fea64dd9b60+0x1a1]
</code></pre>
<p>I'm currently managing a high I/O process that has many threads doing I/O and serialization: downloading CSVs and JSONs, reading CSVs, writing JSONs into CSVs, and loading CSVs into MySQL. I do this thousands of times during the application's run cycle. I use nothing but commonly-used libraries (Jackson, jOOQ) and "normal" code: specifically, I did not write custom code that uses the JNI. </p>
<p>Without fail, the JVM will SIGSEGV during each run cycle. It seems to SIGSEGV in various parts of the code base, but never on a GC thread or any other well-known threads. The "problematic frame" is always compiled code.</p>
<p>Testing specs:</p>
<ul>
<li>Multiple different hardware instances in AWS.</li>
<li>Tested using Java 8 191 and 181. Ubuntu 16.04. </li>
<li>This process is running in a container (Docker) and deployed on Kubernetes.</li>
<li>Docker version: <code>17.03.2-ce</code> </li>
</ul>
<p>Here's the full log:
<a href="https://gist.github.com/navkast/9c95f56ce818d76276684fa5bb9a6864" rel="nofollow noreferrer">https://gist.github.com/navkast/9c95f56ce818d76276684fa5bb9a6864</a></p>
| <p>Based on your comment, this is likely a case where your container limits are lower than your heap space + space needed for GC.</p>
<p>Some insights on how to run the JVM in a container <a href="https://jaxenter.com/nobody-puts-java-container-139373.html" rel="nofollow noreferrer">here</a>.</p>
<p>You didn't post any pod specs, but you can also take a look at setting limits on your <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">Kubernetes pods</a>.</p>
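<p>For illustration, a hedged sketch of what that might look like (the image, numbers and the <code>JAVA_OPTS</code> convention are assumptions; the point is that the heap plus off-heap overhead must fit inside the container's memory limit):</p>
<pre><code>    spec:
      containers:
      - name: myapp
        image: myrepo/myapp            # hypothetical image
        resources:
          requests:
            memory: "2Gi"
          limits:
            memory: "3Gi"              # leaves headroom above -Xmx for metaspace, threads, GC
        env:
        - name: JAVA_OPTS
          value: "-Xmx2g"              # assumes the entrypoint passes JAVA_OPTS to the JVM
</code></pre>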
|
<p>I'd like to receive a notification whenever there are any changes to the Kubernetes cluster: pods are created/deleted, etc. This can be in the form of a webhook, a message on a pub/sub topic, or anything else that can be used in an autonomous manner.</p>
<p>Running the kubernetes cluster in gcp.</p>
| <p>You can get such events by creating a watch (from any language with a Kubernetes client, such as Go or Python) against the relevant Kubernetes object's watch endpoint; a quick way to try it is shown below. For example, <code>/apis/apps/v1/watch/namespaces/{namespace}/deployments/{name}</code> is the watch endpoint of the <code>Deployment</code> object. You can find the watch endpoint of the object you need in the API reference of the Kubernetes API server: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/</a></p>
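<p>To experiment without writing any code, you can go through <code>kubectl proxy</code> and watch pod events (the namespace here is just an example):</p>
<pre><code>kubectl proxy --port=8001 &
curl -N "http://localhost:8001/api/v1/namespaces/default/pods?watch=true"
</code></pre>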
|
<p>In Docker I can use the <code>command: --default-authentication-plugin=mysql_native_password</code> in <code>docker-compose</code> file. How do I pass this while creating a MySQL Deployment?</p>
<p>I am using MySQL8</p>
| <p>It could look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: XXXXXXXXXXXXXXXX
        args: ["--default-authentication-plugin=mysql_native_password"]
        ports:
        - containerPort: 3306
</code></pre>
|
<p>I have recently begun to explore the Kubernetes world; there is a lot of information and most of the time I get really confused. I'd like to ask how I can choose the best way to administer an infrastructure like this:</p>
<p><a href="https://i.stack.imgur.com/9S27C.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9S27C.jpg" alt="enter image description here"></a></p>
<p>The nodes and the Kubernetes master are isolated in different DMZs, and the ports and directions will be opened as described in the picture.</p>
<p>Which approach would you use to make sure pods are executed on the right node? (For example, an Nginx pod should run only on Web1 and/or Web2, and a pgSQL pod should run only on DB1 and/or DB2.) Is the node labelling system good enough, or is there a better way to manage this?</p>
<p>A second doubt I have is that the service has to be reachable from the external world directly through the nodes, so if I want to use replicas, Web1 and Web2 should listen on the same IP address the service is exposed on, I suppose, or could this be managed via kube-proxy? At the moment I'm thinking about configuring a distributed switch between the nodes and setting the external IP address of the pod to one of the IPs attached to the distributed switch.</p>
<p>Is this also a good solution, or is there a better way to do it?</p>
| <blockquote>
<p>Which approach would you use to make sure pods are executed on the right node?</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">nodeSelector</a> is the simplest way: you add a label to the node with <code>kubectl label nodes k8s-node-1 disktype=ssd</code> (which can be verified with <code>kubectl get nodes --show-labels</code>), and inside the pod <code>yaml</code>, under <code>spec</code>, you add:</p>
<pre><code>nodeSelector:
  disktype: ssd
</code></pre>
<p><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature" rel="nofollow noreferrer">Node affinity</a> is more complex and more expressive, as you are no longer limited to exact matches. <strong>Keep in mind this is still a beta feature.</strong></p>
<blockquote>
<p>A second doubt I have is that the service has to be reachable from the external world directly through the nodes, so if I want to use replicas, Web1 and Web2 should listen on the same IP address the service is exposed on, I suppose, or could this be managed via kube-proxy?</p>
</blockquote>
<p>Here, I think you will need to use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">Type LoadBalancer</a> as a <code>Service</code>; most cloud providers have their own internal LoadBalancer: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#service-tabs-1" rel="nofollow noreferrer">GCP</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/#service-tabs-2" rel="nofollow noreferrer">AWS</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/service/#service-tabs-3" rel="nofollow noreferrer">Azure</a>. There is also <a href="https://github.com/google/metallb" rel="nofollow noreferrer">MetalLB</a>, which is an implementation for bare-metal Kubernetes clusters.</p>
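<p>For reference, a minimal sketch of such a service (the names and ports are only examples):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
</code></pre>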
<p>Hope this helps You.</p>
<p><strong>EDIT:</strong></p>
<p>The OP recommends using <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-isolation-restriction" rel="nofollow noreferrer">Node restriction</a>, which in his case is a better solution for letting pods run dynamically on a subset of nodes in the cluster.</p>
|
<p>I'm trying to iterate over a list in a helm template, and add a suffix to each member.
I currently have this block of code that does exactly that: </p>
<pre><code>{{- range $host := .Values.ingress.hosts }}
{{- $subdomain := initial (initial (splitList "." $host)) | join "." }}
{{- $topLevelDomain := last (splitList "." $host) }}
{{- $secondLevelDomain := last (initial (splitList "." $host)) }}
- host: {{- printf " %s-%s.%s.%s" $subdomain $environment $secondLevelDomain $topLevelDomain | trimSuffix "-" }}
{{- end }}
</code></pre>
<p>Since I need to do the exact same manipulation twice in the same file, I want to create a new list, called <code>$host-with-env</code>, that will contain the suffix I'm looking for. That way I can only perform this operation once.<br>
Problem is - I've no idea how to create an empty list in helm - so I can't append items from the existing list into the new one.<br>
Any idea how can I achieve this?<br>
I'm also fine with altering the existing list but every manipulation I apply to the list seems to apply to the scope of the foreach I apply to it.
Any ideas how to go about this?</p>
| <p>It's not quite clear what result you are trying to achieve; it would be helpful to add your input, such as your values.yaml, and the desired output. However, I added an example that answers your question.</p>
<p>Inspired by <a href="https://stackoverflow.com/a/49147648/7568391">this answer</a>, you can use <strong>dictionary</strong>. </p>
<p>This code will add suffix to all <code>.Values.ingress.hosts</code> and put them into <code>$hostsWithEnv</code> dictionary into a list, which can be accessed by <code>myhosts</code> key</p>
<p><strong>values.yaml</strong></p>
<pre><code>ingress:
  hosts:
    - one
    - two
</code></pre>
<p><strong>configmap.yaml</strong></p>
<pre><code>{{- $hostsWithEnv := dict "myhosts" (list) -}}
{{- range $host := .Values.ingress.hosts -}}
{{- $var := printf "%s.domain.com" $host | append $hostsWithEnv.myhosts | set $hostsWithEnv "myhosts" -}}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
{{- range $hostsWithEnv.myhosts}}
- host: {{- printf " %s" . | trimSuffix "-" }}
{{- end }}
</code></pre>
<p><strong>output</strong></p>
<pre><code>$ helm install --debug --dry-run .
[debug] Created tunnel using local port: '62742'
...
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
- host: one.domain.com
- host: two.domain.com
</code></pre>
|
<p>I'm trying to do a blue-green deployment with Kubernetes. I have followed <a href="https://www.ianlewis.org/en/bluegreen-deployments-kubernetes" rel="noreferrer">https://www.ianlewis.org/en/bluegreen-deployments-kubernetes</a>, and that works fine. I have added a liveness probe to execute a health check:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flask-1.3
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: app
        version: "1.3"
    spec:
      containers:
      - name: appflask
        image: 192.168.99.100:5000/fapp:1.2
        livenessProbe:
          httpGet:
            path: /index2
            port: 5000
          failureThreshold: 1
          periodSeconds: 1
          initialDelaySeconds: 1
        ports:
        - name: http
          containerPort: 5000
</code></pre>
<p>The path <code>/index2</code> doesn't exist; I want to test a failed deployment. The problem is that when I execute:</p>
<pre><code> kubectl get pods -o wide
</code></pre>
<p>for some seconds one of the pods is in state "RUNNING"</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
flask-1.3-6c644b8648-878qz 0/1 CrashLoopBackOff 6 6m19s 10.244.1.250 node <none> <none>
flask-1.3-6c644b8648-t6qhv 0/1 CrashLoopBackOff 7 6m19s 10.244.2.230 nod2e <none> <none>
</code></pre>
<p>after some seconds one pod shows <code>RUNNING</code> even though the liveness probe is always failing:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
IP NODE NOMINATED NODE READINESS GATES
flask-1.3-6c644b8648-878qz 1/1 Running 7 6m20s 10.244.1.250 node <none> <none>
flask-1.3-6c644b8648-t6qhv 0/1 CrashLoopBackOff 7 6m20s 10.244.2.230 nod2e <none> <none>
</code></pre>
<p>And after <code>RUNNING</code> it goes back to <code>CrashLoopBackOff</code>. The question is: why does it stay <code>RUNNING</code> for some seconds if the liveness probe always fails?</p>
<p>Thanks in advance</p>
| <p>You should be looking at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">Readiness probe</a> instead, or both of them.</p>
<blockquote>
<p>Readiness and liveness probes can be used in parallel for the same container. Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command" rel="nofollow noreferrer">Liveness probe</a> checks if your application is in a healthy state in your <strong>already running pod</strong>.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">Readiness probe</a> will actually check if your pod is ready to receive traffic. Thus, if there is no <strong>/index2</strong> endpoint, it will never appear as <strong>Running</strong></p>
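<p>For illustration, a readiness probe for the same container could look like the sketch below; the path is only a placeholder and should point at an endpoint that actually exists and succeeds only when the app can serve traffic:</p>
<pre><code>        readinessProbe:
          httpGet:
            path: /healthz        # hypothetical health endpoint
            port: 5000
          initialDelaySeconds: 5
          periodSeconds: 5
</code></pre>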
|
<p>How can I pass the entire JSON string to a Helm chart value?</p>
<p>I have <code>values.yml</code> where the config value should contain entire JSON with a configuration of an application</p>
<pre><code>...
config: some JSON here
...
</code></pre>
<p>and I need to pass this value to a secret template and then mount it as a volume to a Kubernetes pod.</p>
<pre><code>{{- $env := default "integration" .Values.env}}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-{{ $env }}
type: Opaque
data:
config.json: {{ .Values.config | b64enc | quote }}
</code></pre>
<p>However the obvious approach of passing single quoted string like <code>'{"redis": "localhost:6379"}'</code> fails because Helm for some reason deletes all double quotes in the string (even if I escape them) so I end up with <code>{redis: localhost:6379}</code> which is not a valid JSON.</p>
<p>Is there any other possibility how to pass configuration to the pod all at once without loading template files with <code>tpl</code> function and making all needed fields accessible via <code>values.yml</code> separately?</p>
| <p>If <code>.Values.config</code> contains json then you can use it in your templated secret with</p>
<pre><code>{{ .Values.config | toJson | b64enc | quote }}
</code></pre>
<p>It may seem strange to use <code>toJson</code> to convert JSON to JSON but helm doesn't natively treat it as JSON until you tell it to. See the SO question <a href="https://stackoverflow.com/questions/52930405/how-do-i-use-json-variables-in-a-yaml-file-helm">How do I use json variables in a yaml file (Helm)</a> for an example of doing this.</p>
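<p>Alternatively, if the goal is simply to keep the whole configuration in <code>values.yaml</code>, one option is to store the JSON as a block scalar string, which sidesteps the quoting problems entirely (sketch):</p>
<pre><code>config: |
  {"redis": "localhost:6379"}
</code></pre>
<p>With that, <code>{{ .Values.config | b64enc | quote }}</code> in the secret template works unchanged, since the value already arrives as a plain string.</p>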
|
<p>Has anyone been able to consistently replicate SIGSEGVs on the JRE using different hardware and different JRE versions? Note (potentially a big note): I am running the process in a Docker container deployed on Kubernetes.</p>
<p>Sample error:</p>
<pre><code># A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fea64dd9d01, pid=21, tid=0x00007fe8dfbfb700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_191-b12) (build 1.8.0_191-b12)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.191-b12 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# J 8706 C2 com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextFieldName()Ljava/lang/String; (493 bytes) @ 0x00007fea64dd9d01 [0x00007fea64dd9b60+0x1a1]
</code></pre>
<p>I'm currently managing a high I/O process that has many threads doing I/O and serialization: downloading CSVs and JSONs, reading CSVs, writing JSONs into CSVs, and loading CSVs into MySQL. I do this thousands of times during the application's run cycle. I use nothing but commonly-used libraries (Jackson, jOOQ) and "normal" code: specifically, I did not write custom code that uses the JNI. </p>
<p>Without fail, the JVM will SIGSEGV during each run cycle. It seems to SIGSEGV in various parts of the code base, but never on a GC thread or any other well-known threads. The "problematic frame" is always compiled code.</p>
<p>Testing specs:</p>
<ul>
<li>Multiple different hardware instances in AWS.</li>
<li>Tested using Java 8 191 and 181. Ubuntu 16.04. </li>
<li>This process is running in a container (Docker) and deployed on Kubernetes.</li>
<li>Docker version: <code>17.03.2-ce</code> </li>
</ul>
<p>Here's the full log:
<a href="https://gist.github.com/navkast/9c95f56ce818d76276684fa5bb9a6864" rel="nofollow noreferrer">https://gist.github.com/navkast/9c95f56ce818d76276684fa5bb9a6864</a></p>
| <p>From the full log:</p>
<blockquote>
<p>siginfo: si_signo: 11 (SIGSEGV), si_code: 0 (SI_USER)</p>
</blockquote>
<p>This means a <code>kill()</code> was issued. This is not a JVM issue. Something is killing the process deliberately. Probably due to running out of memory.</p>
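<p>If the process is being killed from outside, a couple of ways to check whether memory limits are involved (the pod name is a placeholder):</p>
<pre><code>kubectl describe pod <pod-name>    # look for OOMKilled / restart reasons in the events and last state
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState}'
</code></pre>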
|
<p>I have successfully built Docker images and ran them in a Docker swarm. When I attempt to build an image and run it with Docker Desktop's Kubernetes cluster:</p>
<pre><code>docker build -t myimage -f myDockerFile .
</code></pre>
<p>(the above successfully creates an image in the docker local registry)</p>
<pre><code>kubectl run myapp --image=myimage:latest
</code></pre>
<p>(as far as I understand, this is the same as using the kubectl create deployment command)</p>
<p>The above command successfully creates a deployment, but when it makes a pod, the pod status always shows:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
myapp-<a random alphanumeric string> 0/1 ImagePullBackoff 0 <age>
</code></pre>
<p>I am not sure why it is having trouble pulling the image - does it maybe not know where the docker local images are?</p>
| <p>I just had the exact same problem. Boils down to the <code>imagePullPolicy</code>:</p>
<pre><code>PC:~$ kubectl explain deployment.spec.template.spec.containers.imagePullPolicy
KIND: Deployment
VERSION: extensions/v1beta1
FIELD: imagePullPolicy <string>
DESCRIPTION:
Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info:
https://kubernetes.io/docs/concepts/containers/images#updating-images
</code></pre>
<p>Specifically, the part that says: <em>Defaults to Always if :latest tag is specified</em>.</p>
<p>That means, you created a local image, but, because you use the <code>:latest</code> it will try to find it in whatever remote repository you configured (by default docker hub) rather than using your local. Simply change your command to:</p>
<pre><code>kubectl run myapp --image=myimage:latest --image-pull-policy Never
</code></pre>
<p>or</p>
<pre><code>kubectl run myapp --image=myimage:latest --image-pull-policy IfNotPresent
</code></pre>
|
<p>I am trying to create a service account using helm on Kubernetes as described here:</p>
<p><a href="https://tutorials.kevashcraft.com/k8s/install-helm/" rel="nofollow noreferrer">https://tutorials.kevashcraft.com/k8s/install-helm/</a></p>
<p>When I execute the following line:</p>
<p>kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'</p>
<p>I get an error:</p>
<p>Error from server (BadRequest): invalid character 's' looking for beginning of object key string</p>
<p>Can someone give me some guidance as to what is wrong?</p>
<p>Thanks!</p>
| <p>Try <code>kubectl patch deploy --namespace kube-system tiller-deploy -p "{\"spec\":{\"template\":{\"spec\":{\"serviceAccount\":\"tiller\"}}}}"</code> i.e. using outer double-quotes and escaping the inner double-quotes. There's a <a href="https://github.com/minishift/minishift/issues/1273#issuecomment-323127597" rel="nofollow noreferrer">github issue</a> where somebody hit the same error in a different context and was able to resolve it like this.</p>
<p>Edit: MrTouya determined that in this case what worked was <code>kubectl patch deploy --namespace kube-system tiller-deploy -p '{\"spec\":{\"template\":{\"spec\":{\"serviceAccount\":\"tiller\"}}}}'</code></p>
|
<p>In my <code>Dockerfile</code> I copy my code into the image like this:</p>
<p><code>COPY src/ /var/www/html/</code>, but somehow my code changes don't appear the way they used to with plain Docker. Unless I remove the Pods, the changes do not appear. How do I sync it?</p>
<p>I am using minikube.</p>
<p><strong>webserver.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: php-apache
        image: learningk8s_website
        imagePullPolicy: Never
        ports:
        - containerPort: 80
</code></pre>
| <p>When your container spec says:</p>
<pre><code>image: learningk8s_website
imagePullPolicy: Never
</code></pre>
<p>The second time you <code>kubectl apply</code> it, Kubernetes determines that it's exactly the same as the Deployment spec you already have and does nothing. Even if it did generate new Pods, the server is highly likely to notice that it already has an image <code>learningk8s_website:latest</code> and won't pull a new one; indeed, you're explicitly telling Kubernetes not to.</p>
<p>The usual practice here is to include some unique identifier in the image name, such as a date stamp or commit hash.</p>
<pre><code>IMAGE=$REGISTRY/name/learningk8s_website:$(git rev-parse --short HEAD)
docker build -t "$IMAGE" .
docker push "$IMAGE"
</code></pre>
<p>You then need to make the corresponding change in the Deployment spec and <code>kubectl apply</code> it. This will cause Kubernetes to notice that there is some change in the pod spec, create new pods with the new image, and destroy the old pods (in that order). You may find a templating engine like <a href="https://helm.sh" rel="noreferrer">Helm</a> to be useful to make it easier to inject this value into the YAML.</p>
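<p>If you would rather not edit the YAML by hand each time, a sketch of the same change from the command line, using the deployment and container names from the question, is:</p>
<pre><code>kubectl set image deployment/webserver php-apache="$IMAGE"
</code></pre>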
|
<p>Can the Kubernetes NodePort service port change upon service restart or pod crash? How do you ensure that the port of NodePort service remains the same?</p>
| <p>From <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>If you set the <strong>type</strong> field to <strong>NodePort</strong>, the Kubernetes master will allocate a port from a range specified by <strong>--service-node-port-range</strong> flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service. That port will be reported in your <strong>Service</strong>’s <strong>.spec.ports[*].nodePort</strong> field.</p>
</blockquote>
<p>Pod crash or restart will not change your NodePort. Re-creating your service will.</p>
<p>You can specify a custom NodePort as described in <a href="https://stackoverflow.com/a/43944385/7568391">this answer</a>, this will keep Service's NodePort the same</p>
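<p>For illustration, a sketch of pinning the port explicitly (the names are examples and the port must fall inside the node port range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080    # stays fixed even if the Service is re-created from this manifest
</code></pre>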
|
<p>I need a scalable queue handling based on docker/python worker. My thought went towards kubernetes. However, I am unsure about the best controller/service.</p>
<p>Based on azure functions I get incoming http traffic adding simple messages to a storage queue. Those messages need to be worked on and the results fed back into a result queue. </p>
<p>To process those queue messages I developed python code looping the queue and working on those jobs. After each successful loop, the message will be removed from the source-queue and the result written into the result-queue. Once the queue is empty the code exists.</p>
<p>So I created a docker image that runs the python code. If more than one container is started the queue gets worked faster obviously.
I also implemented the new Azure Kubernetes Services to scale that.
While being new to kubernetes I read about the job paradigm to work a queue until the job is ready. My simple yaml template looks like this:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  parallelism: 4
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: c
        image: repo/image:tag
</code></pre>
<p>My problem now is, that the job cannot be restarted.</p>
<p>Usually, the queue gets filled with some entries and then for a while nothing happens. Then again bigger queues can arrive that need to be worked on as fast as possible. Of course, I want to run the job again then, but that seems not possible. Also, I want to reduce the footprint to a minimum if nothing is in the queue.</p>
<p>So my question is, what architecture/constructs should I use for this scenario and are there simple yaml examples for that?</p>
| <p>This may be a "goofy/hacky" answer, but it's simple, robust, and I've been using it in a production system for months now.</p>
<p>I have a similar system where I have a queue that sometimes is emptied out and sometimes gets slammed. I wrote my queue processor similarly, it handles one message in the queue at a time and terminates if the queue is empty. It is set up to run in a Kubernetes job.</p>
<p>The trick is this: I created a CronJob to regularly start one single new instance of the job, and the job allows infinite parallelism. If the queue is empty, it immediately terminates ("scales down"). If the queue is slammed and the last job hadn't finished yet, another instance starts ("scales up").</p>
<p>No need to futz with querying the queue and scaling a statefulset or anything, and no resources are consumed if the queue is sitting empty. You may have to adjust the CronJob interval to fine tune how fast it reacts to the queue filling up, but it should react pretty well.</p>
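<p>A minimal sketch of that CronJob wrapper (the schedule, names and image are placeholders):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: queue-worker
spec:
  schedule: "*/5 * * * *"        # start a new worker job every 5 minutes
  concurrencyPolicy: Allow       # let runs overlap when the queue is busy
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: repo/image:tag   # the same worker image as in the question
</code></pre>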
|
<p>So after googling a little bit (which is polluted by people having trouble with Pull Secrets) I am posting this here — and to GCP Support (will update as I hear).</p>
<p>I created a Cluster from GitLab Kubernetes integration (docs: <a href="https://about.gitlab.com/solutions/kubernetes" rel="noreferrer">https://about.gitlab.com/solutions/kubernetes</a>) within the same project as my GCR registry / images. </p>
<p>When I add a new service / deployment to this Cluster using Kubectl (which relies on a private image within the GCR Registry in this project) the pods in the GitLab created cluster fail to pull from GCR with: ErrImagePull.</p>
<p>To be clear — I am NOT pulling from a GitLab private registry, I am attempting to pull from a GCR Registry within the same project as the GKE cluster created from GitLab (which should not require a Pull Secret).</p>
<p>Other Clusters (created from GCP console) within this project can properly access the same image so my thinking is that there is some difference between Clusters created via an API (in this case from GitLab) vs Clusters created from the GCP console.</p>
<p>I am hoping someone has run into this in the past — or can explain the differences in the Service Accounts etc that could be causing the problem. </p>
<blockquote>
<p>I am going to attempt to create a service account and manually grant it Project Viewer role to see if that solves the problem.</p>
</blockquote>
<p>Update: manually configured Service Account did not solve issue.</p>
<p><em>Note: I am trying to pull an image into the Cluster NOT into a GitLab Runner that is running on the Cluster. Ie. I want a separate Service / Deployment to be running along side my GitLab infrastructure.</em></p>
| <p><strong>TL;DR</strong> — Clusters created by GitLab-Ci Kubernetes Integration will not be able to pull an image from a GCR Registry in the same project as the container images — without modifying the Node(s) permissions (scopes).</p>
<blockquote>
<p>While you CAN manually modify the permissions on an Individual Node machine(s) to grant the Application Default Credentials (see: <a href="https://developers.google.com/identity/protocols/application-default-credentials" rel="noreferrer">https://developers.google.com/identity/protocols/application-default-credentials</a>) the proper scopes in real time — doing it this way would mean that if your node is re-created at some point in the future it WOULD NOT have your modified scopes and things would break.</p>
</blockquote>
<p>Instead of modifying the permissions manually — create a new Node pool that has the proper Scope(s) to access your required GCP services.</p>
<p>Here are some resources I used for reference:</p>
<ol>
<li><a href="https://medium.com/google-cloud/updating-google-container-engine-vm-scopes-with-zero-downtime-50bff87e5f80" rel="noreferrer">https://medium.com/google-cloud/updating-google-container-engine-vm-scopes-with-zero-downtime-50bff87e5f80</a></li>
<li><a href="https://adilsoncarvalho.com/changing-a-running-kubernetes-cluster-permissions-a-k-a-scopes-3e90a3b95636" rel="noreferrer">https://adilsoncarvalho.com/changing-a-running-kubernetes-cluster-permissions-a-k-a-scopes-3e90a3b95636</a></li>
</ol>
<h2>Creating a properly Scoped Node Pool Generally looks like this</h2>
<pre><code>gcloud container node-pools create [new pool name] \
--cluster [cluster name] \
--machine-type [your desired machine type] \
--num-nodes [same-number-nodes] \
--scopes [your new set of scopes]
</code></pre>
<p>If you aren't sure what the names of your required Scopes are — You can see a full list of Scopes AND Scope Aliases over here: <a href="https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create" rel="noreferrer">https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create</a></p>
<p>For me I did gke-default (same as my other cluster) and sql-admin. The reason for this is that I need to be able to access an SQL Database in Cloud SQL during part of my build and I don't want to have to connect to a public IP to do that.</p>
<h2>gke-default Scopes (for reference)</h2>
<ol>
<li><a href="https://www.googleapis.com/auth/devstorage.read_only" rel="noreferrer">https://www.googleapis.com/auth/devstorage.read_only</a> (allows you to pull)</li>
<li><a href="https://www.googleapis.com/auth/logging.write" rel="noreferrer">https://www.googleapis.com/auth/logging.write</a></li>
<li><a href="https://www.googleapis.com/auth/monitoring" rel="noreferrer">https://www.googleapis.com/auth/monitoring</a></li>
<li><a href="https://www.googleapis.com/auth/service.management.readonly" rel="noreferrer">https://www.googleapis.com/auth/service.management.readonly</a></li>
<li><a href="https://www.googleapis.com/auth/servicecontrol" rel="noreferrer">https://www.googleapis.com/auth/servicecontrol</a></li>
<li><a href="https://www.googleapis.com/auth/trace.append" rel="noreferrer">https://www.googleapis.com/auth/trace.append</a></li>
</ol>
<p>Contrast the above with the more locked down permissions from a GitLab-Ci created cluster (ONLY these two: <a href="https://www.googleapis.com/auth/logging.write" rel="noreferrer">https://www.googleapis.com/auth/logging.write</a>, <a href="https://www.googleapis.com/auth/monitoring" rel="noreferrer">https://www.googleapis.com/auth/monitoring</a>):</p>
<p>Obviously configuring your cluster to ONLY the minimum permissions needed is for sure the way to go here. Once you figure out what that is and create your new properly scoped Node Pool...</p>
<p>List your nodes with:</p>
<pre><code>kubectl get nodes
</code></pre>
<p>The one you just created (most recent) has the new settings, while the older one is the default GitLab pool that cannot pull from the GCR.</p>
<p>Then: </p>
<pre><code>kubectl cordon [your-node-name-here]
</code></pre>
<p>After that you want to drain:</p>
<pre><code>kubectl drain [your-node-name-here] --force
</code></pre>
<p>I ran into issues where the fact that I had a GitLab Runner installed meant that I couldn't drain the pods normally due to the local data / daemon set that was used to control it. </p>
<p>For that reason once I cordon'd my Node I just deleted the node from Kubectl (not sure if this will cause problems — but it was fine for me). Once your node is deleted you need to delete the 'default-pool' node pool created by GitLab.</p>
<p>List your node-pools:</p>
<pre><code>gcloud container node-pools list --cluster [CLUSTER_NAME]
</code></pre>
<p>See the old scopes created by gitlab:</p>
<pre><code>gcloud container node-pools describe default-pool \
--cluster [CLUSTER_NAME]
</code></pre>
<p>Check to see if you have the correct new scopes (that you just added):</p>
<pre><code>gcloud container node-pools describe [NEW_POOL_NAME] \
--cluster [CLUSTER_NAME]
</code></pre>
<p>If your new Node Pool has the right scopes for your deployments, you can now delete the default pool with:</p>
<pre><code>gcloud container node-pools delete default-pool \
--cluster <YOUR_CLUSTER_NAME> --zone <YOUR_ZONE>
</code></pre>
<p>In my personal case I am still trying to figure out how to allow access to the private network (ie. get to Cloud SQL via private IP) but I can pull my images now so I am half way there. </p>
<p>I think that's it — hope it saved you a few minutes!</p>
|
<p>I'm trying to figure out how to share data between two charts in helm.</p>
<p>I've set up a chart with a sole YAML for a configmap in one chart. Let's call the chart cm1. It defines its name like so:</p>
<pre><code>name: {{ .Release.Name }}-maps
</code></pre>
<p>Then I set up two charts that deploy containers that would want to access the data in the configmap in cm1. Let's call them c1 and c2. c1 has a requirements.yaml that references the chart for cm1, and likewise for c2. Now I have a parent chart that tries to bring it all together; let's call it p1. p1 defines c1 and c2 in requirements.yaml. Then I <code>helm install --name k1 p1</code> and I get an error:</p>
<p>Error: release k1 failed: configmaps "k1-maps" already exists.</p>
<p>I thought that when Helm builds its dependency tree it would see that k1-maps was already defined when chart cm1 was first loaded.</p>
<p>What's the best practice to share a configmap between two charts?</p>
| <p>You haven't given a ton of information about the contents of your charts, but it sounds like both c1 and c2 are defining and attempting to install the configmap. Helm doesn't really know anything special about the dependencies, it just knows to also install them. It will happily attempt (and fail) to install the chart a second time if told to.</p>
<p>The configmap should be created and installed only as part of the parent chart. C1 and C2 should be able to reference it by name even though it isn't defined in either of them.</p>
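<p>A minimal sketch of that layout (keys and values are illustrative): only the parent chart p1 owns the ConfigMap template, and the subcharts just reference the generated name. Because subcharts render with the parent's release name, <code>{{ .Release.Name }}-maps</code> resolves to the same string (here <code>k1-maps</code>) everywhere:</p>
<pre><code># p1/templates/configmap.yaml -- only the parent chart defines the object
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-maps
data:
  example-key: example-value
</code></pre>
<p>In c1's and c2's own templates you then just reference <code>name: {{ .Release.Name }}-maps</code> (for example under a <code>configMapRef</code> or a <code>configMap</code> volume); since the release name is shared, it resolves to <code>k1-maps</code> in all of them.</p>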
|
<p>Is it preferable to have a Kubernetes cluster with 4 nodes having resources 4 CPUs, 16 GB RAM or 2 nodes cluster with resources 8 CPUs and 32 GB RAM?</p>
<p>What benefits user will get if they go for horizontal scaling over vertical scaling in Kubernetes concepts. I mean suppose we want to run 4 pods, is it good to go with 2 nodes cluster with resources 8 CPU and 32 GB RAM or 4 nodes cluster with resources 4 CPU and 16 GB RAM.</p>
| <p>In general I would recommend larger nodes because it's easier to place containers on them.</p>
<p>If you have a pod that <code>resources: {requests: {cpu: 2.5}}</code>, you can only place one of them on a 4-core node, and two on 2x 4-core nodes, but you can put 3 on a single 8-core node.</p>
<pre><code>+----+----+----+----+ +----+----+----+----+
|-WORKLOAD--| | |-WORKLOAD--| |
+----+----+----+----+ +----+----+----+----+
+----+----+----+----+----+----+----+----+
|-WORKLOAD--|--WORKLOAD--|-WORKLOAD--| |
+----+----+----+----+----+----+----+----+
</code></pre>
<p>If you have 16 cores total and 8 cores allocated, it's possible that no single node has more than 2 cores free with 4x 4-CPU nodes, but you're guaranteed to be able to fit that pod with 2x 8-CPU nodes.</p>
<pre><code>+----+----+----+----+ +----+----+----+----+
|-- USED -| | |-- USED -| |
+----+----+----+----+ +----+----+----+----+
+----+----+----+----+ +----+----+----+----+
|-- USED -| | |-- USED -| |
+----+----+----+----+ +----+----+----+----+
Where |-WORKLOAD--| goes?
+----+----+----+----+----+----+----+----+
|------- USED ------| |
+----+----+----+----+----+----+----+----+
+----+----+----+----+----+----+----+----+
|------- USED ------| |
+----+----+----+----+----+----+----+----+
</code></pre>
<p>At the specific scale you're talking about, though, I'd be a little worried about running a 2-node cluster: if a single node dies you've lost half your cluster capacity. Unless I knew that I was running multiple pods that needed 2.0 CPU or more I might lean towards the 4-node setup here so that it will be more resilient in the event of node failure (and that does happen in reality).</p>
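<p>For completeness, the kind of request that drives this bin-packing looks like the following in a Pod or Deployment spec (the name and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: big-workload           # illustrative name
spec:
  containers:
  - name: app
    image: repo/image:tag      # placeholder image
    resources:
      requests:
        cpu: "2.5"             # 2.5 full cores must fit on a single node
</code></pre>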
|
<p>I'm running kubernetes v1.11.5 and I'm installing helm with a tiller deployment for each namespace.
Let's focus on a single namespace. This is the tiller service account configuration:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: marketplace-int
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tiller-manager
namespace: marketplace-int
rules:
- apiGroups:
- ""
- extensions
- apps
- rbac.authorization.k8s.io
- roles.rbac.authorization.k8s.io
- authorization.k8s.io
resources: ["*"]
verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tiller-binding
namespace: marketplace-int
subjects:
- kind: ServiceAccount
name: tiller
namespace: marketplace-int
roleRef:
kind: Role
name: tiller-manager
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>When I try to deploy a chart I get this error:</p>
<pre><code>Error: release citest failed: roles.rbac.authorization.k8s.io "marketplace-int-role-ns-admin" is forbidden:
attempt to grant extra privileges:
[{[*] [*] [*] [] []}] user=&{system:serviceaccount:marketplace-int:tiller 5c6af739-1023-11e9-a245-0ab514dfdff4
[system:serviceaccounts system:serviceaccounts:marketplace-int system:authenticated] map[]}
ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []}
{[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]}
{[*] [ extensions apps rbac.authorization.k8s.io roles.rbac.authorization.k8s.io authorization.k8s.io] [*] [] []}] ruleResolutionErrors=[]
</code></pre>
<p>The error comes when trying to create rbac config for that namespace (with tiller sa):</p>
<pre><code># Source: marketplace/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app: citest
chart: marketplace-0.1.0
heritage: Tiller
release: citest
namespace: marketplace-int
name: marketplace-int-role-ns-admin
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
</code></pre>
<p>The error message clearly says that the tiller service account doesn't have permission for <code>roles.rbac.authorization.k8s.io</code> but that permission is granted as showed previously.</p>
<pre><code>$kubectl describe role tiller-manager
Name: tiller-manager
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"tiller-manager","namespace":"marketplace-i...
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
* [] [] [*]
*.apps [] [] [*]
*.authorization.k8s.io [] [] [*]
*.extensions [] [] [*]
*.rbac.authorization.k8s.io [] [] [*]
*.roles.rbac.authorization.k8s.io [] [] [*]
</code></pre>
<p>Honestly, I don't fully understand the error message well enough to check whether the <code>ownerrules</code> are fine, and I'm trying to find out what this kind of message means, which seems to be related to the role description: <code>{[*] [*] [*] [] []}</code></p>
<p>Any clue about what permissions I am missing?</p>
| <p>This is due to permission escalation prevention in RBAC. See <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping</a> for details. </p>
<p>Permission to create a role object is necessary, but not sufficient. </p>
<p>A user can only create/update a role if at least one of the following things is true:</p>
<ol>
<li><p>they already have all the permissions contained in the role, at the same scope as the object being modified (cluster-wide for a ClusterRole, within the same namespace or cluster-wide for a Role). In your case, that would mean the user attempting to create the role must already have <code>apiGroups=*, resources=*, verbs=*</code> permissions within the namespace where it is attempting to create the role. You can grant this by granting the cluster-admin clusterrole to the serviceaccount within that namespace with a rolebinding (a minimal sketch of such a binding follows after this list). </p></li>
<li><p>they are given explicit permission to perform the "escalate" verb on the roles or clusterroles resource in the rbac.authorization.k8s.io API group (Kubernetes 1.12 and newer)</p></li>
</ol>
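<p>For the first option, a minimal sketch of such a binding for the tiller service account from the question could look like this (the binding name is illustrative):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-cluster-admin        # illustrative name
  namespace: marketplace-int
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: marketplace-int
roleRef:
  kind: ClusterRole
  name: cluster-admin               # grants apiGroups=*, resources=*, verbs=* within this namespace only
  apiGroup: rbac.authorization.k8s.io
</code></pre>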
|
<p>I'm a daily user of kubectl, but not an expert in Linux. Recently I needed to edit a Service type after deployment, so I searched and used kubectl <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#replace" rel="noreferrer">replace</a> and it worked well.</p>
<pre><code>cat yaml | kubectl replace -f -
service/tracs-pool-1sv replaced
</code></pre>
<p>But I don't understand why <strong>a short dash</strong> <code>-</code> is added at the end.
The doc only says:</p>
<blockquote>
<p>Replace a pod based on the JSON passed into stdin.</p>
</blockquote>
<p>I searched and found <a href="https://stackoverflow.com/questions/35116699/piping-not-working-with-echo-command">this</a> SO question, and learned that kubectl may be the kind of command that <strong>does not read stdin</strong> (am I right?).</p>
<p>I tried</p>
<pre><code>cat yaml |xargs kubectl replace -f
</code></pre>
<p>but error returned:</p>
<pre><code>the path "apiVersion:" does not exist
</code></pre>
<p>So is the trailing short dash (-) syntax built for kubectl <strong>ONLY</strong>, or is it a more <strong>common</strong> syntax for Linux bash stdin pipes?
Can someone explain why xargs does not work here and why I must place a short dash (-) at the end?</p>
| <p>This is a reasonably common, but not universal, Un*x convention. (It is mentioned in the POSIX specification and so most non-Linux Unices will support it as well.)</p>
<p>The important detail here is that the <code>kubectl ... -f</code> option expects a <em>filename</em>. If you have a file named <code>x.yaml</code>, a more direct way to write what you've shown is just</p>
<pre class="lang-sh prettyprint-override"><code>kubectl replace -f x.yaml
</code></pre>
<p>Where you say <code>-f -</code>, that ostensibly means "a file named <code>-</code>", but <code>kubectl</code> (along with many other tools) actually interprets this to mean "the process's standard input". For instance, you could use this for a very lightweight templating system, like</p>
<pre class="lang-sh prettyprint-override"><code>sed 's/TAG/1.2.3-20190103/g' x.yaml | kubectl replace -f -
</code></pre>
<p>For Un*x tooling in general, <a href="http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap01.html#tag_17_04" rel="noreferrer">POSIX.1 states</a> that, for many commands,</p>
<blockquote>
<p>...an operand naming a file can be specified as '-', which means to use the standard input instead of a named file....</p>
</blockquote>
<p>Some commands that support this include <a href="http://pubs.opengroup.org/onlinepubs/9699919799/utilities/cat.html#tag_20_13" rel="noreferrer">cat</a>, <a href="http://pubs.opengroup.org/onlinepubs/9699919799/utilities/grep.html#tag_20_55" rel="noreferrer">grep</a>, <a href="http://pubs.opengroup.org/onlinepubs/9699919799/utilities/sort.html#tag_20_119" rel="noreferrer">sort</a>, and tar (not required by POSIX). One way to move a directory tree between two Linux machines, for instance, is to create a tar file on stdout, pipe that stream via ssh to a remote machine, and then unpack the tar file from stdin:</p>
<pre class="lang-sh prettyprint-override"><code>tar cf - . | ssh elsewhere tar xf - -C /other/dir
</code></pre>
<p><a href="http://pubs.opengroup.org/onlinepubs/9699919799/utilities/xargs.html#tag_20_158" rel="noreferrer">xargs</a> is a tool that converts (most often) a list of filenames on standard input to command line arguments. For instance, <strong>find</strong>(1) can print a list of matching filenames to its stdout, so you could build a pipeline to delete shell backup files like</p>
<pre class="lang-sh prettyprint-override"><code>find . -name '*~' | xargs rm
</code></pre>
<p>You wouldn't usually use this with Kubernetes; your example tries to pass the YAML content itself as command-line arguments to <code>kubectl</code>, for example. You could apply <code>kubectl</code> across a directory tree with something like</p>
<pre class="lang-sh prettyprint-override"><code>find . -name '*.yaml' | xargs -n1 kubectl apply -f
</code></pre>
<p>but since <code>kubectl ... -f</code> <em>also</em> supports directory names (not a universal convention) you could do the same thing more directly as</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f . # where . is the current directory
</code></pre>
|
<p>Is it preferable to have a Kubernetes cluster with 4 nodes having resources 4 CPUs, 16 GB RAM or 2 nodes cluster with resources 8 CPUs and 32 GB RAM?</p>
<p>What benefits user will get if they go for horizontal scaling over vertical scaling in Kubernetes concepts. I mean suppose we want to run 4 pods, is it good to go with 2 nodes cluster with resources 8 CPU and 32 GB RAM or 4 nodes cluster with resources 4 CPU and 16 GB RAM.</p>
| <p><strong>Horizontal Autoscaling</strong></p>
<ul>
<li><p>Pros</p>
<ul>
<li>Likely to have more capacity since you are expanding VMs or/and servers. You are essentially expanding your cluster.</li>
<li>In theory, more redundancy since you are spreading your workloads across different physical servers.</li>
</ul></li>
<li><p>Cons</p>
<ul>
<li>In theory, it's slower. Meaning it's slower to provision servers and VMs than pods/containers in the same machine (for vertical autoscaling)</li>
<li>Also, you need to provision both servers/VMs and containers/pods when you scale up.</li>
<li>Doesn't work that well with plain bare-metal infrastructure/servers.</li>
</ul></li>
</ul>
<p><strong>Vertical Autoscaling</strong></p>
<ul>
<li><p>Pros</p>
<ul>
<li>In theory, it should be faster to autoscale if you have large servers provisioned. (Also, faster response)</li>
<li>If you have data-intensive apps you might benefit from workloads running on the same machines.</li>
<li>Great if you have a few extra unused bare-metal servers.</li>
</ul></li>
<li><p>Cons</p>
<ul>
<li>If you have large servers provisioned you may waste a lot of resources.</li>
<li>You need to calculate the capacity of your workloads more precisely (this could be a pro or a con depending on how you see it)</li>
<li>If you have a fixed set of physical servers, you will run into eventual limitations of CPUs, Storage, Memory, etc.</li>
</ul></li>
</ul>
<p>Generally, you'd want to have a combination of both Horizontal and Vertical autoscaling.</p>
|
<p>We have a Docker image and a corresponding YAML file for the deployment using Kubernetes. The application we have built is in Scala with akka-http, and we have used akka-cluster. We have a particular variable (seed-nodes, in our akka-cluster case) in the configuration file which is used in our application code and needs the pod IP. But we will not get the pod IP until the deployment is done. How should we go about tackling this issue? Will environment variables help, and if yes, how?</p>
<p>More specifically: once the Docker image is deployed in a container in a pod, and the container starts, the pod IP is already assigned. So, can we programmatically or otherwise configure the pod IPs in our code or config file before the process starts in the container?</p>
<p>For reference, this is our configuration file : </p>
<pre><code>akka {
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "127.0.0.1"
port = 0
}
}
cluster {
seed-nodes = [
"akka.tcp://[email protected]:3000",
"akka.tcp://[email protected]:3001",
],
metrics {
enabled = off
}
}
}
service {
validateTokenService {
ml.pubkey.path = "<filePath>"
}
ml_repository {
url = <url address>
}
replication.factor = 3
http.service.interface = "0.0.0.0"
http.service.port = 8080
}
</code></pre>
<p>In the above file, instead of having akka.remote.netty.tcp.hostname as "127.0.0.1", we need to have the pod IP,
so that we can use it in the seed nodes as:</p>
<pre><code>seed-nodes = [
"akka.tcp://our-system@hostname:3000",
"akka.tcp://our-system@hostname:3001",
],
</code></pre>
<p>How can we do so?
Thanks in advance.</p>
| <p>As of 2018 the ideal way to initialize an Akka Cluster within Kubernetes is <a href="https://developer.lightbend.com/docs/akka-management/current/bootstrap/kubernetes.html" rel="nofollow noreferrer">akka-management</a>.</p>
<p>It uses the Kubernetes API to fetch a list of all running pods and bootstrap the cluster.
There's no need to configure any seed nodes.</p>
<p>It also provides a readiness check that waits until the pod has joined the cluster, and a liveness check.</p>
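<p>One thing to keep in mind (assuming you use the kubernetes-api discovery mechanism): the pod's service account needs permission to read pods in its namespace. A minimal sketch of that RBAC, with illustrative names, could look like this:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader              # illustrative name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods               # illustrative name
subjects:
- kind: ServiceAccount
  name: default                 # assumption: the pods run under the default service account
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>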
|
<p>So there is:</p>
<ul>
<li>the <em>StatefulSet</em> to control several replicas of a <em>Pod</em> in an ordered manner.</li>
<li>the <em>PersistentVolumeClaim</em> to provide volume to a <em>Pod</em>.</li>
<li>the <code>statefulset.spec.volumeClaimTemplate[]</code> to bind the previous two together.</li>
<li>the <em>PersistentVolumeSelector</em> to control which <em>PersistentVolume</em> fulfills which <em>PersistentVolumeClaim</em>.</li>
</ul>
<p>Suppose I have persistent volumes named <em>pv0</em> and <em>pv1</em>, and a statefulset with 2 replicas called <em>couchdb</em>. Concretely, the statefulset is:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: couchdb
spec:
...
replicas: 2
template:
...
spec:
containers:
- name: couchdb
image: klaemo/couchdb:1.6
volumeMounts:
- name: db
mountPath: /usr/local/var/lib/couchdb
volumes:
- name: db
persistentVolumeClaim
claimName: db
volumeClaimTemplates:
- metadata:
name: db
spec:
...
</code></pre>
<p>this <em>StatefulSet</em> generates two <em>PersistentVolumeClaim</em> named <em>db-couchdb-0</em> and <em>db-couchdb-1</em>. The problem is that it is not guaranteed that pvc <em>db-couchdb-0</em> will be always bound to <em>pv0</em>.</p>
<p>The question is: <strong>how do you ensure controlled binds for <em>PersistentVolumeClaim</em> managed by a <em>StatefulSet</em> controller?</strong></p>
<p>I tried adding a volume selector like this:</p>
<pre><code>selector:
matchLabels:
name: couchdb
</code></pre>
<p>to the <code>statefulset.spec.volumeClaimTemplate[0].spec</code> but the value of <em>name</em> doesn't get templated. Both claims will end up looking for a <em>PersistentVolume</em> labeled <em>name=couchdb</em>.</p>
| <p>What you're looking for is a <code>claimRef</code> inside the persistent volume, which have the name and namespace of PVC, to which you want to bind your PV. Please have a look at the following jsons:</p>
<p>Pv-0.json</p>
<pre><code>{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "pv-data-vol-0",
"labels": {
"type": "local"
}
},
"spec": {
"capacity": {
"storage": "10Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"local": {
"path": "/prafull/data/pv-0"
},
"claimRef": {
"namespace": "default",
"name": "data-test-sf-0"
},
"nodeAffinity": {
"required": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"operator": "In",
"values": [
"ip-10-0-1-46.ec2.internal"
]
}
]
}
]
}
}
}
}
</code></pre>
<p>Pv-1.json</p>
<pre><code>{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "pv-data-vol-1",
"labels": {
"type": "local"
}
},
"spec": {
"capacity": {
"storage": "10Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"local": {
"path": "/prafull/data/pv-1"
},
"claimRef": {
"namespace": "default",
"name": "data-test-sf-1"
},
"nodeAffinity": {
"required": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"operator": "In",
"values": [
"ip-10-0-1-46.ec2.internal"
]
}
]
}
]
}
}
}
}
</code></pre>
<p>Statefulset.json</p>
<pre><code>{
"kind": "StatefulSet",
"apiVersion": "apps/v1beta1",
"metadata": {
"name": "test-sf",
"labels": {
"state": "test-sf"
}
},
"spec": {
"replicas": 2,
"template": {
"metadata": {
"labels": {
"app": "test-sf"
},
"annotations": {
"pod.alpha.kubernetes.io/initialized": "true"
}
}
...
...
},
"volumeClaimTemplates": [
{
"metadata": {
"name": "data"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"resources": {
"requests": {
"storage": "10Gi"
}
}
}
}
]
}
}
</code></pre>
<p>The <code>volumeClaimTemplate</code> will create two PVCs, <code>test-sf-data-0</code> and <code>test-sf-data-1</code>. The two PV definitions contain the <code>claimRef</code> section, which holds the <code>namespace</code> and <code>PVC name</code> the PV should bind to. Please note that the <code>namespace</code> is mandatory, because PVs are not namespaced and there might be two PVCs with the same name in two different namespaces; without the namespace the Kubernetes controller manager could not tell which PVC the PV should bind to.</p>
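<p>Translated to the couchdb example from the question (in YAML; the storage size and volume source are assumptions), pinning the first replica's claim would look roughly like this. The generated claim names follow the <code>&lt;claim template name&gt;-&lt;statefulset name&gt;-&lt;ordinal&gt;</code> pattern, i.e. <code>db-couchdb-0</code> and <code>db-couchdb-1</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
spec:
  capacity:
    storage: 10Gi              # assumption: match the claim template's request
  accessModes:
  - ReadWriteOnce
  claimRef:
    namespace: default         # assumption: the namespace the StatefulSet runs in
    name: db-couchdb-0         # pins this PV to the first replica's claim
  # ...plus whatever volume source (local, nfs, ...) pv0 actually uses
</code></pre>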
<p>Hope this answers your question. </p>
|
<p>I am kubernetes newbie, and I have a basic question</p>
<p>my understanding from <a href="https://kubernetes.io/docs/reference/kubectl/conventions/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/conventions/</a> is that we can generate YAML templates using the "kubectl run" command</p>
<p>But when I tried doing the same, it didn't work as expected</p>
<pre><code>kubectl run deployment-sample --image deployment-sample --dry-run -o yaml --generator=extensions/v1beta1
error: generator "extensions/v1beta1" not found
kubectl run deployment-sample --image deployment-sample --dry-run -o yaml --generator=apps/v1beta1
error: generator "apps/v1beta1" not found
</code></pre>
<p>Not sure if my understanding is wrong or something wrong in my command ?</p>
<p>I am on kubernetes 1.11</p>
| <p>I find that I can create a Deployment with <code>kubectl run --generator=deployment/v1beta1 foobar --image=nginx -o yaml --dry-run</code> so your case would be <code>kubectl run --generator=deployment/v1beta1 deployment-sample --image=deployment-sample -o yaml --dry-run</code>. The kubectl conventions page you refer to does say this generator is 'recommended' for Deployments. </p>
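<p>The dry-run output is then a plain Deployment manifest you can redirect to a file and edit, roughly of this shape (treat the exact <code>apiVersion</code> and fields as a sketch, since they depend on the generator and kubectl version):</p>
<pre><code>apiVersion: extensions/v1beta1   # may be apps/v1beta1 depending on the generator
kind: Deployment
metadata:
  labels:
    run: deployment-sample
  name: deployment-sample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: deployment-sample
    spec:
      containers:
      - image: deployment-sample
        name: deployment-sample
</code></pre>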
<p>But I'm not sure why the docs list a non-recommended generator option that actually doesn't work. For a command like this you can recreate the expected output in a reference environment through the online tutorials at <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/</a> You get the same output there so it is not just you or your cluster. My best guess is that 'extensions/v1beta1' <a href="https://kubernetes.io/docs/reference/federation/extensions/v1beta1/definitions/" rel="nofollow noreferrer">is too general to match to a deployment specifically</a>. It <a href="https://github.com/kubernetes/website/issues/12090" rel="nofollow noreferrer">could well be that the documentation needs changing on this</a>. </p>
|
<p>From <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">access cluster api</a>, I know that a pod in the cluster can use the ClusterIP service kubernetes.default.svc to access the API server, but I am curious about how it works.</p>
<p>A pod in the cluster would only try to access the ClusterIP defined by kubernetes.default.svc; that ClusterIP is no different from any other ClusterIP except for the svc's name.</p>
<p>So how can an HTTP request to that specific ClusterIP be routed to the API server? Is it configured by the API server proxy when kubernetes.default.svc is created?</p>
| <blockquote>
<p>A pod in the cluster would only try to access the ClusterIP defined by kubernetes.default.svc; that ClusterIP is no different from any other ClusterIP except for the svc's name.</p>
</blockquote>
<p>Absolutely correct</p>
<blockquote>
<p>So how can an HTTP request to that specific ClusterIP be routed to the API server? Is it configured by the API server proxy when kubernetes.default.svc is created?</p>
</blockquote>
<p>This magic happens via <code>kube-proxy</code>, which <em>usually</em> delegates down to <code>iptables</code>, although I think in more recent Kubernetes installs IPVS is used to give a lot more control over ... well, almost everything. The <code>kube-proxy</code> receives its instructions from the API informing it of any changes, which it applies to the individual Nodes to keep the world in sync.</p>
<p>If you have access to the Nodes, you can run <code>sudo iptables -t nat -L -n</code> and see all the <code>KUBE-SERVICE-*</code> rules that are defined -- usually with helpful comments, even -- and see how they are mapped from the <code>ClusterIP</code> down to the Pod's IP of the Pods which match the selector on the <code>Service</code></p>
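<p>To make that concrete: <code>kubernetes.default.svc</code> is backed by an ordinary Endpoints object that the API server itself keeps pointed at its own address(es), and kube-proxy simply translates the ClusterIP and port to those endpoints like it would for any other Service. Roughly (the address and port below are illustrative):</p>
<pre><code># roughly what `kubectl get endpoints kubernetes -o yaml` shows
apiVersion: v1
kind: Endpoints
metadata:
  name: kubernetes
  namespace: default
subsets:
- addresses:
  - ip: 10.0.0.101        # illustrative: the API server's real address
  ports:
  - name: https
    port: 6443            # illustrative: the port the API server actually listens on
</code></pre>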
|
<p>I initialized the master node and joined worker nodes to the cluster with <code>kubeadm</code>. According to the logs the worker nodes successfully joined the cluster.</p>
<p>However, when I list the nodes in master using <code>kubectl get nodes</code>, worker nodes are absent. What is wrong?</p>
<pre><code>[vagrant@localhost ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
localhost.localdomain Ready master 12m v1.13.1
</code></pre>
<p>Here are <code>kubeadm</code> logs</p>
<pre><code>PLAY[
Alusta kubernetes masterit
]**********************************************
TASK[
Gathering Facts
]*********************************************************
ok:[
k8s-n1
]TASK[
kubeadm reset
]***********************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:01.078073",
"end":"2019-01-05 07:06:59.079748",
"rc":0,
"start":"2019-01-05 07:06:58.001675",
"stderr":"",
"stderr_lines":[
],
...
}TASK[
kubeadm init
]************************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8",
"delta":"0:01:05.163377",
"end":"2019-01-05 07:08:06.229286",
"rc":0,
"start":"2019-01-05 07:07:01.065909",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[init] Using Kubernetes version: v1.13.1\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Activating the kubelet service\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". 
This can take up to 4m0s\n[apiclient] All control plane components are healthy after 19.504023 seconds\n[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"\n[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\n[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\n[addons] Applied essential addon: CoreDNS\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes master has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nYou can now join any number of machines by running the following on each node\nas root:\n\n kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"stdout_lines":[
"[init] Using Kubernetes version: v1.13.1",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Activating the kubelet service",
"[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
"[certs] Generating \"ca\" certificate and key",
"[certs] Generating \"apiserver\" certificate and key",
"[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.101]",
"[certs] Generating \"apiserver-kubelet-client\" certificate and key",
"[certs] Generating \"etcd/ca\" certificate and key",
"[certs] Generating \"etcd/server\" certificate and key",
"[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
"[certs] Generating \"etcd/healthcheck-client\" certificate and key",
"[certs] Generating \"etcd/peer\" certificate and key",
"[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.0.0.101 127.0.0.1 ::1]",
"[certs] Generating \"apiserver-etcd-client\" certificate and key",
"[certs] Generating \"front-proxy-ca\" certificate and key",
"[certs] Generating \"front-proxy-client\" certificate and key",
"[certs] Generating \"sa\" key and public key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Writing \"admin.conf\" kubeconfig file",
"[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
"[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
"[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s",
"[apiclient] All control plane components are healthy after 19.504023 seconds",
"[uploadconfig] storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace",
"[kubelet] Creating a ConfigMap \"kubelet-config-1.13\" in namespace kube-system with the configuration for the kubelets in the cluster",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label \"node-role.kubernetes.io/master=''\"",
"[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]",
"[bootstrap-token] Using token: orl7dl.vsy5bmmibw7o6cc6",
"[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles",
"[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials",
"[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token",
"[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster",
"[bootstraptoken] creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace",
"[addons] Applied essential addon: CoreDNS",
"[addons] Applied essential addon: kube-proxy",
"",
"Your Kubernetes master has initialized successfully!",
"",
"To start using your cluster, you need to run the following as a regular user:",
"",
" mkdir -p $HOME/.kube",
" sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
" sudo chown $(id -u):$(id -g) $HOME/.kube/config",
"",
"You should now deploy a pod network to the cluster.",
"Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:",
" https://kubernetes.io/docs/concepts/cluster-administration/addons/",
"",
"You can now join any number of machines by running the following on each node",
"as root:",
"",
" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
]
}TASK[
set_fact
]****************************************************************
ok:[
k8s-n1
]=>{
"ansible_facts":{
"kubeadm_join":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
},
"changed":false
}TASK[
debug
]*******************************************************************
ok:[
k8s-n1
]=>{
"kubeadm_join":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9"
}TASK[
Aseta ymparistomuuttujat
]************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"cp /etc/kubernetes/admin.conf /home/vagrant/ && chown vagrant:vagrant /home/vagrant/admin.conf && export KUBECONFIG=/home/vagrant/admin.conf && echo export KUBECONFIG=$KUBECONFIG >> /home/vagrant/.bashrc",
"delta":"0:00:00.008628",
"end":"2019-01-05 07:08:08.663360",
"rc":0,
"start":"2019-01-05 07:08:08.654732",
"stderr":"",
"stderr_lines":[
],
"stdout":"",
"stdout_lines":[
]
}PLAY[
Konfiguroi CNI-verkko
]***************************************************
TASK[
Gathering Facts
]*********************************************************
ok:[
k8s-n1
]TASK[
sysctl
]******************************************************************
ok:[
k8s-n1
]=>{
"changed":false
}TASK[
sysctl
]******************************************************************
ok:[
k8s-n1
]=>{
"changed":false
}TASK[
Asenna Flannel-plugin
]***************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"export KUBECONFIG=/home/vagrant/admin.conf ; kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml",
"delta":"0:00:00.517346",
"end":"2019-01-05 07:08:17.731759",
"rc":0,
"start":"2019-01-05 07:08:17.214413",
"stderr":"",
"stderr_lines":[
],
"stdout":"clusterrole.rbac.authorization.k8s.io/flannel created\nclusterrolebinding.rbac.authorization.k8s.io/flannel created\nserviceaccount/flannel created\nconfigmap/kube-flannel-cfg created\ndaemonset.extensions/kube-flannel-ds-amd64 created\ndaemonset.extensions/kube-flannel-ds-arm64 created\ndaemonset.extensions/kube-flannel-ds-arm created\ndaemonset.extensions/kube-flannel-ds-ppc64le created\ndaemonset.extensions/kube-flannel-ds-s390x created",
"stdout_lines":[
"clusterrole.rbac.authorization.k8s.io/flannel created",
"clusterrolebinding.rbac.authorization.k8s.io/flannel created",
"serviceaccount/flannel created",
"configmap/kube-flannel-cfg created",
"daemonset.extensions/kube-flannel-ds-amd64 created",
"daemonset.extensions/kube-flannel-ds-arm64 created",
"daemonset.extensions/kube-flannel-ds-arm created",
"daemonset.extensions/kube-flannel-ds-ppc64le created",
"daemonset.extensions/kube-flannel-ds-s390x created"
]
}TASK[
shell
]*******************************************************************
changed:[
k8s-n1
]=>{
"changed":true,
"cmd":"sleep 10",
"delta":"0:00:10.004446",
"end":"2019-01-05 07:08:29.833488",
"rc":0,
"start":"2019-01-05 07:08:19.829042",
"stderr":"",
"stderr_lines":[
],
"stdout":"",
"stdout_lines":[
]
}PLAY[
Alusta kubernetes workerit
]**********************************************
TASK[
Gathering Facts
]*********************************************************
ok:[
k8s-n3
]ok:[
k8s-n2
]TASK[
kubeadm reset
]***********************************************************
changed:[
k8s-n3
]=>{
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:00.085388",
"end":"2019-01-05 07:08:34.547407",
"rc":0,
"start":"2019-01-05 07:08:34.462019",
"stderr":"",
"stderr_lines":[
],
...
}changed:[
k8s-n2
]=>{
"changed":true,
"cmd":"kubeadm reset -f",
"delta":"0:00:00.086224",
"end":"2019-01-05 07:08:34.600794",
"rc":0,
"start":"2019-01-05 07:08:34.514570",
"stderr":"",
"stderr_lines":[
],
"stdout":"[preflight] running pre-flight checks\n[reset] no etcd config found. Assuming external etcd\n[reset] please manually reset etcd to prevent further issues\n[reset] stopping the kubelet service\n[reset] unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]\n[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]\n[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]\n\nThe reset process does not reset or clean up iptables rules or IPVS tables.\nIf you wish to reset iptables, you must do so manually.\nFor example: \niptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X\n\nIf your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)\nto reset your system's IPVS tables.",
"stdout_lines":[
"[preflight] running pre-flight checks",
"[reset] no etcd config found. Assuming external etcd",
"[reset] please manually reset etcd to prevent further issues",
"[reset] stopping the kubelet service",
"[reset] unmounting mounted directories in \"/var/lib/kubelet\"",
"[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]",
"[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]",
"[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]",
"",
"The reset process does not reset or clean up iptables rules or IPVS tables.",
"If you wish to reset iptables, you must do so manually.",
"For example: ",
"iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X",
"",
"If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)",
"to reset your system's IPVS tables."
]
}TASK[
kubeadm join
]************************************************************
changed:[
k8s-n3
]=>{
"changed":true,
"cmd":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"delta":"0:00:01.988676",
"end":"2019-01-05 07:08:38.771956",
"rc":0,
"start":"2019-01-05 07:08:36.783280",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
"stdout_lines":[
"[preflight] Running pre-flight checks",
"[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
"[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
"[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
"[join] Reading configuration from the cluster...",
"[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
"[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Activating the kubelet service",
"[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this node join the cluster."
]
}changed:[
k8s-n2
]=>{
"changed":true,
"cmd":" kubeadm join 10.0.0.101:6443 --token orl7dl.vsy5bmmibw7o6cc6 --discovery-token-ca-cert-hash sha256:a38a1b8f98a7695880fff2ce6a45ee90a77807d149c5400cc84af3fcf56fd8a9",
"delta":"0:00:02.000874",
"end":"2019-01-05 07:08:38.979256",
"rc":0,
"start":"2019-01-05 07:08:36.978382",
"stderr":"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06",
"stderr_lines":[
"\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06"
],
"stdout":"[preflight] Running pre-flight checks\n[discovery] Trying to connect to API Server \"10.0.0.101:6443\"\n[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"\n[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key\n[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"\n[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"\n[join] Reading configuration from the cluster...\n[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Activating the kubelet service\n[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...\n[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the master to see this node join the cluster.",
"stdout_lines":[
"[preflight] Running pre-flight checks",
"[discovery] Trying to connect to API Server \"10.0.0.101:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://10.0.0.101:6443\"",
"[discovery] Requesting info from \"https://10.0.0.101:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"10.0.0.101:6443\"",
"[discovery] Successfully established connection with API Server \"10.0.0.101:6443\"",
"[join] Reading configuration from the cluster...",
"[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'",
"[kubelet] Downloading configuration for the kubelet from the \"kubelet-config-1.13\" ConfigMap in the kube-system namespace",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Activating the kubelet service",
"[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...",
"[patchnode] Uploading the CRI Socket information \"/var/run/dockershim.sock\" to the Node API object \"localhost.localdomain\" as an annotation",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this node join the cluster."
]
}PLAY RECAP *********************************************************************
k8s-n1:ok=24 changed=16 unreachable=0 failed=0
k8s-n2:ok=16 changed=13 unreachable=0 failed=0
k8s-n3:ok=16 changed=13 unreachable=0 failed=0
</code></pre>
<p>.</p>
<pre><code>[vagrant@localhost ~]$ kubectl get events -a
Flag --show-all has been deprecated, will be removed in an upcoming release
LAST SEEN TYPE REASON KIND MESSAGE
3m15s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 72f6776d-c267-4e31-8e6d-a4d36da1d510
3m16s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 2d68a2c8-e27a-45ff-b7d7-5ce33c9e1cc4
4m2s Warning Rebooted Node Node localhost.localdomain has been rebooted, boot id: 0213bbdf-f4cd-4e19-968e-8162d95de9a6
</code></pre>
| <p>By default the nodes (kubelet) identify themselves using their hostnames. It seems that your VMs' hostnames are not set.</p>
<p>In the <code>Vagrantfile</code> set the <code>hostname</code> value to different names for each VM.
<a href="https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-hostname" rel="nofollow noreferrer">https://www.vagrantup.com/docs/vagrantfile/machine_settings.html#config-vm-hostname</a></p>
|
<p>The Kubernetes Python client's CoreV1Api connect_get_namespaced_pod_exec call fails to run.</p>
<p>I have checked the python version == 2.7 and pip freeze - ipaddress==1.0.22, urllib3==1.24.1 and websocket-client==0.54.0 are the versions which satisfy the requirement - as mentioned here: <a href="https://github.com/kubernetes-client/python/blob/master/README.md#hostname-doesnt-match" rel="noreferrer">https://github.com/kubernetes-client/python/blob/master/README.md#hostname-doesnt-match</a>
followed the issue on this thread - <a href="https://github.com/kubernetes-client/python/issues/36" rel="noreferrer">https://github.com/kubernetes-client/python/issues/36</a> - not much help.</p>
<p>Tried using stream as suggested here - <a href="https://github.com/kubernetes-client/python/blob/master/examples/exec.py" rel="noreferrer">https://github.com/kubernetes-client/python/blob/master/examples/exec.py</a></p>
<p>Ran:</p>
<pre><code>api_response = stream(core_v1_api.connect_get_namespaced_pod_exec,
name, namespace,
command=exec_command,
stderr=True, stdin=False,
stdout=True, tty=False)
</code></pre>
<p>Got this error:</p>
<blockquote>
<p>ApiException: (0)
Reason: hostname '10.47.7.95' doesn't match either of '', 'cluster.local'</p>
</blockquote>
<p>Without stream, using the CoreV1Api directly -</p>
<p>Ran :</p>
<pre><code>core_v1_api = client.CoreV1Api()
api_response = core_v1_api.connect_get_namespaced_pod_exec(name=name,namespace=namespace,command=exec_command,stderr=True, stdin=False,stdout=True, tty=False)
</code></pre>
<p>Got this error:</p>
<blockquote>
<p>ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Sat, 05 Jan 2019 08:01:22 GMT', 'Content-Length': '139', 'Content-Type': 'application/json'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Upgrade request required","reason":"BadRequest","code":400}</p>
</blockquote>
| <p>I wrote a simple program to check that:</p>
<pre><code>from kubernetes import client, config
from kubernetes.stream import stream
# create an instance of the API class
config.load_kube_config()
api_instance = client.CoreV1Api()
exec_command = [
'/bin/sh',
'-c',
'echo This is Prafull Ladha and it is test function']
resp = stream(api_instance.connect_get_namespaced_pod_exec, "nginx-deployment-76bf4969df-467z2", 'default',
command=exec_command,
stderr=True, stdin=False,
stdout=True, tty=False)
print("Response: " + resp)
</code></pre>
<p>It is working perfectly fine for me.</p>
<p>I believe you're using <code>minikube</code> for development purposes. It is not able to recognise your hostname. You can make it work by disabling <code>assert_hostname</code> in your program like:</p>
<pre><code>from kubernetes.client import configuration
config.load_kube_config()
configuration.assert_hostname = False
</code></pre>
<p>This should resolve your issue.</p>
|
<p>So, instead of explaining the architecture I draw you a picture today :) <em>I know, it's 1/10.</em></p>
<p><em>Forgot to paint this as well, it is a <strong>single node cluster</strong></em></p>
<p>Hope this will save you some time.
Probably it's also easier to see where my struggles are, as it exposes my gaps in understanding.</p>
<p>So, in a nutshell:</p>
<blockquote>
<p><strong>What is working:</strong></p>
<ul>
<li><p>I can curl each ingress via virtual hosts from <em>inside</em> of the server using <code>curl -vH 'host: host.com' http://192.168.1.240/articleservice/system/ipaddr</code></p>
</li>
<li><p>I can access the server</p>
</li>
</ul>
</blockquote>
<hr />
<blockquote>
<p><strong>What's not working:</strong></p>
<ul>
<li>I can <em>not</em> access the cluster from <em>outside</em>.</li>
</ul>
</blockquote>
<p>Somehow I am not able to solve this myself, even though I have read quite a lot and had lots of help. As I have been struggling with this for some time now, explicit answers are really appreciated.</p>
<p><a href="https://i.stack.imgur.com/NsX1r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NsX1r.png" alt="architecture" /></a></p>
| <p>Generally you cannot access your cluster from outside without exposing a service.
You should change your "Ingress Controller" service type to <code>NodePort</code> and let kubernetes assign a port to that service.<br>
You can see the ports assigned to a service using <code>kubectl get service ServiceName</code>.<br>
Now it is possible to access that service from outside on <code>http://ServerIP:NodePort</code>, but if you need to use the standard HTTP and HTTPS ports you should put a reverse proxy outside of your cluster to forward traffic from port 80 to the <code>NodePort</code> assigned to the Ingress Controller service.<br>
If you don't want to add a reverse proxy, it is possible to add <code>externalIPs</code> to the Ingress Controller service, but this way you lose the <code>RemoteAddr</code> in your endpoints and get the ingress controller pod IP instead.
<code>externalIPs</code> can be a list of your public IPs.</p>
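<p>For illustration, a minimal sketch of an ingress-nginx Service using <code>externalIPs</code> (the namespace, selector labels and IP are assumptions, adjust them to your ingress controller deployment):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  externalIPs:
  - 192.168.1.240          # a public/host IP of your node (assumption)
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>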
<p>you can find useful information about services and ingress in following links:<br>
<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Services</a></p>
<p><a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">Nginx Ingress - Bare-metal considerations</a></p>
|
<p>I using Kubernetes on Digitalocean and I have installed nginx-ingress which created an external load balancer. However when I install Sentry using helm <a href="https://github.com/helm/charts/tree/master/stable/sentry" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/sentry</a> another load balancer was created. Oddly though sentry is only accessible through the nginx load balancer, i.e if I point my domain to the ip address of the load balancer created in the Sentry install it does load.</p>
<p>I'm new to kubernetes, so I'm pretty sure I've done something wrong when
installing Sentry. I assume it relates to the values I used when installing the chart, as it has a Service type of LoadBalancer.</p>
<p>So my question is: can I get rid of the load balancer created by Sentry, and what's the best way to do it without breaking anything?</p>
<pre><code># Name of the service and what port to expose on the pod
# Don't change these unless you know what you're doing
service:
name: sentry
type: LoadBalancer
externalPort: 9000
internalPort: 9000
## External IP addresses of service
## Default: nil
##
# externalIPs:
# - 192.168.0.1
## Configure ingress resource that allow you to access the
## Sentry installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
enabled: true
hostname: sentry.mydomain.com
## Ingress annotations
##
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
## Ingress TLS configuration
## Secrets must be manually created in the namespace
##
tls:
- secretName: sentry-mydomain-crt
hosts:
- sentry.mydomain.com
</code></pre>
| <p>Yes you can set the type of the service in the values file to ClusterIP. </p>
<p>The values file inside the chart defaults to LoadBalancer type (<a href="https://github.com/helm/charts/blob/master/stable/sentry/values.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/sentry/values.yaml</a>) but you can override this by setting it to ClusterIP in your values file that you deploy with or by using '--set service.type=ClusterIP' as a parameter. </p>
<p>Then it will only be exposed through the Ingress and won't have an external LoadBalancer. See <a href="https://stackoverflow.com/questions/53959974/ingress-service-type">Ingress service type</a></p>
<p>Since you've already installed sentry you will want to find its release name (you'll see it as a prefix on the sentry resources from 'kubectl get' commands like 'kubectl get pod' or from 'helm list'). If you are using it then you'll want to do a 'helm upgrade'. If you aren't using it yet then you could do 'helm delete' on that release and install it again. </p>
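<p>For example, something along these lines (the release name is a placeholder):</p>
<pre><code>helm list                                   # find the release name of the Sentry install
helm upgrade <release-name> stable/sentry \
  --reuse-values \
  --set service.type=ClusterIP
</code></pre>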
|
<p>I saw that there is an object named <code>PodTemplate</code> which has little <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#pod-templates" rel="nofollow noreferrer">documentation</a>. </p>
<p>It is mentioned:</p>
<blockquote>
<p>Pod templates are pod specifications which are included in other
objects, such as <strong>Replication Controllers</strong>, <strong>Jobs</strong>, and <strong>DaemonSets</strong>.</p>
</blockquote>
<p>But I am not sure how to mention it on <code>Replication Controllers</code>, <code>Jobs</code> or <code>DaemonSets</code>. </p>
<p>I created a <code>PodTemplate</code> like this: </p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
name: pod-test
namespace: default
template:
metadata:
name: pod-template
spec:
containers:
- name: container
image: alpine
command: ["/bin/sh"]
args: ["-c", "sleep 100"]
EOF
</code></pre>
<p>I want to use it in a <code>DaemonSet</code>, how can I do it ? </p>
<p>Here is an example for <code>DaemonSet</code> YAML: </p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: pod-by-daemonset
namespace: default
spec:
selector:
matchLabels:
name: selector
template:
metadata:
labels:
name: selector
spec:
containers: # I don't want to specify it, I want to use the template.
- name: container
image: alpine
EOF
</code></pre>
| <p>Interestingly the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">page for DaemonSet</a> gives an example of creating a DaemonSet and says that </p>
<blockquote>
<p>the .spec.template is a pod template. It has exactly the same schema
as a Pod, except it is nested and does not have an apiVersion or kind.</p>
</blockquote>
<p>So the intention is to inline the Pod schema in the DaemonSet.</p>
<p>However, there does <a href="http://docker.codescode.com/about-podtamplate/" rel="nofollow noreferrer">seem to have been a plan</a> to be able to reference a PodTemplate by the form:</p>
<pre><code>TemplateRef:
Name: <templatename>
</code></pre>
<p>What <a href="https://github.com/kubernetes/kubernetes/issues/8109" rel="nofollow noreferrer">seems to have happened from the trail on github</a> is that this way of referencing a predefined PodTemplate was added but not finished and then removed. You can give it a try but it looks to me like only the inline spec is supported.</p>
|
<p>Our project is based on Spring Cloud and is deployed on Kubernetes across multiple nodes. Each service registers with Eureka using its default host name, so with the gateway deployed on node A and the config service on node B they could not reach each other through Eureka. I changed to <code>eureka.instance.prefer-ip-address: true</code>, but then services can only reach each other when they run on the same host, because the registered address is not the Kubernetes cluster IP. I want to know how services can reach each other through Eureka in Kubernetes.</p>
| <p>In version 7-201712-EA of Activiti Cloud we provided an example using services running the netflix libraries in kubernetes - the stable <a href="https://github.com/Activiti/activiti-cloud-examples/tree/7-201712-EA" rel="nofollow noreferrer">github tags</a> and docker images are available to refer to. We approached it by creating Kubernetes Services for each component and getting the component to <a href="https://stackoverflow.com/questions/52066141/zuul-unable-to-route-traffic-to-service-on-kubernetes">register with eureka using the k8s service name</a>.</p>
<p>To make sure the component declared the correct service name to eureka we <a href="https://github.com/Activiti/example-runtime-bundle/blob/7-201712-EA/src/main/resources/application.properties#L35" rel="nofollow noreferrer">set the eureka.instance.hostname</a> in the component, which can be set <a href="https://github.com/Activiti/activiti-cloud-examples/blob/7-201712-EA/kubernetes/kubectl/application.yml#L157" rel="nofollow noreferrer">in the Deployment yaml by specifying an environment variable</a> or using the default environment variable <code>EUREKA_INSTANCE_HOSTNAME</code>. We also kept thing simple by using the same port for the java app in the Pod and for the Service. Again this can be set to match by setting the port in the Pod spec and passing the <code>SERVER_PORT</code> environment variable to the spring boot app.</p>
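<p>As an illustration, a snippet of a Deployment setting those environment variables (the names and image are assumptions, not taken verbatim from the Activiti examples):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: myrepo/gateway:latest            # assumed image
        ports:
        - containerPort: 8080
        env:
        - name: EUREKA_INSTANCE_HOSTNAME
          value: gateway                        # must match the Kubernetes Service name
        - name: SERVER_PORT
          value: "8080"                         # keep the container and Service ports aligned
</code></pre>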
|
<h2>The problem</h2>
<p>The pipe character does not seem to work in Istio's <code>VirtualService</code>.</p>
<p>The example below is intended to route requests based on the <code>user-agent</code> header. Requests from a mobile device should go to <code>myapp</code> and requests from a desktop user should go to <code>deskt-app</code>, handled by next match block. The <code><REGEX></code> field works when I use this regex:</p>
<ul>
<li><code>^.*\bMobile\b.*$</code></li>
</ul>
<p>But the powers that be require a more sophisticated regex to identify mobile users. My routing breaks entirely when I use these:</p>
<ul>
<li><code>^.*\b(iPhone|Pixel)\b.*$</code></li>
<li><code>^.*\b(iPhone|Pixel)+\b.*$</code></li>
<li><code>^.*\biPhone|Pixel\b.*$</code></li>
</ul>
<h2>Expected behavior</h2>
<p>Using a regex with a pipe (logical OR) I expect to be routed to <code>myapp</code> when I have a <code>user-agent</code> header that contains the word "iPhone" or "Pixel".</p>
<h2>Actual behavior</h2>
<p>I get routed to <code>deskt-app</code>.</p>
<h2>The question</h2>
<p>How do I achieve a logical OR in an Istio <code>VirtualService</code> regex pattern? And is that my problem or am I overlooking something obvious?</p>
<hr>
<h2>Example <code>VirtualService</code></h2>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
...
...
http:
- match:
- headers:
user-agent:
regex: "<REGEX>" <------
uri:
prefix: /foo/bar
route:
- destination:
host: myapp
port:
number: 80
- match:
- uri:
prefix: /foo/bar
route:
- destination:
host: deskt-app
port:
number: 80
</code></pre>
<hr>
<p>EDIT: <a href="https://github.com/istio/istio/issues/10766" rel="nofollow noreferrer">Github Issue</a></p>
| <p>Your configuration is correct, so the issue must be with the Regex, or the contents of the user-agent are different, give this a try <code>'^.*(iPhone|Pixel).*$'</code></p>
<p>Just verified that the configuration below routes correctly when the header contains android or iphone:</p>
<pre><code> - match:
- headers:
user-agent:
regex: '^.*(Android|iPhone).*$'
</code></pre>
<p>And tested with:</p>
<p><strong>[match]</strong> curl -H "user-agent: Mozilla/5.0 (Linux; U; iPhone 4.4.2; en-us;)" ... </p>
<p><strong>[no match]</strong> curl -H "user-agent: Mozilla/5.0 (Linux; U; Iphne 4.4.2; en-us;)" ...</p>
|
<p>This is my first time running through the Kubernetes tutorial.
I installed Docker, Kubectl and Minikube on a headless Ubuntu server (18.04).
I ran Minikube like this - </p>
<pre><code>minikube start --vm-driver=none
</code></pre>
<p>I have a local docker image that run a restful service on port 9110. I create a deployment and expose it like this - </p>
<pre><code>kubectl run hello-node --image=dbtemplate --port=9110 --image-pull-policy=Never
kubectl expose deployment hello-node --type=NodePort
</code></pre>
<p>status of my service - </p>
<pre><code># kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node NodePort 10.98.104.45 <none> 9110:32651/TCP 39m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h2m
# kubectl describe services hello-node
Name: hello-node
Namespace: default
Labels: run=hello-node
Annotations: <none>
Selector: run=hello-node
Type: NodePort
IP: 10.98.104.45
Port: <unset> 9110/TCP
TargetPort: 9110/TCP
NodePort: <unset> 32651/TCP
Endpoints: 172.17.0.5:9110
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# minikube ip
192.168.1.216
</code></pre>
<p>As you can see, the service is available on the internal IP of 172.17.0.5. </p>
<p>Is there some way for me to get this service mapped to/exposed on the IP of the parent host, which is 192.168.1.216. I would like my restful service at 192.168.1.216:9110. </p>
| <p>I think <code>minikube tunnel</code> might be what you're looking for. <a href="https://github.com/kubernetes/minikube/blob/master/docs/networking.md" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/blob/master/docs/networking.md</a></p>
<blockquote>
<p>Services of type <code>LoadBalancer</code> can be exposed via the <code>minikube tunnel</code> command.</p>
</blockquote>
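<p>As a sketch, you could switch the service to type <code>LoadBalancer</code> and run the tunnel (assuming the <code>hello-node</code> deployment from your question):</p>
<pre><code>kubectl delete service hello-node
kubectl expose deployment hello-node --type=LoadBalancer --port=9110

# in a separate terminal; it keeps running and routes the LoadBalancer IP into the cluster
minikube tunnel

kubectl get service hello-node   # EXTERNAL-IP should now be populated
</code></pre>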
|
<p>When I run <code>kubectl run</code> ... or any command I get an error message saying</p>
<pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>What exactly is this error and how to resolve it?</p>
| <p>In my case, working with minikube I had not started minikube. Starting minikube with</p>
<pre><code>minikube start
</code></pre>
<p>fixed it.</p>
|
<p>I am creating an EFK stack on a k8s cluster. I am using an EFK helm chart described <a href="https://akomljen.com/get-kubernetes-logs-with-efk-stack-in-5-minutes/" rel="nofollow noreferrer">here</a>. This creates two PVC's: one for es-master and one for es-data.</p>
<p>Let's say I allocated 50 Gi for each of these PVC's. When these eventually fill up, my desired behavior is to have new data start overwriting the old data. Then I want the old data stored to, for example, an s3 bucket. How can I configure Elasticsearch to do this?</p>
| <p>One easy tool that can help you do that is Elasticsearch Curator:
<a href="https://www.elastic.co/guide/en/elasticsearch/client/curator/5.5/actions.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/client/curator/5.5/actions.html</a></p>
<p>you can use it to:</p>
<ol>
<li>Rollover the indices that hold the data by size/time. This will cause each PVC to hold a few indices, based on time.</li>
<li>snapshot the rolled over indices to backup in S3</li>
<li>delete old indices based on their date - delete the oldest indices in order to free up space for the new indices.</li>
</ol>
<p>Curator can help you do all these.</p>
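<p>For illustration, a sketch of a Curator action file that deletes the oldest indices (the index prefix and the 30-day retention are assumptions, adjust them to your index naming):</p>
<pre><code>actions:
  1:
    action: delete_indices
    description: "Delete logstash-* indices older than 30 days to free up PVC space"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
</code></pre>
<p>You could run this on a schedule, e.g. as a Kubernetes CronJob, and combine it with the snapshot action to back the old indices up to S3 before deleting them.</p>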
|
<p>I have a cluster for staging that's been up for over a year using CloudSQL, and now I'd like to bring up another GKE cluster (same google project) pointed at the same database for testing. However, I'm seeing errors when trying to use the credentials.json from the old cluster in the new one. </p>
<pre><code>googleapi: Error 403: The client is not authorized to make this request., notAuthorized"
</code></pre>
<p>I've poked around IAM to find a way to open the permissions to the new cluster but haven't found a way even though I see a service account with the "Cloud SQL Client" role.</p>
<p>What's the right way to share credentials or open permissions (or do I need to create a new service account for this)?</p>
<p>Our template deployment yaml looks like:</p>
<pre><code> - name: postgres-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.09
imagePullPolicy: Always
command: ["/cloud_sql_proxy",
"--dir=/cloudsql",
"-instances=@@PROJECT@@:us-central1:@@DBINST@@=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
</code></pre>
| <p>As you are receiving the Error 403 from a Google API, it appears to be an IAM permission error. To resolve the issue please make sure the 'Cloud SQL Client' role is assigned to your service account. To check what permissions are added to your service account, go to the Cloud Project IAM page (Left Menu > IAM & Admin > IAM) and look for the row with the service account that is having the issue. The service account should say "Cloud SQL Client" in the Role column. </p>
<p>If you find the "Cloud SQL Client" role is not added to the service account, please follow the instructions outlined in <a href="https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource" rel="nofollow noreferrer">this document</a>. If the service account is not already in the members list, it will not have any role assigned to it; click Add and enter the email address of the service account. If the service account is already on the members list, it has existing roles; click the drop-down list under Role(s) for the service account that you want to edit, or add "Cloud SQL Client" as an additional role for the service account. You need to select the "Cloud SQL Client" role from the drop-down list under “Cloud SQL”. </p>
<p>If you see the "Cloud SQL Client" role already exists, click edit to open the drop-down list, then click delete and save. Make sure the service account is removed from the IAM page. Then click the ADD button at the top of the Cloud Project IAM page, enter the service account email address and select the "Cloud SQL Client" role from the drop-down list under “Cloud SQL”. After that click the SAVE button and the service account should appear again in the list. With this we are removing and then re-adding the permissions for the service account.</p>
<p>You can also try adding a new service account as outlined in <a href="https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account" rel="nofollow noreferrer">this document</a> and selecting the "Cloud SQL Client" role from the drop-down list under “Cloud SQL”. Please note that you need the “Service Account Admin” role or the “Editor” primitive role to perform these operations. </p>
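<p>If you prefer the command line, granting the role can also be done with gcloud, for example (the project ID and service account email are placeholders):</p>
<pre><code>gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:[email protected]" \
  --role="roles/cloudsql.client"
</code></pre>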
<p>If this still does not resolve the issue, please make sure the “Cloud SQL Instance” name is correct. You can copy and paste the "Instance connection name" from the Google Cloud Console page of the Cloud SQL Instance, as outlined in this <a href="https://stackoverflow.com/questions/36410483/getting-notauthorized-error-with-cloud-sql-proxy-locally">StackOverflow issue</a>.</p>
<p>Alternatively <a href="https://stackoverflow.com/questions/45766105/how-do-i-connect-to-a-cloud-sql-instance-using-a-service-account">by updating the secret to use the right key</a> you can resolve the issue. You can make more than one key for a service account. </p>
|
<p>I'm relatively new (< 1 year) to GCP, and I'm still in the process of mapping the various services onto my existing networking mental model.</p>
<p>One knowledge gap I'm struggling to fill is how HTTP requests are load balanced to services running in our GKE clusters.</p>
<p>On a test cluster, I created a service in front of pods that serve HTTP:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: contour
spec:
ports:
- port: 80
name: http
protocol: TCP
targetPort: 8080
- port: 443
name: https
protocol: TCP
targetPort: 8443
selector:
app: contour
type: LoadBalancer
</code></pre>
<p>The service is listening on node ports 30472 and 30816:</p>
<pre><code>$ kubectl get svc contour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour LoadBalancer 10.63.241.69 35.x.y.z 80:30472/TCP,443:30816/TCP 41m
</code></pre>
<p>A GCP network load balancer is automatically created for me. It has its own public IP at 35.x.y.z, and is listening on ports 80-443:</p>
<p><a href="https://i.stack.imgur.com/VAimU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VAimU.png" alt="auto load balancer"></a></p>
<p>Curling the load balancer IP works:</p>
<pre><code>$ curl -q -v 35.x.y.z
* TCP_NODELAY set
* Connected to 35.x.y.z (35.x.y.z) port 80 (#0)
> GET / HTTP/1.1
> Host: 35.x.y.z
> User-Agent: curl/7.62.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Mon, 07 Jan 2019 05:33:44 GMT
< server: envoy
< content-length: 0
<
</code></pre>
<p>If I ssh into the GKE node, I can see the <code>kube-proxy</code> is listening on the service nodePorts (30472 and 30816) and nothing has a socket listening on ports 80 or 443:</p>
<pre><code># netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:20256 0.0.0.0:* LISTEN 1022/node-problem-d
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1221/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1369/kube-proxy
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 297/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 330/sshd
tcp6 0 0 :::30816 :::* LISTEN 1369/kube-proxy
tcp6 0 0 :::4194 :::* LISTEN 1221/kubelet
tcp6 0 0 :::30472 :::* LISTEN 1369/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 1221/kubelet
tcp6 0 0 :::5355 :::* LISTEN 297/systemd-resolve
tcp6 0 0 :::10255 :::* LISTEN 1221/kubelet
tcp6 0 0 :::10256 :::* LISTEN 1369/kube-proxy
</code></pre>
<p>Two questions:</p>
<ol>
<li>Given nothing on the node is listening on ports 80 or 443, is the load balancer directing traffic to ports 30472 and 30816?</li>
<li>If the load balancer is accepting traffic on 80/443 and forwarding to 30472/30816, where can I see that configuration? Clicking around the load balancer screens I can't see any mention of ports 30472 and 30816.</li>
</ol>
| <p>I think I found the answer to my own question - can anyone confirm I'm on the right track?</p>
<p>The network load balancer redirects the traffic to a node in the cluster without modifying the packet - packets for port 80/443 still have port 80/443 when they reach the node.</p>
<p>There's nothing listening on ports 80/443 on the nodes. However <code>kube-proxy</code> has written iptables rules that match packets <strong>to</strong> the load balancer IP, and rewrite them with the appropriate ClusterIP and port:</p>
<p>You can see the iptables config on the node:</p>
<pre><code>$ iptables-save | grep KUBE-SERVICES | grep loadbalancer
-A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:http loadbalancer IP" -m tcp --dport 80 -j KUBE-FW-D53V3CDHSZT2BLQV
-A KUBE-SERVICES -d 35.x.y.z/32 -p tcp -m comment --comment "default/contour:https loadbalancer IP" -m tcp --dport 443 -j KUBE-FW-J3VGAQUVMYYL5VK6
$ iptables-save | grep KUBE-SEP-ZAA234GWNBHH7FD4
:KUBE-SEP-ZAA234GWNBHH7FD4 - [0:0]
-A KUBE-SEP-ZAA234GWNBHH7FD4 -s 10.60.0.30/32 -m comment --comment "default/contour:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZAA234GWNBHH7FD4 -p tcp -m comment --comment "default/contour:http" -m tcp -j DNAT --to-destination 10.60.0.30:8080
$ iptables-save | grep KUBE-SEP-CXQOVJCC5AE7U6UC
:KUBE-SEP-CXQOVJCC5AE7U6UC - [0:0]
-A KUBE-SEP-CXQOVJCC5AE7U6UC -s 10.60.0.30/32 -m comment --comment "default/contour:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-CXQOVJCC5AE7U6UC -p tcp -m comment --comment "default/contour:https" -m tcp -j DNAT --to-destination 10.60.0.30:8443
</code></pre>
<p>An interesting implication is that the nodePort is created but doesn't appear to be used. That matches this comment in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">kube docs</a>:</p>
<blockquote>
<p>Google Compute Engine does not need to allocate a NodePort to make LoadBalancer work</p>
</blockquote>
<p>It also explains why GKE creates an automatic firewall rule that allows traffic from 0.0.0.0/0 towards ports 80/443 on the nodes. The load balancer isn't rewriting the packets, so the firewall needs to allow traffic from anywhere to reach iptables on the node, and it's rewritten there.</p>
|
<p>My understanding is that setting the <code>Service</code> type to <code>LoadBalancer</code> creates a new Azure Load Balancer and assigns an IP address to the <code>Service</code>. Does this mean that I can have multiple Services using port 80? If the app behind my <code>Service</code> (an ASP.NET Core app) can handle TLS and HTTPS why shouldn't I just use <code>LoadBalancer</code>'s for any <code>Service</code> I want to expose to the internet?</p>
<p>What is the advantage of using an <code>Ingress</code> if I don't care about TLS termination (You can let Cloudflare handle TLS termination)? If anything, it slows things down by adding an extra hop for every request.</p>
<h2>Update</h2>
<p>Some answers below mention that creating load balancers is costly. It should be noted that load balancers on Azure are free, but they do charge for IP addresses, of which they give you five for free. So for small projects where you want to expose up to five IP addresses, it's essentially free. Any more than that, and you may want to look at using <code>Ingress</code>.</p>
<p>Some answers also mention extra complexity if you don't use <code>Ingress</code>. I have already mentioned that Cloudflare can handle TLS termination for me. I've also discovered the <code>external-dns</code> Kubernetes project, which creates DNS entries in Cloudflare pointing at the load balancer's IP address. It seems to me that cutting out <code>Ingress</code> reduces complexity as it's one less thing that I have to configure and manage. The choice of Ingress is also massive, so it's likely that I'll pick the wrong one, which will end up unmaintained after some time.</p>
| <p>There is a nice article <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="noreferrer">here</a> which describes the differences between a Service (Load Balancer) and an Ingress.</p>
<p>In summary, you can have multiple Services (Load Balancer) in the cluster, where each application is exposed independently from the others. The main issue is that each Load Balancer added increases the cost of your solution, and it does not have to be this way unless you strictly need it.</p>
<p>If multiple applications listen on port 80 inside their containers, there is no reason you also need to map them to port 80 on the host node. You can assign any port, because the Service will handle the dynamic port mappings for you.</p>
<p>The ingress is best in this scenario, because you can have one ingress listening on port 80 and route the traffic to the right service based on many variables, like:</p>
<ul>
<li>Domain</li>
<li>Url Path</li>
<li>Query String</li>
<li>And many other</li>
</ul>
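<p>For example, a rough sketch of a single Ingress routing two host names to two different Services (the host and service names are made up):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1
          servicePort: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app2
          servicePort: 80
</code></pre>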
<p>Ingress is not just for TLS termination; in simple terms it is a proxy/gateway that controls the routing to the right service, and TLS termination is just one of its features.</p>
|
<p>I'm using Docker Desktop on Windows 10. For the purposes of development, I want to expose a local folder to a container. When running the container in Docker, I do this by specifying the volume flag (-v). </p>
<p>How do I achieve the same when running the container in Kubernetes?</p>
| <p>You should use the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">hostPath</a> volume type in your pod's spec to mount a file or directory from the host node’s filesystem, where the hostPath.path field should be in one of the following formats to accept Windows-like paths:</p>
<ul>
<li>/W/fooapp/influxdb</li>
<li>//W/fooapp/influxdb</li>
<li>/////W/fooapp/influxdb</li>
</ul>
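<p>A minimal sketch of such a Pod, assuming you want to expose <code>W:\fooapp\influxdb</code> from the host (the image and mount path are just examples):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: influxdb
spec:
  containers:
  - name: influxdb
    image: influxdb:1.7
    volumeMounts:
    - name: data
      mountPath: /var/lib/influxdb
  volumes:
  - name: data
    hostPath:
      path: /W/fooapp/influxdb       # Windows drive path in one of the formats above
      type: DirectoryOrCreate
</code></pre>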
<p>Please check <a href="https://github.com/kubernetes/kubernetes/issues/59876" rel="noreferrer">this</a> github issue explaining peculiarities of Kubernetes Volumes on Windows.
I assume also that you have enabled <a href="https://web.archive.org/web/20181231025453/http://peterjohnlightfoot.com/docker-for-windows-on-hyper-v-fix-the-host-volume-sharing-issue/" rel="noreferrer">Shared Drives</a> feature in your Docker for Windows installation.</p>
|
<h1>Purpose - What do I want to achieve?</h1>
<p>I want to access <code>systemctl</code> from inside a container running a kubernetes node (ami: running debian stretch)</p>
<h1>Working setup:</h1>
<ul>
<li><p><a href="https://github.com/kubernetes/kops/blob/master/channels/stable#L22" rel="noreferrer">Node AMI</a>: kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17</p></li>
<li><p>Node Directories Mounted in the container to make <code>systemctl</code> work:</p>
<ul>
<li>/var/run/dbus</li>
<li>/run/systemd</li>
<li>/bin/systemctl</li>
<li>/etc/systemd/system</li>
</ul></li>
</ul>
<h1>Not Working setup:</h1>
<ul>
<li><p><a href="https://github.com/kubernetes/kops/blob/master/channels/stable#L26" rel="noreferrer">Node AMI</a>: kope.io/k8s-1.11-debian-stretch-amd64-hvm-ebs-2018-08-17</p></li>
<li><p>Node Directories Mounted in the container to make <code>systemctl</code> work:</p>
<ul>
<li>/var/run/dbus</li>
<li>/run/systemd</li>
<li>/bin/systemctl</li>
<li>/etc/systemd/system</li>
</ul></li>
</ul>
<h1>Debugging in an attempt to solve the problem</h1>
<p>To debug this issue with the <code>debian-stretch</code> image not supporting <code>systemctl</code> with the same mounts as <code>debian-jessie</code></p>
<p>1) I began by spinning up a nginx deployment by mounting the above mentioned volumes in it</p>
<pre><code>kubectl apply -f https://k8s.io/examples/application/deployment.yaml
kubectl exec -it nginx-deployment /bin/bash
root@nginx-deployment-788f65877d-pzzrn:/# systemctl
systemctl: error while loading shared libraries: libsystemd-shared-
232.so: cannot open shared object file: No such file or directory
</code></pre>
<p>2) As the above issue showed the file <code>libsystemd-shared-232.so</code> not found. I found the actual path by looking into the node.</p>
<pre><code>admin@ip-10-0-20-11:~$ sudo find / -iname 'libsystemd-shared-232.so'
/lib/systemd/libsystemd-shared-232.so
</code></pre>
<p>3) Mounted the <code>/lib/systemd</code> in the nginx pod and ran the systemctl again</p>
<pre><code> kubectl exec -it nginx-deployment /bin/bash
root@nginx-deployment-587d866f54-ghfll:/# systemctl
systemctl: error while loading shared libraries: libcap.so.2:cannot
open shared object file: No such file or directory
</code></pre>
<p>4) Now the <code>systemctl</code> was failing with a new so missing error</p>
<pre><code>root@nginx-deployment-587d866f54-ghfll:/# systemctl
systemctl: error while loading shared libraries: libcap.so.2: cannot
open shared object file: No such file or directory
</code></pre>
<p>5) To solve the above error i again searched the node for <code>libcap.so.2</code> Found it in the below path.</p>
<pre><code>admin@ip-10-0-20-11:~$ sudo find / -iname 'libcap.so.2'
/lib/x86_64-linux-gnu/libcap.so.2
</code></pre>
<p>6) Seeing the above directory not mounted in my pod. I mounted the below path in the nginx pod.</p>
<pre><code>/lib/x86_64-linux-gnu mounted in the nginx pod(deployment)
</code></pre>
<p>7) The nginx pod is not able to come up after adding the above mount. Getting the below error:</p>
<pre><code>$ k logs nginx-deployment-f9c5ff956-b9wn5
standard_init_linux.go:178: exec user process caused "no such file
or directory"
</code></pre>
<p>Please suggest how to debug further. And what all mounts are required to make systemctl work from inside a container in a debian stretch environment.</p>
<p>Any pointers to take the debugging further could be helpful.</p>
| <p>Rather than mounting some of the library files from the host you can just install <code>systemd</code> in the container.</p>
<pre><code>$ apt-get -y install systemd
</code></pre>
<p>Now, that won't necessarily make <code>systemctl</code> run. You will need <code>systemd</code> to be running in your container which is spawned by <code>/sbin/init</code> on your system. <code>/sbin/init</code> needs to run as root so essentially you would have to run this with the <code>privileged</code> flag in the pod or container <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#securitycontext-v1-core" rel="noreferrer">security context</a> on Kubernetes. Now, this is insecure and <a href="https://lwn.net/Articles/676831/" rel="noreferrer">there is a long history</a> about running systemd in a container where the Docker folks were mostly against it (security) and the Red Hat folks said that it was needed.</p>
<p>Nevertheless, the Red Hat folks figured out a way to <a href="https://developers.redhat.com/blog/2016/09/13/running-systemd-in-a-non-privileged-container/" rel="noreferrer">make it work without the <code>privileged</code> flag</a>. You need:</p>
<ul>
<li><code>/run</code> mounted as a tmpfs in the container.</li>
<li><code>/sys/fs/cgroup</code> mounted as read-only is ok.</li>
<li><code>/sys/fs/cgroup/systemd/</code> mounted as read/write. </li>
<li>Use <code>SIGRTMIN+3</code> for <code>STOPSIGNAL</code>.</li>
</ul>
<p>In Kubernetes you need an <a href="https://developers.redhat.com/blog/2016/09/13/running-systemd-in-a-non-privileged-container/" rel="noreferrer"><code>emptyDir</code></a> to mount a <code>tmpfs</code>. The others can be mounted as host volumes.</p>
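<p>For illustration, a rough sketch of a Pod spec along those lines (the image is a placeholder for a debian-stretch image that has systemd installed and starts <code>/sbin/init</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: systemd-test
spec:
  containers:
  - name: systemd
    image: myrepo/debian-stretch-systemd     # assumed image with systemd installed
    command: ["/sbin/init"]
    volumeMounts:
    - name: run
      mountPath: /run
    - name: cgroup
      mountPath: /sys/fs/cgroup
      readOnly: true
    - name: cgroup-systemd
      mountPath: /sys/fs/cgroup/systemd
  volumes:
  - name: run
    emptyDir:
      medium: Memory          # tmpfs-backed emptyDir for /run
  - name: cgroup
    hostPath:
      path: /sys/fs/cgroup
  - name: cgroup-systemd
    hostPath:
      path: /sys/fs/cgroup/systemd
</code></pre>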
|
<p>I've used <a href="https://github.com/cvallance/mongo-k8s-sidecar" rel="nofollow noreferrer">mongo k8s sidecar</a> to provision a 3 member replica set mongo cluster on kubernetes. I need to expose mongodb service externally and hence created a LoadBalancer. </p>
<p>This is how the service looks like</p>
<pre><code>LoadBalancer Ingress: xxx.yyy.elb.amazonaws.com
Port: <unset> 27017/TCP
TargetPort: 27017/TCP
NodePort: <unset> 30994/TCP
Endpoints: 100.14.1.3:27017,100.14.1.4:27017,100.14.2.5:27017
</code></pre>
<p>Trying to connect using mongodb shell 3.6 works fine
<code>mongo --host xxx.yyy.elb.amazonaws.com</code></p>
<p>But in the java client code I see the following exception.
<code>java.net.UnknownHostException: mongo-1.mongo.dev.svc.cluster.local</code></p>
<p>I can confirm that the mongo pods are up and running. I am able to connect to mongo from other pods within the cluster - just not able to reach it externally. </p>
<p>Few things I don't understand is whats exactly happening in the java client.</p>
<p>The java client (which uses spring-data-mongo for configuration) is being created as follows.</p>
<p><code>MongoClient mongoClient = new MongoClient( "xxx.yyy.elb.amazonaws.com" , 27017 );</code></p>
<p>The fullstack trace is as follows</p>
<p><code>Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@161f6623. Client view of cluster state is {type=REPLICA_SET, servers=[{address=mongo-2.mongo.dev.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongo-2.mongo.dev.svc.cluster.local}, caused by {java.net.UnknownHostException: mongo-2.mongo.dev.svc.cluster.local}}, {address=mongo-0.mongo.dev.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongo-0.mongo.dev.svc.cluster.local}, caused by {java.net.UnknownHostException: mongo-0.mongo.dev.svc.cluster.local}}, {address=mongo-1.mongo.dev.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongo-1.mongo.dev.svc.cluster.local}, caused by {java.net.UnknownHostException: mongo-1.mongo.dev.svc.cluster.local}}]
at com.mongodb.internal.connection.BaseCluster.createTimeoutException(BaseCluster.java:401)
at com.mongodb.internal.connection.BaseCluster.selectServer(BaseCluster.java:120)</code></p>
<p>Why is the mongoClient using the pod name , even though I've passed the loadbalancer address? How do I fix this?</p>
<p>Thanks in advance</p>
| <p>You are getting an error for <code>mongo-1.mongo.dev.svc.cluster.local</code> and that's the internal endpoint within the cluster. In other words, that's how you would get to your Mongo instance from other pods in the cluster.</p>
<p>On the Java client, you need to use <code>xxx.yyy.elb.amazonaws.com:27017</code> as the Mongo endpoint configuration.</p>
<p>Something like this:</p>
<pre><code>MongoClient mongoClient = new MongoClient( "xxx.yyy.elb.amazonaws.com" , 27017 );
</code></pre>
<p>To give you an overview of the path, your Mongo instance is exposed through a <a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> Kubernetes Service on port <code>27017</code>.</p>
<p>Then the traffic comes into the load balancer and from there gets forwarded to your endpoints <code>100.14.1.3:27017</code>, etc. </p>
<p>Then from there, they enter the Node on the NodePort <code>30994</code> on each node. </p>
<p>Then the nodes that have a pod running reply with an answer.</p>
<p>The Mongo process in the container itself runs on port <code>27017</code> so the moment the traffic gets to the node on port <code>30994</code> the container runtime forwards the traffic to your application in the container to <code>27017</code>.</p>
|
<p>I am failing to deploy postgres (single node, official image) on kubernetes and allow services to access postgres via ClusterIP service. </p>
<p>The config is rather simple - Namespace, Deployment, Service:</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: database
name: postgres
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:11.1
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
name: pg
namespace: database
labels:
app: postgres
spec:
selector:
app: postgres
ports:
- protocol: TCP
name: postgres
port: 5432
targetPort: 5432
</code></pre>
<p>To test is executed a "/bin/bash" into the pod and ran a simple psql command to test the connection. All works well locally:</p>
<pre><code>kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- psql -U admin postgresdb -c "\t"
Tuples only is on.
</code></pre>
<p>But as soon as I try to access postgres via service, the command fails:</p>
<pre><code>kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- psql -h pg -U admin postgresdb -c "\t"
psql: could not connect to server: Connection timed out
Is the server running on host "pg" (10.245.102.15) and accepting
TCP/IP connections on port 5432?
</code></pre>
<p>This is tested on a DigitalOcean a single node cluster (1.12.3).</p>
<p>Postgres listened on <code>*</code> on the correct port, <code>pg_hba.conf</code> looks by default like this:</p>
<pre><code>...
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all trust
host replication all 127.0.0.1/32 trust
host replication all ::1/128 trust
host all all all md5
</code></pre>
<p>To reproduce see <a href="https://gist.githubusercontent.com/sontags/c364751e7f0d8ba1a02a9805efc68db6/raw/01b1808348541d743d6a861402cfba224bee8971/database.yaml" rel="nofollow noreferrer">this gist</a></p>
<p>Execute via (please use a fresh cluster and read thru):</p>
<pre><code>export k8sconf=/path/to/your/k8s/confic/file
kubectl --kubeconfig $k8sconf apply -f https://gist.githubusercontent.com/sontags/c364751e7f0d8ba1a02a9805efc68db6/raw/01b1808348541d743d6a861402cfba224bee8971/database.yaml
kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- /bin/bash /reproducer/runtest.sh
</code></pre>
<p>Any hint why the service does not allow to connect or other tests to perform?</p>
| <p>Hard to tell without access to your cluster. This works fine on my AWS cluster. Some things to look at:</p>
<ul>
<li>Is the kube-proxy running on all nodes?</li>
<li>Is your network overlay/CNI running on all nodes?</li>
<li>Does this happen with the pg pod only? what about other pods?</li>
<li>DNS seems to be fine since <code>pg</code> is being resolved to <code>10.245.102.15</code></li>
<li>Are your nodes allowing <a href="https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux" rel="nofollow noreferrer">IP forwarding</a> from the Linux side?</li>
<li>Are your Digital Ocean firewall rules allowing traffic from any source on port <code>5432</code>? Note that the PodCidr and K8s Service IP range is different from the hostCidr (of your droplets).</li>
</ul>
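<p>One way to narrow this down further is to test the service from a throwaway pod, for example (the busybox image and the timeouts are just an example):</p>
<pre><code>kubectl run -it --rm debug --image=busybox --restart=Never -- sh
# inside the pod:
nslookup pg.database.svc.cluster.local
nc -vz -w 3 pg.database.svc.cluster.local 5432   # via the Service
nc -vz -w 3 <pod-ip> 5432                        # directly to the postgres pod IP
</code></pre>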
|
<p>I'm building an application written in PHP/Symfony4. I have prepared an API service and some services written in NodeJS/Express.
I'm configuring the server infrastructure on Google Cloud Platform. The best idea, for now, is a multi-zone, multi-cluster configuration with a load balancer.</p>
<p>I was using this link <a href="https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress/tree/master/examples/zone-printer" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress/tree/master/examples/zone-printer</a> as a source for my configuration. But now I don't know how to build and upload my docker-compose.yml to GCR so it can be used in Google Kubernetes.</p>
<pre><code>version: '3'
services:
php:
image: gcr.io/XXX/php
build: build/php
expose:
- '9000'
volumes:
- ./symfony:/var/www/html/symfony:cached
- ./logs:/var/log
web:
image: gcr.io/XXX/nginx
build: build/nginx
restart: always
ports:
- "81:80"
depends_on:
- php
volumes:
- ./symfony:/var/www/html/symfony:cached
- ./logs:/var/log/nginx
</code></pre>
<p>I need to have a single container GCR.io/XXX/XXX/XXX for kubernetes-ingress configuration. Should I use docker-compose.yml or find something else? Which solution will be best?</p>
| <p>docker-compose and Kubernetes declarations are not compatible with each other. If you want to use Kubernetes you can use a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">Pod</a> with 2 containers (according to your example). If you want to take it a step further, you can use a Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> that can manage your pod replicas, in case you are using multiple replicas.</p>
<p>Something like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
labels:
app: myapp
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: php
image: gcr.io/XXX/php
ports:
- containerPort: 9000
volumeMounts:
- mountPath: /var/www/html/symfony
name: symphony
- mountPath: /var/log
name: logs
- name: web
image: gcr.io/XXX/nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: /var/www/html/symfony
name: symphony
- mountPath: /var/log
name: logs
volumes:
- name: symphony
hostPath:
path: /home/symphony
- name: logs
hostPath:
path: /home/logs
</code></pre>
<p>Even further, you can remove your web container and use <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx ingress controller</a>. More about Kubernetes Ingresses <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">here</a></p>
|
<p>GKE seems to create a cluster using one availability zone for the master although it provides an option to deploy nodes to multiple availability zones. I am concerned that if master AZ goes down, I cannot manage my cluster anymore. I understand my apps will continue to run but it is a big concern that I cannot scale up my service or deploy a new version of my apps, etc.</p>
<p>Is my understanding of "GKE cluster is vulnerable to master zone going down" correct? If not, can you please explain how? If it is correct, what are my options to make it highly available so that it can tolerate one availability zone going down?</p>
| <p>GKE <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters" rel="nofollow noreferrer">regional clusters</a>, which offer a multi-master setup with one master in each zone in the region, are now generally available. See the <a href="https://cloud.google.com/blog/products/gcp/with-google-kubernetes-engine-regional-clusters-master-nodes-are-now-highly-available" rel="nofollow noreferrer">launch blog post</a> for a quick overview.</p>
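<p>Creating one is a single command, for example (the cluster name and region are placeholders):</p>
<pre><code>gcloud container clusters create my-regional-cluster \
  --region us-central1 \
  --num-nodes 1        # one node per zone, i.e. three nodes in total
</code></pre>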
|
<p>I am facing a weird behaviour with kubectl and --dry-run.</p>
<p>To simplify let's say that I have the following yaml file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx
spec:
replicas: 3
selector:
matchLabels:
run: nginx
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: nginx
spec:
containers:
- image: nginxsdf
imagePullPolicy: Always
name: nginx
</code></pre>
<p>Modifying for example the image or the number of replicas:</p>
<ul>
<li><p><code>kubectl apply -f Deployment.yaml -o yaml --dry-run</code> outputs me the resource having the <strong>OLD</strong> specifications</p></li>
<li><p><code>kubectl apply -f Deployment.yaml -o yaml</code> outputs me the resource having <strong>NEW</strong> specifications</p></li>
</ul>
<p>According to the documentation:</p>
<blockquote>
<p>--dry-run=false: If true, only print the object that would be sent, without sending it.</p>
</blockquote>
<p>However the object printed is the old one and not the one that will be sent to the ApiServer</p>
<p>Tested on minikube, gke v1.10.0</p>
<p>In the meanwhile I opened a new gitHub issue for it:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/issues/72644" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/72644</a></li>
</ul>
| <p>I got the following answer in the kubernetes issue page:</p>
<blockquote>
<p>When updating existing objects, kubectl apply doesn't send an entire object, just a patch. It is not exactly correct to print either the existing object or the new object in dry-run mode... the outcome of the merge is what should be printed.</p>
<p>For kubectl to be able to accurately reflect the result of the apply, it would need to have the server-side apply logic clientside, which is a non-goal.</p>
<p>Current efforts are directed at moving apply logic to the server. As part of that, the ability to dry-run server-side has been added. <code>kubectl apply --server-dry-run</code> will do what you want, printing the result of the apply merge, without actually persisting it.</p>
<p>@apelisse we should probably update the flag help for apply and possibly print a warning when using --dry-run when updating an object via apply to document the limitations of --dry-run and direct people to use --server-dry-run</p>
</blockquote>
|
<p>I am using python to access my cluster. The only thing I came across, that is close is <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#list_namespaced_pod" rel="noreferrer">list_namespaced_pod</a>, which does not give me the actual names of the pods.</p>
| <p>As stated in the comments, you can access all information in the <code>metadata</code> of each pod in the list of pod items returned by the API call. </p>
<p>Here is an example: </p>
<pre><code>from kubernetes import client, config

def get_pods():
    # Load credentials from ~/.kube/config (use config.load_incluster_config() when running inside a cluster)
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pod_list = v1.list_namespaced_pod("example")
    for pod in pod_list.items:
        print("%s\t%s\t%s" % (pod.metadata.name,
                              pod.status.phase,
                              pod.status.pod_ip))
</code></pre>
|
<p>A bit of background: I have set up an Azure Kubernetes Service cluster and deployed a basic .Net Core api as a deployment object. I then deployed a nodeport service to expose the api, and then deployed a nginx-controller and an ingress object to configure it. I use the IP of the ingress-controller to route the request and that works, e.g. <a href="http://1.2.3.4/hello-world-one/api/values" rel="noreferrer">http://1.2.3.4/hello-world-one/api/values</a>.
But when I replace the IP with the generated DNS, somehow the path is ignored and I get the default backend - 404 returned from the nginx controller. The expected behaviour is that the DNS will resolve and then the path "api/values" will be sent to my service.</p>
<p>Can anyone help me with this?
Thanks in advance.</p>
<p>My deployment, service and ingress configs are below.</p>
<pre><code> apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: test-deployment
labels:
app: test
spec:
replicas: 1
selector:
matchLabels:
app: test
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: test
spec:
containers:
- name: test-service
image: <my-repo>.azurecr.io/testservice
imagePullPolicy: Always
ports:
- name: tcp
containerPort: 80
imagePullSecrets:
- name: regsecret
---
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
type: NodePort
selector:
app: test
ports:
- name: http
port: 32768
targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-world-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.global-static-ip-name: dev-pip-usw-qa-aks
kubernetes.io/ingress.class: addon-http-application-routing
spec:
rules:
- host: hello-world-ingress.be543d4af69d4c7ca489.westus.aksapp.io
- http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: http
- path: /hello-world-one
backend:
serviceName: frontend
servicePort: http
- path: /hello-world-two
backend:
serviceName: frontend
servicePort: http
</code></pre>
| <p>pretty sure <code>rules</code> should look like this:</p>
<pre><code>rules:
- host: hello-world-ingress.be543d4af69d4c7ca489.westus.aksapp.io
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: http
- path: /hello-world-one
backend:
serviceName: frontend
servicePort: http
- path: /hello-world-two
backend:
serviceName: frontend
servicePort: http
</code></pre>
<p>reading: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#types-of-ingress</a></p>
|
<p>I'm getting an error: </p>
<blockquote>
<p>UnauthorizedError: InvalidSignature: The token has an invalid signature</p>
</blockquote>
<p>when I'm trying to access Azure Service Bus from a nodejs docker container running inside a Kubernetes cluster in Azure. </p>
<p>It is interesting to note that I don't get this error when I run the code locally, or just inside the docker container on my dev laptop, but as soon as I deploy the container to the K8s cluster I get that error.</p>
<p>I verified the service-bus SAS primary key is correct inside K8 cluster secrets file.</p>
<p>Here's how error object looks like inside K8 cluster:</p>
<pre><code>UnauthorizedError: InvalidSignature: The token has an invalid signature. {"timestamp":"2019-01-08T05:43:48.918Z"}
debug: condition: com.microsoft:auth-failed {"timestamp":"2019-01-08T05:43:48.920Z"}
debug: info: undefined {"timestamp":"2019-01-08T05:43:48.920Z"}
debug: message: InvalidSignature: The token has an invalid signature. {"timestamp":"2019-01-08T05:43:48.924Z"}
debug: name: UnauthorizedError {"timestamp":"2019-01-08T05:43:48.924Z"}
debug: retryable: false {"timestamp":"2019-01-08T05:43:48.924Z"}
debug: stack: UnauthorizedError: InvalidSignature: The token has an invalid signature.
at Object.translate (/usr/src/app/node_modules/@azure/amqp-common/dist/lib/errors.js:527:17)
at Receiver.messageCallback (/usr/src/app/node_modules/@azure/amqp-common/dist/lib/requestResponseLink.js:109:44)
at Receiver.emit (events.js:182:13)
at emit (/usr/src/app/node_modules/rhea-promise/dist/lib/util/utils.js:129:24)
at Object.emitEvent (/usr/src/app/node_modules/rhea-promise/dist/lib/util/utils.js:140:9)
at Receiver._link.on (/usr/src/app/node_modules/rhea-promise/dist/lib/link.js:249:25)
at Receiver.emit (events.js:182:13)
at Receiver.link.dispatch (/usr/src/app/node_modules/rhea/lib/link.js:59:37)
at Incoming.on_transfer (/usr/src/app/node_modules/rhea/lib/session.js:360:22)
at Session.on_transfer (/usr/src/app/node_modules/rhea/lib/session.js:736:19) {"timestamp":"2019-01-08T05:43:48.925Z"}
debug: translated: true {"timestamp":"2019-01-08T05:43:48.925Z"}
</code></pre>
<p>I'm using <a href="https://github.com/Azure/azure-service-bus-node" rel="nofollow noreferrer">@azure/service-bus</a> as a node package to work with azure service bus.</p>
<p>Any help, suggestions or ideas are highly appreciated.</p>
<p>Thank you everyone.</p>
| <p>In this case the issue was with the way the OP passed the SAS string to the container.</p>
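<p>For anyone hitting the same error: one less error-prone way to hand the connection string to the container is via a Kubernetes Secret, for example (all names and the value are placeholders):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: servicebus
type: Opaque
stringData:
  connection-string: "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"

# referenced from the Deployment's container spec:
#   env:
#   - name: SERVICE_BUS_CONNECTION_STRING
#     valueFrom:
#       secretKeyRef:
#         name: servicebus
#         key: connection-string
</code></pre>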
|
<p>We have containers as the smallest unit in Docker and pods as the smallest unit in Kubernetes. The safe practice is to keep one container in one pod, so a pod and a container effectively act the same (one container in one pod). Then why were pods created at all, if only one container is to be put in one pod? We could have used containers themselves. </p>
| <p>The reason behind using a pod rather than a container directly is that Kubernetes requires more information to orchestrate the containers, like a <code>restart policy</code>, a <code>liveness probe</code> and a <code>readiness probe</code>. A <code>liveness probe</code> defines whether the container inside the pod is alive or not, a <code>restart policy</code> defines what to do with the container when it fails, and a <code>readiness probe</code> defines whether the container is ready to start serving traffic.</p>
<p>So, instead of adding those properties to the existing container, Kubernetes decided to write a wrapper around containers that carries all the necessary additional information.</p>
<p>Also, Kubernetes supports multi-container pods, which are mainly required for <code>sidecar containers</code> such as log or data collectors, or proxies for the main container. Another advantage of a multi-container pod is that tightly coupled containers can run together sharing the same data, the same network namespace and the same IPC namespace, which would not be possible if containers were used directly without any wrapper around them.</p>
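<p>For illustration, a minimal sketch of such a multi-container pod (image names and paths are arbitrary): both containers share a volume and the same network namespace, so the sidecar can write files the main container serves and could also reach it on <code>localhost</code>.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: main-app
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox
      # writes content that the main container serves from the shared volume
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
</code></pre>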
<p>Following is very nice article on this:</p>
<p><a href="https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/" rel="nofollow noreferrer">https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/</a></p>
<p>Hope this gives you a brief idea.</p>
|
<p>I am using <a href="https://github.com/jetstack/cert-manager" rel="noreferrer">cert-manager</a> 0.5.2 to manage Let's Encrypt certificates on our Kubernetes cluster.</p>
<p>I was using the Let's Encrypt staging environment, but have now moved to use their production certificates. <strong>The problem is that my applications aren't updating to the new, valid certificates.</strong> </p>
<p>I must have screwed something up while updating the issuer, certificate, and ingress resources, but I can't see what. I have also reinstalled the NGINX ingress controller and cert-manager, and recreated my applications, but I am still getting old certificates. What can I do next?</p>
<p><strong>Describing the <code>letsencrypt</code> cluster issuer:</strong></p>
<pre><code>Name: letsencrypt
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt","namespace":""},"spec":{"acme":{"e...
API Version: certmanager.k8s.io/v1alpha1
Kind: ClusterIssuer
Metadata:
Cluster Name:
Creation Timestamp: 2019-01-04T09:27:49Z
Generation: 0
Resource Version: 130088
Self Link: /apis/certmanager.k8s.io/v1alpha1/letsencrypt
UID: 00f0ea0f-1003-11e9-997f-ssh3b4bcc625
Spec:
Acme:
Email: [email protected]
Http 01:
Private Key Secret Ref:
Key:
Name: letsencrypt
Server: https://acme-v02.api.letsencrypt.org/directory
Status:
Acme:
Uri: https://acme-v02.api.letsencrypt.org/acme/acct/48899673
Conditions:
Last Transition Time: 2019-01-04T09:28:33Z
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
</code></pre>
<p><strong>Describing the <code>tls-secret</code> certificate:</strong></p>
<pre><code>Name: tls-secret
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"tls-secret","namespace":"default"},"spec":{"acme"...
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Cluster Name:
Creation Timestamp: 2019-01-04T09:28:13Z
Resource Version: 130060
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/tls-secret
UID: 0f38w7y4-1003-11e9-997f-e6e9b4bcc625
Spec:
Acme:
Config:
Domains:
mydomain.com
Http 01:
Ingress Class: nginx
Dns Names:
mydomain.com
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt
Secret Name: tls-secret
Events: <none>
</code></pre>
<p><strong>Describing the <code>aks-ingress</code> ingress controller:</strong></p>
<pre><code>Name: aks-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
tls-secret terminates mydomain.com
Rules:
Host Path Backends
---- ---- --------
mydomain.com
/ myapplication:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: ...
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
certmanager.k8s.io/cluster-issuer: letsencrypt
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 21m nginx-ingress-controller Ingress default/aks-ingress
Normal CREATE 21m nginx-ingress-controller Ingress default/aks-ingress
</code></pre>
<p><strong>Logs for cert-manager after restarting the server:</strong></p>
<pre><code>I0104 09:28:38.378953 1 setup.go:144] Skipping re-verifying ACME account as cached registration details look sufficient.
I0104 09:28:38.379058 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt"
I0104 09:28:38.378953 1 setup.go:144] Skipping re-verifying ACME account as cached registration details look sufficient.
I0104 09:28:38.379058 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt"
I0104 09:28:38.378455 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt'
I0104 09:28:38.378455 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt'
I0104 09:28:33.440466 1 controller.go:185] certificates controller: Finished processing work item "default/tls-secret"
I0104 09:28:33.440417 1 sync.go:206] Certificate default/tls-secret scheduled for renewal in 1423 hours
I0104 09:28:33.440466 1 controller.go:185] certificates controller: Finished processing work item "default/tls-secret"
I0104 09:28:33.440417 1 sync.go:206] Certificate default/tls-secret scheduled for renewal in 1423 hours
I0104 09:28:33.439824 1 controller.go:171] certificates controller: syncing item 'default/tls-secret'
I0104 09:28:33.439824 1 controller.go:171] certificates controller: syncing item 'default/tls-secret'
I0104 09:28:33.377556 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt"
I0104 09:28:33.377556 1 controller.go:154] clusterissuers controller: Finished processing work item "letsencrypt"
I0104 09:28:33.359246 1 helpers.go:147] Setting lastTransitionTime for ClusterIssuer "letsencrypt" condition "Ready" to 2019-01-04 09:28:33.359214315 +0000 UTC m=+79.014291591
I0104 09:28:33.359178 1 setup.go:181] letsencrypt: verified existing registration with ACME server
I0104 09:28:33.359178 1 setup.go:181] letsencrypt: verified existing registration with ACME server
I0104 09:28:33.359246 1 helpers.go:147] Setting lastTransitionTime for ClusterIssuer "letsencrypt" condition "Ready" to 2019-01-04 09:28:33.359214315 +0000 UTC m=+79.014291591
I0104 09:28:32.427832 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt'
I0104 09:28:32.427978 1 controller.go:182] ingress-shim controller: Finished processing work item "default/aks-ingress"
I0104 09:28:32.427832 1 controller.go:140] clusterissuers controller: syncing item 'letsencrypt'
I0104 09:28:32.427832 1 controller.go:168] ingress-shim controller: syncing item 'default/aks-ingress'
I0104 09:28:32.428133 1 logger.go:88] Calling GetAccount
I0104 09:28:32.427936 1 sync.go:140] Certificate "tls-secret" for ingress "aks-ingress" already exists
I0104 09:28:32.427965 1 sync.go:143] Certificate "tls-secret" for ingress "aks-ingress" is up to date
I0104 09:28:32.427978 1 controller.go:182] ingress-shim controller: Finished processing work item "default/aks-ingress"
I0104 09:28:32.428133 1 logger.go:88] Calling GetAccount
I0104 09:28:32.427936 1 sync.go:140] Certificate "tls-secret" for ingress "aks-ingress" already exists
I0104 09:28:32.427832 1 controller.go:168] ingress-shim controller: syncing item 'default/aks-ingress'
I0104 09:28:32.427965 1 sync.go:143] Certificate "tls-secret" for ingress "aks-ingress" is up to date
I0104 09:28:29.439299 1 controller.go:171] certificates controller: syncing item 'default/tls-secret'
E0104 09:28:29.439586 1 controller.go:180] certificates controller: Re-queuing item "default/tls-secret" due to error processing: Issuer letsencrypt not ready
I0104 09:28:29.439404 1 sync.go:120] Issuer letsencrypt not ready
E0104 09:28:29.439586 1 controller.go:180] certificates controller: Re-queuing item "default/tls-secret" due to error processing: Issuer letsencrypt not ready
I0104 09:28:29.439299 1 controller.go:171] certificates controller: syncing item 'default/tls-secret'
I0104 09:28:29.439404 1 sync.go:120] Issuer letsencrypt not ready
I0104 09:28:27.404656 1 controller.go:68] Starting certificates controller
I0104 09:28:27.404606 1 controller.go:68] Starting issuers controller
I0104 09:28:27.404325 1 controller.go:68] Starting ingress-shim controller
I0104 09:28:27.404606 1 controller.go:68] Starting issuers controller
I0104 09:28:27.404325 1 controller.go:68] Starting ingress-shim controller
I0104 09:28:27.404269 1 controller.go:68] Starting clusterissuers controller
I0104 09:28:27.404656 1 controller.go:68] Starting certificates controller
I0104 09:28:27.404269 1 controller.go:68] Starting clusterissuers controller
I0104 09:28:27.402806 1 leaderelection.go:184] successfully acquired lease kube-system/cert-manager-controller
I0104 09:28:27.402806 1 leaderelection.go:184] successfully acquired lease kube-system/cert-manager-controller
I0104 09:27:14.359634 1 server.go:84] Listening on http://0.0.0.0:9402
I0104 09:27:14.357610 1 controller.go:126] Using the following nameservers for DNS01 checks: [10.0.0.10:53]
I0104 09:27:14.357610 1 controller.go:126] Using the following nameservers for DNS01 checks: [10.0.0.10:53]
I0104 09:27:14.358408 1 leaderelection.go:175] attempting to acquire leader lease kube-system/cert-manager-controller...
I0104 09:27:14.359634 1 server.go:84] Listening on http://0.0.0.0:9402
I0104 09:27:14.356692 1 start.go:79] starting cert-manager v0.5.2 (revision 9e8c3ad899c5aafaa360ca947eac7f5ba6301035)
I0104 09:27:14.358408 1 leaderelection.go:175] attempting to acquire leader lease kube-system/cert-manager-controller...
I0104 09:27:14.356692 1 start.go:79] starting cert-manager v0.5.2 (revision 9e8c3ad899c5aafaa360ca947eac7f5ba6301035)
</code></pre>
<p><strong>Certificate resource:</strong></p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: tls-secret
spec:
secretName: tls-secret
dnsNames:
- mydomain.com
acme:
config:
- http01:
ingressClass: nginx
domains:
- mydomain.com
issuerRef:
name: letsencrypt
kind: ClusterIssuer
</code></pre>
| <p>In this case the problem went away after recreating the secret and the cert-manager certificate resource.</p>
<p>Generally, what you want to check: the annotations on your ingress resource (<code>certmanager.k8s.io/cluster-issuer: letsencrypt</code>), the cert-manager certificate resource, and the TLS certificate secret both in Kubernetes and as referenced by the ingress resource.</p>
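<p>A sketch of the "recreate" step that helped here, using the resource names from the question (<code>certificate.yaml</code> stands for whichever file holds your Certificate resource; whether deleting the Certificate itself is strictly required may depend on the cert-manager version):</p>
<pre><code># remove the stale secret and the certificate resource, then re-apply the certificate
kubectl delete secret tls-secret
kubectl delete certificate tls-secret
kubectl apply -f certificate.yaml

# watch cert-manager request a fresh certificate
kubectl describe certificate tls-secret
</code></pre>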
|
<pre><code>[xueke@master-01 admin]$ kubectl logs nginx-deployment-76bf4969df-999x8
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-deployment-76bf4969df-999x8)
</code></pre>
<pre><code>[xueke@master-01 admin]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.0.101:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: admin
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
</code></pre>
<p>I specified the admin user here
How do I need to modify it?</p>
| <p>The above error means your apiserver doesn't have the credentials (<code>kubelet client cert and key</code>) to authenticate to the kubelet for log/exec commands, hence the <code>Forbidden</code> error message.</p>
<p>You need to provide <code>--kubelet-client-certificate=<path_to_cert></code> and <code>--kubelet-client-key=<path_to_key></code> to your apiserver; this way the apiserver authenticates to the kubelet with that certificate and key pair.</p>
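<p>If the control plane was set up with kubeadm (an assumption here), that usually means editing the static pod manifest of the apiserver; a sketch, where the paths below are the kubeadm defaults and should be adjusted to wherever your kubelet client certificate actually lives:</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    # ...the rest of the existing flags...
</code></pre>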
<p>For more information, have a look at:</p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/</a></p>
|
<p>Is there a possibility to share a single GPU between <strong>kubernetes</strong> pods ?</p>
| <p>As the <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#v1-8-onwards" rel="noreferrer">official doc</a> says</p>
<blockquote>
<p>GPUs are only supposed to be specified in the limits section, which means:</p>
<p>You can specify GPU limits without specifying requests because Kubernetes will use the limit as the request value by default.</p>
<p>You can specify GPU in both limits and requests but these two values must be equal.</p>
<p>You cannot specify GPU requests without specifying limits.
<strong>Containers (and pods) do not share GPUs</strong>. There’s no overcommitting of GPUs.</p>
<p>Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.</p>
</blockquote>
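<p>In practice that means a container asks for whole GPUs through the extended resource name in <code>limits</code>, for example (this sketch assumes the NVIDIA device plugin is installed on the cluster):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:10.0-base
      resources:
        limits:
          nvidia.com/gpu: 1   # whole GPUs only: no fractions, no sharing between containers
</code></pre>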
<p>Also, you can follow <a href="https://github.com/kubernetes/kubernetes/issues/52757" rel="noreferrer">this</a> discussion to get a little bit more information.</p>
|
<p>I've tried to install Minikube via Linux (Ubuntu 18.04) and have installed curl following these steps: </p>
<pre><code>./configure --nghttp2 --prefix=/usr/local --with-ssl
make
sudo make install
</code></pre>
<p>My curl version is 7.63.0.</p>
<p>According to the Platform 9 guide, I wrote the following instruction:</p>
<pre><code>curl -Lo minikube \
https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-
amd64 && chmod +x minikube && mv minikube /usr/local/bin/
</code></pre>
<p>This is the system's result:</p>
<blockquote>
<p>curl (3) URL using bad/illegal format or missing URL</p>
</blockquote>
<p>How can I fix this?</p>
| <p>Try this:</p>
<pre><code>curl -L -o minikube "https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64" && chmod +x minikube && sudo mv ./minikube /usr/local/bin/
</code></pre>
<p>See if it's working. Also note that since you are on Ubuntu, you will want the Linux binary (<code>minikube-linux-amd64</code>) in that URL rather than the Darwin (macOS) one.</p>
|
<p>Below is the Helm command to install:</p>
<pre><code>helm install coreos/kube-prometheus --name kube-prometheum --namespace monitoring -f kube-prometheus.yml
</code></pre>
<p>This way we can override the values.yaml values with the values present in kube-prometheus.yml.</p>
<p>Is there any way to first install and then update the values.yaml values from the kube-prometheus.yml file?</p>
<p>I can use <code>helm upgrade releasename kube-prometheum</code> after changing the values.yaml file directly. I don't want that.</p>
<p>Use case:
Initially, I used an image with tag 1.0 in values.yaml. Now I have the below code in kube-prometheus.yml just to update the image tag:</p>
<pre><code>prometheusconfigReloader:
image:
tag: 2.0
</code></pre>
<p>Instead of deleting and creating it again, I want to upgrade it. This is just an example; there could be multiple values, which is why I can't use <code>--set</code>.</p>
| <p>So you first run <code>helm install coreos/kube-prometheus --name kube-prometheum --namespace monitoring -f kube-prometheus.yml</code> with your values file set to point at 1.0 of the image:</p>
<pre><code>prometheusconfigReloader:
image:
tag: 1.0
</code></pre>
<p>Then you change the values file, or create a new values file containing:</p>
<pre><code>prometheusconfigReloader:
image:
tag: 2.0
</code></pre>
<p>Let's say this file is called kube-prometheus-v2.yml. Then you can run:</p>
<p><code>helm upgrade -f kube-prometheus-v2.yml kube-prometheum coreos/kube-prometheus</code></p>
<p>Or even:</p>
<p><code>helm upgrade -f kube-prometheus.yml -f kube-prometheus-v2.yml kube-prometheum coreos/kube-prometheus</code></p>
<p>This is because both values file overrides will be overlaid and according to the <a href="https://github.com/helm/helm/blob/master/docs/helm/helm_upgrade.md" rel="noreferrer"><code>helm upgrade</code> documentation</a> "priority will be given to the last (right-most) value specified".</p>
<p>Or if you've already installed and want to find out what the values file that was used contained then you can use <a href="https://github.com/helm/helm/blob/master/docs/helm/helm_get_values.md" rel="noreferrer"><code>helm get values kube-prometheum</code></a></p>
|
<p>I would like to deploy the <a href="https://github.com/IBM-Cloud/jpetstore-kubernetes" rel="nofollow noreferrer">java petstore</a> for kubernetes. In order to achieve this I have <strong>2 simple deployments</strong>. The first one is the <strong>java web app</strong> and the second one is a <strong>MySQL database</strong>.</p>
<p><em>When Istio is disabled the connection between the app and the DB works well.<br>
Unfortunately, when the Istio sidecar is injected the communication between the two stops working.</em></p>
<p>Here is the deployment file of the web app:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jpetstoreweb
spec:
replicas: 1
template:
metadata:
labels:
app: jpetstoreweb
annotations:
sidecar.istio.io/inject: "true"
spec:
containers:
- name: jpetstoreweb
image: wingardiumleviosa/petstore:v7
env:
- name: VERSION
value: "1"
- name: DB_URL
value: "jpetstoredb-service"
- name: DB_PORT
value: "3306"
- name: DB_NAME
value: "jpetstore"
- name: DB_USERNAME
value: "jpetstore"
- name: DB_PASSWORD
value: "foobar"
ports:
- containerPort: 9080
readinessProbe:
httpGet:
path: /
port: 9080
initialDelaySeconds: 10
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: jpetstoreweb-service
spec:
selector:
app: jpetstoreweb
ports:
- port: 80
targetPort: 9080
---
</code></pre>
<p>And next the deployment file of the mySql database :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jpetstoredb
spec:
replicas: 1
template:
metadata:
labels:
app: jpetstoredb
annotations:
sidecar.istio.io/inject: "true"
spec:
containers:
- name: jpetstoredb
image: wingardiumleviosa/petstoredb:v1
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "foobar"
- name: MYSQL_DATABASE
value: "jpetstore"
- name: MYSQL_USER
value: "jpetstore"
- name: MYSQL_PASSWORD
value: "foobar"
---
apiVersion: v1
kind: Service
metadata:
name: jpetstoredb-service
spec:
selector:
app: jpetstoredb
ports:
- port: 3306
targetPort: 3306
</code></pre>
<p>Finally the error logs from the web app trying to connect to the DB :</p>
<pre><code>Exception thrown by application class 'org.springframework.web.servlet.FrameworkServlet.processRequest:488'
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Communication link failure: java.io.EOFException, underlying cause: null ** BEGIN NESTED EXCEPTION ** java.io.EOFException STACKTRACE: java.io.EOFException at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1395) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:1539) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:1930) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1168) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1279) at com.mysql.jdbc.MysqlIO.sqlQuery(MysqlIO.java:1225) at com.mysql.jdbc.Connection.execSQL(Connection.java:2278) at com.mysql.jdbc.Connection.execSQL(Connection.java:2237) at com.mysql.jdbc.Connection.execSQL(Connection.java:2218) at com.mysql.jdbc.Connection.setAutoCommit(Connection.java:548) at org.apache.commons.dbcp.DelegatingConnection.setAutoCommit(DelegatingConnection.java:331) at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setAutoCommit(PoolingDataSource.java:317) at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:221) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:350) at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:261) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:101) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at com.sun.proxy.$Proxy28.getCategory(Unknown Source) at org.springframework.samples.jpetstore.web.spring.ViewCategoryController.handleRequest(ViewCategoryController.java:31) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:874) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:808) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:476) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:431) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1255) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:743) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:440) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.invokeTarget(WebAppFilterChain.java:182) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:93) at com.ibm.ws.security.jaspi.JaspiServletFilter.doFilter(JaspiServletFilter.java:56) at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201) at 
com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:90) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:996) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1134) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1005) at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:927) at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1023) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:417) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:376) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:532) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:466) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:331) at com.ibm.ws.http.channel.internal.inbound.HttpICLReadCallback.complete(HttpICLReadCallback.java:70) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:501) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:571) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:926) at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1015) at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:232) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.lang.Thread.run(Thread.java:812) ** END NESTED EXCEPTION **
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:488)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:431)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
</code></pre>
<p>Extract : <code>Could not open JDBC Connection for transaction</code></p>
<hr>
<p><strong>Additionnal info :</strong></p>
<p>1) I can curl the DB from the web app container using CURL and it answers correctly.</p>
<p>2) I use Cilium instead of Calico</p>
<p>3) I installed Istio using HELM</p>
<p>4) Kubernetes is installed on bare metal (no cloud provider)</p>
<p>5) <code>kubectl get pods -n istio-system</code> all istio pods are running</p>
<p>6) <code>kubectl get pods -n kube-system</code> all cilium pods are running</p>
<p>7) Istio is injected using <code>kubectl apply -f <(~/istio-1.0.5/bin/istioctl kube-inject -f ~/jpetstore.yaml) -n foo</code>. If I use any other method Istio is not injecting itself in the Web pod (But works for the DB pod, god knows why)</p>
<p>8) The DB pod is always happy and working well</p>
<p>9) Logs of the istio-proxy container inside the WebApp pod : <code>kubectl logs jpetstoreweb-84c7d8964-s642k istio-proxy -n myns</code></p>
<pre><code>2018-12-28T03:52:30.610101Z info Version [email protected]/istio-1.0.5-c1707e45e71c75d74bf3a5dec8c7086f32f32fad-Clean
2018-12-28T03:52:30.610167Z info Proxy role: model.Proxy{ClusterID:"", Type:"sidecar", IPAddress:"10.233.72.142", ID:"jpetstoreweb-84c7d8964-s642k.myns", Domain:"myns.svc.cluster.local", Metadata:map[string]string(nil)}
2018-12-28T03:52:30.611217Z info Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istio-pilot.istio-system:15007
discoveryRefreshDelay: 1s
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: jpetstoreweb
zipkinAddress: zipkin.istio-system:9411
2018-12-28T03:52:30.611249Z info Monitored certs: []envoy.CertSource{envoy.CertSource{Directory:"/etc/certs/", Files:[]string{"cert-chain.pem", "key.pem", "root-cert.pem"}}}
2018-12-28T03:52:30.611829Z info Starting proxy agent
2018-12-28T03:52:30.611902Z info Received new config, resetting budget
2018-12-28T03:52:30.611912Z info Reconciling configuration (budget 10)
2018-12-28T03:52:30.611926Z info Epoch 0 starting
2018-12-28T03:52:30.613236Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster jpetstoreweb --service-node sidecar~10.233.72.142~jpetstoreweb-84c7d8964-s642k.myns~myns.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warn --v2-config-only]
[2018-12-28 03:52:30.630][20][info][main] external/envoy/source/server/server.cc:190] initializing epoch 0 (hot restart version=10.200.16384.256.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=4882536)
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:192] statically linked extensions:
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:194] access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:197] filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash,istio_authn,jwt-auth,mixer
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:200] filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:203] filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.rbac,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy,mixer
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:205] stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:207] tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:210] transport_sockets.downstream: alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:213] transport_sockets.upstream: alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-12-28 03:52:30.634][20][info][config] external/envoy/source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-12-28 03:52:30.638][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:30.638][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:30.638][20][info][config] external/envoy/source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:94] loading tracing configuration
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:103] loading tracing driver: envoy.zipkin
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:116] loading stats sink configuration
[2018-12-28 03:52:30.640][20][info][main] external/envoy/source/server/server.cc:432] starting main dispatch loop
[2018-12-28 03:52:32.010][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:32.011][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:34.691][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:34.691][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:38.483][20][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:130] cm init: initializing cds
[2018-12-28 03:53:01.596][20][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||kubernetes.default.svc.cluster.local during init
</code></pre>
<p><strong>...</strong></p>
<pre><code>[2018-12-28T04:09:09.561Z] - 115 1548 6 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40318 10.233.72.142:9080 10.233.72.1:43098
[2018-12-28T04:09:14.555Z] - 115 1548 8 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40350 10.233.72.142:9080 10.233.72.1:43130
[2018-12-28T04:09:19.556Z] - 115 1548 5 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40364 10.233.72.142:9080 10.233.72.1:43144
[2018-12-28T04:09:24.558Z] - 115 1548 6 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40378 10.233.72.142:9080 10.233.72.1:43158
</code></pre>
<p>10) Using Istio 1.0.5 and kubernetes 1.13.0</p>
<p>All ideas are welcome ;-)<br>
Thx</p>
| <p>So there really is an issue with <strong>Istio 1.0.5</strong> and <strong>MySQL</strong> over <strong>JDBC</strong>.</p>
<p>The temporary solution is to delete the mesh resource in the following way:</p>
<p><code>kubectl delete meshpolicies.authentication.istio.io default</code></p>
<p>As stated <a href="https://github.com/retroryan/istio-workshop/issues/48" rel="nofollow noreferrer">here</a> and referencing <a href="https://github.com/istio/istio/issues/10062" rel="nofollow noreferrer">this</a>.</p>
<p>(FYI: I deleted the resource BEFORE deploying my petstore app.)</p>
<hr>
<p>As of <strong>Istio 1.1.1</strong> there is more data on this problem in the <a href="https://istio.io/help/faq/security/#mysql-with-mtls" rel="nofollow noreferrer">FAQ</a></p>
|
<p>I want to use the Kubernetes pod name as an identifier in my container, passed in as an argument. </p>
<p>I have deployed my echo containers on my Kubernetes cluster using the following config:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: echo1
spec:
selector:
matchLabels:
app: echo1
replicas: 2
template:
metadata:
labels:
app: echo1
spec:
containers:
- name: echo1
image: hashicorp/http-echo
args:
- "-text=echo1"
ports:
- containerPort: 5678
</code></pre>
<p>When I do "kubectl get pods":</p>
<pre><code>NAME READY STATUS RESTARTS AGE
echo1-76c689b7d-48h4v 1/1 Running 0 19h
echo1-76c689b7d-4gq2v 1/1 Running 0 19h
</code></pre>
<p>I want to echo the pod name by passing the pod name in my config above:</p>
<pre><code>args:
- "-text=echo1"
</code></pre>
<p>How do I access my pod name to be used in my args?</p>
| <p>So a few things. First you would use the fieldRef syntax for an environment variable as shown in <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</a>. Then you would use the env var in your argument (<code>"-text=$(PODNAME)"</code>). However, this will give you the actual pod name, like <code>echo1-76c689b7d-48h4v</code>. If what you want is either the deployment name or the value of the <code>app</code> label, the latter is easier: instead of <code>metadata.name</code> as the field path, use something like <code>metadata.labels['app']</code> (requires Kubernetes 1.9+).</p>
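<p>A sketch of the relevant part of the container spec (env var names are arbitrary): the downward API exposes the pod name and the <code>app</code> label as environment variables, and Kubernetes expands <code>$(VAR)</code> references inside <code>args</code>:</p>
<pre><code>containers:
  - name: echo1
    image: hashicorp/http-echo
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: APP_LABEL
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['app']   # requires Kubernetes 1.9+
    args:
      - "-text=$(APP_LABEL)"   # or $(POD_NAME) for the full generated pod name
    ports:
      - containerPort: 5678
</code></pre>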
|
<p>I have a Docker image <code>gin-web</code> which I have locally and also pushed onto Docker Hub. When in am running my Kubernetes folder i.e.<code>Kubernetes/</code>, I am getting error for my <code>k8s-deployment.yml</code></p>
<p><code>Kubernetes/</code> consists of <code>k8s-deployment.yml</code> and <code>k8s-service.yml</code>.
The Service appears in the console (using <code>minikube dashboard</code>).</p>
<p>I have referred to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">Configure a Pod to Pull an Image from a Private Registry</a> and added <code>imagePullSecrets</code> to <code>k8s-deployment.yml</code>.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gin-web # Enter deployment name
labels:
app: gin-web
spec:
replicas: 3 #Enter the number of replicas
template:
metadata:
labels:
app: gin-web
tier: service
spec:
imagePullSecrets:
- name: regcred
containers:
- name: gin-web
image: "gin-web:1.0.1"
ports:
- containerPort: 9090
env:
- name: PORT
value: "9090"
# define resource requests and limits
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits: #k8 automatically restart container when hit with these Limtis
memory: "128Mi"
cpu: "250m"
# check if gin-web is alive and healthy
#Check if MS recieve traffic from k*
readinessProbe:
httpGet:
path: /ping
port: 9090
initialDelaySeconds: 5
timeoutSeconds: 5
# check for k8 if container is healthy
livenessProbe:
httpGet:
path: /ping
port: 9090
initialDelaySeconds: 5
timeoutSeconds: 5
</code></pre>
<p>I am getting this error under Deployments in Kubernetes console:</p>
<p><code>Failed to pull image "gin-web:1.0.1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gin-web, repository does not exist or may require 'docker login'</code></p>
| <p>Looks like you are missing the user or group in the container image string. As far as I can see, nothing in docker hub is just plain <code>gin-web</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gin-web # Enter deployment name
labels:
app: gin-web
spec:
replicas: 3 #Enter the number of replicas
template:
metadata:
labels:
app: gin-web
tier: service
spec:
imagePullSecrets:
- name: regcred
containers:
- name: gin-web
image: "<your-user>/gin-web:1.0.1" <== Add user here
ports:
- containerPort: 9090
env:
- name: PORT
value: "9090"
# define resource requests and limits
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits: #k8 automatically restart container when hit with these Limtis
memory: "128Mi"
cpu: "250m"
# check if gin-web is alive and healthy
#Check if MS recieve traffic from k*
readinessProbe:
httpGet:
path: /ping
port: 9090
initialDelaySeconds: 5
timeoutSeconds: 5
# check for k8 if container is healthy
livenessProbe:
httpGet:
path: /ping
port: 9090
initialDelaySeconds: 5
timeoutSeconds: 5
</code></pre>
|
<p>I created an NFS server in a pod to use it as a volume. When creating another pod with a volume, the volume mount does work with the IP of the NFS pod. Since this IP is not guaranteed to stay the same, I added a service for my NFS pod with a fixed cluster IP. When starting the container with the volume mount, it always fails with the following error:</p>
<blockquote>
<p>Unable to mount volumes for pod "nginx_default(35ecd8ec-a077-11e8-b7bc-0cc47a9aec96)": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx". list of unmounted volumes=[nfs-demo]. list of unattached volumes=[nfs-demo nginx-test-account-token-2dpgg] </p>
</blockquote>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: nfs-server
labels:
name: nfs-server
spec:
containers:
- name: nfs-server
image: my-nfs-server:v1
args: ["/exports"]
securityContext:
privileged: true
---
kind: Service
apiVersion: v1
metadata:
name: nfs-service
spec:
selector:
name: nfs-server
clusterIP: "10.96.0.3"
ports:
- name: nfs
port: 2049
protocol: UDP
- name: mountd
port: 20048
protocol: UDP
- name: rpcbind
port: 111
protocol: UDP
- name: nfs-tcp
port: 2049
protocol: TCP
- name: mountd-tcp
port: 20048
protocol: TCP
- name: rpcbind-tcp
port: 111
protocol: TCP
</code></pre>
<p>My pod trying to mount the server:</p>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
name: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- mountPath: "/exports"
name: nfs-demo
securityContext:
privileged: true
securityContext:
supplementalGroups: [100003]
serviceAccountName: nginx-test-account
volumes:
- name: nfs-demo
nfs:
server: 10.96.0.3
path: "/exports"
readOnly: false
</code></pre>
<p>I used this as a base for my nfs server image: </p>
<p><a href="https://github.com/cpuguy83/docker-nfs-server" rel="noreferrer">https://github.com/cpuguy83/docker-nfs-server</a></p>
<p><a href="https://medium.com/@aronasorman/creating-an-nfs-server-within-kubernetes-e6d4d542bbb9" rel="noreferrer">https://medium.com/@aronasorman/creating-an-nfs-server-within-kubernetes-e6d4d542bbb9</a></p>
<p>Does anyone have an idea why the mount ist working with the pod ip but not with the service ip?</p>
| <p>I found a new way to solve this problem: you can pin the nfs-server ports to fixed values, then mount the nfs-server through the service. You can refer to <a href="https://wiki.debian.org/SecuringNFS" rel="nofollow noreferrer">https://wiki.debian.org/SecuringNFS</a></p>
<p><a href="https://i.stack.imgur.com/kh1SX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kh1SX.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/ASYrx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ASYrx.png" alt="enter image description here"></a></p>
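<p>The screenshots boil down to pinning the NFS helper daemons to fixed ports inside the server image and exposing those same ports in the Service. On a Debian/Ubuntu based image that is roughly the following (file names and port numbers are assumptions taken from the linked wiki page, not from the screenshots):</p>
<pre><code># /etc/default/nfs-kernel-server
RPCMOUNTDOPTS="--port 20048"

# /etc/default/nfs-common
STATDOPTS="--port 32765 --outgoing-port 32766"

# then expose 2049, 20048, 111 and 32765 (TCP and UDP) in the nfs-service definition
</code></pre>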
|
<p>I'm trying to install metricbeat helm chart to forward my kubernetes metrics to elasticsearch.</p>
<p>The default configuration works, but when I configure the output to Elasticsearch, the pod tells me </p>
<blockquote>
<blockquote>
<p>Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'metricbeat.yml')</p>
</blockquote>
</blockquote>
<p>I downloaded the <a href="https://github.com/helm/charts/blob/master/stable/metricbeat/values.yaml" rel="nofollow noreferrer">values.yaml</a> and modified output.file in both the daemonset and deployment sections from </p>
<pre><code>output.file:
path: "/usr/share/metricbeat/data"
filename: metricbeat
rotate_every_kb: 10000
number_of_files: 5
</code></pre>
<p>to </p>
<pre><code>output.file:
enable: false
output.elasticsearch:
enable: true
hosts: ["http://192.168.10.156:9200/"]
</code></pre>
<p>How do I modify the config to forward metrics to elasticsearch?</p>
| <p>According to <a href="https://www.elastic.co/guide/en/beats/metricbeat/6.x/file-output.html#_literal_enabled_literal_6" rel="noreferrer">the fine manual</a>, the property is actually <code>enabled:</code> not <code>enable:</code> so I would presume you actually want:</p>
<pre><code>output.file:
enabled: false
</code></pre>
<p>Although to be honest, I always thought you could have as many outputs as you wish, but <a href="https://discuss.elastic.co/t/more-than-one-namespace-configured-accessing-output/146646" rel="noreferrer">that is clearly not true</a></p>
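<p>Putting that together, a sketch of the overrides for both the <code>daemonset</code> and <code>deployment</code> sections of the values file might look like the following (the host is taken from the question). If the chart still complains about multiple outputs even with <code>enabled: false</code>, drop the <code>output.file</code> block entirely and keep only the Elasticsearch output:</p>
<pre><code>output.file:
  enabled: false
output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.10.156:9200"]
</code></pre>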
|
<p>I am following documentation example at <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use</a></p>
<p>I created following pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
initContainers:
- name: init-myservice
image: busybox
command:
[
"sh",
"-c",
"until nslookup myservice; do echo waiting for myservice; sleep 2; done;",
]
- name: init-mydb
image: busybox
command:
[
"sh",
"-c",
"until nslookup mydb; do echo waiting for mydb; sleep 2; done;",
]
containers:
- name: myapp-container
image: busybox
command: ["sh", "-c", "echo The app is running! && sleep 3600"]
</code></pre>
<p>but I did not create the services yet (<code>myservice, mydb</code>).</p>
<p>My expectation is for deployment to <strong>hold until I create services</strong>, but it just continues with deployment and creates the pod called "<code>myapp-pod</code>".</p>
<p>Am I missing something on this run?<br>
Why does it not hold until I create the services?</p>
| <p>This happens because you are using <code>ash</code> inside busybox and it has different behavior (not the same as bash). So your script actually ends there.</p>
<p>You can try it inside busybox yourself:</p>
<pre><code>kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
</code></pre>
<p>And then use your command:</p>
<pre><code>until nslookup myservice; do echo waiting for myservice; sleep 2; done;
</code></pre>
<p>To fix this issue you can try something different, for example <strong>alpine</strong>.</p>
<pre><code>kubectl run -i --tty alpine --image=alpine --restart=Never -- sh
</code></pre>
|
<p>I just finished installing <code>kubectl</code> via <code>Ubuntu 64 (Linux)</code>.
I followed the online Kubernetes guide, but I've had an error with the host. After the first step (installing kubectl), when I checked the kubectl version, this was the message (I didn't install and deploy minikube yet because I don't have the connection, due to this problem):</p>
<pre><code>root@ubuntu:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Does someone have the same problem? How can I fix it?</p>
| <p>The reason behind that is that <code>kubectl version</code> prints both the <code>Client Version</code> and the <code>Server Version</code> (the Kubernetes version). When you just install kubectl, it is only the Kubernetes client. If you have a Kubernetes cluster installed, it will print both the kubectl version and the Kubernetes version.</p>
<p>If you just want to print the client version, then use the following command:</p>
<pre><code>kubectl version --client=true
</code></pre>
<p>The error means kubectl tried to contact the Kubernetes server to get its version but couldn't connect, and it is asking whether you specified the right host or port for the Kubernetes server.</p>
<p>The reason behind the error is that you have not installed a Kubernetes cluster on your machine. You just installed kubectl, which is only a client to access a Kubernetes cluster. Once you install the Kubernetes cluster, the output of <code>kubectl version</code> will look like:</p>
<pre><code>[root@ip-10-0-1-138 centos]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1",
GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>If you want to install a complete Kubernetes cluster on your Ubuntu machine, please try <a href="https://kubernetes.io/docs/setup/minikube/" rel="noreferrer">minikube</a> to install a cluster locally.</p>
|
<p>According to the docs: </p>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run</a></p>
<p><code>kubectl run NAME --image=image</code></p>
<p>will run an image.</p>
<p>Some questions:</p>
<ul>
<li><p>I assume this is a pod rather than a container? </p></li>
<li><p>And I assume NAME is associated with the pod? </p></li>
</ul>
| <p>Snowcrash, you are correct. This is basically the same as the <code>docker run</code> command. So using <code>kubectl run NAME --image=image</code> will run a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod" rel="noreferrer">pod</a> named <code>NAME</code> from the Docker image called <code>image</code>.
You can check what exactly is happening by using <code>kubectl describe pod NAME</code>.
Here is an example of <code>kubectl run nginx --image=nginx</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 89s (x2 over 89s) default-scheduler 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
Normal Scheduled 19s default-scheduler Successfully assigned default/nginx-7cdbd8cdc9-glkxq to centos-master
Normal Pulling 18s kubelet, centos-master pulling image "nginx"
Normal Pulled 14s kubelet, centos-master Successfully pulled image "nginx"
Normal Created 14s kubelet, centos-master Created container
Normal Started 14s kubelet, centos-master Started container
</code></pre>
<p>So what happened after <code>kubectl run</code> is:</p>
<ul>
<li><p>The Scheduler was trying to pick a node to launch the container (at first
it failed due to taints, because my node is in a NotReady state; not
important at the moment, but you can read more about it <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">here</a>)</p></li>
<li><p>Scheduler successfully assigned the pod to the node (centos-master).</p></li>
<li><p>The kubelet checks if the docker image is available and pulls it if
necessary. </p></li>
<li><p>Then the container is created and started.</p></li>
</ul>
<p>*<a href="https://medium.com/@mhausenblas/the-kubectl-run-command-27c68de5cb76" rel="noreferrer">here</a> you can find an interesting article which explains this in a little more detailed way. </p>
<p>The name is associated with the pod, because a Pod is the smallest unit of work in Kubernetes. Each pod can contain one or more containers. All the containers in the Pod have the same IP address and port space, and can access shared storage on the Node hosting that pod. </p>
<p>Basically the <code>kubectl</code> command-line tool supports several different ways to <a href="https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview/#imperative-commands" rel="noreferrer">create and manage</a> Kubernetes objects:</p>
<ul>
<li>Imperative commands </li>
<li>Imperative object configuration </li>
<li>Declarative object configuration</li>
</ul>
<p>*you can find more about them in this <a href="https://stackoverflow.com/questions/48015637/kubernetes-kubectl-run-vs-create-and-apply">StackOverflow answer</a> or this Medium <a href="https://medium.com/bitnami-perspectives/imperative-declarative-and-a-few-kubectl-tricks-9d6deabdde" rel="noreferrer">article</a>. </p>
<p>The <code>run</code> command is an example of the imperative approach. It is the simplest way to start. </p>
<blockquote>
<p>[...] Because this technique operates directly on live objects, it
provides no history of previous configurations.</p>
</blockquote>
|
<p>I have a python application that is running in Kubernetes. The app has a ping health-check which is called frequently via a REST call and checks that the call returns an HTTP 200. This clutters the Kubernetes logs when I view it through the logs console.</p>
<p>The function definition looks like this:</p>
<pre><code>def ping():
return jsonify({'status': 'pong'})
</code></pre>
<p>How can I silence a specific call from showing up in the log? Is there a way I can put this in code such as a python decorator on top of the health check function? Or is there an option in the Kubernetes console where I can configure to ignore this call?</p>
| <p>In Kubernetes, everything a container writes to <code>stdout</code> or <code>stderr</code> ends up in the Kubernetes logs. The only way to exclude the <code>health-check</code> ping calls from the Kubernetes logs is to have your application redirect the output of those ping calls to some file in, say, <code>/var/log/</code>. This effectively removes the output of that <code>health-check</code> ping from <code>stdout</code>.</p>
<p>Once the output is not on <code>stdout</code> or <code>stderr</code> of the pod, the pod logs will not contain the logs from that special <code>health-check</code>.</p>
<p>You can also use sidecar containers to split up your application logs if you don't want all of the application's logs in the <code>kubectl logs</code> output; you can write them to a file instead.</p>
<p>As stated in Official docs of kubernetes:</p>
<blockquote>
<p>By having your sidecar containers stream to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or the journald. Each individual sidecar container prints log to its own stdout or stderr stream.</p>
</blockquote>
<p>This approach allows you to separate several log streams from different parts of your application, some of which can lack support for writing to stdout or stderr. The logic behind redirecting logs is minimal, so it’s hardly a significant overhead.</p>
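<p>A rough sketch of that sidecar pattern applied to this case (names and paths are made up for illustration): the app writes its health-check access log to a file on a shared volume instead of stdout, and a sidecar tails that file, so <code>kubectl logs mypod -c health-log</code> shows the pings while the main container's logs stay clean.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
    - name: health-logs
      emptyDir: {}
  containers:
    - name: app
      # the Python app, configured to write /ping access logs to /var/log/health/ping.log
      image: myrepo/myapp
      volumeMounts:
        - name: health-logs
          mountPath: /var/log/health
    - name: health-log
      image: busybox
      # stream the health-check log to this container's own stdout
      command: ["sh", "-c", "touch /var/log/health/ping.log && tail -n+1 -F /var/log/health/ping.log"]
      volumeMounts:
        - name: health-logs
          mountPath: /var/log/health
</code></pre>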
<p>For more information on Kubernetes logging, please refer official docs:</p>
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/</a></p>
|
<p>I'm experiencing a <strong>strange issue</strong> when using <strong>CPU Requests/Limits in Kubernetes</strong>. Prior to setting any CPU Requests/Limits at all, all my services performed very well. I recently started placing some Resource Quotas to avoid future resource starvation. These values were set based in the actual usage of those services, but to my surprise, after those were added, some <strong>services started to increase their response time drastically</strong>. My first guess was that I might placed wrong Requests/Limits, but looking at the metrics revealed that in fact <strong>none of the services facing this issue were near those values</strong>. In fact, some of them were closer to the Requests than the Limits. </p>
<p>Then I started looking at CPU throttling metrics and found that <strong>all my pods are being throttled</strong>. I then increased the limits for one of the services to 1000m (from 250m) and I saw less throttling in that pod, but I don't understand why I should set that higher limit if the pod wasn't reaching its old limit (250m).</p>
<p>So my question is: <strong>If I'm not reaching the CPU limits, why are my pods throttling? Why is my response time increasing if the pods are not using their full capacity?</strong></p>
<p>Here there are some screenshots of my metrics <strong>(CPU Request: 50m, CPU Limit: 250m)</strong>:</p>
<p><strong>CPU Usage (here we can see the CPU of this pod never reached its limit of 250m):</strong>
<a href="https://i.stack.imgur.com/3W7ig.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3W7ig.png" alt="CPU Usage"></a></p>
<p><strong>CPU Throttling:</strong>
<a href="https://i.stack.imgur.com/XaPIf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XaPIf.png" alt="CPU Throttling"></a></p>
<p><strong>After setting limits to this pod to <em>1000m</em>, we can observe less throttling</strong>
<a href="https://i.stack.imgur.com/XfQxU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XfQxU.png" alt="Comparation"></a></p>
<p><strong>kubectl top</strong></p>
<p><a href="https://i.stack.imgur.com/mZWoi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mZWoi.png" alt="Top"></a></p>
<p>P.S: Before setting these Requests/Limits there wasn't throttling at all (as expected)</p>
<p>P.S 2: None of my nodes are facing high usage. In fact, none of them are using more than 50% of CPU at any time.</p>
<p>Thanks in advance!</p>
| <p>If you see the <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="noreferrer">documentation</a> you see when you issue a <code>Request</code> for CPUs it actually uses the <a href="https://docs.docker.com/engine/reference/run/#cpu-share-constraint" rel="noreferrer"><code>--cpu-shares</code></a> option in Docker which actually uses the cpu.shares attribute for the <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-cpu_and_memory-use_case" rel="noreferrer">cpu,cpuacct cgroup</a> on Linux. So a value of <code>50m</code> is about <code>--cpu-shares=51</code> based on the maximum being <code>1024</code>. <code>1024</code> represents 100% of the shares, so <code>51</code> would be 4-5% of the share. That's pretty low, to begin with. But the important factor here is that this is relative to how many pods/containers you have on your system and what cpu-shares those have (are they using the default).</p>
<p>So let's say that on your node you have another pod/container with 1024 shares, which is the default, and you have this pod/container with 4-5 shares. Then this container will get about 0.5% CPU, while the other pod/container will
get about 99.5% of the CPU (if it has no limits). So again it all depends on how many pods/container you have on the node and what their shares are.</p>
<p>Also, not very well documented in the <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="noreferrer">Kubernetes docs</a>, but if you use <code>Limit</code> on a pod it's basically using two flags in Docker: <a href="https://docs.docker.com/engine/reference/run/#cpu-period-constraint" rel="noreferrer"><code>--cpu-period and --cpu-quota</code></a> which actually use the cpu.cfs_period_us and the cpu.cfs_quota_us attributes for the <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-cpu_and_memory-use_case" rel="noreferrer">cpu,cpuacct cgroup</a> on Linux. This was introduced due to the fact that cpu.shares didn't provide a limit, so you'd have cases where containers would grab most of the CPU.</p>
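<p>One way to sanity-check how the requests/limits translate on the node (the paths below assume cgroup v1) is to read the cgroup files from inside the container:</p>
<pre><code># with requests.cpu=50m and limits.cpu=250m you would expect roughly:
kubectl exec YOUR-POD -- cat /sys/fs/cgroup/cpu/cpu.shares         # ~51     (50m of 1024)
kubectl exec YOUR-POD -- cat /sys/fs/cgroup/cpu/cpu.cfs_period_us  # 100000
kubectl exec YOUR-POD -- cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us   # 25000   (250m of one core)
kubectl exec YOUR-POD -- cat /sys/fs/cgroup/cpu/cpu.stat           # nr_throttled / throttled_time counters
</code></pre>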
<p>So, as far as this limit is concerned you will never hit it if you have other containers on the same node that don't have limits (or higher limits) but have a higher cpu.shares because they will end up optimizing and picking idle CPU. This could be what you are seeing, but again depends on your specific case.</p>
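<p>As a rough illustration of how the values from the question map to the underlying Docker/cgroup settings (the numbers below follow the conversion rules described above; they are an approximation, not something read from your cluster):</p>
<pre><code>resources:
  requests:
    cpu: 50m    # -> roughly --cpu-shares=51 (50/1000 * 1024): a relative weight, not a cap
  limits:
    cpu: 250m   # -> roughly --cpu-quota=25000 with --cpu-period=100000: a hard cap of 25% of one core
</code></pre>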
<p>A longer explanation for all of the above <a href="https://medium.com/@betz.mark/understanding-resource-limits-in-kubernetes-cpu-time-9eff74d3161b" rel="noreferrer">here</a>.</p>
|
<p>Hi I am using latest kubernetes 1.13.1 and docker-ce (Docker version 18.06.1-ce, build e68fc7a).</p>
<p>I setup a deployment file that mount a file from the host (host-path) and mounts it inside a container (mountPath).</p>
<p>The bug is that when I am trying to mount a file from the host to the container, I get an error message that it's not a file. (Kubernetes thinks that the file is a directory for some reason.)</p>
<p>When I am trying to run the containers using the command
<code>kubectl create -f</code>,
the pod stays at the ContainerCreating stage forever.</p>
<p>After a deeper look at it using <code>kubectl describe pod</code>, it shows<br>
an error message that the file is not recognized as a file.</p>
<p>Here is the deployment file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: notixxxion
name: notification
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: notification
spec:
containers:
- image: docker-registry.xxxxxx.com/xxxxx/nxxxx:laxxt
name: notixxxion
ports:
- containerPort: xxx0
#### host file configuration
volumeMounts:
- mountPath: /opt/notification/dist/hellow.txt
name: test-volume
readOnly: false
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /exec-ui/app-config/hellow.txt
# this field is optional
type: FileOrCreate
#type: File
status: {}
</code></pre>
| <p>I have reinstalled the Kubernetes cluster and it got a little bit better.
Kubernetes can now read files without any problem, and the container is created and running. But there is some other issue with the hostPath storage type:</p>
<p>hostPath mounts do not update as the files change on the host, even after I delete the pod and create it again.</p>
|
<p>First, a little context: I have 4 Kubernetes clusters, one for each environment (develop, staging, testing, prod). Each cluster has its own <code>values.yaml</code> file with env-specific configuration of all helm charts that we've written. </p>
<p>So, when our CD pipeline deploys <code>mychart</code> to the <code>develop</code> cluster, it essentially does the following:</p>
<p><code>helm install -f base-values.yaml -f develop-values.yaml ./mychart.tgz</code></p>
<p>Now, let's presume <code>mychart</code> has a <code>requirements.yaml</code> file which specifies <a href="https://github.com/helm/charts/tree/master/stable/mongodb" rel="nofollow noreferrer">the mongodb chart</a> as a subchart dependency. The mongodb chart references, for example, <code>.Values.mongodbRootPassword</code>. When included as a subchart by <code>mychart</code>, I can set <code>.Values.mongodb.mongodbRootPassword</code> in <code>mychart</code>'s default <code>values.yaml</code> to change this value.</p>
<p>My problem is that given my current CD pipeline, if I set <code>.Values.mongodb.mongodbRootPassword</code> in <code>develop-values.yaml</code>, it will be taken for <em>all</em> mongodb instances that are deployed to the <code>develop</code> cluster - not just <code>mychart</code>'s. </p>
<p>So, my questions:</p>
<ul>
<li>using per-environment <code>values.yaml</code> files, how would I go about setting <code>mychart</code>'s mongodb's root password in one of the cluster-specific <code>values.yaml</code> files?</li>
<li>is this even possible? Or is my current CD approach (per-environment values files) an anti-pattern?</li>
<li>if this is not possible or an anti-pattern, how would you go about setting the values of helm individual charts and subcharts on a per-environment basis?</li>
</ul>
| <blockquote>
<ul>
<li>using per-environment <code>values.yaml</code> files, how would I go about setting <code>mychart</code>'s mongodb's root password in one of the cluster-specific <code>values.yaml</code> files?</li>
</ul>
</blockquote>
<p>You can override YAML values with the <a href="https://github.com/helm/helm/blob/master/docs/charts.md#using-the-cli-with-tags-and-conditions" rel="nofollow noreferrer"><code>--set</code></a> option of <code>install</code>.</p>
<p><strong>Example:</strong></p>
<p><code>helm install -f base-values.yaml -f develop-values.yaml --set someSection.someValue=1234 ./mychart.tgz</code></p>
<p>Then you can parameterize the CI command call with environment-specific settings and keep just one YAML file.</p>
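<p>For the concrete case from the question, the subchart value can be addressed by its full path on the command line (the password here is only a placeholder):</p>
<pre><code>helm install -f base-values.yaml -f develop-values.yaml \
  --set mongodb.mongodbRootPassword=<dev-only-password> \
  ./mychart.tgz
</code></pre>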
<blockquote>
<ul>
<li>is this even possible? Or is my current CD approach (per-environment values files) an anti-pattern?</li>
</ul>
</blockquote>
<p>Nope, that's a feature :-)</p>
<blockquote>
<ul>
<li>if this is not possible or an anti-pattern, how would you go about setting the values of helm individual charts and subcharts on a per-environment basis?</li>
</ul>
</blockquote>
<p>That would be nice as well. Perhaps, if your infrastructure grows fast and you have a lot of environments, applications, and so on, this could be easier to manage.</p>
|
<p>By running command <code>kubectl logs pod -c container</code></p>
<p>I am getting a continuous, autoscrolling list of logs. Is there any way I can jump to the end or see only the latest logs? I don't want to go through all the logs.</p>
<p>I have tried using -f as well. Any suggestion?</p>
| <p>According to <code>kubectl logs --help</code>
you can use <code>--tail</code></p>
<p>e.g. <code>kubectl logs pod --tail=10</code></p>
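<p>If you also want to keep following new lines after jumping to the end, <code>--tail</code> can be combined with <code>-f</code>, and <code>--since</code> limits output by age:</p>
<pre><code>kubectl logs pod -c container --tail=10 -f
kubectl logs pod -c container --since=10m
</code></pre>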
|
<p>I am using GKE with the Istio add-on enabled. My app somehow gives 503 errors when using websockets. I am starting to think that maybe the websocket is working but the database connection is not, and that causes the 503s, as the cloudsql-proxy logs show errors:</p>
<pre><code>$ kubectl logs myapp-54d6696fb4-bmp5m cloudsql-proxy
2019/01/04 21:56:47 using credential file for authentication; [email protected]
2019/01/04 21:56:47 Listening on 127.0.0.1:5432 for myproject:europe-west4:mydatabase
2019/01/04 21:56:47 Ready for new connections
2019/01/04 21:56:51 New connection for "myproject:europe-west4:mydatabase"
2019/01/04 21:56:51 couldn't connect to "myproject:europe-west4:mydatabase": Post https://www.googleapis.com/sql/v1beta4/projects/myproject/instances/mydatabase/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: read tcp 10.44.11.21:60728->108.177.126.95:443: read: connection reset by peer
2019/01/04 22:14:56 New connection for "myproject:europe-west4:mydatabase"
2019/01/04 22:14:56 couldn't connect to "myproject:europe-west4:mydatabase": Post https://www.googleapis.com/sql/v1beta4/projects/myproject/instances/mydatabase/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: read tcp 10.44.11.21:36734->108.177.127.95:443: read: connection reset by peer
</code></pre>
<p>It looks like the required authentication details should be in the credentials of the proxy service account I created, which are provided:</p>
<pre><code>{
"type": "service_account",
"project_id": "myproject",
"private_key_id": "myprivekeyid",
"private_key": "-----BEGIN PRIVATE KEY-----\MYPRIVATEKEY-----END PRIVATE KEY-----\n",
"client_email": "[email protected]",
"client_id": "myclientid",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/proxy-user%40myproject.iam.gserviceaccount.com"
}
</code></pre>
<p>My question:
How do I get rid of these errors / get a proper Cloud SQL configuration from GKE?</p>
<p>At cluster creation I selected the mTLS 'permissive' option.</p>
<p>My config:
myapp_and_router.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
ports:
- port: 8089
# 'name: http' apparently does not work
name: db
selector:
app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: gcr.io/myproject/firstapp:v1
imagePullPolicy: Always
ports:
- containerPort: 8089
env:
- name: POSTGRES_DB_HOST
value: 127.0.0.1:5432
- name: POSTGRES_DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: POSTGRES_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
## Custom healthcheck for Ingress
readinessProbe:
httpGet:
path: /healthz
scheme: HTTP
port: 8089
initialDelaySeconds: 5
timeoutSeconds: 5
livenessProbe:
httpGet:
path: /healthz
scheme: HTTP
port: 8089
initialDelaySeconds: 5
timeoutSeconds: 20
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=myproject:europe-west4:mydatabase=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
securityContext:
runAsUser: 2
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
---
###########################################################################
# Ingress resource (gateway)
##########################################################################
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: myapp-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
# 'name: http' apparently does not work
name: db
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myapp
spec:
hosts:
- "*"
gateways:
- myapp-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: myapp
weight: 100
websocketUpgrade: true
---
</code></pre>
<p>EDIT 1: I had not enabled permissions (scopes) for the various Google services when creating the cluster, see <a href="https://stackoverflow.com/questions/54145787/permissions-on-gke-cluster">here</a>. After creating a new cluster with the permissions I now get a new error message:</p>
<pre><code>kubectl logs mypod cloudsql-proxy
2019/01/11 20:39:58 using credential file for authentication; [email protected]
2019/01/11 20:39:58 Listening on 127.0.0.1:5432 for myproject:europe-west4:mydatabase
2019/01/11 20:39:58 Ready for new connections
2019/01/11 20:40:12 New connection for "myproject:europe-west4:mydatabase"
2019/01/11 20:40:12 couldn't connect to "myproject:europe-west4:mydatabase": Post https://www.googleapis.com/sql/v1beta4/projects/myproject/instances/mydatabase/createEphemeral?alt=json: oauth2: cannot fetch token: 400 Bad Request
Response: {
"error": "invalid_grant",
"error_description": "Invalid JWT Signature."
}
</code></pre>
<p>EDIT 2: Looks like the new error was caused by the service account keys no longer being valid. After making new ones I can connect to the database!</p>
| <p>I saw similar errors but was able to get cloudsql-proxy working in my istio cluster on GKE by creating the following service entries (with some help from <a href="https://github.com/istio/istio/issues/6593#issuecomment-420591213" rel="nofollow noreferrer">https://github.com/istio/istio/issues/6593#issuecomment-420591213</a>):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: google-apis
spec:
hosts:
- "*.googleapis.com"
ports:
- name: https
number: 443
protocol: HTTPS
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: cloudsql-instances
spec:
hosts:
# Use `gcloud sql instances list` to get the addresses of instances
- 35.226.125.82
ports:
- name: tcp
number: 3307
protocol: TCP
</code></pre>
<p>Also, I still saw those connection errors during initialization until I added a delay in my app startup (<code>sleep 10</code> before running server) to give the istio-proxy and cloudsql-proxy containers time to get set up first. </p>
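<p>A minimal sketch of that kind of startup delay, assuming the app image has a shell and that <code>./server</code> stands in for the real entrypoint:</p>
<pre><code>containers:
- name: myapp
  image: gcr.io/myproject/firstapp:v1
  command: ["sh", "-c", "sleep 10 && exec ./server"]
</code></pre>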
<p>EDIT 1: Here are logs with the errors, then the successful "New connection/Client closed" lines once things are working:</p>
<pre><code>2019/01/10 21:54:38 New connection for "my-project:us-central1:my-db"
2019/01/10 21:54:38 Throttling refreshCfg(my-project:us-central1:my-db): it was only called 44.445553175s ago
2019/01/10 21:54:38 couldn't connect to "my-project:us-central1:my-db": Post https://www.googleapis.com/sql/v1beta4/projects/my-project/instances/my-db/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp 108.177.112.84:443: getsockopt: connection refused
2019/01/10 21:54:38 New connection for "my-project:us-central1:my-db"
2019/01/10 21:54:38 Throttling refreshCfg(my-project:us-central1:my-db): it was only called 44.574562959s ago
2019/01/10 21:54:38 couldn't connect to "my-project:us-central1:my-db": Post https://www.googleapis.com/sql/v1beta4/projects/my-project/instances/my-db/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp 108.177.112.84:443: getsockopt: connection refused
2019/01/10 21:55:15 New connection for "my-project:us-central1:my-db"
2019/01/10 21:55:16 Client closed local connection on 127.0.0.1:5432
2019/01/10 21:55:17 New connection for "my-project:us-central1:my-db"
2019/01/10 21:55:17 New connection for "my-project:us-central1:my-db"
2019/01/10 21:55:27 Client closed local connection on 127.0.0.1:5432
2019/01/10 21:55:28 New connection for "my-project:us-central1:my-db"
2019/01/10 21:55:30 Client closed local connection on 127.0.0.1:5432
2019/01/10 21:55:37 Client closed local connection on 127.0.0.1:5432
2019/01/10 21:55:38 New connection for "my-project:us-central1:my-db"
2019/01/10 21:55:40 Client closed local connection on 127.0.0.1:5432
</code></pre>
<p>EDIT 2: Ensure that the Cloud SQL API is within the scope of your cluster.</p>
|
<p>I found the following Airflow DAG in this <a href="https://kubernetes.io/blog/2018/06/28/airflow-on-kubernetes-part-1-a-different-kind-of-operator/" rel="noreferrer">Blog Post</a>:</p>
<pre><code>from airflow import DAG
from datetime import datetime, timedelta
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
from airflow.operators.dummy_operator import DummyOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime.utcnow(),
'email': ['[email protected]'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5)
}
dag = DAG(
'kubernetes_sample', default_args=default_args, schedule_interval=timedelta(minutes=10))
start = DummyOperator(task_id='run_this_first', dag=dag)
passing = KubernetesPodOperator(namespace='default',
image="Python:3.6",
cmds=["Python","-c"],
arguments=["print('hello world')"],
labels={"foo": "bar"},
name="passing-test",
task_id="passing-task",
get_logs=True,
dag=dag
)
failing = KubernetesPodOperator(namespace='default',
image="ubuntu:1604",
cmds=["Python","-c"],
arguments=["print('hello world')"],
labels={"foo": "bar"},
name="fail",
task_id="failing-task",
get_logs=True,
dag=dag
)
passing.set_upstream(start)
failing.set_upstream(start)
</code></pre>
<p>and before I attempted to add anything custom to it ... I attempted to run it as is. However, the code seems to time out in my Airflow environment.</p>
<p>Per the documentation <a href="https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator" rel="noreferrer">here</a> I attempted to set <code>startup_timeout_seconds</code> to something ridiculous like 10m ... but still got the timeout message described in the documentation:</p>
<pre><code>[2019-01-04 11:13:33,360] {pod_launcher.py:112} INFO - Event: fail-7dd76b92 had an event of type Pending
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 6, in <module>
exec(compile(open(__file__).read(), __file__, 'exec'))
File "/usr/local/lib/airflow/airflow/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/airflow/airflow/bin/cli.py", line 392, in run
pool=args.pool,
File "/usr/local/lib/airflow/airflow/utils/db.py", line 50, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/airflow/airflow/models.py", line 1492, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/airflow/airflow/contrib/operators/kubernetes_pod_operator.py", line 123, in execute
raise AirflowException('Pod Launching failed: {error}'.format(error=ex))
airflow.exceptions.AirflowException: Pod Launching failed: Pod took too long to start
</code></pre>
<p>Any Input would be appreciated.</p>
| <p>Since this code isn’t using fully qualified images, that means Airflow is pulling the images from <a href="https://hub.docker.com/" rel="noreferrer">hub.docker.com</a>, and <code>"Python:3.6"</code> and <code>"ubuntu:1604"</code> aren’t available docker image names for <a href="https://hub.docker.com/_/python" rel="noreferrer">Python</a> or <a href="https://hub.docker.com/_/ubuntu" rel="noreferrer">Ubuntu</a> on <a href="https://hub.docker.com/" rel="noreferrer">hub.docker.com</a>.</p>
<p>Also the "Python" command shouldn’t be capitalised.</p>
<p>A working code with valid docker image names would be:</p>
<pre><code>from airflow import DAG
from datetime import datetime, timedelta
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
from airflow.operators.dummy_operator import DummyOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime.utcnow(),
'email': ['[email protected]'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5)
}
dag = DAG(
'kubernetes_sample', default_args=default_args, schedule_interval=timedelta(minutes=10))
start = DummyOperator(task_id='run_this_first', dag=dag)
passing = KubernetesPodOperator(namespace='default',
image="python:3.6-stretch",
cmds=["python","-c"],
arguments=["print('hello world')"],
labels={"foo": "bar"},
name="passing-test",
task_id="passing-task",
get_logs=True,
dag=dag
)
failing = KubernetesPodOperator(namespace='default',
image="ubuntu:16.04",
cmds=["python","-c"],
arguments=["print('hello world')"],
labels={"foo": "bar"},
name="fail",
task_id="failing-task",
get_logs=True,
dag=dag
)
passing.set_upstream(start)
failing.set_upstream(start)
</code></pre>
|
<p>using <code>kubectl</code> and <code>kops</code> <code>1.8</code></p>
<p>When spinning up a cluster in <code>aws</code> using <code>kops</code>, the client certificate that is created (present as a string in the <code>client-certificate-data</code> field of <code>~/.kube/config</code>) has the following values:</p>
<pre><code> Subject: O=system:masters, CN=kubecfg
</code></pre>
<p>Unless I am wrong, starting from <code>kubernetes 1.4</code>, the value for <code>O</code>rganization is interpreted as <code>group</code> information (the string associated with the <code>CN</code> value is the so-called user, since <code>k8s</code> does not inherently have such a concept).</p>
<p><strong>1</strong>: How can I see what permissions are associated with the <code>system:masters</code> group and/or the <code>kubecfg</code> user? </p>
<ul>
<li>(related to the above): what is the out-of-the-box authorization method I am using now? <code>RBAC</code>? How can I check this?</li>
</ul>
<p><strong>2</strong>: Why do the entries in my <code>~/.kube/config</code> <strong>not</strong> include a <code>kubecfg</code> user (but rather a user bearing my cluster name and another user named <code>admin</code>)?</p>
<pre><code>$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: <server_url>
name: <my_cluster_name>
contexts:
- context:
cluster: <my_cluster_name>
user: <my_cluster_name>
name: <my_cluster_name>
current-context: <my_cluster_name>
kind: Config
preferences: {}
users:
- name: <my_cluster_name>
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
password: <some_pass>
username: admin
- name: <my_cluster_name>.local-basic-auth
user:
password: <some_pass>
username: admin
</code></pre>
<p>At the end of the day, what user am I performing API calls as when executing <code>kubectl</code> commands?</p>
<p><strong>update</strong>: I tried to mess up the value of <code>client-certificate-data</code> in my <code>~/.kube/config</code> and I got </p>
<blockquote>
<p>error: tls: private key does not match public key</p>
</blockquote>
<p>I am assuming this means I am using <code>x509</code>-based auth(?)</p>
<p>So am I making API calls as <code>kubecfg</code>?</p>
| <p>Note that there is a <a href="https://stackoverflow.com/questions/6556522/authentication-versus-authorization">difference</a> between <strong>authentication</strong> and <strong>authorization</strong>.</p>
<ul>
<li>Authentication ensures that a user is who they say they are. Calls to the Kubernetes API always requires successful authentication.</li>
<li>Authorization determines whether a user has access to specific Kubernetes API resources (via RBAC). RBAC must be explicitly enabled as a configuration option to the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a> server.</li>
</ul>
<p>Kubernetes <strong>Authentication</strong></p>
<ul>
<li><p>In Kubernetes, there are many <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">mechanisms</a> which may be used to to authenticate a user such as tokens, passwords, OIDC connect tokens, and SSL x509 client certs.</p></li>
<li><p>As you have discovered above, <code>kops</code> will auto-generate a ~/.kube/config file with an embedded SSL x509 client certificate.
Presenting the client certificate along with any REST call to the kube-apiserver allows the kube-apiserver to authenticate the caller by validating that the client certificate has been signed by the cluster Certificate Authority (CA). If the client cert has been properly signed, then the caller is who they say they are.</p></li>
<li><p>The identity of the holder of the client certificate is determined by the Subject field of the SSL x509 client certificate.</p>
<ul>
<li>Subject Common Name determines the user identity. (e.g. CN=Bob)</li>
<li>Subject Organization determines the user's groups. (e.g. O=admins, O=system:masters). Note that multiple organizations (i.e. groups) may be specified.</li>
</ul></li>
<li>Please note that the <code>user</code> name in the kubeconfig file is just an opaque value used for the convenience of the kubectl tool. The real identity of the <code>user</code> is the one embedded in the SSL x509 client certificate or any other token (see the example after this list for how to inspect it).</li>
<li>Be aware that each instance of every component (e.g. kubelet, scheduler, etcd) in a typical Kubernetes deployment has its own SSL x509 client certificate, which it uses to authenticate when communicating with other components. See this <a href="https://kubernetes.io/docs/concepts/cluster-administration/certificates/" rel="nofollow noreferrer">link</a></li>
</ul>
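<p>To see exactly which Subject (user and groups) kops embedded in your client certificate, you can decode it straight from the kubeconfig. This assumes the certificate is the first user entry in the file and that <code>openssl</code> is available locally:</p>
<pre><code>kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 --decode \
  | openssl x509 -noout -subject
# expected to show O=system:masters, CN=kubecfg, as in the question
</code></pre>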
<p>Kubernetes <strong>Authorization</strong></p>
<ul>
<li>In Kubernetes, RBAC <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings" rel="nofollow noreferrer">roles and rolebindings</a> determine exactly what kube-apiserver REST endpoints a particular user may access, and what <code>verb</code> operations are allowed (e.g. "get", "list", "watch", "create", "update", "patch", "delete"). </li>
<li>One may create a <code>role</code> which is a set of permissions that define access to kube-apiserver REST endpoints.</li>
<li>One may then create a <code>rolebinding</code> which binds a user id or a group id to a specific <code>role</code>.</li>
<li>Please note there are both cluster-wide and namespace-wide rolebindings available.</li>
<li>Here's an example:</li>
</ul>
<pre><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: role-grantor
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["rolebindings"]
verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["clusterroles"]
verbs: ["bind"]
resourceNames: ["admin","edit","view"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: role-grantor-binding
namespace: user-1-namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: role-grantor
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: user-1
</code></pre>
<ul>
<li>RBAC is only one possible authorization mechanism. Kubernetes <a href="https://kubernetes.io/docs/reference/access-authn-authz/abac/" rel="nofollow noreferrer">ABAC</a> is less popular.</li>
</ul>
<p><strong>Checking if you have RBAC enabled</strong></p>
<p>Simply verify the kube-apiserver startup options. If kube-apiserver is running as a pod, you can check it like this:</p>
<pre><code>$ kubectl get po kube-apiserver-ubuntu-18 -n kube-system -o name |grep api
pod/kube-apiserver-ubuntu-18
$ kubectl get po kube-apiserver-ubuntu-18 -n kube-system -o yaml
# THEN SEARCH FOR RBAC
spec:
containers:
- command:
- kube-apiserver
- --authorization-mode=Node,RBAC
</code></pre>
|
<blockquote>
<p>k8s version: v1.10.4<br>
flannel version: v0.10.0<br>
docker version v1.12.6 </p>
</blockquote>
<p>when i use command <code>brctl show</code> on node,it shows as bellow:</p>
<pre><code>[root@node03 tmp]# brctl show
bridge name bridge id STP enabled interfaces
cni0 8000.0a580af40501 no veth39711246
veth591ea0bf
veth5b889fed
veth61dfc48a
veth6ef58804
veth75f5ef36
vethc162dc8a
docker0 8000.0242dfd605c0 no
</code></pre>
<p>It shows that the vethXXX interfaces are bound to the network bridge named <code>cni0</code>, but when I use the command <code>ip addr</code>, it shows:</p>
<pre><code>[root@node03 tmp]# ip addr |grep veth
6: veth61dfc48a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
7: veth591ea0bf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
9: veth6ef58804@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
46: vethc162dc8a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
55: veth5b889fed@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
61: veth75f5ef36@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
78: veth39711246@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
</code></pre>
<p>These veth interfaces all appear bound to <code>if3</code>, but <code>if3</code> is not <code>cni0</code>; it is <code>docker0</code>:</p>
<pre><code>3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
</code></pre>
<p>It seems that the network bridge <code>docker0</code> is useless, but <code>ip addr</code> shows that all veth devices are bound to it. What role does the network bridge <code>docker0</code> play in k8s with flannel? Thanks.</p>
| <p>There are two network models here: Docker and Kubernetes.</p>
<p>Docker model</p>
<blockquote>
<p>By default, Docker uses host-private networking. It creates a virtual bridge, called <code>docker0</code> by default, and allocates a subnet from one of the private address blocks defined in <a href="https://www.rfc-editor.org/rfc/rfc1918" rel="nofollow noreferrer">RFC1918</a> for that bridge. For each container that Docker creates, it allocates a virtual Ethernet device (called <code>veth</code>) which is attached to the bridge. The veth is mapped to appear as <code>eth0</code> in the container, using Linux namespaces. The in-container <code>eth0</code> interface is given an IP address from the bridge’s address range.</p>
<p><strong>The result is that Docker containers can talk to other containers only if they are on the same machine</strong> (and thus the same virtual bridge). <strong>Containers on different machines can not reach each other</strong> - in fact they may end up with the exact same network ranges and IP addresses.</p>
</blockquote>
<p>Kubernetes model</p>
<blockquote>
<p>Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):</p>
</blockquote>
<ul>
<li>all containers can communicate with all other containers without NAT</li>
<li>all nodes can communicate with all containers (and vice-versa) without NAT</li>
<li>the IP that a container sees itself as is the same IP that others see it as</li>
</ul>
<blockquote>
<p>Kubernetes applies IP addresses at the <code>Pod</code> scope - containers within a <code>Pod</code> share their network namespaces - including their IP address. This means that containers within a <code>Pod</code> can all reach each other’s ports on <code>localhost</code>. This does imply that containers within a <code>Pod</code> must coordinate port usage, but this is no different than processes in a VM. This is called the “IP-per-pod” model. This is implemented, using Docker, as a “pod container” which holds the network namespace open while “app containers” (the things the user specified) join that namespace with Docker’s <code>--net=container:<id></code> function.</p>
<p>As with Docker, it is possible to request host ports, but this is reduced to a very niche operation. In this case a port will be allocated on the host <code>Node</code> and traffic will be forwarded to the <code>Pod</code>. The <code>Pod</code> itself is blind to the existence or non-existence of host ports.</p>
</blockquote>
<p>In order to integrate the platform with the underlying network infrastructure Kubernetes provide a plugin specification called <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Container Networking Interface (CNI)</a>. If the Kubernetes fundamental requirements are met vendors can use network stack as they like, typically using overlay networks to support <strong>multi-subnet</strong> and <strong>multi-az</strong> clusters.</p>
<p>Bellow is shown how overlay networks are implemented through <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">Flannel</a> which is a popular <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">CNI</a>.</p>
<p><a href="https://i.stack.imgur.com/DOxTE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DOxTE.png" alt="flannel" /></a></p>
<p>You can read more about other CNI's <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">here</a>. The Kubernetes approach is explained in <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Cluster Networking</a> docs. I also recommend reading <a href="https://www.contino.io/insights/kubernetes-is-hard-why-eks-makes-it-easier-for-network-and-security-architects" rel="nofollow noreferrer">Kubernetes Is Hard: Why EKS Makes It Easier for Network and Security Architects</a> which explains how <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">Flannel</a> works, also another <a href="https://medium.com/all-things-about-docker/setup-hyperd-with-flannel-network-1c31a9f5f52e" rel="nofollow noreferrer">article from Medium</a></p>
<p>Hope this answers your question.</p>
|
<p>Does the liveness probe in Kubernetes verify the certificate for HTTPS requests? I am using a self-signed certificate for my project.</p>
| <p>No, as per the official documentation
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a></p>
<blockquote>
<p>If scheme field is set to HTTPS, the kubelet sends an HTTPS request skipping the certificate verification.</p>
</blockquote>
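<p>For reference, a minimal sketch of what such a probe looks like in a pod spec (the port and path are placeholders):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8443
    scheme: HTTPS
  initialDelaySeconds: 10
  periodSeconds: 10
</code></pre>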
|
<p>I have multiple Kubernetes clusters. they are all on the "same network" - that is, there are other servers/services on the network such that all pods in all clusters can access those servers/services.</p>
<p>I know that each pod gets a unique IP and thus all pods can communicate with each other directly with those IPs. BUT (AND HERE IS MY QUESTION) - do pods get unique IPs across my whole network? or just across/within the cluster?</p>
<p>The reason for this question is: can I have all the clusters use the same network range for pods?
That is, is this a proper design:</p>
<pre><code>DESIGN A
CLUSTER 1
Pod1.1 10.0.0.1
Pod1.1 10.0.0.2
...
Pod1.1 10.0.0.n
CLUSTER 2
Pod2.1 10.0.0.1
Pod2.1 10.0.0.2
...
Pod2.1 10.0.0.n
</code></pre>
<p>or, do I need to have it like this: </p>
<pre><code>DESIGN B
CLUSTER 1
Pod1.1 10.0.0.1
Pod1.1 10.0.0.2
...
Pod1.1 10.0.0.n
CLUSTER 2
Pod2.1 10.0.1.1
Pod2.1 10.0.1.2
...
Pod2.1 10.0.1.n
</code></pre>
<p>ANOTHER WAY OF ASKING THIS QUESTION:</p>
<p>When a Pod1.1 calls a webserver <em>outside</em> of the cluster, does it advertise it self as 10.0.0.1? or as some NAT IP that kubernetes handles and maps back to 10.0.0.1?</p>
<p>Thanks!</p>
| <blockquote>
<p>do pods get unique IPs across my whole network? or just across/within the cluster?</p>
</blockquote>
<p>Generally, just within the cluster. When you set up your network overlay you set it up with a PodCidr, and that will be the pod network within your cluster. You could have two clusters with the same PodCidr and they would still be able to communicate with each other. Once traffic leaves the cluster (and the server, for that matter) it's seen with the external IP.</p>
<p>So yes, there is some trickery with iptables (or equivalent, depending on the overlay) to make it look like it's 'local' within the cluster, but outside of the cluster, it's just IP routing.</p>
<p>Having said that, some network overlays allow you to connect two or more clusters with each other. For example, <a href="https://docs.projectcalico.org/v2.0/usage/configuration/bgp" rel="nofollow noreferrer">Calico BGP Peering</a> and <a href="https://cilium.io/blog/2018/12/10/cilium-14-preview" rel="nofollow noreferrer">Cillium Multi-cluster</a>.</p>
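<p>If you want to check which pod network range each of your clusters is actually using, the node objects usually carry it (assuming your setup records it in <code>spec.podCIDR</code>, which most kubeadm/kops-style clusters do):</p>
<pre><code>kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
</code></pre>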
|
<p>I would like to install Kubernetes on Alpine Linux which runs on a Raspberry Pi (ARM architecture). So far I've only found K8s packages for x86_64 but nothing for armhf...
Any ideas/workarounds would be very much appreciated.</p>
<p>Thank you!</p>
| <p>Since <a href="https://github.com/kubernetes/kubernetes/issues/17981" rel="nofollow noreferrer">April/2016</a> Kubernetes has native support for ARM architectures.</p>
<p>And there is an <a href="https://pkgs.alpinelinux.org/package/edge/testing/x86_64/kubernetes" rel="nofollow noreferrer">official package</a> in the Alpine repositories.</p>
<p>So running <code>apk add kubernetes</code> should work.</p>
|
<p>How can I tell whether or not I am running inside a Kubernetes cluster? With Docker I can check if <code>/.dockerinit</code> exists. Is there an equivalent?</p>
| <p>You can check for <code>KUBERNETES_SERVICE_HOST</code> environment variable.</p>
<p>This variable is always exported in an environment where the container is executed.</p>
<p>Refer to <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables</a></p>
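<p>A small shell sketch of that check, which also falls back to the service account token that Kubernetes mounts into every pod by default:</p>
<pre><code>if [ -n "$KUBERNETES_SERVICE_HOST" ] || [ -f /var/run/secrets/kubernetes.io/serviceaccount/token ]; then
  echo "running inside a Kubernetes cluster"
else
  echo "not running inside a Kubernetes cluster"
fi
</code></pre>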
|
<p>We are attempting to make several private Kubernetes clusters. We can find limited documentation on specific settings for the private cluster, therefore we are running into issues related to the subnetwork IP ranges. </p>
<p>Say we have 3 clusters: We set the Master Address Range to 172.16.0.0/28, 172.16.0.16/28 and 172.16.0.32/28 respectively.</p>
<p>We leave Network and Subnet set to "default". We are able to create 2 clusters that way, however, upon spin-up of the 3rd cluster, we receive the error of "Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork: 5." We suspect that we are setting up the subnetwork IP ranges incorrectly, but we are not sure what we are doing wrong, or why there is more than 1 secondary range per subnetwork, to begin with. </p>
<p>Here is a screenshot of the configuration for one of the clusters:
<a href="https://i.stack.imgur.com/Tlp5Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tlp5Z.png" alt="kubernetes configuration screenshot"></a></p>
<p>We are setting these clusters up through the UI.</p>
| <blockquote>
<p>For anyone who lands here from Google and is wondering how to list / see the subnet names that have been created using GKE as described in OP's question:</p>
</blockquote>
<p>To list subnets for a region (and potentially modify or delete a Subnet, since you won't know the name) use the beta gcloud command:</p>
<p><code>gcloud beta container subnets list-usable</code></p>
<p>I landed here while looking for the answer and figured others trying to determine the best way to structure their subnets / ranges might be able to use the above command (which took me forever to track down).</p>
|
<p>I am using the below line of code to get details about a particular PVC:</p>
<pre><code>response = await serverModule.kubeclient.api.v1.namespaces(ns).persistentvolumeclaims(pvc).get();
</code></pre>
<p>The corresponding API for the above call is <a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/CoreV1Api.md#readnamespacedpersistentvolumeclaim" rel="nofollow noreferrer">readNamespacedPersistentVolumeClaim</a> with the below format:</p>
<pre><code>GET /api/v1/namespaces/{namespace}/persistentvolumeclaims/{name}
</code></pre>
<p>Now, I am trying to call <a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/StorageV1Api.md#readstorageclass" rel="nofollow noreferrer">readStorageClass</a> using the same convention as above:</p>
<blockquote>
<p>response = await serverModule.kubeclient.apis.storage.k8s.io.v1.storageclasses(sc).get();</p>
</blockquote>
<p>As you can see in the link, <code>GET /apis/storage.k8s.io/v1/storageclasses/{name}</code> is the format, and I have used the above syntax. But for some reason the code fails with this error:</p>
<pre><code>Exported kubeclient, ready to process requests
TypeError: Cannot read property 'k8s' of undefined
</code></pre>
<p>What is the syntax error that I have made? I tried various combinations but none worked.</p>
| <p>The issue is that listing <code>PersistentVolumeClaim</code> is part of the <code>coreV1Api</code> of Kubernetes, while listing <code>StorageClass</code> is part of the <code>StorageV1beta1Api</code>. Following is the simplest code for listing storage classes using the Java client:</p>
<pre><code>ApiClient defaultClient = Configuration.getDefaultApiClient();
// Configure API key authorization: BearerToken
ApiKeyAuth BearerToken = (ApiKeyAuth) defaultClient.getAuthentication("BearerToken");
BearerToken.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//BearerToken.setApiKeyPrefix("Token");
StorageV1beta1Api apiInstance = new StorageV1beta1Api();
try {
V1beta1StorageClassList result = apiInstance.listStorageClass();
System.out.println(result);
} catch (ApiException e) {
System.err.println("Exception when calling StorageV1beta1Api#listStorageClass");
e.printStackTrace();
}
</code></pre>
<p>Following is the official documentation link for your reference:</p>
<p><a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/StorageV1beta1Api.md#listStorageClass" rel="nofollow noreferrer">https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/StorageV1beta1Api.md#listStorageClass</a></p>
<p>Hope this helps.</p>
|
<p>I'm new to Kubernetes and we have one app that can be customized for several customers.</p>
<p>The deployments are fine: they are running pods correctly. The problem is to access the API outside the cluster.</p>
<p>The AWS routes are being created as expected by Kubernetes Ingress.</p>
<p>The existing ones are working fine, but when I try to reach the new one (let's say <code>client09</code>), it always returns <code>default-backend-404</code>.</p>
<p>Plus, when I <code>curl</code> the URL, it shows a <code>Kubernetes Ingress Controller Fake Certificate</code> message.</p>
<pre><code>kubectl version
Client 1.6
Server 1.9
</code></pre>
<p>Also, my user does not have full access, so I can't provide any information about the <code>nginx controller</code>. We just do the same copy and paste for new customers, but don't know what might be wrong.</p>
<p>Any thoughts what is wrong?</p>
<p><strong>Service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: client09-svc
labels:
run: client09-deploy
spec:
type: ClusterIP
ports:
- port: 8080
targetPort: 8080
protocol: TCP
name: api
selector:
run: client09-deploy
</code></pre>
<p><strong>Deploy</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: client09-deploy
namespace: default
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
run: client09-deploy
spec:
terminationGracePeriodSeconds: 60
containers:
- name: client09
image: myContainer
ports:
- containerPort: 8080
name: api
readinessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
imagePullPolicy: Always
resources:
limits:
cpu: 1800m
memory: 2000Mi
requests:
cpu: 400m
memory: 1000Mi
volumeMounts:
- mountPath: /secret-volume
name: secretvolume
imagePullSecrets:
- name: dockerhubkey
volumes:
- name: secretvolume
secret:
secretName: client09-secret
</code></pre>
<p><strong>Ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: "true"
ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/use-port-in-redirects: "true"
namespace: default
spec:
tls:
- hosts:
- client01.domain.com
- client02.domain.com
- client09.domain.com
secretName: my-ingress-tls
rules:
- host: client01.domain.com
http:
paths:
- backend:
serviceName: client01-svc
servicePort: 8080
path: /
- host: client02.domain.com
http:
paths:
- backend:
serviceName: client02-svc
servicePort: 8080
path: /
- host: client09.domain.com
http:
paths:
- backend:
serviceName: client09-svc
servicePort: 8080
path: /
</code></pre>
| <p>Looks like a problem with the selector.
Could you update the Service YAML to this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: client09-svc
labels:
run: client09-deploy
spec:
type: ClusterIP
ports:
- port: 8080
targetPort: 8080
protocol: TCP
name: api
selector:
name: client09-deploy
</code></pre>
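<p>Whichever label key you settle on, the important part is that the Service selector matches the labels in the Deployment's pod template. You can verify the match like this (names below follow the manifests in the question):</p>
<pre><code># should list the pod with its labels, e.g. run=client09-deploy
kubectl get pods --show-labels -l run=client09-deploy
# should show pod IPs; an empty ENDPOINTS column means the selector does not match
kubectl get endpoints client09-svc
</code></pre>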
|
<p>I'm trying to connect to an existing Kubernetes cluster that's running on AWS and run arbitrary commands on it using Java. Specifically, we are using fabric8 (although I am open to another API if you can provide a sufficient answer using one). The reason I need to do this in Java is because we plan to eventually incorporate this into our existing JUnit live tests.</p>
<p>For now I just need an example of how to connect to the sever and get all of the pod names as an array of Strings. Can somebody show me a simple, concise example of how to do this.</p>
<p>i.e. I want the equivalent of this bash script using a java api (again preferably using fabric8, but I'll accept another api if you know one)</p>
<pre><code>#!bin/bash
kops export kubecfg --name $CLUSTER --state=s3://$STATESTORE
kubectl get pod -o=custom-colums=NAME:.metadata.name -n=$NAMESPACE
</code></pre>
| <p>Here is the official Java client for Kubernetes.</p>
<p><a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a></p>
<p>It gives you a clean interface and write code in java to execute against kubernetes.</p>
<p>As listed in the documentation page to list all pods,</p>
<pre><code>import io.kubernetes.client.ApiClient;
import io.kubernetes.client.ApiException;
import io.kubernetes.client.Configuration;
import io.kubernetes.client.apis.CoreV1Api;
import io.kubernetes.client.models.V1Pod;
import io.kubernetes.client.models.V1PodList;
import io.kubernetes.client.util.Config;
import java.io.IOException;
public class Example {
public static void main(String[] args) throws IOException, ApiException{
ApiClient client = Config.defaultClient();
Configuration.setDefaultApiClient(client);
CoreV1Api api = new CoreV1Api();
V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
for (V1Pod item : list.getItems()) {
System.out.println(item.getMetadata().getName());
}
}
}
</code></pre>
<p>Hope it helps.</p>
|
<p>I have installed a local instance of Kubernetes via Docker on my Mac. </p>
<p>Following the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="noreferrer">walkthrough</a> on how to activate autoscaling on a deployment I have experienced an issue. The autoscaler can't read the metrics.</p>
<p>When I am running <code>kubectl describe hpa</code> the current cpu usage comes back as <strong>unknown / 50%</strong> with the warnings:</p>
<blockquote>
<p>Warning FailedGetResourceMetric:
horizontal-pod-autoscaler unable to get metrics for resource cpu:
unable to fetch metrics from API: the server could not find the
requested resource (get pods.metrics.k8s.io) </p>
<p>Warning FailedComputeMetricsReplicas
horizontal-pod-autoscaler failed to get cpu utilization: unable to
get metrics for resource cpu: unable to fetch metrics from API: the
server could not find the requested resource (get pods.metrics.k8s.io)</p>
</blockquote>
<p>I have cloned the metrics-server via <code>git clone https://github.com/kubernetes-incubator/metrics-server.git</code> and installed it with <code>kubectl create -f deploy/1.8+</code>.</p>
| <p>I finally got it working.
Here are the full steps I took to get things working:</p>
<ol>
<li><p>Have Kubernetes running within Docker</p>
</li>
<li><p>Delete any previous instance of metrics-server from your Kubernetes instance with <code>kubectl delete -n kube-system deployments.apps metrics-server</code></p>
</li>
<li><p>Clone metrics-server with <code>git clone https://github.com/kubernetes-incubator/metrics-server.git</code></p>
</li>
<li><p>Edit the file <strong>deploy/1.8+/metrics-server-deployment.yaml</strong> to override the default command by adding a <strong>command</strong> section that didn't exist before. The new section will instruct metrics-server to allow for an insecure communications session (don't verify the certs involved). Do this only for Docker, and not for production deployments of metrics-server:</p>
<pre><code>containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.1
command:
- /metrics-server
- --kubelet-insecure-tls
</code></pre>
</li>
<li><p>Add metrics-server to your Kubernetes instance with <code>kubectl create -f deploy/1.8+</code> (if errors with the .yaml, write this instead: <code>kubectl apply -f deploy/1.8+</code>)</p>
</li>
<li><p>Remove and add the autoscaler to your deployment again. It should now show the current cpu usage.</p>
</li>
</ol>
<p><strong>EDIT July 2020:</strong></p>
<p>Most of the above steps hold true except the <a href="https://github.com/kubernetes-sigs/metrics-server" rel="noreferrer">metrics-server</a> has changed and that file does not exist anymore.</p>
<p>The repo now recommends installing it like this:</p>
<pre><code>apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
</code></pre>
<p>So we can now download this file,</p>
<pre><code>curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml --output components.yaml
</code></pre>
<p>add <code>--kubelet-insecure-tls</code> under <code>args</code> (L88) to the <code>metrics-server</code> deployment and run</p>
<pre><code>kubectl apply -f components.yaml
</code></pre>
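<p>To verify that the metrics pipeline is actually up after these steps, you can query it directly; both commands should return data once metrics-server is healthy:</p>
<pre><code>kubectl top pods
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
</code></pre>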
|
<p>I'm using the kaniko image to push an image to a private Docker registry, and it gives me <code>No matching credentials were found, falling back on anonymous</code>.</p>
<pre><code>docker run -v $PWD:/workspace \
-v /root/.docker/config.json:/kaniko/config.json \
--env DOCKER_CONFIG=/kaniko \
gcr.io/kaniko-project/executor:latest \
-d gitlab.xxx.org/xxx/xxx
</code></pre>
<p>The config.json file is valid, as I verified with docker login.
I also followed the <a href="https://docs.gitlab.com/ee/ci/docker/using_kaniko.html" rel="nofollow noreferrer">kaniko gitlab</a> guide to run kaniko in k8s, and get the same error.</p>
| <p>Fixed. It was just about the image download; I'm using it on an internal network. It's not about pushing images to the registry.</p>
|
<p>I want to run an app on any node. It should always have at least one instance per node, but more instances are allowed, primarily during an update to prevent downtime of that pod (and node).</p>
<p>Kubernetes deployment updates usually work by launching a new pod, and as soon as it is available the old one is terminated. That's perfect, but in my case I need a DaemonSet to launch a specific app on all nodes at all times. However, when updating this DaemonSet, Kubernetes kills a pod one by one (i.e. node by node) and <strong>then</strong> launches a new pod, which means that on any given time during an update the pod may not be running on a node.</p>
<p>It seems that DaemonSets are, compared to Deployments, the correct way to do that, but I couldn't find any way to prevent downtime when updating the DaemonSet. Is there any way to do this? I also thought of using Deployments and update a replica amount manuall and antiPodAffinity so only one pod gets deployed per node, but this is kind of hacky.</p>
| <p>There was a very long set of discussions about adding this feature. You can see them <a href="https://github.com/kubernetes/kubernetes/issues/48841" rel="nofollow noreferrer">here</a> and <a href="https://github.com/kubernetes/enhancements/issues/373" rel="nofollow noreferrer">here</a></p>
<p>Long story short, this isn't really possible. You can try and combine <code>maxUnavailable: 0</code> and <code>type: rollingUpdate</code> in your <code>updateStrategy</code> but I don't think that's formally supported.</p>
<p>Example:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: my-daemonset
labels:
service: my-daemonset
spec:
selector:
matchLabels:
service: my-daemonset
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
service: my-daemonset
spec:
containers:
- name: daemonset-update
image: my-image:latest
</code></pre>
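<p>After applying an update, you can watch the node-by-node replacement with the standard rollout commands (labels below match the example above):</p>
<pre><code>kubectl rollout status daemonset/my-daemonset
kubectl get pods -l service=my-daemonset -o wide --watch
</code></pre>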
|
<p>I have been trying to convert <code>kubectl cp</code> command to it's equivalent <code>kubernetes python client</code> program. I have following code for that:</p>
<pre><code>from kubernetes import client, config
from kubernetes.stream import stream
import tarfile
from tempfile import TemporaryFile
# create an instance of the API class
config.load_kube_config()
api_instance = client.CoreV1Api()
exec_command = ['tar', 'xvf', '-', '-C', '/']
resp = stream(api_instance.connect_get_namespaced_pod_exec, "nginx-deployment-6bb6554bf-9sdtr", 'default',
command=exec_command,
stderr=True, stdin=True,
stdout=True, tty=False,
_preload_content=False)
source_file = '/tmp/abc.txt'
with TemporaryFile() as tar_buffer:
with tarfile.open(fileobj=tar_buffer, mode='w') as tar:
tar.add(source_file)
tar_buffer.seek(0)
commands = []
commands.append(tar_buffer.read())
while resp.is_open():
resp.update(timeout=1)
if resp.peek_stdout():
print("STDOUT: %s" % resp.read_stdout())
if resp.peek_stderr():
print("STDERR: %s" % resp.read_stderr())
if commands:
c = commands.pop(0)
# print("Running command... %s\n" % c)
resp.write_stdin(c)
else:
break
resp.close()
</code></pre>
<p>The above code gives me following error:</p>
<pre><code>/home/velotio/venv/bin/python /home/velotio/PycharmProjects/k8sClient/testing.py
Traceback (most recent call last):
File "/home/velotio/PycharmProjects/k8sClient/testing.py", line 38, in <module>
resp.write_stdin(c)
File "/usr/local/lib/python3.6/site-packages/kubernetes/stream/ws_client.py", line 160, in write_stdin
self.write_channel(STDIN_CHANNEL, data)
File "/usr/local/lib/python3.6/site-packages/kubernetes/stream/ws_client.py", line 114, in write_channel
self.sock.send(chr(channel) + data)
TypeError: must be str, not bytes
</code></pre>
<p>I am using <code>Python 3.6.3</code> and on <code>kubernetes 1.13</code> version.</p>
| <p>You have to convert bytes back to string which is what <code>write_stdin</code> method is expecting to get.</p>
<p>for example:</p>
<pre><code>resp.write_stdin(c.decode())
</code></pre>
<p>Another example:</p>
<pre><code># Array with two byte objects
In [1]: a = [b'1234', b'3455']
# Pop one of them
In [2]: c = a.pop(0)
# it is of byte type
In [3]: c
Out[3]: b'1234'
# wrapping in str won't help if you don't provide decoding
In [4]: str(c)
Out[4]: "b'1234'"
# With decoding
In [5]: str(c, 'utf-8')
Out[5]: '1234'
# Or simply use the decode str method
In [6]: c.decode()
Out[6]: '1234'
</code></pre>
<p>More on byte to string conversion here:
<a href="https://stackoverflow.com/questions/606191/convert-bytes-to-a-string">Convert bytes to a string?</a></p>
|
<p>I am trying to learn DNS in kubernetes with <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a></p>
<ol>
<li><p>I deployed the busybox</p></li>
<li><p><code>kubectl get pods busybox -o wide</code></p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
busybox 1/1 Running 0 16m 10.200.1.5 worker-1
</code></pre></li>
<li><p><code>kubectl exec -ti busybox -- nslookup kubernetes.default</code></p>
<pre><code>Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre></li>
<li><p>Do I need to modify the /etc/resolv.conf file of the worker-1 node? Currently the /etc/resolv.conf content is:</p>
<pre><code>nameserver 169.254.169.254
search c.k8s-project-193906.internal google.internal
</code></pre></li>
<li><p>Also the version of worker-1:</p>
<pre><code>lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
</code></pre></li>
</ol>
<p>Please help me figure out which configuration causes the resolution error. Do I need to change the resolv.conf file, and based on what?</p>
| <p>You have encountered a bug in the latest versions of the busybox docker image. Use the tag <code>busybox:1.28</code> instead of <code>latest</code>. This <a href="https://github.com/docker-library/busybox/issues/48" rel="noreferrer">bug link is here</a>:</p>
<pre><code>"Nslookup does not work in latest busybox image"
"1.27/1.28 are working , 1.29/1.29.1 are not"
</code></pre>
<p>Here it is <strong>failing</strong> with the <code>busybox:latest</code> tag.</p>
<pre><code>$ kubectl run busybox --image busybox:latest --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10:53
** server can't find kubernetes.default: NXDOMAIN
*** Can't find kubernetes.default: No answer
/ # exit
pod "busybox" deleted
</code></pre>
<p>Here's the same command <strong>succeeding</strong> with the <code>busybox:1.28</code> tag.</p>
<pre><code>$ kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # exit
pod "busybox" deleted
</code></pre>
|
<p>I am trying to add a new master, and I copy the certs and keys, e.g. <strong>/etc/kubernetes/pki/apiserver-kubelet-client.crt</strong>, from the current master to the new one. I noticed that after I do '<strong>kubeadm init --config=config.yaml</strong>' this key (probably all of them) changes (<em>kubeadm init</em> itself is successful). Why is this happening, and could it be the root cause of my new master being in <strong><em>NotReady</em></strong> status?</p>
<p><strong>systemctl status kubelet</strong> shows a lot of <code>Failed to list *v1.Node: Unauthorized</code> and <code>Failed to list *v1.Secret: Unauthorized</code> errors.</p>
<pre><code>docker@R90HE73F:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-0 Ready master 7d1h v1.13.1
k8s-master-1 Ready master 7d v1.13.1
k8s-master-2 NotReady master 104m v1.13.1
k8s-worker-0 Ready <none> 7d v1.13.1
k8s-worker-1 Ready <none> 7d v1.13.1
k8s-worker-2 Ready <none> 7d v1.13.1
</code></pre>
<p>Btw etcd cluster is healthy</p>
| <p>To add a new master to a kubernetes cluster, you need to copy four files from your existing kubernetes master's certificate directory before running <code>kubeadm init</code> on the new master. Those files are <code>ca.crt, ca.key, sa.pub, sa.key</code>; copy them to the <code>/etc/kubernetes/pki</code> folder on the new master. If you don't copy the <code>sa*</code> files, your kubernetes master will go into the <code>NotReady</code> state and will show those errors.</p>
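<p>A minimal sketch of that copy step, run from the existing master before <code>kubeadm init</code> on the new one (the <code>new-master</code> hostname is a placeholder for your environment):</p>
<pre><code># create the target directory on the new master, then copy the four files
ssh new-master "mkdir -p /etc/kubernetes/pki"
scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
    /etc/kubernetes/pki/sa.pub /etc/kubernetes/pki/sa.key \
    new-master:/etc/kubernetes/pki/
</code></pre>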
<p>For more information on how to setup kubernetes multi master, please check out my blog on kubernetes high availability:</p>
<p><a href="https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm" rel="nofollow noreferrer">https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm</a></p>
|
<p>I want to fetch the list of broker ids in a cluster using kubectl exec command.</p>
<p>I am able to run the commands from inside the pod and fetch the list of broker ids, however I need to find the list without having to go inside.</p>
<p>I am using kafka <a href="https://github.com/helm/charts/tree/master/incubator/kafka" rel="nofollow noreferrer">helm charts from incubator</a> and Kubernetes distribution which comes along with docker for Mac.</p>
<pre><code>kubectl exec hissing-warthog-kafka-1 -- /usr/bin/zookeeper-shell hissing-warthog-zookeeper:2181 <<< "ls /brokers/ids"
</code></pre>
<p><strong>Expected result:</strong></p>
<pre><code>Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: hissing-warthog-zookeeper:2181(CONNECTED) 0] ls /brokers/ids
[0, 1, 2]
</code></pre>
<p><strong>Actual result:</strong></p>
<pre><code>Connecting to hissing-warthog-zookeeper:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: hissing-warthog-zookeeper:2181(CONNECTED) 0]
</code></pre>
| <p>It should work in the following way:</p>
<pre><code>kubectl exec hissing-warthog-kafka-1 -- /usr/bin/zookeeper-shell hissing-warthog-zookeeper:2181 -c ls /brokers/ids
</code></pre>
<p>Hope this helps.</p>
|
<p>I tried this but it didn't work:</p>
<pre><code>minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user/app1:/minikube-host/app1 \
--mount-string /home/user/app2:/minikube-host/app2
</code></pre>
<p>but only <code>/home/user/app2</code> was mounted.</p>
| <p>You can run multiple <code>mount</code> commands after starting your <code>minikube</code> to mount the different folders:</p>
<pre><code>minikube mount /home/user/app1:/minikube-host/app1
minikube mount /home/user/app2:/minikube-host/app2
</code></pre>
<p>This will mount multiple folders in minikube.</p>
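<p>Note that each <code>minikube mount</code> process has to keep running for its mount to stay active, so — as a sketch — run them in separate terminals or background them:</p>
<pre><code>minikube mount /home/user/app1:/minikube-host/app1 &
minikube mount /home/user/app2:/minikube-host/app2 &
</code></pre>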
|
<p>I have built a custom tcserver image exposing ports 80, 8080 and 8443. Basically there is an Apache in front, and inside its configuration a proxy pass forwards requests to the tcserver Tomcat.<br/></p>
<pre><code>EXPOSE 80 8080 8443
</code></pre>
<p>After that I created a kubernetes yaml to build the pod exposing only port 80.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: tcserver
namespace: default
spec:
containers:
- name: tcserver
image: tcserver-test:v1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
</code></pre>
<p>And the service along with it.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: tcserver-svc
labels:
app: tcserver
spec:
type: NodePort
ports:
- port: 80
nodePort: 30080
selector:
app: tcserver
</code></pre>
<p>But the problem is that I'm unable to access it.<br/>
If I log into the pod (<code>kubectl exec -it tcserver -- /bin/bash</code>), I'm able to do a <code>curl -k -v http://localhost</code> and it replies.<br/><br/></p>
<p>I believe I'm doing something wrong with the service, but I don't know what.<br/>
Any help will be appreciated. </p>
<p><strong>SVC change</strong> <br/>
As suggested by sfgroups, I added the <code>targetPort: 80</code> to the svc, but still not working.</p>
<p>When I try to curl the IP, I get a <strong>No route to host</strong></p>
<pre><code>[root@testmaster tcserver]# curl -k -v http://172.30.62.162:30080/
* About to connect() to 172.30.62.162 port 30080 (#0)
* Trying 172.30.62.162...
* No route to host
* Failed connect to 172.30.62.162:30080; No route to host
* Closing connection 0
curl: (7) Failed connect to 172.30.62.162:30080; No route to host
</code></pre>
<p>This is the describe from the svc:</p>
<pre><code>[root@testmaster tcserver]# kubectl describe svc tcserver-svc
Name: tcserver-svc
Namespace: default
Labels: app=tcserver
Annotations: <none>
Selector: app=tcserver
Type: NodePort
IP: 172.30.62.162
Port: <unset> 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
</code></pre>
| <p>When you look at the <code>kubectl describe service</code> output, you'll see it's not actually attached to any pods:</p>
<pre><code>Endpoints: <none>
</code></pre>
<p>That's because you say in the service spec that the service will attach to pods <em>labeled</em> with <code>app: tcserver</code></p>
<pre><code>spec:
selector:
app: tcserver
</code></pre>
<p>But, in the pod spec's metadata, you don't specify any labels at all</p>
<pre><code>metadata:
name: tcserver
namespace: default
# labels: {}
</code></pre>
<p>And so the fix here is to add to the pod spec the appropriate label</p>
<pre><code>metadata:
labels:
app: tcserver
</code></pre>
<p>Also note that it's a little unusual in practice to deploy a bare pod. Usually they're wrapped up in a higher-level controller, most often a deployment, that actually creates the pods. The deployment spec has a template pod spec and it's the pod's labels that matter.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcserver
  # Labels here are useful, but the service doesn't look for them
spec:
  selector:
    # Required in apps/v1; must match the pod template labels below
    matchLabels:
      app: tcserver
  template:
    metadata:
      labels:
        # These labels are what the service cares about
        app: tcserver
    spec:
      containers: [...]
</code></pre>
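<p>Once the pod (or deployment) carries the matching label, a quick way to verify the wiring — using the names from the question — is to check that the service now has endpoints:</p>
<pre><code>kubectl get endpoints tcserver-svc
kubectl describe svc tcserver-svc   # Endpoints should no longer be <none>
</code></pre>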
|
<p>Team, I am trying to create a replica set but I am getting the following error:</p>
<p>error validating data: </p>
<blockquote>
<p>[ValidationError(ReplicaSet): unknown field "replicas" in
io.k8s.api.apps.v1.ReplicaSet, ValidationError(ReplicaSet): unknown
field "selector" in io.k8s.api.apps.v1.ReplicaSet,
ValidationError(ReplicaSet.spec): missing required field "selector" in
io.k8s.api.apps.v1.ReplicaSetSpec]; if you choose to ignore these
errors, turn validation off with --validate=false</p>
</blockquote>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: test-pod-10sec-via-rc1
labels:
app: pod-label
spec:
template:
metadata:
name: test-pod-10sec-via-rc1
labels:
app: feature-pod-label
namespace: test-space
spec:
containers:
- name: main
image: ubuntu:latest
command: ["bash"]
args: ["-xc", "sleep 10"]
volumeMounts:
- name: in-0
mountPath: /in/0
readOnly: true
volumes:
- name: in-0
persistentVolumeClaim:
claimName: 123-123-123
readOnly: true
nodeSelector:
kubernetes.io/hostname: node1
replicas: 1
selector:
matchLabels:
app: feature-pod-label
</code></pre>
| <p>You have an indentation issue in your yaml file: <code>replicas</code> and <code>selector</code> must sit under <code>spec</code>, next to <code>template</code>. The corrected yaml is:</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-pod-10sec-via-rc1
  labels:
    app: pod-label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: feature-pod-label
  template:
    metadata:
      name: test-pod-10sec-via-rc1
      namespace: test-space
      labels:
        app: feature-pod-label
    spec:
      containers:
      - name: main
        image: ubuntu:latest
        command: ["bash"]
        args: ["-xc", "sleep 10"]
        volumeMounts:
        - name: in-0
          mountPath: /in/0
          readOnly: true
      volumes:
      - name: in-0
        persistentVolumeClaim:
          claimName: 123-123-123
          readOnly: true
      nodeSelector:
        kubernetes.io/hostname: node1
</code></pre>
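<p>To check the manifest before creating anything — a sketch assuming it is saved as <code>replicaset.yaml</code> — a client-side dry run surfaces the same validation errors:</p>
<pre><code>kubectl apply -f replicaset.yaml --dry-run --validate=true
</code></pre>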
|
<p>Unable to create the Kubernetes Dashboard</p>
<p>I have set up my Kubernetes cluster using Kubeadm on Google Cloud Platform. I followed <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard</a> for creating the Kubernetes Dashboard, but I am unable to create it.</p>
<p>Please let me know how to create the Kubernetes Dashboard in Kubeadm method. </p>
| <p><a href="https://github.com/kubernetes/dashboard/wiki/Installation" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard/wiki/Installation</a></p>
<p>Create the certificate secret (your certs need to be in <code>$HOME/certs</code>):</p>
<pre><code>kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kube-system
</code></pre>
<p>Then apply the recommended deployment manifest:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>Then grant privileges as you see fit.</p>
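<p>One common (and very permissive) way to grant those privileges — a sketch, not something the dashboard itself requires; the name <code>dashboard-admin</code> is arbitrary — is a dedicated ServiceAccount bound to <code>cluster-admin</code>, whose token you can then use to sign in:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
</code></pre>
<p>The sign-in token can then be read from the service account's secret, e.g. <code>kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')</code>.</p>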
|
<p>I'm trying to launch two Cassandra statefulset instances and their respective PVCs in a cluster created across AWS AZs (3 zones: <em>eu-west-1a</em>, <em>eu-west-1b</em> &amp; <em>eu-west-1c</em>).</p>
<p>I created a node group with the following 2 nodes; as shown, these nodes are attached in the zones <em>eu-west-1a</em> and <em>eu-west-1b</em>:</p>
<pre><code>ip-192-168-47-86.eu-west-1.compute.internal - failure-domain.beta.kubernetes.io/zone=eu-west-1a,node-type=database-only
ip-192-168-3-191.eu-west-1.compute.internal - failure-domain.beta.kubernetes.io/zone=eu-west-1b,node-type=database-only
</code></pre>
<p>When I launch the Cassandra instances (using Helm) only one instance starts. The other instance shows the error,</p>
<pre><code>0/4 nodes are available: 2 node(s) didn't match node selector, 2 node(s) had no available volume zone.
</code></pre>
<p>The PVCs for these instances are bounded,</p>
<pre><code>kubectl get pvc -n storage -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-cc-cassandra-0 Bound pvc-81e30224-14c5-11e9-aa4e-06d38251f8aa 10Gi RWO gp2 4m
cassandra-data-cc-cassandra-1 Bound pvc-abd30868-14c5-11e9-aa4e-06d38251f8aa 10Gi RWO gp2 3m
</code></pre>
<p>However, the PVs show that they are in zones <em>eu-west-1b</em> & <em>eu-west-1c</em></p>
<pre><code>kubectl get pv -n storage --show-labels
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
pvc-81e30224-14c5-11e9-aa4e-06d38251f8aa 10Gi RWO Delete Bound storage/cassandra-data-cc-cassandra-0 gp2 7m failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1b
pvc-abd30868-14c5-11e9-aa4e-06d38251f8aa 10Gi RWO Delete Bound storage/cassandra-data-cc-cassandra-1 gp2 6m failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1c
</code></pre>
<p>I have tried adding the following topology to the <code>StorageClass</code> to no avail,</p>
<pre><code>allowedTopologies:
- matchLabelExpressions:
- key: failure-domain.beta.kubernetes.io/zone
values:
- eu-west-1a
- eu-west-1b
</code></pre>
<p>But despite this I can still see the PVs in the zones <code>eu-west-1b</code> &amp; <code>eu-west-1c</code>. </p>
<p>Using K8 1.11.</p>
<p>Any other possible fixes ?</p>
| <p>Looking at <a href="https://v1-11.docs.kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">https://v1-11.docs.kubernetes.io/docs/concepts/storage/storage-classes/</a>, <code>allowedTopologies</code> doesn't exist in 1.11.</p>
<p>So I used <code>zones: eu-west-1a, eu-west-1b</code> in the <code>StorageClass</code>, which seems to have worked:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2   # name taken from the STORAGECLASS column in the PVC listing above
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: eu-west-1a, eu-west-1b
</code></pre>
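<p>After recreating the claims against this StorageClass, the zone labels on the resulting PVs can be re-checked the same way as in the question:</p>
<pre><code>kubectl get pv --show-labels
</code></pre>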
|
<p>When we run <code>helm install ./ --name release1 --namespace namespace1</code>, it creates the chart only if none of the objects exist; otherwise it fails, saying that the deployment, secret, or other object already exists.</p>
<p>I want helm to create the Kubernetes deployments or other objects only if they do not already exist; if they do exist, helm should apply the templates to them instead of trying to create them.</p>
<p>I have already tried <code>helm install</code> with a pre-existing secret that is also defined in the helm templates, and the install fails.</p>
| <p>Short answer, I would try <a href="https://docs.helm.sh/helm/#helm-upgrade" rel="nofollow noreferrer"><code>helm upgrade</code></a>.</p>
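<p>For example — a sketch using the chart path, release name, and namespace from the question — the <code>--install</code> flag makes the command create the release if it doesn't exist yet and upgrade it in place if it does:</p>
<pre><code>helm upgrade --install release1 ./ --namespace namespace1
</code></pre>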
|
<p>How do I set up a Kubernetes Ingress and Controller to essentially do what the following nginx.conf file does: </p>
<pre><code>upstream backend {
server server1.example.com weight=5;
server server2.example.com:8080;
server backup1.example.com:8080 backup;
}
</code></pre>
<p>I want one http endpoint to map to multiple Kubernetes services with a preference for a primary one but also have a backup one. (For my particular project, I need to have multiple services instead of one service with multiple pods.)</p>
<p>Here's my attempted ingress.yaml file. I'm quite certain that the way I'm listing the multiple backends is incorrect. How would I do it? And how do I set the "backup" flag?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: fanout-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: "nginx"
# kubernetes.io/ingress.global-static-ip-name: "kubernetes-ingress"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: server1
servicePort:
- path: /
backend:
serviceName: server2
servicePort: 8080
- path: /
backend:
serviceName: backup1
servicePort: 8080
</code></pre>
<p>I'm running Kubernetes on GKE.</p>
| <p>You can do <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="noreferrer">simple fanout</a> based on path or <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting" rel="noreferrer">name based virtual hosting</a>. </p>
<p>However, you'd need to distinguish based on something (other than port, since it's an Ingress), so your two options would be virtual host or path. </p>
<p>Paths will not work with some services that expect a standard path. Judging based on your example you'd most likely want to have something like a.example.com and b.example.com. Here's the example from the Kubernetes docs:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: name-virtual-host-ingress
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: service1
servicePort: 80
- host: bar.foo.com
http:
paths:
- backend:
serviceName: service2
servicePort: 80
</code></pre>
|
<p>I have a testing Kubernetes cluster running in remote VMs (on vSphere), and I have full access to the VMs through <code>ssh</code> (they have private IPs). How can I expose services and access them from outside the cluster (from my remote laptop trying to get access to the machines), knowing that I can remotely perform all kubectl commands?</p>
<p>For example: I tried with the dashboard. I installed it, changed the service to NodePort, and tried to access it from my laptop using the URL <code>http://master-private-ip:exposedport</code> (also with worker IPs), but it does not work. The browser only shows <code>�</code> (binary output). When I try to connect through <code>https</code>, it throws a certificate error.</p>
<pre><code>$ kubectl get svc -n kube-system -l k8s-app=kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.97.143.110 <none> 443:30714/TCP 42m
$ kubectl proxy -p 8001
$ curl http://172.16.5.226:30714 --output -
</code></pre>
<p>I expected the output to show me the <code>html</code> of the Kubernetes dashboard UI.</p>
| <blockquote>
<p>NOTE: Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in. Nothing will happen after clicking Sign in button on login page.</p>
</blockquote>
<p>If you have done everything correctly it should work over <code>HTTPS</code></p>
<p>As it's explained in <a href="https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above" rel="nofollow noreferrer">Accessing Dashboard 1.7.X and above</a>.</p>
<p>In order to expose Dashboard using <code>NodePort</code> you need to edit <code>kubernetes-dashboard</code> service.</p>
<p><code>kubectl -n kube-system edit service kubernetes-dashboard</code></p>
<p>Find <code>type: ClusterIP</code> and change it to <code>type: NodePort</code>, then save the file.</p>
<p>Then, check which port was the Dashboard exposed to:</p>
<p><code>kubectl -n kube-system get service kubernetes-dashboard</code>,
which might look like this:</p>
<pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.100.124.90 <nodes> 443:31707/TCP 21h
</code></pre>
<p>To access the Dashboard navigate your browser to <code>https://<server_IP>:31707</code></p>
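<p>As a quick check from the laptop before using the browser — a sketch using the node IP and NodePort from your own service listing, with <code>-k</code> to accept the self-signed certificate — you should get HTML back over HTTPS:</p>
<pre><code>curl -k https://172.16.5.226:30714/
</code></pre>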
<p><strong>EDIT:</strong></p>
<p>In your case with self-signed certificate, you need to put it into a secret. It has to be named <code>kubernetes-dashboard-certs</code> and it has to be in <code>kube-system</code> namespace.</p>
<p>You have to save the cert as <code>dashboard.crt</code> and <code>dashboard.key</code> and store them under <code>$HOME/certs</code>.</p>
<pre><code>kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kube-system
</code></pre>
<p>This installation process is explained <a href="https://github.com/kubernetes/dashboard/wiki/Installation#recommended-setup" rel="nofollow noreferrer">here</a>.</p>
|