<p>When the Ingress Nginx controller reaches its full capacity, does it auto-scale? Is Kubernetes Ingress even scalable?</p>
<p>In principle, the NGINX ingress controller is indeed scalable -- it pulls its entire configuration from the Kubernetes API server and is in itself basically stateless.</p> <p>In practice, this depends very much on how your ingress controller is set up. First of all, the ingress controller will not auto-scale by itself. If you have deployed it using a <code>Deployment</code> controller, you can use <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">horizontal pod autoscaling</a> as described in the documentation. If you have deployed it using a <code>DaemonSet</code>, the ingress controller will scale up and down with your cluster (perhaps even automatically, if you're using the <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noreferrer">cluster autoscaler</a>).</p> <p>In both scenarios, you're going to need a <code>Service</code> definition (possibly of type <code>NodePort</code> or <code>LoadBalancer</code>, to allow for external traffic) that matches all pods created by the deployment/daemon set to distribute traffic among them.</p>
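<p>For reference, a minimal horizontal pod autoscaler manifest for a Deployment-based controller might look like the sketch below. The deployment name <code>nginx-ingress-controller</code> and namespace <code>ingress-nginx</code> are assumptions -- adjust them to your installation. CPU-based scaling also requires resource requests on the controller pods and a metrics source (heapster or metrics-server) in the cluster.</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
</code></pre>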
<p>I have a node with 2 pods. Each pod will need to speak to the other. The problem is that I do not want to hardcode the IPs for the communication.</p> <p>Similar to a DNS service, I need something (that does not change) that can sit above the pod IP and allow me to still communicate between the pods.</p>
<p>I can see two solutions.</p> <p><strong>Solution 1:</strong> Use a Service</p> <p>If they are not from the same controller and each has its own Service, they can use those Services to communicate with each other.</p> <pre><code>Service 1 -&gt; Pod 1 Service 2 -&gt; Pod 2 </code></pre> <p>Use the other Pod's Service:</p> <pre><code>Pod 1: Service 2 -&gt; Pod 2 Pod 2: Service 1 -&gt; Pod 1 </code></pre> <p><strong>Solution 2:</strong> Use a StatefulSet</p> <p>If they need the same controller, you can use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSet</a>.</p> <blockquote> <p>For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, in the range [0, N-1], that is unique over the Set</p> </blockquote> <p>That means the Pod name will be: <code>$(statefulset name)-$(ordinal)</code></p> <p>If you have a StatefulSet named <code>nginx</code> with 2 replicas, you will have two Pods, <code>nginx-0</code> &amp; <code>nginx-1</code>.</p> <blockquote> <p>As each Pod is created, it gets a matching DNS subdomain, taking the form: <code>$(podname).$(governing service domain)</code></p> </blockquote> <p><strong>From Pod <code>nginx-0</code>:</strong> <code>nginx-1.{governing-service}.{namespace}.svc.cluster.local</code> -> <strong><code>nginx-1</code></strong></p> <p><strong>From Pod <code>nginx-1</code>:</strong> <code>nginx-0.{governing-service}.{namespace}.svc.cluster.local</code> -> <strong><code>nginx-0</code></strong></p> <p>Using this <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="noreferrer">Stable Network ID</a>, these Pods can communicate with each other.</p>
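<p>As a concrete sketch (the names below are illustrative), the governing headless Service and the StatefulSet for the <code>nginx</code> example could look like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx            # this is the governing service
  labels:
    app: nginx
spec:
  clusterIP: None        # headless service, required for stable per-Pod DNS
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: nginx     # must reference the headless service above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
          name: web
</code></pre> <p>With this in place, <code>nginx-0</code> can reach <code>nginx-1</code> at <code>nginx-1.nginx.default.svc.cluster.local</code> (assuming the <code>default</code> namespace), and vice versa.</p>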
<p>I am running a KOPS Kubernetes cluster on AWS, trying to make <a href="https://github.com/Yolean/kubernetes-kafka" rel="nofollow noreferrer">kubernetes-kafka</a> example work with an Elastic Load Balancer. Here is the external services portion for two of the brokers:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: outside-0 namespace: kafka spec: selector: app: kafka kafka-broker-id: "0" ports: - protocol: TCP targetPort: 9094 port: 32400 nodePort: 32400 type: NodePort --- kind: Service apiVersion: v1 metadata: name: outside-1 namespace: kafka spec: selector: app: kafka kafka-broker-id: "1" ports: - protocol: TCP targetPort: 9094 port: 32401 nodePort: 32401 type: NodePort </code></pre> <p>Here is my attempt to expose those brokers via an ELB (actual FQDN replaced with <code>my.copmany.com</code>).</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka-0 annotations: dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com spec: type: LoadBalancer ports: - port: 32400 name: outside targetPort: 32400 selector: app: outside-0 --- apiVersion: v1 kind: Service metadata: name: kafka-1 annotations: dns.alpha.kubernetes.io/external: kafka-1.kafka.my.company.com spec: type: LoadBalancer ports: - port: 32401 name: outside targetPort: 32401 selector: app: outside-1 </code></pre> <p>Looking at AWS ELB console shows 0 of 3 instances available for each of the Kafka ELB brokers and producing to <code>kafka-1.kafka.my.company.com:9094</code> using Kafka command line client times out. How can <code>outside-0</code> NodePort service be exposed via <code>kafka-0</code> LoadBalancer service? Or are there other approached to be considered?</p>
<p>Kafka is very particular about clients needing direct access to the broker that is the leader of a partition. To achieve this, I did the following:</p> <p>1) Set up the ConfigMap to dynamically override values for advertised.listeners based on the pod's ordinal value</p> <pre><code>POD_ID=${HOSTNAME##*-} kafka-server-start.sh server.properties \ --override advertised.listeners=INSIDE://`hostname -f`:9092,OUTSIDE://kafka-${POD_ID}.kafka.my.company.com:9094 \ --override broker.id=${POD_ID} \ --override listeners=INSIDE://:9092,OUTSIDE://:9094 </code></pre> <p>2) Create a LoadBalancer service for each Kafka pod. Change the selector to match your kafka-pod-id.</p> <pre><code> apiVersion: v1 kind: Service metadata: name: kafka-0 annotations: dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com spec: externalTrafficPolicy: Local type: LoadBalancer ports: - port: 9094 name: outside targetPort: 9094 selector: app: kafka kafka-pod-id: "0" --- apiVersion: v1 kind: Service metadata: name: kafka-1 annotations: dns.alpha.kubernetes.io/external: kafka-1.kafka.my.company.com spec: externalTrafficPolicy: Local type: LoadBalancer ports: - port: 9094 name: outside targetPort: 9094 selector: app: kafka kafka-pod-id: "1" </code></pre>
<p>I am a newbie with both Kubernetes and CockroachDB. I have a secure multi-node database running on a Kubernetes cluster and have been able to expose it externally and monitor it using the cockroach UI.</p> <p><a href="https://i.stack.imgur.com/C9Ozc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C9Ozc.png" alt="service list"></a></p> <p>Now I want to access it from a node app using either pg for Node or sequelize. However, I get connection timeouts.</p> <p>I believe I have to create a client certificate and use it to make the connection, but I can find no Kubernetes-specific docs on how to do this. If anyone has succeeded with this, how did you create the cert, and how did you use it in your node app?</p>
<p>I've figured out how to do this by picking up the certificates I generated for the local sql client and re-using them in my node app - like this:</p> <pre><code>const fs = require('fs'); const { Pool } = require('pg'); const pool = new Pool({ host: 'xxxx', max: 20, idleTimeoutMillis: 30000, connectionTimeoutMillis: 2000, port:26257, user:"root", database:"xxxx", ssl : { rejectUnauthorized : false, ca : fs.readFileSync("./ca/ca.crt").toString(), key : fs.readFileSync("./ca/client.xxxx.key").toString(), cert : fs.readFileSync("./ca/client.xxxx.crt").toString() } }); </code></pre> <p>I exposed the public pod as a service in kubernetes like this:</p> <pre><code>kubectl expose service cockroachdb-public --port=26257 --target-port=26257 --name=cp --type=LoadBalancer </code></pre> <p>and picked up the external IP address that eventually got assigned to the service.</p> <p>Pretty straightforward actually, but head-scratching when you approach it for the first time. Thanks to those that took the time to comment.</p> <p>@samstride, just noticed your comment. It's probably better to use a user other than root, but you can get these certs like this (there are probably other ways too).</p> <p>ca (using the cockroachdb-client-secure pod if you still have it running):</p> <pre><code>kubectl exec cockroachdb-client-secure -it -- cat /cockroach-certs/ca.crt &gt; ./ca.crt </code></pre> <p>key:</p> <pre><code>kubectl get secret default.client.root -o jsonpath='{.data.key}' | base64 --decode &gt; client.root.key </code></pre> <p>cert:</p> <pre><code>kubectl get secret default.client.root -o jsonpath='{.data.cert}' | base64 --decode &gt; client.root.crt </code></pre>
<p>I deploy kube-scheduler using <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/</a> .</p> <p>I followed the steps exactly at the beginning however it does not schedule the node using "my-scheduler" the node is pending instead.</p> <p>The log of "my-scheduler" pod is </p> <pre><code>E0207 20:35:43.079477 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:kube-system:default" cannot list poddisruptionbudgets.policy at the cluster scope E0207 20:35:43.080416 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope E0207 20:35:43.081490 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:default" cannot list persistentvolumes at the cluster scope E0207 20:35:43.082515 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:593: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:default" cannot list pods at the cluster scope E0207 20:35:43.083566 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:default" cannot list nodes at the cluster scope E0207 20:35:43.084795 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:default" cannot list replicationcontrollers at the cluster scope E0207 20:35:44.077899 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:default" cannot list persistentvolumeclaims at the cluster scope E0207 20:35:44.078410 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:serviceaccount:kube-system:default" cannot list replicasets.extensions at the cluster scope E0207 20:35:44.079496 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:serviceaccount:kube-system:default" cannot list statefulsets.apps at the cluster scope E0207 20:35:44.080585 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:serviceaccount:kube-system:default" cannot list poddisruptionbudgets.policy at the cluster scope E0207 20:35:44.081675 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:default" cannot list services at the cluster scope E0207 20:35:44.082726 1 reflector.go:205] 
k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:kube-system:default" cannot list persistentvolumes at the cluster scope E0207 20:35:44.083811 1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:593: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:default" cannot list pods at the cluster scope E0207 20:35:44.084887 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:default" cannot list nodes at the cluster scope E0207 20:35:44.085921 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:87: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:serviceaccount:kube-system:default" cannot list replicationcontrollers at the cluster scope </code></pre> <p>It seems it does not have permission to access resources. I tried configured RBAC as the link says but it does not help.</p> <p>Please help me if you ever tried this.</p>
<p>I don't know why the new scheduler uses "system:serviceaccount:kube-system:default" instead of "system:kube-system". The quick (but very broad) solution is:</p> <pre><code>kubectl create clusterrolebinding kube-system-cluster-admin --user system:serviceaccount:kube-system:default --clusterrole cluster-admin </code></pre>
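<p>A more narrowly scoped alternative (a sketch, not taken from the setup above) is to run the custom scheduler under its own ServiceAccount, bind it to the built-in <code>system:kube-scheduler</code> ClusterRole, and set <code>serviceAccountName: my-scheduler</code> in the scheduler Deployment's pod spec. Depending on the scheduler version you may still need to grant a few extra permissions on top of that role.</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-scheduler-as-kube-scheduler
subjects:
- kind: ServiceAccount
  name: my-scheduler
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-scheduler
  apiGroup: rbac.authorization.k8s.io
</code></pre>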
<p>When I run <code>kubeadm init</code>, it shows a token that I should use from the other hosts to connect to this host. Where is that token stored? I'm trying to figure out more ways to automate this by adding kubeadm inside of Terraform.</p> <p>Thanks for any help!</p>
<p>The instructions for Kubernetes 1.9.x (and above) can be found <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/#cmd-token-create" rel="noreferrer">here</a>.</p> <p>The commands I used are:</p> <pre><code>kubeadm token generate kubeadm token create &lt;generated-token&gt; --print-join-command --ttl=0 </code></pre>
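<p>To answer the "where is it stored" part: bootstrap tokens are stored as Secrets of type <code>bootstrap.kubernetes.io/token</code> in the <code>kube-system</code> namespace, so (as a sketch) you can inspect the existing ones with:</p> <pre><code>kubeadm token list
kubectl -n kube-system get secrets | grep bootstrap-token
</code></pre>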
<p>Have been trying to get the python client for kubernetes work as expected. Had good luck with <code>config.load_kube_config()</code> Now i'd like to use the Python client from a remote machine which neither has <code>kubectl</code> nor <code>~/.kube/config</code> on it. Tried the python client API reference such as <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#list_pod_for_all_namespaces" rel="noreferrer">this</a></p> <p>Snippet:</p> <pre><code>from __future__ import print_function import time import kubernetes.client from kubernetes.client.rest import ApiException from pprint import pprint # Configure API key authorization: BearerToken configuration = kubernetes.client.Configuration() configuration.api_key['authorization'] = 'YOUR_API_KEY' # Uncomment below to setup prefix (e.g. Bearer) for API key, if needed # configuration.api_key_prefix['authorization'] = 'Bearer' </code></pre> <p>At first it seemed i need to use the default service-account-token where it says:</p> <p><code>configuration.api_key['authorization'] = 'YOUR_API_KEY</code></p> <p>Soon realized that is not the case after seeing a lot of <code>[SSL: CERTIFICATE_VERIFY_FAILED]</code></p> <p>Can someone please give a hint to where i can obtain this from my new install? Used kubeadm to bring up this cluster.</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T21:10:44Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>&lt;--------- Edit below ----------></p> <p>Managed to get token from master using:</p> <pre><code>$ kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t' </code></pre> <p>The above result is named <code>ApiToken</code> in below Python snippet.</p> <pre><code>from kubernetes import client, config from kubernetes.client.rest import ApiException ApiToken = 'eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tbXF0eG4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA5NWQ3ZGVhLTA5MDgtMTFlOC04NTFiLTA4MDAyNzk0OGE2OSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.DBk6gyh4BFy-Gc94dihasYXLbspMvMEjuzFS-AEyNUwk6pR1zsdYOqxo5J-0t6qHN09JyyNK5Oz75cR6bYOGxir1a7SveQpXly4S2Iu3K3o6n8ys_kdP4lNMgBZy--rE0h4neG9s91ven36XP4nYZMwvWal56w39nCUmkomR2-DfhaD4-_Mqq2bd7lmETNinD2hpzTa9cf46VTTY0kcIwhk8FzxEtPA3kxoZul0AfpZT2QlyzLk9fTBRPjd57XbktBgQmiO2wppa_A1KN1Kg83fk1p40hSfY4Vf7Dr76rmKgAUVae-qkN725FWj-4NqzktjyAqalli5jcHo2leJv0A' configuration = client.Configuration() configuration.host = 'https://192.168.0.110:6443' configuration.verify_ssl=False configuration.debug = True configuration.api_key={"authorization":"Bearer "+ ApiToken} client.Configuration.set_default(configuration) kubeApi = client.CoreV1Api() try: allPods = kubeApi.list_pod_for_all_namespaces(watch=False) except ApiException as e: print("Exception when 
calling CoreV1Api-&gt;list_pod_for_all_namespaces: %s\n" % e) </code></pre> <p>Response is a <code>HTTP/1.1 403 Forbidden</code> message.</p> <pre><code>reply: 'HTTP/1.1 403 Forbidden\r\n' 2018-02-10 09:37:49,801 DEBUG https://192.168.0.110:6443 "GET /api/v1/pods?watch=False HTTP/1.1" 403 243 header: Content-Type header: X-Content-Type-Options header: Date header: Content-Length Exception when calling CoreV1Api-&gt;list_pod_for_all_namespaces: (403) Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Sat, 10 Feb 2018 04:07:46 GMT', 'Content-Length': '243'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:default:default\" cannot list pods at the cluster scope","reason":"Forbidden","details":{"kind":"pods"},"code":403} </code></pre> <p>Troubleshooting this stage now.</p>
<p>Maybe you need a token.</p> <p>You can use any of the secrets in the <code>kube-system</code> namespace:</p> <pre><code>$ kubectl get secrets -n kube-system $ kubectl describe secret/{secret_name} -n kube-system </code></pre>
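<p>Regarding the <code>403 Forbidden</code> in your edit: the token is being accepted, but the <code>default</code> service account in the <code>default</code> namespace has no RBAC permission to list pods at the cluster scope. A quick (and deliberately coarse) sketch of a fix is to bind that service account to a role that can list pods, for example:</p> <pre><code>kubectl create clusterrolebinding default-sa-view \
  --clusterrole=view \
  --serviceaccount=default:default
</code></pre> <p>A dedicated service account with a narrower role would be preferable for anything beyond experimentation.</p>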
<p>I'm trying to mount mongo <code>/data</code> directory on to a NFS volume in my kubernetes master machine for persisting mongo data. I see the volume is mounted successfully but I can see only <code>configdb</code> and <code>db</code> dirs but not their subdirectories. And I see the data is not even persisting in the volume. when I <code>kubectl describe &lt;my_pv&gt;</code> it shows <code>NFS (an NFS mount that lasts the lifetime of a pod)</code> </p> <p>Why is that so?</p> <p>I see in kubernetes docs stating that:</p> <blockquote> <p>An nfs volume allows an existing NFS (Network File System) share to be mounted into your pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be “handed off” between pods. NFS can be mounted by multiple writers simultaneously.</p> </blockquote> <p>I'm using kubernetes version 1.8.3.</p> <p>mongo-deployment.yml:</p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: mongo labels: name: mongo app: mongo spec: replicas: 3 selector: matchLabels: name: mongo app: mongo template: metadata: name: mongo labels: name: mongo app: mongo spec: containers: - name: mongo image: mongo:3.4.9 ports: - name: mongo containerPort: 27017 protocol: TCP volumeMounts: - name: mongovol mountPath: "/data" volumes: - name: mongovol persistentVolumeClaim: claimName: mongo-pvc </code></pre> <p>mongo-pv.yml:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mongo-pv labels: type: NFS spec: capacity: storage: 5Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: slow mountOptions: - hard - nfsvers=4.1 nfs: path: "/mongodata" server: 172.20.33.81 </code></pre> <p>mongo-pvc.yml:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mongo-pvc spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi storageClassName: slow selector: matchLabels: type: NFS </code></pre> <p>The way I mounted my nfs share on my kubernetes master machine:</p> <pre><code>1) apt-get install nfs-kernel-server 2) mkdir /mongodata 3) chown nobody:nogroup -R /mongodata 4) vi /etc/exports 5) added the line "/mongodata *(rw,sync,all_squash,no_subtree_check)" 6) exportfs -ra 7) service nfs-kernel-server restart 8) showmount -e ----&gt; shows the share </code></pre> <p>I logged into the bash of my pod and I see the directory is mounted correctly but data is not persisting in my nfs server (kubernetes master machine). </p> <p>Please help me see what I am doing wrong here.</p>
<p>It's possible that the pods don't have permission to create files and directories. You can <code>exec</code> into your pod and try to <code>touch</code> a file in the NFS share; if you get a permission error, you can relax the permissions on the file system and in the <code>exports</code> file to allow write access.</p> <p>It's also possible to specify a <code>GID</code> in the PV object to avoid permission-denied issues: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#access-control" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#access-control</a></p>
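<p>A sketch of the GID approach, applied to the PV from the question (the group ID <code>1000</code> is an assumption -- use the group that owns the exported directory on the NFS server):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  annotations:
    pv.beta.kubernetes.io/gid: "1000"   # pods using this PV get this supplemental group
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: "/mongodata"
    server: 172.20.33.81
</code></pre>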
<p>I'm trying to limit the number of concurrent connections to servers in my Nginx ingress.</p> <p>Is max_conns supported in the Nginx ingress? How can I edit or add it?</p> <blockquote> <p>max_conns=number limits the maximum number of simultaneous active connections to the proxied server (1.11.5). Default value is zero, meaning there is no limit. If the server group does not reside in the shared memory, the limitation works per each worker process.</p> </blockquote> <p><a href="http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream" rel="nofollow noreferrer">http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream</a></p> <p>Example of an Nginx conf using max_conns:</p> <pre><code>upstream backend { server backend1.example.com max_conns=3; server backend2.example.com;} </code></pre> <p>Thanks</p>
<p>So, what needed to be done in order to add max_conns (or any other parameter that is not supported by the ingress configmap) - is to change the template.</p> <p>changing the template /etc/nginx/template/nginx.tmpl like this:</p> <pre><code>upstream {{ $upstream.Name }} { # Load balance algorithm; empty for round robin, which is the default {{ if ne $cfg.LoadBalanceAlgorithm "round_robin" }} {{ $cfg.LoadBalanceAlgorithm }}; {{ end }} {{ if $upstream.UpstreamHashBy }} hash {{ $upstream.UpstreamHashBy }} consistent; {{ end }} {{ if (gt $cfg.UpstreamKeepaliveConnections 0) }} keepalive {{ $cfg.UpstreamKeepaliveConnections }}; {{ end }} {{ range $server := $upstream.Endpoints }}server {{ $server.Address | formatIP }}:{{ $server.Port }} max_fails={{ $server.MaxFails }} fail_timeout={{ $server.FailTimeout }} max_conns=1; {{ end }} } </code></pre> <p>(you can get the full file from the pod nginx-ingress-controller, just run bash on the pod and cat it) will do the trick. now create a configmap with the local nginx.tmpl:</p> <pre><code>kubectl create configmap nginx-template --from-file=nginx.tmpl=/localpath/nginx.tmpl </code></pre> <p>and then mount a volume to the deployment with this yaml:</p> <pre><code> volumeMounts: - mountPath: /etc/nginx/template name: nginx-template-volume readOnly: true volumes: - name: nginx-template-volume configMap: name: nginx-template items: - key: nginx.tmpl path: nginx.tmpl </code></pre> <ul> <li>i needed to restart my NGINX ingress manually but i edited the ReplicationController because i didn't have a deployment (i guess its because im on minikube)</li> </ul>
<p>I'm interested in using Kubernetes NetworkPolicy to control network policy. I want to know if the NetworkPolicy is blocking traffic so I can either fix the policies or fix/stop whatever is in violation.</p> <p>We use Calico and they view this as a paid feature. <a href="https://github.com/projectcalico/calico/issues/1035" rel="noreferrer">https://github.com/projectcalico/calico/issues/1035</a></p> <p>Cilium has <code>cilium monitor</code> which sounds like it would work if we started using Cilium. <a href="http://docs.cilium.io/en/latest/troubleshooting/" rel="noreferrer">http://docs.cilium.io/en/latest/troubleshooting/</a></p> <p>Is there a general, vendor-neutral way to monitor network traffic that violates Kubernetes NetworkPolicy?</p>
<p>AFAIU, there is no way to create such a vendor-neutral tool, because NetworkPolicy is just an abstraction. Each networking plugin enforces policies differently (Cilium does it mostly in BPF for L3 and L4 and in Envoy for L7), so each plugin needs to provide its own means of accessing this information.</p> <p>AFAIK, there is no initiative in the Kubernetes community to store this information and provide an interface for CNI plugins to expose it, but it seems like it would be a fun project.</p> <p>Disclaimer: I am on the Cilium dev team.</p>
<p>I was using the <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">kubernetes-plugin</a>. Its README shows how to write a scripted pipeline with multiple container images, like</p> <pre><code>podTemplate(label: 'mypod', containers: [ containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'), containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat') ]) { node('mypod') { </code></pre> <p>I tried the following for a declarative pipeline.</p> <pre><code>pipeline { agent { kubernetes { //cloud 'kubernetes' label 'mypod' containerTemplate { name 'maven' image 'maven:3.3.9-jdk-8-alpine' ttyEnabled true command 'cat' } containerTemplate { name 'containtertwo' image 'someimage' ttyEnabled true } } } </code></pre> <p>It creates a pod with only one container.</p> <p>How can I use multiple containerTemplates with a declarative pipeline?</p>
<p>This isn't a solution to your problem, but it is some information I found after looking.</p> <p>The <a href="https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/pipeline/KubernetesDeclarativeAgent.java#L28" rel="nofollow noreferrer">KubernetesDeclarativeAgent</a> only has a single <code>containerTemplate</code>. Whichever <code>containerTemplate</code> is at the bottom of your collection of containers will be the one that is used.</p> <p>In your example it will be <code>containtertwo</code>.</p> <p>You can't have multiple top-level <code>agents</code>, and you can't have multiple <code>kubernetes</code> blocks within an agent. And now you can't have multiple containers. I would prefer if an error or warning of some kind was thrown for this.</p> <p>There are two workarounds I can think of. If you must use declarative, then you can add an <code>agent</code> to your <code>stage</code>, but this can lead to its own issues. The other is the scripted pipeline, which is what I am going to do.</p> <p>The documentation on this leaves much to be desired.</p>
<p>When I follow the kubernetes guide to install kubernetes on my local machine which runs fedora 27. I have already installed docker-ce 17.12(followed the Docker CE documentation).</p> <p>bellow is the install output. is there any body can figure out what should I do to make kubernetes installed but also keep my docker-ce?</p> <pre><code>[ian@thinkpad ~]$ sudo LANG=en_US.UTF-8 dnf install kubernetes Last metadata expiration check: 7:31:34 ago on Sat 13 Jan 2018 12:51:44 PM CST. Error: Problem: package kubernetes-node-1.7.3-1.fc27.x86_64 requires docker, but none of the providers can be installed - package docker-ce-17.12.0.ce-1.fc27.x86_64 conflicts with docker provided by docker-2:1.13.1-42.git4402c09.fc27.x86_64 - package docker-ce-17.12.0.ce-1.fc27.x86_64 conflicts with docker provided by docker-2:1.13.1-26.gitb5e3294.fc27.x86_64 - problem with installed package docker-ce-17.12.0.ce-1.fc27.x86_64 - package kubernetes-1.7.3-1.fc27.x86_64 requires kubernetes-node = 1.7.3-1.fc27, but none of the providers can be installed - conflicting requests (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages) [ian@thinkpad ~]$ docker version Client: Version: 17.12.0-ce API version: 1.35 Go version: go1.9.2 Git commit: c97c6d6 Built: Wed Dec 27 20:12:17 2017 OS/Arch: linux/amd64 Server: Engine: Version: 17.12.0-ce API version: 1.35 (minimum version 1.12) Go version: go1.9.2 Git commit: c97c6d6 Built: Wed Dec 27 20:14:50 2017 OS/Arch: linux/amd64 Experimental: false [ian@thinkpad ~]$ </code></pre>
<p>I've just requested a push of the f27 update of kubernetes-1.9.1 [1] to the stable repository, which allows installing either docker or docker-ce. It should take a day or two to transit. In the meantime you can install the kubernetes-node rpm from the update [1]. I tested that locally. Let me know if it does not work for you.</p> <p>[1] <a href="https://bodhi.fedoraproject.org/updates/FEDORA-2018-d0a64cb26c" rel="nofollow noreferrer">https://bodhi.fedoraproject.org/updates/FEDORA-2018-d0a64cb26c</a></p> <p>Regards, Jan</p>
<p>I'm trying to run two services with k8s running on minikube installation. So I have the sevice <code>Foo</code> and <code>Bar</code>, so when I'm accessing the service <code>Foo</code> it must trigger service <code>Bar</code> to get the data.</p> <p>Here is the deployment configuration for <code>Foo</code> service <code>foo-deployment.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: foo-server labels: app: foo-server spec: replicas: 1 template: metadata: labels: app: foo-server serving: "true" spec: containers: - name: foo-server image: foo-container env: - name: BAR_SERVICE_URL value: http://bar-server:8081 ports: - containerPort: 8080 name: http </code></pre> <p><code>foo-svc.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: foo-server labels: app: foo-server spec: type: NodePort ports: - port: 8080 targetPort: 8080 name: http selector: app: foo-server serving: "true" </code></pre> <p>The <code>BAR_SERVICE_URL</code> evn variable is used inside the <code>Foo</code> so the service knows the host to <code>Bar</code>. And here is the service configuration for <code>Bar</code> service <code>bar-svc.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: bar-server labels: app: bar-server spec: type: NodePort ports: - port: 8081 targetPort: 8081 name: http selector: app: bar-server </code></pre> <p><code>bar-deployment.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: bar-server-v1 labels: app: bar-server version: "1.0" spec: replicas: 2 template: metadata: labels: app: bar-server version: "1.0" serving: "true" spec: containers: - name: bar-server image: bar-container ports: - containerPort: 8081 name: http </code></pre> <p>When I'm accessing the <code>Foo</code> service with the minikube ip and exposed service port I receive the following error in the pod logs:</p> <pre><code>2018-02-08 14:32:25.875 ERROR 1 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://bar-server:8081/endpoint": bar-server; nested exception is java.net.UnknownHostException: bar-server] with root cause java.net.UnknownHostException: bar-server </code></pre> <p>However, I can access the <code>bar-server/endpoint</code> using the minikube ip and exposed port like this <code>http://192.168.99.100:31168/endpoint</code>.</p> <p>So looks like I can access the <code>Foo</code> and <code>Bar</code> services from outside the cluster. But <code>Foo</code> service cannot resolve the <code>Bar</code> service host and cannot access it.</p> <p><strong>UPD</strong>: Here are some update with troubleshouting details. </p> <ul> <li>On <code>minikube</code> the <code>kube-dns</code> addon is enabled. </li> <li>Trying from the iside of <code>Foo</code> pod to <code>wget</code> to the <code>Bar</code> service endpoint. I'm able to do it via IP address of the <code>bar-server</code> but not with <code>bar-server:8081/endpoint</code>.</li> <li><p>Trying <code>nslookup kubernetes.default.svc.cluster.local localhost</code> looks good. Here is the ouptput:</p> <p>Server: 127.0.0.1 Address 1: 127.0.0.1 localhost</p></li> <li><p>Also <code>dns</code> pod looks up and running. 
Here is the output for <code>kubectl get pods -n=kube-system | grep -i dns</code></p> <p>kube-dns-6777479f6b-vxd7m 3/3 Running 9 1d</p></li> </ul>
<p>Kubernetes does service discovery using DNS (<code>kube-dns</code>). If DNS in k8s is working, service <code>Foo</code> should be able to <code>ping</code> or <code>curl</code> service <code>Bar</code>, and vice-versa.</p> <p>A way to check if <code>kube-dns</code> is working is to see if the following commands work inside any pod:</p> <pre><code>nslookup kubernetes.default.svc.cluster.local localhost nslookup kubernetes.default localhost </code></pre> <p><code>kube-dns</code> pod's health can be checked in the output of:</p> <pre><code>kubectl get pods -n=kube-system | grep -i dns </code></pre>
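<p>For example (a sketch, assuming both services live in the same namespace), you can run a throwaway pod and resolve or call the <code>bar-server</code> service by name:</p> <pre><code>kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup bar-server
kubectl run -it --rm http-test --image=busybox --restart=Never -- wget -qO- http://bar-server:8081/endpoint
</code></pre> <p>Note that the short name <code>bar-server</code> only resolves from within the same namespace; from another namespace you would use <code>bar-server.&lt;namespace&gt;</code> or the full <code>bar-server.&lt;namespace&gt;.svc.cluster.local</code> form.</p>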
<p>I have a Linux build machine that I have installed minikube on. Within the minikube instance I have installed artifactory, which I will be using for storing various build artifacts.</p> <p>I now want to be able to do some work on my dev machine (which is an unrelated laptop on the same network as the Linux build machine) and push some built artifacts into artifactory.</p> <p>However, I can't figure out how to get to artifactory. When I ssh to the Linux server and check the minikube service, I can see that the artifactory instance is running on a 192.168 address.</p> <p>Is there any way to expose artifactory, i.e. access it from the Windows machine? Or is this not possible, and should I just install artifactory on the Linux machine rather than in minikube?</p>
<p>Expose your artifactory Service:</p> <pre><code>$ minikube service &lt;artifactory-service&gt; -n &lt;namespace&gt; </code></pre> <p>Or get the URL:</p> <pre><code>$ minikube service &lt;artifactory-service&gt; -n &lt;namespace&gt; --url </code></pre> <p>If you want to access it remotely, you need to do something else.</p> <p>Suppose that when you run <code>minikube service &lt;artifactory-service&gt; -n &lt;namespace&gt; --url</code>, you get the following:</p> <pre><code>http://192.168.99.100:30654 </code></pre> <p>You can access artifactory in minikube using this URL, but you can't access it remotely.</p> <p>Now expose port <strong>30654</strong>:</p> <pre><code>ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L \*:30654:0.0.0.0:30654 </code></pre> <p>You will then be able to access it from other machines on the network.</p>
<p>Kubernetes' <code>.spec.revisionHistoryLimit</code> is used in keeping track of changes to the Deployment definition. Since these definitions are small yaml files, it doesn't seem like much (given the typical conditions of a modern cloud server) to keep 100's or more of these definitions around if necessary.</p> <p>According to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#clean-up-policy" rel="noreferrer">documentation</a>, it is going to be set to 2 (instead of unlimited). Why is this?</p>
<p>Keeping it unlimited would eventually clog up etcd, and, IIUC, etcd isn't designed for big-data usage. Also, the Kubernetes control plane is syncing and downloading state regularly, which means there would be a lot of unnecessary data to process. Just imagine a service with daily deployments that runs for over a year.</p> <p>Setting this to a high-ish number like 100 seems very arbitrary to me. Why not 1000 or 9001? Besides that, I cannot imagine anyone who might want to roll back something a hundred versions.</p> <p>Anyway, we are only talking about a default setting, so you can set it to a very high number if your use case requires it.</p>
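<p>For completeness, a minimal sketch of overriding the default on a Deployment (names and image are illustrative):</p> <pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-app
spec:
  revisionHistoryLimit: 25    # keep the last 25 old ReplicaSets for rollbacks
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
</code></pre>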
<p>I have an application running on port 1001. I deployed it to a container and want to expose it on port 80.</p> <p>How can I use Ingress for that? I tried using NodePort with --target-port=1001, and I added 80 as servicePort in <code>Ingress.yaml</code>.</p> <pre><code>kubectl expose deployment test --target-port=1001 --type=NodePort </code></pre> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ingress spec: backend: serviceName: test servicePort: 80 </code></pre> <p>I am getting the error</p> <blockquote> <p>Backend not found - 404</p> </blockquote> <p>Am I doing this the correct way, or do I need to follow something else?</p>
<p>Set up the nginx ingress controller using helm from the official kubernetes charts repo at <a href="https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress</a>.</p> <p>Along with the nginx ingress controller, you'll need an Ingress resource too. Refer to <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example</a> for examples.</p>
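<p>As a sketch of the wiring for this particular case (assuming the app listens on 1001 inside the container and the deployment's pods carry the label <code>app: test</code>): the Ingress's <code>servicePort</code> must match the Service's <code>port</code>, while <code>targetPort</code> points at the container port.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
  - port: 80          # what the Ingress backend's servicePort refers to
    targetPort: 1001  # the container's listening port
  selector:
    app: test
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: test
    servicePort: 80
</code></pre>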
<p>Consider a Statefulset (Cassandra using offical K8S example) across 3 Availability zones:</p> <ul> <li>cassandra-0 -> zone a</li> <li>cassandra-1 -> zone b</li> <li>cassandra-2 -> zone c</li> </ul> <p>Each Cassandra pod uses an EBS volume. So there is automatically an affinity. For instance, cassandra-0 cannot move to "zone-b" because its volume is in "zone-a". All good.</p> <p>If some Kubernetes nodes/workers fail, they will be replaced. The pods will start again on the new node and be re-attached their EBS volume. Looking like nothing happened.</p> <p>Now if the entire AZ "zone-a" goes down and is unavailable for some time (meaning cassandra-0 cannot start anymore due to affinity for EBS in the same zone). You are left with:</p> <ul> <li>cassandra-1 -> zone b</li> <li>cassandra-2 -> zone c</li> </ul> <p>Kubernetes will never be able to start cassandra-0 for as long as "zone-a" is unavailable. That's all good because cassandra-1 and cassandra-2 can serve requests.</p> <p>Now if on top of that, another K8S node goes down or you have setup auto-scaling of your infrastructure, you could end up with cassandra-1 or cassandra-2 needed to move to another K8S node. It shouldn't be a problem.</p> <p><strong>However</strong> from my testing, K8S will not do that because the pod cassandra-0 is offline. It will never self-heal cassandra-1 or cassandra-2 (or any cassandra-X) because it wants cassandra-0 back first. And cassandra-0 cannot start because it's volume is in a zone which is down and not recovering.</p> <p>So if you use <strong>Statefulset + VolumeClaim + across zones</strong> AND you experience an <strong>entire AZ failure</strong> AND you experience an EC2 failure in another AZ or have auto-scaling of your infrastructure</p> <p>=> then you will loose all your Cassandra pods. Up until zone-a is back online</p> <p>This seems like a dangerous situation. Is there a way for a stateful set to not care about the order and still self-heal or start more pod on cassandra-3, 4, 5, X?</p>
<p>Starting with Kubernetes 1.7 you can tell Kubernetes to relax the StatefulSet ordering guarantees using the <code>podManagementPolicy</code> option (<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#statefulsetspec-v1-apps" rel="nofollow noreferrer">documentation</a>). By setting that option to <code>Parallel</code> Kubernetes will no longer guarantee any ordering when starting or stopping pods and start pods in parallel. This can have an impact on your service discovery, but should resolve the issue you're talking about.</p>
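<p>As a sketch, this is a single field on the StatefulSet spec (the rest of the manifest is illustrative, loosely based on the official Cassandra example):</p> <pre><code>apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: cassandra
spec:
  podManagementPolicy: Parallel   # default is OrderedReady
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13   # image/tag are an assumption
</code></pre> <p>Note that <code>podManagementPolicy</code> cannot be changed on an existing StatefulSet; you would have to recreate the StatefulSet (e.g. delete it with <code>--cascade=false</code> so the pods keep running) to apply it.</p>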
<p>I have a couple of NodeJS backends running as pods in a Kubernetes setup, with Ingress-managed nginx over it.</p> <p>These backends are API servers, and can return 400, 404, or 500 responses during normal operations. These responses would provide meaningful data to the client; besides the status code, the response has a JSON-serialized structure in the body informing about the error cause or suggesting a solution.</p> <p>However, Ingress will intercept these error responses, and return an error page. Thus the client does not receive the information that the service has tried to provide.</p> <p>There's a closed ticket in the kubernetes-contrib repository suggesting that it is now possible to turn off error interception: <a href="https://github.com/kubernetes/contrib/issues/897" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/issues/897</a>. Being new to kubernetes/ingress, I cannot figure out how to apply this configuration in my situation.</p> <p>For reference, this is the output of <code>kubectl get ingress &lt;ingress-name&gt;</code>: (redacted names and IPs)</p> <pre><code>Name: ingress-name-redacted Namespace: default Address: 127.0.0.1 Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- public.service.example.com / service-name:80 (&lt;none&gt;) Annotations: rewrite-target: / service-upstream: true use-port-in-redirects: true Events: &lt;none&gt; </code></pre>
<p>I have solved this on Tectonic 1.7.9-tectonic.4.</p> <p>In the Tectonic web UI, go to Workloads -> Config Maps and filter by namespace tectonic-system.</p> <p>In the config maps shown, you should see one named "tectonic-custom-error".</p> <p>Open it and go to the YAML editor.</p> <p>In the data field you should have an entry like this:</p> <blockquote> <p>custom-http-errors: '404, 500, 502, 503'</p> </blockquote> <p>which configures which HTTP responses will be captured and shown with the custom Tectonic error page.</p> <p>If you don't want some of those, just remove them, or clear them all.</p> <p>It should take effect as soon as you save the updated config map.</p> <p>Of course, you could do the same from the command line with <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#edit" rel="nofollow noreferrer">kubectl edit</a>:</p> <blockquote> <p>$&gt; kubectl edit cm tectonic-custom-error --namespace=tectonic-system</p> </blockquote> <p>Hope this helps :)</p>
<p>Suppose I have a one-node database service (PostgreSQL, MySQL, whatever...) deployed on Kubernetes using a PersistentVolumeClaim of 10G, running on GKE or AWS or Azure (it does not really matter). What is the procedure to scale up the disk to 20G? Is there a way, for instance, to have a PVC bind to an existing disk (a snapshot of the 10G disk) or something like that?</p> <p>What I want is to increase the storage size of a disk that belongs to a PVC AND maintain the old data (the disk will not necessarily be a database, so I'm not looking to restore a database backup or something like that).</p> <p>I'm looking for something like: take a snapshot of the old disk, create a bigger disk from the snapshot, and "make the PVC use the new disk".</p> <p>Thank you</p>
<p>You have a PVC with PV 10G. You want to increase its size. Unfortunately resize is not supported yet. So, you need to create new PVC with 20G size.</p> <p>Lets say, your existing PVC with 10G called <code>older</code>.</p> <p>Follow these steps:</p> <p><strong>Step 1:</strong> Create new PVC with 20G, lets say its called <code>latest</code>.</p> <p><strong>Step 2:</strong> Mount <code>older</code> &amp; <code>latest</code> both in a container. Copy data from <code>older</code> to <code>latest</code>. </p> <p><strong>Step 3:</strong> Delete PVC <code>older</code>, we do not need <code>older</code> any more. Data copied to <code>latest</code> PV.</p> <p><strong>Step 4:</strong> Make PV of <code>latest</code> Available.</p> <pre><code>$ kubectl get pvc latest NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE latest Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 30s </code></pre> <p>Edit PV <code>pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6</code> to set persistentVolumeReclaimPolicy to <code>Retain</code>. So that deleting PVC will not delete PV.</p> <p>Now, delete PVC <code>latest</code>. </p> <pre><code>$ kubectl delete pvc latest $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Released default/latest standard 3m </code></pre> <p>See the status, PV is Released.</p> <p>Now, make this <code>latest</code> PV available to be claimed by another PVC, our <code>older</code> as we want to use 20G under this PVC <code>older</code>.</p> <p>Edit PV again to remove claimRef</p> <pre><code>$ kubectl edit pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Available standard 6m </code></pre> <p>Now the status of PV is Available. </p> <p><strong>Step 5:</strong> Claim <code>latest</code> PV by <code>older</code> PVC</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: older spec: accessModes: - ReadWriteOnce volumeName: pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 resources: requests: storage: 10Gi </code></pre> <p>Use volumeName <code>pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6</code></p> <pre><code>$ kubectl get pvc,pv NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc/older Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 9s NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pv/pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Bound default/older standard 9m </code></pre> <p><strong>Finally:</strong> Set persistentVolumeReclaimPolicy to <code>Delete</code></p> <p>This is how, your PVC <code>older</code> has had <code>latest</code> PV with 20G.</p>
<p>On my master node</p> <pre><code>root@k8smaster:~# kubectl get nodes The connection to the server localhost:8080 was refused - did you specify the right host or port? root@k8smaster:~# exit logout yoda@k8smaster:~/bin$ kubectl get nodes NAME STATUS ROLES AGE VERSION k8smaster Ready master 5d v1.9.2 k8sworker Ready &lt;none&gt; 51s v1.9.2 </code></pre> <p>Why do I need to run kubectl as my own user ? </p>
<p>What Michael said is exactly accurate; <code>kubectl</code> looks in the <em>current user's</em> home directory, which for <code>yoda</code> will likely be <code>/home/yoda</code> but for root is almost certainly <code>/root</code>.</p> <p>You can very quickly test this theory by re-running your <code>kubectl</code> command with an explicit <code>--kubeconfig ~yoda/.kube/config</code>:</p> <pre><code>kubectl --kubeconfig ~yoda/.kube/config get nodes </code></pre> <p>You can also export the shell variable <code>KUBECONFIG</code> to avoid having to constantly include that long <code>--kubeconfig</code> syntax:</p> <pre><code>export KUBECONFIG=~yoda/.kube/config kubectl get nodes </code></pre> <p>Ensure you don't put any characters between the <code>~</code> and <code>yoda</code> or it will look for a <code>yoda</code> directory inside the current user's home directory.</p>
<p>I am still in the initial phase of understanding k8s, so please forgive me if you feel this is lame. But I wanted to know if there is a way to delay or order the execution of the containers in a single pod. Let's say we have one Pod A that has 2 containers. For example, a Kafka container exposing port 9092 and port 8081, and another container, say a Kafka Connect image, which hits both of these ports. I wanted to know if there is a way to execute the Kafka container first and then execute the Kafka Connect container so that it won't get killed.</p> <p>1) Can we have some delay or a sleep between the creation of the 2 containers?</p> <p>2) Can we have some kind of execution priority for containers within pods so that we can accomplish the above-mentioned situation?</p> <p>PS: I have considered creating 2 pods and am currently working on that. But I also wanted to try to do this and learn something new at the same time. Thanks.</p>
<p>What you want does not exist per se in Kubernetes as of early 2018, but you can use <a href="https://blog.openshift.com/kubernetes-pods-life/" rel="nofollow noreferrer">init containers</a> to address some parts of your use case.</p>
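<p>Note that init containers run to completion before any of the pod's regular containers start, so they cannot wait for a sibling container in the same pod; they fit the two-pod approach you are already considering. A sketch for the Kafka Connect pod (the service name <code>kafka</code>, the image, and the busybox <code>nc -z</code> wait loop are assumptions):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kafka-connect
spec:
  initContainers:
  - name: wait-for-kafka
    image: busybox
    # block until the Kafka service accepts connections on both ports
    command: ['sh', '-c', 'until nc -z kafka 9092 &amp;&amp; nc -z kafka 8081; do echo waiting for kafka; sleep 2; done']
  containers:
  - name: connect
    image: my-kafka-connect:latest   # hypothetical image
</code></pre>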
<p>After deploying nginx (or any application for that matter) , I try to expose it using the following config file: </p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx spec: type: LoadBalancer ports: - port: 80 targetPort: 80 selector: app: nginx </code></pre> <p>The result:</p> <pre><code>C:\install\kube&gt;kubectl get service -l app=nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx LoadBalancer 10.108.193.215 &lt;pending&gt; 80:31035/TCP 21h </code></pre> <p>Note the port number 31035 is randomly assigned. Is it possible to define a specific port? </p> <p>Versions:</p> <pre><code>C:\install\kube&gt;minikube version minikube version: v0.24.1 C:\install\kube&gt;kubectl version Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p><code>LoadBalancer</code> service uses <code>NodePort</code> internally. Yes, it is possible to specify a <code>NodePort</code> for a service. There are two ways to do it:</p> <ol> <li>Using <code>nodePort: 31036</code> in the YAML spec file for the service (see the sketch below).</li> <li>Run <code>kubectl edit &lt;service name&gt;</code>, edit the <code>nodePort</code> field under <code>ports</code> and save it.</li> </ol> <p>Keep in mind that <code>NodePort</code> must be set to a number in the flag-configured range <code>30000-32767</code>. Otherwise, Kubernetes throws an error. This <code>NodePort</code> range can be changed using the flag <code>--service-node-port-range</code> passed to <code>kube-apiserver</code> per <a href="https://kubernetes.io/docs/reference/generated/kube-apiserver/" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kube-apiserver/</a>:</p> <pre><code> --service-node-port-range portRange A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range. (default 30000-32767) </code></pre>
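<p>A sketch of option 1, reusing the nginx Service from the question (the specific port 31036 is just an example inside the allowed range):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31036   # must fall inside --service-node-port-range
  selector:
    app: nginx
</code></pre>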
<p>I was just curious if there are known workarounds for the Minikube VPN issue apart from a clean wipe of minikube from the machine and reinstallation. Minikube works just fine when it is connected to either the office network or the home network, but goes dead when connected to a VPN. When I do a <code>kubectl get pods</code>, it gives an <code>i/o timeout</code>. I did read a couple of issues in their Issues section, but none of the solutions seemed comprehensive.</p> <p>Using Mac OS, Minikube v0.25. Any suggestion is truly appreciated!</p>
<p>There are <a href="https://github.com/kubernetes/minikube/search?q=vpn&amp;type=Issues&amp;utf8=%E2%9C%93" rel="nofollow noreferrer">some issues reported around VPN for minikube</a>.</p> <p>In particular, <a href="https://github.com/kubernetes/minikube/issues/2344" rel="nofollow noreferrer">issue 2344</a> shows the exact error message:</p> <pre><code>$kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} Unable to connect to the server: Gateway Time-out </code></pre> <blockquote> <p>Not sure any good solution to correct routing but <a href="https://github.com/robertluwang/docker-hands-on-guide/blob/master/minikube-no-tls-verify.md" rel="nofollow noreferrer">there is remedy for this to disable TLS verification</a>.</p> </blockquote> <p>From <a href="https://github.com/kubernetes/minikube/issues/2325#issuecomment-352995998" rel="nofollow noreferrer">issue 2325</a>:</p> <blockquote> <p>On a side node, on Windows 10 1709 update, Hyper-V comes with a build in "Default Switch".<br> If use it in <code>minikube start</code>, then everything works in minikube. All Host VPN (I have 2, Cisco Anyconnect and ArraySSL VPN) works inside minikube.</p> </blockquote> <p>Some networking settings are suggested in "<a href="https://github.com/kubernetes/minikube/issues/38" rel="nofollow noreferrer">vpn or proxy for easier local development (issue 38)</a>".</p> <p><a href="https://github.com/kubernetes/minikube/issues/1413" rel="nofollow noreferrer">Issue 1413</a> suggests:</p> <blockquote> <p>In order to remedy VPN issues, it is helpful to specify the hostonly nic used by minikube.<br> This way other VMs on that NIC will have no issues talking to each other even when Cisco wrecks the local network. Docker Machine t<a href="https://docs.docker.com/machine/reference/create/" rel="nofollow noreferrer"><code>docker-machine create</code></a>) exposes this property:</p> </blockquote> <pre><code>--virtualbox-hostonly-nictype "82540EM" </code></pre> <p>This is not yet available with minikube though.</p> <blockquote> <p>As a workaround, specifying IP addresses in the same CIDR as minikube will set put the VM on the same host only network.<br> For example, applying an ip address 192.168.99.XX will use the same host only network that minikube uses with its IP address 192.168.99.10X.</p> </blockquote>
<p>I am still in the initial phase of understanding k8s, so please forgive me if you feel this is lame. But I wanted to know if there is a way to delay or order the execution of the containers in a single pod. Let's say we have one Pod A that has 2 containers. For example, a Kafka container exposing port 9092 and port 8081, and another container, say a Kafka Connect image, which hits both of these ports. I wanted to know if there is a way to execute the Kafka container first and then execute the Kafka Connect container so that it won't get killed.</p> <p>1) Can we have some delay or a sleep between the creation of the 2 containers?</p> <p>2) Can we have some kind of execution priority for containers within pods so that we can accomplish the above-mentioned situation?</p> <p>PS: I have considered creating 2 pods and am currently working on that. But I also wanted to try to do this and learn something new at the same time. Thanks.</p>
<p>No you can't, but you could change the startup command for the second image to poll the availability of the ports of the first container through <code>localhost</code>. When both ports accept connections, then you know you can continue with starting up the application in the second image.</p> <p>If you cannot modify the second image yourself, what you can do is create a config map which contains your own startup script using whatever scripting language you might have in the image. Mount the config map into the container at some path and then override the command run for the container in the pod definition (where depends on how you are deploying it), to invoke your script mounted in from the config map.</p> <p>Your script can then do the polling, and when happy that first container is running okay, then it will execute the original command the container used to start the application.</p> <p>If you were to use two pods instead, then you can still use the same idea, but rather than <code>localhost</code> use the hostname for the service of the first pod. In the two pod case, you could run the script in an init container instead of hijacking the original startup command of the actual image.</p>
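<p>A sketch of that ConfigMap approach (the script name, mount path, and the original <code>connect-start.sh</code> command are assumptions for illustration; it also assumes the image ships a shell and <code>nc</code>):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: wait-then-start
data:
  wait-then-start.sh: |
    #!/bin/sh
    # poll the Kafka container's ports on localhost before starting Connect
    until nc -z localhost 9092 &amp;&amp; nc -z localhost 8081; do
      echo "waiting for kafka"; sleep 2
    done
    exec /opt/kafka/bin/connect-start.sh   # hypothetical original startup command
</code></pre> <p>Then mount it into the Kafka Connect container and override its command:</p> <pre><code>    # fragment of the pod spec for the Kafka Connect container
    command: ["/scripts/wait-then-start.sh"]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
  volumes:
  - name: scripts
    configMap:
      name: wait-then-start
      defaultMode: 0755
</code></pre>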
<p>I'm running gitlab 10.4.3 with gitlab-runner 10.4.0 as a kubernetes deployment with kubernetes runner and a minio-server for caching. I installed it according to the <a href="https://docs.gitlab.com/runner/install/autoscaling.html#install-the-cache-server" rel="nofollow noreferrer">gitlab docs</a>.</p> <p>Everything worked with the cache as expected, when adding the appropriate settings to the <code>.gitlab-ci.yml</code>-file:</p> <pre><code>build: stage: build variables: GIT_STRATEGY: fetch cache: key: &quot;$CI_COMMIT_REF_NAME&quot; paths: - node_modules/ script: - compile-templates.sh - yarn install - yarn run build </code></pre> <p>The pipeline output did fill the cache the first time and subsequent runs on the same branch correctly pulled and pushed the cache:</p> <pre><code>Running on gitlab/runner-abcdefg-project-123-concurrent-123456 via gitlab-runner-123456-987654... Cloning repository for feature/gitlab-cache with git depth set to 1... Cloning into '/group/project'... Checking out b1348594 as feature/gitlab-cache... Skipping Git submodules setup Checking cache for feature/gitlab-cache... Downloading cache.zip from http://minio-service:9000/runner/runner/abcdefg/project/123/feature/gitlab-cache Successfully extracted cache $ docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY Login Succeeded [...snip...] Creating cache feature/gitlab-cache... node_modules/: found 26281 matching files Uploading cache.zip to http://minio-service:9000/runner/runner/abcdefg/project/123/feature/gitlab-cache Created cache </code></pre> <p>However, after some runs it suddenly stopped working - and I do not know why. I've also tried a job-global cache-definition to no avail - it seems that the gitlab-runner is simply ignoring the setting and just skips the &quot;Checking cache part&quot;:</p> <pre><code>Running on gitlab/runner-abcdefg-project-123-concurrent-123456 via gitlab-runner-123456-987654... Cloning repository for feature/gitlab-cache with git depth set to 1... Cloning into '/group/project'... Checking out b1348594 as feature/gitlab-cache... Skipping Git submodules setup $ docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY Login Succeeded [...snip...] Job succeeded </code></pre> <p>My gitlab-runner config.toml has the appropriate settings:</p> <pre><code>[[runners]] // ... [runners.kubernetes] // ... 
[runners.cache] Type = &quot;s3&quot; // I've also tried http://minio-service:9000 to no avail ServerAddress = &quot;minio-service:9000&quot; AccessKey = &quot;xxxxxxxxxxxxxxxxxxxxxxxx&quot; SecretKey = &quot;xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx&quot; BucketName = &quot;runner&quot; Insecure = true </code></pre> <p><em>Note:</em> If I change the settings to invalid ones, I do not get any error messages in the runners logs.</p> <p>The port is reachable from the runner itself <em>and</em> from job pods:</p> <pre><code>$ curl -s &quot;http://minio-service:9000&quot; &lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;Error&gt;&lt;Code&gt;AccessDenied&lt;/Code&gt;&lt;Message&gt;Access Denied.&lt;/Message&gt;&lt;Key&gt;&lt;/Key&gt;&lt;BucketName&gt;&lt;/BucketName&gt;&lt;Resource&gt;/&lt;/Resource&gt;&lt;RequestId&gt;12345&lt;/RequestId&gt;&lt;HostId&gt;12345&lt;/HostId&gt;&lt;/Error&gt; </code></pre> <p>The bucket on minio exists as checked in the local volume <em>and</em> via the minio client <code>mc</code>.</p> <p>There are no errors in the logs of the pods of gitlab-runner or the minio-server.</p> <p>It's just like the cache-settings of the <code>.gitlab-ci.yml</code> are simply ignored and I'm out of ideas on what's left to check. Anyone with any ideas?</p>
<p>So, the problem was an invalid documentation in combination with a silent ignore of invalid cache:keys. If a branch is e.g. named <code>feature/some-new-thing</code>, the resulting key of <code>&quot;$CI_COMMIT_REF_NAME&quot;</code> would lead to an invalid cache:key containing a &quot;/&quot;-character - which in turn would render the whole cache-section invalid, but which is silently ignored.</p> <p>Instead, just use <code>&quot;${CI_BUILD_REF_SLUG}&quot;</code>, which will contain an url-safe - and therefore cache:key-safe - version of the branch name:</p> <pre><code>cache: key: &quot;${CI_BUILD_REF_SLUG}&quot; paths: - node_modules/ </code></pre> <p>A <a href="https://gitlab.com/gitlab-org/gitlab-runner/issues/3085" rel="nofollow noreferrer">bug report</a> has been opened.</p>
<p>I have installed minikube and kubectl on my laptop.</p> <p>When I run <code>kubectl cluster-info</code> I get the below </p> <pre><code> Kubernetes master is running at https://10.168.99.10:8443 </code></pre> <p>When I connect to <a href="https://10.168.99.10:8443" rel="nofollow noreferrer">https://10.168.99.10:8443</a> I get the below response.</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "Unauthorized", "reason": "Unauthorized", "code": 401 } </code></pre> <p>Installing kubectl/minikube didn't prompt for a user/password. What is the default user/password to connect?</p>
<p><strong>minikube</strong> doesn't start with basic-auth, so there is no username &amp; password for the apiserver by default. To access the apiserver, you need to use the apiserver certificates. That's how you will be authorized.</p> <pre><code>curl https://192.168.99.100:8443 --cert ~/.minikube/apiserver.crt --key ~/.minikube/apiserver.key --cacert ~/.minikube/ca.crt </code></pre> <p><strong>See details:</strong></p> <p>Get your minikube IP address</p> <pre><code>$ minikube ip 192.168.99.100 </code></pre> <p>The API server runs on <code>8443</code> by default</p> <p>Now try to connect to the apiserver using this</p> <pre><code>$ curl https://192.168.99.100:8443 curl: (60) SSL certificate problem: unable to get local issuer certificate </code></pre> <p>You need to provide the CA certificate</p> <pre><code>$ curl https://192.168.99.100:8443 --cacert ~/.minikube/ca.crt { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "Unauthorized", "reason": "Unauthorized", "code": 401 }⏎ </code></pre> <p>Now you need to provide the apiserver SSL certificate and key</p> <pre><code>curl https://192.168.99.100:8443 --cert ~/.minikube/apiserver.crt --key ~/.minikube/apiserver.key --cacert ~/.minikube/ca.crt { "paths": [ "/api", "/api/v1", .... "/ui", "/ui/", "/version" ] }⏎ </code></pre> <blockquote> <p>Note: You can proxy the apiserver too</p> </blockquote> <pre><code>$ kubectl proxy --port=8433 $ curl 127.0.0.1:8433 </code></pre> <p>Now you do not need to provide any certificates, and you are authorized.</p>
<p>I am running solr inside minikube for a POC. I am trying to figure out how to access the solr inside the minikube. As per my knowledge I cant access solr using my host IP, it is only accessible using the minikube IP - 192.168.99.100:8983/solr. My objective is to hit the solr server when accessing it from a remote box.</p> <p>One of my team mates suggested that I can maybe use something that will forward incoming request to a local IP. </p> <p>Any suggestions??</p> <p>Thanks</p>
<p>You would need to expose the solr service using the <code>kubectl expose</code> command for external access.</p> <p>There are four ways to expose a service for external access in k8s:</p> <ol> <li><code>LoadBalancer</code> service type, which sets the ExternalIP automatically. This is used when there is an external, non-k8s, cloud provider's load-balancer like GCE, AWS or Azure, and this external load-balancer provides the ExternalIP for the service per <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types</a>.</li> <li><code>ExternalIPs</code> per <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#external-ips</a>.</li> <li><code>NodePort</code>: In this approach, the service can be hit from outside the cluster using <code>NodeIP:NodePort/url/of/the/service</code>.</li> <li><code>Ingress</code>: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></li> </ol>
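<p>For a local minikube POC, option 3 (<code>NodePort</code>) is usually the simplest. A minimal sketch, assuming the solr pods carry the label <code>app: solr</code> and listen on 8983 (both assumptions, adjust to your deployment):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: solr-external
spec:
  type: NodePort
  selector:
    app: solr          # assumed pod label
  ports:
  - port: 8983         # port on the cluster IP
    targetPort: 8983   # container port solr listens on
    nodePort: 30983    # port opened on every node (must fall in 30000-32767)
</code></pre> <p>After applying it, solr should be reachable at <code>http://$(minikube ip):30983/solr</code> from the host machine; a remote box can only use that address if the minikube VM IP is routable from it, otherwise you still need port-forwarding or a tunnel from the host.</p>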
<p>I am running a cluster with 1 master and 1 node. Now, when I run daemon set it only shows 1 desired node, while it should be 2. There is no error I could find anywhere in the describe/logs, but the daemonset only chooses 1 node to run. I am using kubernetes 1.9.1. </p> <p>Any idea what I can be doing wrong? Or how to debug it? TIA.</p>
<p>This happens if the k8s master node has just the <code>node-role.kubernetes.io/master: NoSchedule</code> taint without a toleration for it.</p> <p>The <code>node-role.kubernetes.io/master: NoSchedule</code> toleration is needed in k8s 1.6 or later to schedule daemonsets on master nodes.</p> <p>Add the following toleration for the daemonset in the YAML file to make k8s schedule daemonsets on the master node too:</p> <pre><code>... kind: DaemonSet spec: ... template: ... spec: tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule </code></pre> <p>Taints of the master node can be checked by:</p> <pre><code>kubectl describe node &lt;master node&gt; </code></pre> <p>Tolerations of a pod can be checked by:</p> <pre><code>kubectl describe pod &lt;pod name&gt; </code></pre> <p>More info about daemonsets is in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/</a>.</p>
<p>As <a href="https://github.com/kubernetes-helm/chartmuseum" rel="noreferrer">https://github.com/kubernetes-helm/chartmuseum</a>, I set up chartmuseum by running <code>helm install incubator/chartmuseum</code> in Kubernetes cluster.</p> <p>When I want to upload chart by running </p> <p><code>curl --data-binary "@mychart-0.1.0.tgz" http://$URL:$PORT/api/charts</code></p> <p>it returns <code>404 page not found</code> even if i run it in the container which chartmuseum running on.</p>
<p>By default helm chart <code>incubator/chartmuseum</code> installs with <code>DISABLE_API: true</code> parameter, so, that's why any request to <code>/api</code> don't work (returns 404).</p> <p>You need to install helm chart <code>incubator/chartmuseum</code> with <code>DISABLE_API: false</code> parameter:</p> <pre><code>helm install incubator/chartmuseum --set env.open.DISABLE_API=false </code></pre> <p><strong>EDIT:</strong></p> <p>Helm chart <code>incubator/chartmuseum</code> was <a href="https://github.com/helm/charts/pull/4601" rel="noreferrer">moved</a> to <code>stable/chartmuseum</code> on Apr 6, 2018.</p> <p>Now you need to install helm chart <code>stable/chartmuseum</code> with <code>DISABLE_API: false</code> parameter:</p> <pre><code>helm install stable/chartmuseum --set env.open.DISABLE_API=false </code></pre> <p>After that you can perform any operations with its API, for example:</p> <pre><code>$ curl --data-binary "@mychart-0.1.0.tgz" http://localhost:8080/api/charts {"saved":true} $ curl http://localhost:8080/api/charts {"mychart":[{"name":"mychart","version":"0.1.0","description":"A Helm chart for Kubernetes","apiVersion":"v1","urls":["charts/mychart-0.1.0.tgz"],"created":"2018-02-11T12:51:15.763951001Z","digest":"ae8d7138002d432014dc8638ec37202823e9207445caf08a660d154b26e936ea"}]} </code></pre>
<p>I have a running k8s-cluster, setup with kubeadm. I have the problem, that the <code>api-server</code> and <code>controller-manager</code> pod cant start, due to a bind-exception: </p> <pre><code>failed to create listener: failed to listen on 0.0.0.0:6443: listen tcp 0.0.0.0:6443: bind: address already in use </code></pre> <p>We recently downgraded <code>docker-ce</code> from version <code>18.01</code> to <code>17.09</code> on all nodes, due to a bug in docker at recreating containers. But after downgrading the cluster just worked fine, meaning api-server and controller-manager were running. </p> <p>Ive searched google and so, for issues related to bindexceptions for api-server and controller-manager, but couldnt find anything useful</p> <p>I checked, that no other process is running on that port on the master node. Things i tried:</p> <ul> <li>restarted kubelet on master: <code>systemctl restart kubelet</code> </li> <li>restarted docker daemon, watched for staled containers: didnt found anyone</li> <li>checked if any process is running on 6443: <code>lsof -i:6443</code> prints nothing, but <code>nmap localhost -p 6443</code> shows the port is open with <code>service unknown</code></li> <li>restarted system pod's as well</li> </ul> <p>restarting kubelet and docker daemon worked fine, but without any effect to the problem</p> <p>Kubeadm / kubectl - Version:</p> <pre><code> kubeadm version: &amp;version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Using <code>weave</code> as netcork-cni</p> <p>Edit:</p> <p><strong>docker ps of master node</strong></p> <pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 59239d32b1e4 weaveworks/weave-npc "/usr/bin/weave-npc" About an hour ago Up About an hour k8s_weave-npc_weave-net-74vsh_kube-system_99f6ee35-0f56-11e8-95e1-1614e1ecd749_0 7cb888c1ab4d weaveworks/weave-kube "/home/weave/launc..." About an hour ago Up About an hour k8s_weave_weave-net-74vsh_kube-system_99f6ee35-0f56-11e8-95e1-1614e1ecd749_0 1ad50c15f816 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_weave-net-74vsh_kube-system_99f6ee35-0f56-11e8-95e1-1614e1ecd749_0 ecb845f1dfae gcr.io/google_containers/etcd-amd64 "etcd --advertise-..." 2 hours ago Up 2 hours k8s_etcd_etcd-kube01_kube-system_1b6fafb5dc39ea18814d9bc27da851eb_6 001234690d7a gcr.io/google_containers/kube-scheduler-amd64 "kube-scheduler --..." 
2 hours ago Up 2 hours k8s_kube-scheduler_kube-scheduler-kube01_kube-system_69c12074e336b0dbbd0a1666ce05226a_3 0ce04f222f08 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 hours ago Up 2 hours k8s_POD_kube-scheduler-kube01_kube-system_69c12074e336b0dbbd0a1666ce05226a_3 0a3d9eabd961 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 hours ago Up 2 hours k8s_POD_kube-apiserver-kube01_kube-system_95c67f50e46db081012110e8bcce9dfc_3 c77767104eb9 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 hours ago Up 2 hours k8s_POD_etcd-kube01_kube-system_1b6fafb5dc39ea18814d9bc27da851eb_4 319873797a8a gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 hours ago Up 2 hours k8s_POD_kube-controller-manager-kube01_kube-system_f64b9b5ba10a00baa5c176d5877e8671_4 </code></pre> <p>journalctl - full:</p> <pre><code>Feb 11 19:51:03 kube01 kubelet[3195]: I0211 19:51:03.205824 3195 kuberuntime_manager.go:758] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)" Feb 11 19:51:03 kube01 kubelet[3195]: I0211 19:51:03.205991 3195 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671) Feb 11 19:51:03 kube01 kubelet[3195]: E0211 19:51:03.206039 3195 pod_workers.go:186] Error syncing pod f64b9b5ba10a00baa5c176d5877e8671 ("kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)" Feb 11 19:51:03 kube01 kubelet[3195]: I0211 19:51:03.206161 3195 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:gcr.io/google_containers/kube-apiserver-amd64:v1.9.2 Command:[kube-apiserver --client-ca-file=/etc/kubernetes/pki/ca.crt --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --allow-privileged=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-address=207.154.252.249 --service-cluster-ip-range=10.96.0.0/12 --insecure-port=0 --enable-bootstrap-token-auth=true --requestheader-allowed-names=front-proxy-client --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-username-headers=X-Remote-User --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --secure-port=6443 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-group-headers=X-Remote-Group --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:&lt;nil&gt;} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:&lt;nil&gt;} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] 
LivenessProbe:&amp;Probe{Handler:Handler{Exec:nil,HTTPGet:&amp;HTTPGetAction{Path:/healthz,Port:6443,Host:207.154.252.249,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil Terminat Feb 11 19:51:03 kube01 kubelet[3195]: ionMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. Feb 11 19:51:03 kube01 kubelet[3195]: I0211 19:51:03.206234 3195 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)" Feb 11 19:51:03 kube01 kubelet[3195]: I0211 19:51:03.206350 3195 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc) Feb 11 19:51:03 kube01 kubelet[3195]: E0211 19:51:03.206381 3195 pod_workers.go:186] Error syncing pod 95c67f50e46db081012110e8bcce9dfc ("kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)" Feb 11 19:51:12 kube01 kubelet[3195]: E0211 19:51:12.816797 3195 fs.go:418] Stat fs failed. Error: no such file or directory Feb 11 19:51:14 kube01 kubelet[3195]: I0211 19:51:14.203327 3195 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:gcr.io/google_containers/kube-apiserver-amd64:v1.9.2 Command:[kube-apiserver --client-ca-file=/etc/kubernetes/pki/ca.crt --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --allow-privileged=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-address=207.154.252.249 --service-cluster-ip-range=10.96.0.0/12 --insecure-port=0 --enable-bootstrap-token-auth=true --requestheader-allowed-names=front-proxy-client --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-username-headers=X-Remote-User --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --secure-port=6443 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-group-headers=X-Remote-Group --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:&lt;nil&gt;} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:&lt;nil&gt;} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] 
LivenessProbe:&amp;Probe{Handler:Handler{Exec:nil,HTTPGet:&amp;HTTPGetAction{Path:/healthz,Port:6443,Host:207.154.252.249,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil Terminat Feb 11 19:51:14 kube01 kubelet[3195]: ionMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. Feb 11 19:51:14 kube01 kubelet[3195]: I0211 19:51:14.203631 3195 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)" Feb 11 19:51:14 kube01 kubelet[3195]: I0211 19:51:14.203833 3195 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc) Feb 11 19:51:14 kube01 kubelet[3195]: E0211 19:51:14.203886 3195 pod_workers.go:186] Error syncing pod 95c67f50e46db081012110e8bcce9dfc ("kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)" Feb 11 19:51:15 kube01 kubelet[3195]: I0211 19:51:15.203837 3195 kuberuntime_manager.go:514] Container {Name:kube-controller-manager Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2 Command:[kube-controller-manager --leader-elect=true --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --address=127.0.0.1 --use-service-account-credentials=true --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:200 scale:-3} d:{Dec:&lt;nil&gt;} s:200m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:&lt;nil&gt;} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:&lt;nil&gt;} {Name:kubeconfig ReadOnly:true MountPath:/etc/kubernetes/controller-manager.conf SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] LivenessProbe:&amp;Probe{Handler:Handler{Exec:nil,HTTPGet:&amp;HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. 
Feb 11 19:51:15 kube01 kubelet[3195]: I0211 19:51:15.205830 3195 kuberuntime_manager.go:758] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)" Feb 11 19:51:15 kube01 kubelet[3195]: I0211 19:51:15.207429 3195 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671) Feb 11 19:51:15 kube01 kubelet[3195]: E0211 19:51:15.207813 3195 pod_workers.go:186] Error syncing pod f64b9b5ba10a00baa5c176d5877e8671 ("kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)" Feb 11 19:51:26 kube01 kubelet[3195]: I0211 19:51:26.203361 3195 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:gcr.io/google_containers/kube-apiserver-amd64:v1.9.2 Command:[kube-apiserver --client-ca-file=/etc/kubernetes/pki/ca.crt --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --allow-privileged=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-address=207.154.252.249 --service-cluster-ip-range=10.96.0.0/12 --insecure-port=0 --enable-bootstrap-token-auth=true --requestheader-allowed-names=front-proxy-client --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-username-headers=X-Remote-User --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --secure-port=6443 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-group-headers=X-Remote-Group --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:&lt;nil&gt;} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:&lt;nil&gt;} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] LivenessProbe:&amp;Probe{Handler:Handler{Exec:nil,HTTPGet:&amp;HTTPGetAction{Path:/healthz,Port:6443,Host:207.154.252.249,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil Terminat Feb 11 19:51:26 kube01 kubelet[3195]: ionMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. 
Feb 11 19:51:26 kube01 kubelet[3195]: I0211 19:51:26.205258 3195 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)" Feb 11 19:51:26 kube01 kubelet[3195]: I0211 19:51:26.205670 3195 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc) Feb 11 19:51:26 kube01 kubelet[3195]: E0211 19:51:26.205965 3195 pod_workers.go:186] Error syncing pod 95c67f50e46db081012110e8bcce9dfc ("kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)" Feb 11 19:51:29 kube01 kubelet[3195]: I0211 19:51:29.203234 3195 kuberuntime_manager.go:514] Container {Name:kube-controller-manager Image:gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2 Command:[kube-controller-manager --leader-elect=true --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --address=127.0.0.1 --use-service-account-credentials=true --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:200 scale:-3} d:{Dec:&lt;nil&gt;} s:200m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:&lt;nil&gt;} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:&lt;nil&gt;} {Name:kubeconfig ReadOnly:true MountPath:/etc/kubernetes/controller-manager.conf SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] LivenessProbe:&amp;Probe{Handler:Handler{Exec:nil,HTTPGet:&amp;HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. Feb 11 19:51:29 kube01 kubelet[3195]: I0211 19:51:29.207713 3195 kuberuntime_manager.go:758] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)" Feb 11 19:51:29 kube01 kubelet[3195]: I0211 19:51:29.208492 3195 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671) Feb 11 19:51:29 kube01 kubelet[3195]: E0211 19:51:29.208875 3195 pod_workers.go:186] Error syncing pod f64b9b5ba10a00baa5c176d5877e8671 ("kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kube01_kube-system(f64b9b5ba10a00baa5c176d5877e8671)" Feb 11 19:51:32 kube01 kubelet[3195]: E0211 19:51:32.369188 3195 fs.go:418] Stat fs failed. 
Error: no such file or directory Feb 11 19:51:39 kube01 kubelet[3195]: I0211 19:51:39.203802 3195 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:gcr.io/google_containers/kube-apiserver-amd64:v1.9.2 Command:[kube-apiserver --client-ca-file=/etc/kubernetes/pki/ca.crt --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota --allow-privileged=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-address=207.154.252.249 --service-cluster-ip-range=10.96.0.0/12 --insecure-port=0 --enable-bootstrap-token-auth=true --requestheader-allowed-names=front-proxy-client --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-username-headers=X-Remote-User --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --secure-port=6443 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-group-headers=X-Remote-Group --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --authorization-mode=Node,RBAC --etcd-servers=http://127.0.0.1:2379] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:&lt;nil&gt;} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:&lt;nil&gt;} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] LivenessProbe:&amp;Probe{Handler:Handler{Exec:nil,HTTPGet:&amp;HTTPGetAction{Path:/healthz,Port:6443,Host:207.154.252.249,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil Terminat Feb 11 19:51:39 kube01 kubelet[3195]: ionMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. 
Feb 11 19:51:39 kube01 kubelet[3195]: I0211 19:51:39.205508 3195 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)" Feb 11 19:51:39 kube01 kubelet[3195]: I0211 19:51:39.206071 3195 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc) Feb 11 19:51:39 kube01 kubelet[3195]: E0211 19:51:39.206336 3195 pod_workers.go:186] Error syncing pod 95c67f50e46db081012110e8bcce9dfc ("kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kube01_kube-system(95c67f50e46db081012110e8bcce9dfc)" </code></pre> <hr> <p><code>kubeadm.conf</code></p> <pre><code>[Service] Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf" Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true" Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin" Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local" Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt" Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0" Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki" ExecStart= ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS </code></pre> <hr> <p>docker-info - cgroup</p> <pre><code>WARNING: No swap limit support Cgroup Driver: cgroupfs </code></pre> <hr> <p>kernel:</p> <pre><code>Linux kube01 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux </code></pre> <hr> <p>distri:</p> <pre><code>Distributor ID: Ubuntu Description: Ubuntu 16.04.3 LTS Release: 16.04 Codename: xenial </code></pre>
<p>The problem is simply that some process is already bound to port <code>6443</code>. To check that, you can use <code>netstat -lutpn | grep 6443</code>, then kill that process and restart the kubelet service.</p> <pre><code>$ netstat -lutpn | grep 6443 tcp6 0 0 :::6443 :::* LISTEN 11395/some-service $ kill 11395 $ service kubelet restart </code></pre> <p>This should fix the situation.</p> <p>With kubernetes this usually happens if the cluster is not properly reset and containers are not properly cleaned up.</p> <p>To do so...</p> <pre><code>$ kubeadm reset $ docker rm -f $(docker ps -a -q) $ kubeadm init &lt;options&gt; # new initialization </code></pre> <p>Which means the nodes will have to rejoin again.</p>
<p>I have a situation where I have zero endpoints available for one service. To test this, I specially crafted a yaml descriptor that uses a simple node server to set and retrieve the ready/live status for a pod:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nodejs-deployment labels: app: nodejs spec: replicas: 3 selector: matchLabels: app: nodejs template: metadata: labels: app: nodejs spec: containers: - name: nodejs image: nodejs_server ports: - containerPort: 8080 livenessProbe: httpGet: path: /is_alive port: 8080 initialDelaySeconds: 5 timeoutSeconds: 3 periodSeconds: 10 readinessProbe: httpGet: path: /is_ready port: 8080 initialDelaySeconds: 5 timeoutSeconds: 3 periodSeconds: 10 --- apiVersion: v1 kind: Service metadata: name: nodejs-service labels: app: nodejs spec: ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: nodejs --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nodejs-ingress spec: backend: serviceName: nodejs-service servicePort: 80 </code></pre> <p>The node server has methods to set and retrieve the liveness and readiness. </p> <p>When the app start I can see that 3 replicas are created and the status of them is ready. OK then now I trigger manually the status of their readiness to set to false [from outside the ingress]. One pod is correctly removed from the endpoint so no traffic is routed to it[that's OK as this is the expected behavior]. When I set all the ready-statuses to false for all pods the endpoints list is empty [still the expected behavior].</p> <p>At that point I cannot set ready=true from outside the ingress as the traffic is not routed to any pod. Is there a way here for example of triggering a restart of the pod when the ready is not achieved after n-timer or n-seconds? Or when the endpoints list is empty?</p>
<p>Well, that is perfectly normal and expected behaviour. What you can do, on the side, is to forward traffic from localhost to a particular pod with <code>kubectl port-forward</code>. That way you can access the pod directly, without ingresses etc., and set its readiness back to ok. If you want to restart the pod when it is not ready for too long, just use the same endpoint for the liveness probe, but trigger it only after more failed tries.</p>
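<p>As an illustration of the "trigger it after more tries" idea, a hedged sketch of just the probe section for the nodejs container (the threshold value is an arbitrary example): the liveness probe reuses the readiness endpoint but only restarts the container after many more consecutive failures.</p> <pre><code>livenessProbe:
  httpGet:
    path: /is_ready      # reuse the readiness endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 30   # ~5 minutes of failed checks before a restart
readinessProbe:
  httpGet:
    path: /is_ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
</code></pre> <p>For flipping the flag back by hand, <code>kubectl port-forward &lt;pod-name&gt; 8080:8080</code> reaches a single pod directly, regardless of its readiness.</p>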
<p>I am running solr inside minikube for a POC. I am trying to figure out how to access the solr inside the minikube. As per my knowledge I cant access solr using my host IP, it is only accessible using the minikube IP - 192.168.99.100:8983/solr. My objective is to hit the solr server when accessing it from a remote box.</p> <p>One of my team mates suggested that I can maybe use something that will forward incoming request to a local IP. </p> <p>Any suggestions??</p> <p>Thanks</p>
<p>First you need to get the URL of your Service for solr.</p> <pre><code>$ minikube service &lt;service-name&gt; --url http://192.168.99.100:30000 </code></pre> <p>Here, <code>30000</code> is your solr Service NodePort.</p> <p>Now you need to create an SSH tunnel. </p> <p>For that, try this</p> <pre><code>$ ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L \*:30000:0.0.0.0:30000 </code></pre> <blockquote> <p>Note: To keep the SSH tunnel running in the background, add <code>&amp;</code> at the end of the command: <code>(ssh -i .....) &amp;</code></p> </blockquote> <p>Now you can access this solr Service using your host IP address.</p>
<p>Using python and k8 API I want to get all configmaps in a namespace and read the data. </p> <p>Can I do this using Kubernetes API?</p>
<p>Using <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">the official python client</a> it seems that you can. <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#list_namespaced_config_map" rel="nofollow noreferrer">This is the example from the docs</a></p> <pre><code>from __future__ import print_function import time import kubernetes.client from kubernetes.client.rest import ApiException from pprint import pprint # Configure API key authorization: BearerToken configuration = kubernetes.client.Configuration() configuration.api_key['authorization'] = 'YOUR_API_KEY' # Uncomment below to setup prefix (e.g. Bearer) for API key, if needed # configuration.api_key_prefix['authorization'] = 'Bearer' # create an instance of the API class api_instance = kubernetes.client.CoreV1Api(kubernetes.client.ApiClient(configuration)) namespace = 'namespace_example' # str | object name and auth scope, such as for teams and projects pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional) _continue = '_continue_example' # str | The continue option should be set when retrieving more results from the server. Since this value is server defined, kubernetes.clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server the server will respond with a 410 ResourceExpired error indicating the kubernetes.client must restart their list without the continue field. This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. (optional) field_selector = 'field_selector_example' # str | A selector to restrict the list of returned objects by their fields. Defaults to everything. (optional) include_uninitialized = true # bool | If true, partially initialized resources are included in the response. (optional) label_selector = 'label_selector_example' # str | A selector to restrict the list of returned objects by their labels. Defaults to everything. (optional) limit = 56 # int | limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and kubernetes.clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, kubernetes.clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a kubernetes.client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. (optional) resource_version = 'resource_version_example' # str | When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv. (optional) timeout_seconds = 56 # int | Timeout for the list/watch call. (optional) watch = true # bool | Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. (optional) try: api_response = api_instance.list_namespaced_config_map(namespace, pretty=pretty, _continue=_continue, field_selector=field_selector, include_uninitialized=include_uninitialized, label_selector=label_selector, limit=limit, resource_version=resource_version, timeout_seconds=timeout_seconds, watch=watch) pprint(api_response) except ApiException as e: print("Exception when calling CoreV1Api-&gt;list_namespaced_config_map: %s\n" % e) </code></pre>
<p>How do I configure a Kafka cluster for the Lagom framework in Kubernetes in production mode?</p> <p>The service container expects a <code>kafka_native</code> service while starting.</p> <p>Cassandra (<code>cas_native</code>) is successfully configured, but Kafka is not working.</p>
<p>As you are using Kubernetes, you can use the <a href="https://github.com/lightbend/service-locator-dns" rel="nofollow noreferrer">service locator DNS</a> and take advantage of the Kube DNS service. You also have to have a headless service in front of Kafka.</p>
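<p>A minimal sketch of such a headless service, assuming the Kafka brokers carry the label <code>app: kafka</code> and listen on 9092 (both assumptions); <code>clusterIP: None</code> is what makes it headless, so DNS resolves to the individual broker pods:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  clusterIP: None        # headless: DNS returns the broker pod addresses
  selector:
    app: kafka           # assumed broker pod label
  ports:
  - name: broker
    port: 9092
    targetPort: 9092
</code></pre> <p>How the <code>kafka_native</code> name is mapped onto this DNS entry then depends on how you configure the service locator in your Lagom services.</p>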
<p>I've been running Kafka on Kubernetes without any major issue for a while now; however, I recently introduced a cluster of Cassandra pods and started having performance problems with Kafka.</p> <p>Even though Cassandra doesn't use page cache like Kafka does, it does make frequent writes to disk, which presumably affects the kernel's underlying cache.</p> <p>I understand that Kubernetes pods manage memory resources through cgroups, which can be configured by setting memory requests and limits in Kubernetes, but I've noticed that Cassandra's utilization of page cache can increase the number of page faults in my Kafka pods even when they don't seem to be competing for resources (i.e., there's memory available on their nodes).</p> <p>In Kafka, more page faults lead to more writes to disk, which hamper the benefits of sequential IO and compromise disk performance. If you use something like AWS's EBS volumes, this will eventually deplete your burst balance and cause catastrophic failures across your cluster.</p> <p>My question is, is it possible to isolate page cache resources in Kubernetes, or somehow let the kernel know that pages owned by my Kafka pods should be kept in the cache longer than those in my Cassandra pods?</p>
<p>I thought this was an interesting question, so this is a posting of some findings from a bit of digging.</p> <p>Best guess: there is no way with k8s OOB to do this, but enough tooling is available such that it could be a fruitful area for research and development of a tuning and policy application that could be deployed as a DaemonSet. </p> <p>Findings: </p> <p>Applications can use the fadvise() system call to provide guidance to the kernel regarding which file-backed pages are needed by the application and which are not and can be reclaimed.</p> <p><a href="http://man7.org/linux/man-pages/man2/posix_fadvise.2.html" rel="noreferrer">http://man7.org/linux/man-pages/man2/posix_fadvise.2.html</a></p> <p>Applications can also use O_DIRECT to attempt to avoid the use of page cache when doing IO:</p> <p><a href="https://lwn.net/Articles/457667/" rel="noreferrer">https://lwn.net/Articles/457667/</a></p> <p>There is some indication that Cassandra already uses fadvise in a way that attempts to optimize for reducing its page cache footprint:</p> <p><a href="http://grokbase.com/t/cassandra/commits/122qha309v/jira-created-cassandra-3948-sequentialwriter-doesnt-fsync-before-posix-fadvise" rel="noreferrer">http://grokbase.com/t/cassandra/commits/122qha309v/jira-created-cassandra-3948-sequentialwriter-doesnt-fsync-before-posix-fadvise</a></p> <p>There is also some recent (Jan 2017) research from Samsung patching Cassandra and fadvise in the kernel to better utilize multi-stream SSDs:</p> <p><a href="http://www.samsung.com/us/labs/pdfs/collateral/Multi-stream_Cassandra_Whitepaper_Final.pdf" rel="noreferrer">http://www.samsung.com/us/labs/pdfs/collateral/Multi-stream_Cassandra_Whitepaper_Final.pdf</a></p> <p>Kafka is page cache architecture aware, though it doesn't appear to use fadvise directly. The knobs available from the kernel are sufficient for tuning Kafka on a dedicated host:</p> <ul> <li>vm.dirty* for guidance on when to get written-to (dirty) pages back onto disk</li> <li>vm.vfs_cache_pressure for guidance on how aggressive to be in using RAM for page cache</li> </ul> <p>Support in the kernel for device-specific writeback threads goes way back to the 2.6 days:</p> <p><a href="https://www.thomas-krenn.com/en/wiki/Linux_Page_Cache_Basics" rel="noreferrer">https://www.thomas-krenn.com/en/wiki/Linux_Page_Cache_Basics</a></p> <p>Cgroups v1 and v2 focus on pid-based IO throttling, not file-based cache tuning:</p> <p><a href="https://andrestc.com/post/cgroups-io/" rel="noreferrer">https://andrestc.com/post/cgroups-io/</a></p> <p>That said, the old linux-ftools set of utilities has a simple example of a command-line knob for use of fadvise on specific files:</p> <p><a href="https://github.com/david415/linux-ftools" rel="noreferrer">https://github.com/david415/linux-ftools</a></p> <p>So there's enough there. Given specific kafka and cassandra workloads (e.g. read-heavy vs write-heavy), specific prioritizations (kafka over cassandra or vice versa) and specific IO configurations (dedicated vs shared devices), one could emerge with a specific tuning model, and those could be generalized into a policy model. </p>
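<p>As a very rough illustration of the "tuning DaemonSet" idea above — not an out-of-the-box feature, the image and values are placeholders, and <code>vm.*</code> sysctls are node-wide, so a privileged container changes them for the whole node:</p> <pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: page-cache-tuner
spec:
  template:
    metadata:
      labels:
        app: page-cache-tuner
    spec:
      containers:
      - name: tuner
        image: busybox
        securityContext:
          privileged: true         # needed to write node-level sysctls
        command:
        - sh
        - -c
        - |
          # example knobs only; pick values for your own workload mix
          sysctl -w vm.dirty_background_ratio=5
          sysctl -w vm.dirty_ratio=60
          sysctl -w vm.vfs_cache_pressure=50
          # keep the pod alive so the DaemonSet stays healthy
          sleep 2147483647
</code></pre>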
<p>I have some burstable pods running in my cluster which I do not want to be killed in case of memory/CPU pressure.</p> <p>Is there any way to increase their priority or something similar, so that they can be saved without having to change their namespace (to kube-system for marking them critical)?</p> <p>I have found some <a href="https://github.com/kubernetes/kubernetes/issues/47604" rel="nofollow noreferrer">issues</a> regarding adding priority to pods, but couldn't figure out the solution. There should be some way to set priority, or am I missing something big here?</p>
<p>Pod priority feature is not available prior to Kubernetes <code>v1.7</code></p> <p>From <code>v1.8+</code>, you can add pod's priority in PodSpec. You need to create PriorityClass object with priority value and use that PriorityClass name in Pod.</p> <p>But upto <code>v1.9</code>, PriorityClass is still in alpha phase. </p> <pre><code>apiVersion: scheduling.k8s.io/v1alpha1 kind: PriorityClass metadata: name: high-priority value: 1000000 globalDefault: false </code></pre> <p>Here,</p> <ul> <li><code>value</code> indicates priority. The higher the value, the higher the priority</li> <li><code>globalDefault</code> indicates that the value of this PriorityClass should be used for Pods without a PriorityClassName. Only one PriorityClass with globalDefault set to true can exist in the system.</li> </ul> <p>Now, lets create a Pod with this Priority</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: containers: - name: nginx image: nginx priorityClassName: high-priority </code></pre> <p><strong>Preemption</strong></p> <p>When Pod is created, if no Node is found that satisfies all the specified requirements of that Pod, using preemption logic one or more lower priority Pods get deleted from the Node. After the Pods are gone, Pod with higher priority is scheduled on the Node.</p> <p>See details about <a href="https://v1-8.docs.kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="noreferrer">pod priority</a>.</p>
<p>My application communicates to some services via hostnames. When running my application as a docker container i used to add hostnames to the /etc/hosts of the hostmachine and run the container using <code>--net=host</code>. </p> <p>Now I'm running my containers in kubernetes cluster. I would like to know how can i add the /etc/hosts entries to the pod via <code>yaml</code>.</p> <p>I'm using kubernetes v1.5.3. </p>
<p>From k8s 1.7 you can add <code>hostAliases</code>. <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="noreferrer">Example from the docs</a>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: hostaliases-pod spec: restartPolicy: Never hostAliases: - ip: "127.0.0.1" hostnames: - "foo.local" - "bar.local" - ip: "10.1.2.3" hostnames: - "foo.remote" - "bar.remote" </code></pre>
<p>Inside of a Kubernetes Cluster I am running 1 node with 2 deployments. React front-end and a .NET Core app. I also have a Load Balancer service for the front end app. (All working: I can port-forward to see the backend deployment working.) </p> <p><strong>Question:</strong> I'm trying to get the front end and API to communicate. I know I can do that with an external facing load balancer but is there a way to do that using the clusterIPs and not have an external IP for the back end? </p> <p>The reason we are interested in this, it simply adds one more layer of security. Keeping the API to vnet only, we are removing one more entry point. </p> <p>If it helps, we are deploying in Azure with AKS. I know they have some weird deployment things sometimes.</p>
<p>Pods running on the cluster can talk to each other using a <code>ClusterIP</code> service, which is the default service type. You don't need a <code>LoadBalancer</code> service to make two pods talk to each other. According to <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">the docs on this topic</a></p> <blockquote> <p>ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.</p> </blockquote> <p>As explained in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">the Discovery documentation</a>, if both Pods (frontend and API) are running on the same namespace, the frontend just needs to send requests to the name of the backend service.</p> <p>If they are running on different namespaces, the frontend API needs to use a fully qualified domain name to be able to talk with the backend.</p> <blockquote> <p>For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.</p> </blockquote> <p>You can find more info about how DNS works on kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">in the docs</a>.</p>
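<p>A minimal sketch of such a <code>ClusterIP</code> service for the backend (the names, labels and ports are assumptions); since <code>type</code> is omitted it defaults to ClusterIP, so no external IP is allocated:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: api-backend        # the frontend would call http://api-backend
  namespace: my-ns         # assumed namespace
spec:
  selector:
    app: api               # assumed label on the .NET Core pods
  ports:
  - port: 80               # port the frontend connects to
    targetPort: 5000       # assumed container port of the API
</code></pre> <p>One caveat: this only works for calls made from inside the cluster (e.g. server-side code or a reverse proxy). If the React code calls the API from the user's browser, the browser cannot resolve cluster DNS names, so those calls still need an externally reachable endpoint such as the load balancer or an ingress route.</p>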
<p>I have a landoop kafka image running on a Pod on minikube k8 cluster on my mac. I have 2 different services to expose the port 8081 for schema registry and 9092 for broker. I have mapped the ports 8081 -> 30081 and 9092 -> 30092 in my NodePort services so that I can access it from outside the cluster. But when I try to run a console consumer or my consumer app, Kafka never consumes messages. To verify broker 9092 port is reachable outside k8 cluster:</p> <pre><code>nc &lt;exposed-ip&gt; 30092, it says the port is open. </code></pre> <p>To verify Schema registry 8081 is reachable:</p> <pre><code>curl -X GET http://192.168.99.100:30081/subjects </code></pre> <p>It returns the schemas that are available.</p> <p>I had a couple of questions. 1) Can we not access Kafka out of k8 cluster in an above-mentioned way outside of k8 cluster?If so am I doing it wrong in some way? 2) If the port is open, doesn't that mean that broker is available?</p> <p>Any help is appreciated.Thanks</p>
<p>Accessing a Kafka cluster from outside a container network is rather complicated if you cannot route directly from the outside to the pod.</p> <p>When you first connect to a Kafka cluster, you connect to a single broker and the broker returns the list of all brokers and partitions inside the Kafka cluster. The Kafka client then uses that list to interact with the brokers where the specific topic lies. </p> <p>The problem is that the broker list contains, by default, the internal IP of each Kafka broker, which in your case would be the container network IP. You can overwrite this value by setting <code>advertised.listeners</code> inside each broker's configuration. </p> <p>To make a Kafka cluster available from outside Kubernetes you need to configure a NodePort service for each of your brokers and set the <code>advertised.listeners</code> setting of each broker to the external IP of the corresponding NodePort service. But note that this adds additional latency and failure points when you try to use Kafka from inside your Kubernetes cluster. </p>
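<p>A rough sketch of the per-broker wiring (the labels and port numbers are illustrative): one NodePort service that selects only broker 0, with that broker's advertised listener set to the address external clients will actually dial.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-0-external
spec:
  type: NodePort
  selector:
    app: kafka
    kafka-broker-id: "0"     # assumed per-broker label
  ports:
  - port: 9092
    targetPort: 9092
    nodePort: 30092          # what external clients connect to
</code></pre> <p>Broker 0 would then be configured with something like <code>advertised.listeners=PLAINTEXT://192.168.99.100:30092</code> (the minikube IP plus the nodePort), so that the metadata it returns points clients at an address they can reach; clients inside the cluster should keep using the internal service instead.</p>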
<p>I'm running a wordpress / woocommerce site on GCE Kubernetes and having trouble scaling because of session persistence.</p> <p>The LoadBalancer (GCE Ingress) sends all traffic to a reverse proxy that then sends the traffic to different services I have set up, one of which is wordpress. </p> <p>If I use <code>SessionAffinity: ClientIP</code> on the WordPress service all of the traffic goes to one pod and the others are ignored. This seems to because the service is seeing the LoadBalancer's ip address rather than the Client's. This is in spite of <code>externalTrafficPolicy: Local</code> set on both nginx reverse proxy and the wordpress NodePort services.</p> <p>I've also tried using the wordpress service as the default backend and I managed to get traffic to go to all pods but lost session affinity.</p> <p>The Ingress also performs TLS termination, which I've seen can effect ClientIP visibility, but I think that issue is resolved by the external traffic policy.</p> <p>We are also using Cloudflare, I'm wondering if that could have an effect. But we are using the ngx_http_realip_module to try to get the correct Client IP address. </p>
<p>I had a similar issue in one of the PHP services deployed in my cluster. Sessions are evil :) but sometimes you do need them. You can cluster your session data in PHP in a couple of ways, so that you do not need sticky sessions on the loadbalancer(s):</p>
<ul>
<li>a shared RWX volume in your pod(s) that keeps the session files available to all instances in your deployment. Unless you use something like S3 for WordPress uploads, you probably already do something similar for binaries, as I suggest here for session files.</li>
<li>a session handler with Memcached or Redis as the session storage (this is what I have now)</li>
<li>you can even keep sessions in MySQL, in the same database as WordPress, although I've seen that this can have a significant performance impact.</li>
</ul>
<p>You can find a simple memcached example <a href="http://php.net/manual/en/memcached.sessions.php" rel="noreferrer">here</a>. If you need clustered storage, you could look into Redis clustering, or, as I would, into Couchbase.</p>
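<p>For the Memcached variant the PHP side is just configuration; a minimal sketch (the service name <code>session-memcached</code> is an assumption) that would go into the <code>php.ini</code> of every web pod:</p>
<pre><code>; store sessions in a shared memcached service instead of local files
session.save_handler = memcached
session.save_path = "session-memcached:11211"
</code></pre>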
<p>I have 2 pods running on minikube. Pod A has to <code>expose 2 ports</code> 8081 and 9092 to both Pod B <code>inside the k8 cluster</code> and also <code>outside k8 cluster</code> so that pod B can reach them and also outside the cluster my consumer can reach them.So my question is <code>should</code> I have <code>2 services</code> created for each port? One to expose as cluster ip and one for node port? Or is there a way to have a single service created to expose this as both cluster IP and NodePort?</p> <p>Also if I have to expose my node port as a particular port, would specifying <code>nodePort</code> mess up with the cluster ip? </p>
<p>When you choose the <code>NodePort</code> service type, besides getting a cluster IP, Kubernetes will expose the service on a port on each node of the cluster (the same port on every node). So choosing NodePort gets you both: the ClusterIP and the port on each node. As said <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer">in the docs</a>:</p>
<blockquote>
<p>Note that this Service will be visible as both &lt;NodeIP&gt;:spec.ports[*].nodePort and spec.clusterIP:spec.ports[*].port.</p>
</blockquote>
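<p>As a sketch for the ports from your question (service and label names are made up), one NodePort service can expose both ports at once:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: pod-a
spec:
  type: NodePort
  selector:
    app: pod-a              # must match Pod A's labels
  ports:
  - name: schema-registry
    port: 8081              # cluster-internal port used by Pod B
    targetPort: 8081
    nodePort: 30081         # port exposed on every node for external clients
  - name: broker
    port: 9092
    targetPort: 9092
    nodePort: 30092
</code></pre>
<p>Specifying <code>nodePort</code> explicitly does not interfere with the cluster IP: Pod B keeps using <code>pod-a:8081</code> / <code>pod-a:9092</code>, while external clients use <code>&lt;node-ip&gt;:30081</code> / <code>&lt;node-ip&gt;:30092</code>.</p>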
<p>Is there a way to define in which interface Flannel should be listening? According to <a href="https://github.com/coreos/flannel/blob/master/Documentation/troubleshooting.md#vagrant" rel="noreferrer">his documentation</a> adding FLANNEL_OPTIONS="--iface=enp0s8" in /etc/sysconfig/flanneld should work, but it isn't. </p> <p>My master node configuration is running in a xenial(ubuntu 16.04) vagrant:</p> <pre><code>$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 10.0.0.10 $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml clusterrole "flannel" created clusterrolebinding "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created ubuntu@master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml clusterrole "flannel" configured clusterrolebinding "flannel" configured </code></pre> <p>Host ip addresses:</p> <pre><code>$ ip addr 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp0s3: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 02:63:8e:2c:ef:cd brd ff:ff:ff:ff:ff:ff inet 10.0.2.15/24 brd 10.0.2.255 scope global enp0s3 valid_lft forever preferred_lft forever inet6 fe80::63:8eff:fe2c:efcd/64 scope link valid_lft forever preferred_lft forever 3: enp0s8: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 08:00:27:fb:ad:bb brd ff:ff:ff:ff:ff:ff inet 10.0.0.10/24 brd 10.0.0.255 scope global enp0s8 valid_lft forever preferred_lft forever 4: docker0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:da:aa:6e:13 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 scope global docker0 valid_lft forever preferred_lft forever 5: flannel.1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1450 qdisc noqueue state UNKNOWN group default link/ether 5e:07:a1:7f:97:53 brd ff:ff:ff:ff:ff:ff inet 10.244.0.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::5c07:a1ff:fe7f:9753/64 scope link valid_lft forever preferred_lft forever 6: cni0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff inet 10.244.0.1/24 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::7037:fcff:fe41:b1fb/64 scope link valid_lft forever preferred_lft forever </code></pre> <p>Pods names:</p> <pre><code>$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-master 1/1 Running 0 2m kube-system kube-apiserver-master 1/1 Running 0 2m kube-system kube-controller-manager-master 1/1 Running 0 2m kube-system kube-dns-545bc4bfd4-gjjth 0/3 ContainerCreating 0 3m kube-system kube-flannel-ds-gdz8f 1/1 Running 0 1m kube-system kube-flannel-ds-h4fd2 1/1 Running 0 33s kube-system kube-flannel-ds-rnlsz 1/1 Running 1 33s kube-system kube-proxy-d4wv9 1/1 Running 0 33s kube-system kube-proxy-fdkqn 1/1 Running 0 3m kube-system kube-proxy-kj7tn 1/1 Running 0 33s kube-system kube-scheduler-master 1/1 Running 0 2m </code></pre> <p>Flannel Logs:</p> <pre><code>$ 
kubectl logs -n kube-system kube-flannel-ds-gdz8f -c kube-flannel I1216 12:00:35.817207 1 main.go:474] Determining IP address of default interface I1216 12:00:35.822082 1 main.go:487] Using interface with name enp0s3 and address 10.0.2.15 I1216 12:00:35.822335 1 main.go:504] Defaulting external address to interface address (10.0.2.15) I1216 12:00:35.909906 1 kube.go:130] Waiting 10m0s for node controller to sync I1216 12:00:35.909950 1 kube.go:283] Starting kube subnet manager I1216 12:00:36.987719 1 kube.go:137] Node controller sync successful I1216 12:00:37.087300 1 main.go:234] Created subnet manager: Kubernetes Subnet Manager - master I1216 12:00:37.087433 1 main.go:237] Installing signal handlers I1216 12:00:37.088836 1 main.go:352] Found network config - Backend type: vxlan I1216 12:00:37.089018 1 vxlan.go:119] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false I1216 12:00:37.295988 1 main.go:299] Wrote subnet file to /run/flannel/subnet.env I1216 12:00:37.296025 1 main.go:303] Running backend. I1216 12:00:37.296048 1 main.go:321] Waiting for all goroutines to exit I1216 12:00:37.296084 1 vxlan_network.go:56] watching for new subnet leases </code></pre> <p>How do I do to configure flannel in kubernetes <strong>to listen in enp0s8 instead of enp0s3</strong>?</p>
<p>I had the same problem trying to use k8s with Vagrant. I found this note in the <a href="https://github.com/coreos/flannel/blob/master/Documentation/troubleshooting.md#vagrant" rel="noreferrer">documentation of flannel</a>:</p>
<blockquote>
<p>Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address 10.0.2.15, is for external traffic that gets NATed.</p>
<p>This may lead to problems with flannel. By default, flannel selects the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the --iface eth1 flag to flannel so that the second interface is chosen.</p>
</blockquote>
<p>So I looked for it in the flannel pod configuration. If you download the <a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="noreferrer">kube-flannel.yml</a> file, look at the DaemonSet spec, specifically at the "kube-flannel" container. There, you should add the required "--iface=enp0s8" argument (don't forget the "="). This is the relevant part of the manifest I've used:</p>
<pre><code>      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s8
</code></pre>
<p>Then run <code>kubectl apply -f kube-flannel.yml</code>.</p>
<p>Hope it helps.</p>
<p>What's the best/easier way to connect an AWS RDS instance to my kubernetes cluster running on gcloud?</p>
<p>Likely the only way is to use the <code>public availability</code> option when creating the RDS instance. That is perfectly fine in practice: you get a DNS hostname that also fronts their load-balancing/failover feature.</p>
<p>Note that there are similar questions about connecting from within AWS which report DNS problems; those do not apply in your case. Ref: <a href="https://stackoverflow.com/questions/40277075/accessing-postgres-rds-from-kubernetes-cluster-in-aws">Accessing Postgres RDS from Kubernetes cluster in AWS</a></p>
<p>I run the following commands and when I check if the pods are running I get the following errors:</p> <blockquote> <p>Failed to pull image "tomcat": rpc error: code = Unknown desc = no matching manifest for linux/amd64 in the manifest list entries</p> </blockquote> <pre><code>kubectl run tomcat --image=tomcat --port 8080 </code></pre> <p>and</p> <blockquote> <p>Failed to pull image "ngnix": rpc error: code = Unknown desc = Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login'</p> </blockquote> <pre><code>kubectl run nginx3 --image ngnix --port 80 </code></pre> <p>I seen a post in git about how to complete this when private repos cause an issue but not public. Has anyone ran into this before?</p>
<p><strong>First Problem</strong></p>
<p>From a GitHub <a href="https://github.com/docker-library/official-images/issues/3835" rel="noreferrer">issue</a>:</p>
<blockquote>
<p>Sometimes, we'll have non-amd64 image build jobs finish before their amd64 counterparts, and due to the way we push the manifest list objects to the library namespace on the Docker Hub, that results in amd64-using folks (our primary target users) getting errors of the form "no supported platform found in manifest list" or "no matching manifest for XXX in the manifest list entries"</p>
</blockquote>
<p>The Docker Hub manifest list is not up-to-date with the <code>amd64</code> build for <code>tomcat:latest</code>.</p>
<p>Try another tag:</p>
<pre><code>kubectl run tomcat --image=tomcat:9.0 --port 8080
</code></pre>
<p><strong>Second Problem</strong></p>
<p>Use <code>nginx</code>, not <code>ngnix</code>. <em>It's a typo.</em></p>
<pre><code>$ kubectl run nginx3 --image nginx --port 80
</code></pre>
<p>I have spun up a 3 node Kubernetes cluster (version: 1.5.8) on AWS using the kube-up.sh script following this walkthrough:</p> <p><a href="https://ryaneschinger.com/blog/building-a-kubernetes-cluster-on-aws/" rel="nofollow noreferrer">https://ryaneschinger.com/blog/building-a-kubernetes-cluster-on-aws/</a></p> <p>I'm able to successfully access the cluster and view the UI. Output of <code>kubectl cluster-info</code> command:</p> <p><a href="https://i.stack.imgur.com/sJYhc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sJYhc.png" alt="enter image description here"></a></p> <p>I wrote a simple Spring Boot microservice:</p> <pre><code>@RestController public class AddCustomerController { private static final String template = "Customer %s is added."; @RequestMapping("/addcustomer") public Message addcustomer(@RequestParam(value="name") String name) { //Retrieve the hostname of the "node"/"container" String hostname = null; try { hostname = InetAddress.getLocalHost().getHostName(); } catch (UnknownHostException e) { e.printStackTrace(); } return new Message(ThreadLocalRandom.current().nextLong(), String.format(template, name), hostname); } } </code></pre> <p>and packaged it in a Docker container after the Gradle build and am able to successfully use it locally. I have pushed the image to DockerHub.</p> <pre><code>FROM openjdk:8-jdk-alpine VOLUME /tmp ADD build/libs/*.jar app.jar ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"] </code></pre> <p>Now I'm using Helm Charts to deploy this application to Kubernetes.</p> <p>Deployment descriptor:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: add-customer-deployment spec: replicas: 3 template: metadata: name: add-customer-microservice labels: app: add-customer spec: containers: - image: {{ .Values.dockerHubUsername }}/add-customer-microservice:latest name: add-customer-microservice imagePullPolicy: Always ports: - containerPort: 8080 </code></pre> <p>Service descriptor:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: add-customer-service spec: selector: app: add-customer ports: - port: 1000 protocol: TCP targetPort: 8080 name: access-port type: NodePort </code></pre> <p>I have followed the same procedure for 3 other similar Spring Boot microservices.</p> <p>Ingress descriptor:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: quantiphi-poc-ingress-dns annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: crud.qdatalabs.com http: paths: - path: /service1 backend: serviceName: add-customer-service servicePort: 1000 - path: /service1/* backend: serviceName: add-customer-service servicePort: 1000 - path: /service2 backend: serviceName: get-customer-service servicePort: 2000 - path: /service2/* backend: serviceName: get-customer-service servicePort: 2000 - path: /service3 backend: serviceName: update-customer-service servicePort: 3000 - path: /service3/* backend: serviceName: update-customer-service servicePort: 3000 - path: /service4 backend: serviceName: delete-customer-service servicePort: 4000 - path: /service4/* backend: serviceName: delete-customer-service servicePort: 4000 </code></pre> <p>First I install the nginx controller on my cluster using the Helm Charts:</p> <pre><code>helm install --name my-release stable/nginx-ingress </code></pre> <p>Then I install my Chart using:</p> <pre><code>helm install folder-conataining-helm-chart/ </code></pre> <p>Then I point the alias of crud.qdatalabs.com (Type A) from 
Route53 to the ELB spawned by the Ingress resource.</p> <p>The URL <a href="http://crud.qdatalabs.com/healthz" rel="nofollow noreferrer">crud.qdatalabs.com/healthz</a> is giving 200 OK response</p> <p>When I try to access the microservices using the URL <a href="http://crud.qdatalabs.com/service1/addcustomer?name=starman" rel="nofollow noreferrer">crud.qdatalabs.com/service1/addcustomer?name=starman</a></p> <p>I'm treated with the WhiteLabel Error Page:</p> <p><a href="https://i.stack.imgur.com/pu9Oc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pu9Oc.png" alt="enter image description here"></a></p> <p>I think I have made some configuration error, but can't put my finger on it. Please help me with any direction. I'll be happy to provide more details. Thank you.</p>
<p>As I stated in <a href="https://stackoverflow.com/questions/48452556/setting-up-a-kuberentes-cluster-with-http-load-balancing-ingress-for-rstudio-and/48452808#48452808">Setting up a Kuberentes cluster with HTTP Load balancing ingress for RStudio and Shiny results in error pages</a>, the most likely problem is that, with the ingress you attached, the URI your app sees is different than with direct access (<code>/service1/</code> vs <code>/</code>), so the app is lost and has no content for that URI.</p>
<p>With the Nginx Ingress Controller you can use the <code>ingress.kubernetes.io/rewrite-target: /</code> annotation to mitigate this and make sure that <code>/</code> is accessed even when there is a subfolder in the ingress path.</p>
<p>So, you either need to use the proper rewrite annotation or support the path you use in the ingress inside your service.</p>
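<p>A sketch of how that annotation would look on your ingress (only one backend shown for brevity):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: quantiphi-poc-ingress-dns
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: crud.qdatalabs.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: add-customer-service
          servicePort: 1000
</code></pre>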
<p><strong>Problem statement:</strong></p> <p>Azure disk dynamic Persistent Volume Claim: Mount failed with timeout expired. Pod is in “ContainerCreating” status for ever.</p> <p><code>kubectl describe pod myPod</code> gives following information:</p> <pre><code>Warning FailedMount 1m (x5 over 12m) kubelet, k8-node-2 Unable to mount volumes for pod "mongodb-76bd56459f-hxjdc_kubeapps(8189f2e4-0017-11e8-82ac-000d3aa33484)": timeout expired waiting for volumes to attach/mount for pod "kubeapps"/"mongodb-76bd56459f-hxjdc". list of unattached/unmounted volumes=[data] Warning FailedMount 21s (x8 over 12m) kubelet, k8-node-2 (combined from similar events): MountVolume.SetUp failed for volume "pvc-516aeece-ff9d-11e7-82ac-000d3aa33484" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8189f2e4-0017-11e8-82ac-000d3aa33484/volumes/kubernetes.io~azure-disk/pvc-516aeece-ff9d-11e7-82ac-000d3aa33484 --scope -- mount -t ext4 -o bind /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m3083936425 /var/lib/kubelet/pods/8189f2e4-0017-11e8-82ac-000d3aa33484/volumes/kubernetes.io~azure-disk/pvc-516aeece-ff9d-11e7-82ac-000d3aa33484&lt;br&gt; Output: Running scope as unit run-rf9126bab6fba44d9a499370260ed5fe8.scope. mount: special device /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/m3083936425 does not exist </code></pre> <p><strong>Kubernetes Cluster info:</strong></p> <p>It is a bare-metal installation with One Master and Two minions. All three Ubuntu 16.04 LTS VMs are on Azure. Cluster is created with “kubeadm”.</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p><strong>Further useful information from my own investigation:</strong></p> <p><strong>1.</strong> PVC and PV are created and bound. See below:</p> <pre><code>~$ kubectl -n kubeapps get pvc NAME|STATUS|VOLUME|CAPACITY|ACCESS|MODES|STORAGECLASS|AGE mongodb-data|Bound|pvc-516aeece-ff9d-11e7-82ac-000d3aa33484|8Gi||RWO|k8storage |14h </code></pre> <hr> <pre><code>~$ kubectl -n kubeapps get pv NAME|CAPACITY|ACCESS|MODES|RECLAIM POLICY|STATUS|CLAIM|STORAGECLASS|REASON|AGE pvc-516aeece-ff9d-11e7-82ac-000d3aa33484|8Gi|RWO|Retain|Bound|kubeapps/mongodb-data|k8storage| |14h </code></pre> <p><strong>2.</strong> Azure managed disks are automatically created and attached to the appropriate node. See screenshots from Azure portal: <a href="https://i.stack.imgur.com/3X4Nu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3X4Nu.jpg" alt="enter image description here"></a></p> <p>Thanks in advance!!!</p>
<p>Based on the information provided, the next step is to look at the issue from the kubelet's point of view (<code>kubectl get events</code>, <code>journalctl -u kubelet</code>) as well as at possible operating system issues, including the interaction with Azure, via <code>journalctl -p 3</code>; the <code>-p 3</code> option filters for major issues only (<code>-p</code> can go from 0 to 7).</p>
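<p>In practice that boils down to something like the following (namespace taken from your pod description):</p>
<pre><code># events for the namespace, newest last
kubectl get events -n kubeapps --sort-by=.metadata.creationTimestamp

# kubelet's view of the attach/mount attempts on the node (k8-node-2)
journalctl -u kubelet --no-pager | grep -i azure

# OS-level errors only (priority 0-3)
journalctl -p 3 --no-pager
</code></pre>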
<p>I have problems to delete automatically the completed jobs which were started by a CronJob. As I read in this QA <a href="https://stackoverflow.com/a/43116061/5843405">here</a> and <a href="https://stackoverflow.com/questions/36384873/kubernetes-job-cleanup?noredirect=1&amp;lq=1">here</a>, and in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#jobs-history-limits" rel="noreferrer">official documentation</a>, this can be done establishing the job history limit <code>.spec.successfulJobsHistoryLimit</code> and <code>.spec.failedJobsHistoryLimit</code>. I have done it, and it is established to 1 for both cases. But my cluster is saving more than 1 completed job, in fact, it is saving more than the default which is 3 successful jobs and 1 for failed jobs.</p> <p>My current CronJob configuration is</p> <pre><code> apiVersion: batch/v1beta1 kind: CronJob metadata: name: some-name-cron spec: schedule: "00 05 1/1 * ?" successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 1 jobTemplate: spec: ... </code></pre> <p>And my versions are: - Client version: <code>Client Version: version.Info{Major:"1", Minor:"9",</code> - Server version: <code>Server Version: version.Info{Major:"1", Minor:"8+",</code></p> <p>So, what I am doing wrong?</p> <p>I've attached a image with the k8s dashboard.</p> <p><a href="https://i.stack.imgur.com/WdAcw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/WdAcw.png" alt="Dashboard screenshot with the completed Jobs and Pods"></a></p>
<p>You can use jobs history limits <strong>successfulJobsHistoryLimit</strong> and <strong>failedJobsHistoryLimit</strong> to set how many completed and failed jobs should be kept.</p> <p>You can check for more details on it in the Kubernetes docs here:</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#jobs-history-limits" rel="noreferrer" title="Kubernetes - Cronjob: Jobs History Limits">Kubernetes - Cronjob: Jobs History Limits</a></p>
<p>I have received a public ip address for my kubernetes service which i can configure as a loadbalancer ip in my NGINX ingress. This public ip address can be accessed from public internet.</p> <p>Is there a way or some configuration through which i can make these services accessible only from my client network in kubernetes?</p>
<p>With the Kubernetes Nginx Ingress it is as simple as setting an annotation on your ingress object, like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: '8.8.8.8/32'
</code></pre>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md#whitelist-source-range" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md#whitelist-source-range</a></p>
<p>I was trying to modify my ingress as my ingress controller and ingress service are runnign in nginx-ingress name space.</p> <p>I tried to to use: <code>kubectl edit ing lhingress</code> where ingress name is lhingress. It opens up an editor but whne i try to modify and then save it. i get below error.</p> <pre><code>error: the namespace from the provided object "nginx-ingress" does not match the namespace "default". You must pass '--namespace=nginx-ingress' to perform this operation. </code></pre> <p>Could you please suggest me how to point this ingress to different namespace</p>
<p>The error just means kubectl is looking in the <code>default</code> namespace; to edit the ingress where it already lives, pass the namespace explicitly (see the commands below). If you actually want to move it, you cannot change the namespace of an existing object: you need to create a new resource in the other namespace and delete the old one.</p>
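<p>Concretely, a sketch of both options:</p>
<pre><code># edit the ingress where it currently lives
kubectl edit ing lhingress -n nginx-ingress

# or, to really move it to another namespace: export, adjust, recreate
kubectl get ing lhingress -n nginx-ingress -o yaml > lhingress.yaml
# change metadata.namespace in lhingress.yaml, then:
kubectl create -f lhingress.yaml
kubectl delete ing lhingress -n nginx-ingress
</code></pre>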
<p>I create a service which type is loadbalancer, kubernetes create an ELB and listen protocol is <code>tcp 80---&gt;tcp 80</code>, then I modified the ELB by changing the protocol to <code>http 80---&gt;http 80</code>, but after a few days, I find that the protocol went back to <code>tcp 80---&gt;tcp 80</code> again, how could this happen? Any answers would be appreciated </p>
<p>By default Kubernetes configures the load balancer listeners with the TCP protocol. If you modify them manually, Kubernetes will change them back automatically. If you want to keep the HTTP protocol, add the annotation below to your service YAML:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
</code></pre>
<p>How is concurrency managed in volumes when the volumes are shared by multiple containers?</p>
<p>It is managed the same way a shared path / directory / file is managed by multiple processes / applications without containers.</p>
<ol>
<li>Avoid parallel / simultaneous writes using locks, semaphores, mutexes and mutual exclusion (see the sketch after this list).</li>
<li>Be careful about stale / old data when reading from a shared volume.</li>
<li>Keep in mind things like eventual consistency, volume backups and data migration, since the volume is shared by multiple containers.</li>
<li>Make sure that the shared volume does not get corrupted. If it does, have a recovery plan.</li>
<li>Use dedicated storage servers so that containers can move around the cluster (<strong><code>hostPath</code></strong> and <strong><code>emptyDir</code></strong> in Kubernetes do not move with pods).</li>
<li>The underlying OS, disk, storage software / driver, LVM and filesystem will also handle IO concurrency.</li>
</ol>
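<p>As a tiny sketch of point 1 (assuming a shared volume mounted at <code>/shared</code> in every container), plain file locks are often enough to serialize writers:</p>
<pre><code># each container wraps its writes in the same advisory lock
flock /shared/.lock -c 'echo "$(date) $(hostname)" >> /shared/data.log'
</code></pre>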
<p>I am new to Kubernetes and Nginx Ingress tools and now i am trying to host MySql service using VHost in Nginx Ingress on AWS. I have created a file something like :</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mysql labels: app: mysql spec: type: NodePort ports: - port: 3306 protocol: TCP selector: app: mysql --- apiVersion: apps/v1beta2 kind: Deployment metadata: name: mysql labels: app: mysql spec: replicas: 1 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql imagePullPolicy: IfNotPresent env: - name: MYSQL_ROOT_PASSWORD value: password ports: - name: http containerPort: 3306 protocol: TCP --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: mysql labels: app: mysql annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - host: mysql.example.com http: paths: - path: / backend: serviceName: mysql servicePort: 3306 </code></pre> <p>My LoadBalancer (created by Nginx Ingress) port configuration looks like :</p> <pre><code>80 (TCP) forwarding to 32078 (TCP) Stickiness options not available for TCP protocols 443 (TCP) forwarding to 31480 (TCP) Stickiness options not available for TCP protocols </code></pre> <p><strong>mysql.example.com</strong> is pointing to my ELB.</p> <p>I was expecting something like, from my local box i can connect to MySql if try something like :</p> <pre><code>mysql -h mysql.example.com -u root -P 80 -p </code></pre> <p>Which is not working out. Instead of <strong>NodePort</strong> if i try with <strong>LoadBalancer</strong>, its creating a new ELB for me which is working as expected. </p> <p>I am not sure if this is right approach for what i want to achieve here. Please help me out if there is a way for achieving same using the Ingress with NodePort. </p>
<p>Kubernetes Ingress as a generic concept does not solve the issue of exposing/routing TCP/UDP services. As stated in <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md</a>, you should use the dedicated ConfigMaps if you want that with this ingress controller. Also keep in mind that it will never use the hostname for routing, since that is a feature of HTTP, not TCP.</p>
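<p>For the nginx ingress controller this boils down to a ConfigMap entry instead of an Ingress rule. A sketch for your MySQL service (the ConfigMap name and namespace depend on how the controller was deployed, and the controller must be started with the <code>--tcp-services-configmap</code> flag pointing at it):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # exposed port : namespace/service:port
  "3306": "default/mysql:3306"
</code></pre>
<p>The controller's ELB then also needs to forward port 3306, and clients connect with <code>mysql -h mysql.example.com -P 3306</code> rather than port 80; the hostname part of the Ingress plays no role for plain TCP.</p>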
<p>I am trying to understand different <strong>access modes for Persistent Volume Claims in Openshift</strong>. Found the following information from <a href="https://docs.openshift.com/container-platform/3.6/architecture/additional_concepts/storage.html#pv-access-modes" rel="nofollow noreferrer">here</a></p> <pre><code>Access Mode CLI Abbreviation Description ReadWriteOnce RWO The volume can be mounted as read-write by a single node. ReadOnlyMany ROX The volume can be mounted read-only by many nodes. ReadWriteMany RWX The volume can be mounted as read-write by many nodes. </code></pre> <p>I know that PVC are bound to single project/namespace and can be extended to different projects as well.</p> <p>But <strong>the confusion here is, what does "single node" or "many nodes" mean here</strong>. For example, in <code>RWO</code> mode, <code>"The volume can be mounted as read-write by a single node"</code>. <strong>What node it is referring to</strong>.</p> <p>Can someone give me the <strong><em>significance of these modes with respect to single project/namespace</em></strong>. <strong>Does the storage with RWO can have write permission for only one application or all the applications within the project</strong>?</p>
<p>The whole RWO vs RWX concept is related to the issue of mounting the same filesystem on multiple hosts, which requires support for things like distributed locking. There are specific implementations that can handle this, like NFS, Ceph, GlusterFS etc., generally network/cluster oriented filesystems. Other filesystems are unable to operate correctly if you try to mount them on different servers at the same time (usually they will just not allow this).</p>
<p>So, node, in this case, means a particular Kubernetes cluster node (be it a bare-metal server or a VM). But, by extension, you should think about it in the scope of a pod as well, because in most cases pods can spin up on different nodes, meaning they could not use the same volume, or you cannot assume that this volume will have a coherent shared state, as would happen when using HostPath volumes, which are unique per node in the cluster.</p>
<p>To clarify for the question below:</p>
<p>RWO volumes have a 1:1 relation to a pod in general. While in some cases you can define RWO volumes that point to the same physical resource, like hostPath, technically they will always be tightly coupled to exclusively one pod. This is especially visible if you use PersistentVolume / PersistentVolumeClaim objects, which take these restrictions into account when binding a PV to a PVC. <strong>Only RWX volumes give you storage shared by multiple pods with all pods being able to write to it</strong>.</p>
<p>I have two simple Springboot microservice which connect with each other using Spring Eureka.</p> <p>Steps -</p> <ol> <li>First I run Eureka server. </li> <li>Then I run both the microservices.</li> <li>So both microservices register &amp; discovery from Eureka server.</li> </ol> <p>I want to achieve the same in Openshift v3. I know Openshift uses Kubernates Service for achieving load-balancing &amp; pod-discovery. But can I use Eureka server in Openshift?</p> <p>In Openshift I have 3 pods..</p> <ol> <li>1 pod for eureka and 2 pods for microservices. </li> <li>Both microservices register with eureka.</li> <li><p>But in Eureka, it is registering as microservice's pod IP:PORT.</p></li> <li><p>So when discovering the microservice tries to make the call to POD IP &amp; fails.</p></li> </ol> <p>Generally, to access POD IP we need to invoke service layer in Openshift. So how can I make eureka server register server layer IP:PORT instead of POD's IP:PORT</p>
<p>For the Spring Cloud Eureka Server project: application.yml</p>
<pre><code>server:
  port: 8761
eureka:
  instance:
    hostname: server.eureka.svc   # it should be the service URL in the OpenShift cluster
  client:
    fetch-registry: false
    register-with-eureka: false
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
</code></pre>
<p>For the Spring Cloud Eureka Client project: application.yml</p>
<pre><code>eureka.client.serviceUrl.defaultZone=http://server.eureka.svc:8761/eureka/
eureka.instance.preferIpAddress=false
</code></pre>
<p>I am setting some properties in configmap on basis of some flags' values. To achieve this I am using "if/else" conditions in my configmap.yaml. But these "if" conditions are working fine if there is only one expression inside "if" block but for multiple expression clubbed with "or" and "and" is being evaluated incorrectly.</p> <p><strong>configmap.yaml:</strong></p> <pre><code>{{- else if (eq .Values.A "oracle") and (eq .Values.B "true") or (eq .Values.A "postgresql") }} </code></pre> <p>The above condition is being evaluated to false however it was supposed to be evaluated as true because the last condition is true. </p> <pre><code>MYPROP1 = {{ (eq .Values.A"oracle") and (eq .Values.B "true") or (eq .Values.databaseType "postgresql") }} ==&gt;printing false MYPROP2 = {{ (eq .Values.A"oracle") and (eq .Values.B "true") }} ==&gt;printing false MYPROP3 = {{ (eq .Values.A"postgresql") }} ===&gt; printing true </code></pre>
<p>The expression is evaluated left to right and exits as soon as the <code>and</code> operator is evaluated as <code>false</code>. The <code>or</code> expression will never be evaluated.</p>
<p>You can achieve the behaviour you expect by using parentheses:</p>
<pre><code>((eq .Values.A "oracle") and (eq .Values.B "true")) or (eq .Values.A "postgresql")
</code></pre>
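<p>If the infix style keeps surprising you, note that in Go templates <code>and</code> and <code>or</code> are really functions, so the same condition can also be written in prefix form, which leaves no room for ambiguity (sketch):</p>
<pre><code>{{- else if or (and (eq .Values.A "oracle") (eq .Values.B "true")) (eq .Values.A "postgresql") }}
</code></pre>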
<p>I want to see all the resource groups in my cluster(s). Is there a way to use <code>az aks</code> to get all the credentials of the cluster(s) without mentioning the names of the resource-group/cluster?</p> <p>I tried <code>az aks get-credentials</code>, <code>az aks show</code>, <code>az aks list</code> but they all require the resource group name and I don't know it.</p> <p>Any help is greatly appreciated.</p> <p><strong>Update:</strong> I have found a way to list all resources in my azure subscription: <code>az resource list</code>. I can see some resources here that when i try to find using `az</p>
<p>After a lot of digging: you can find the list of resource groups using the command <code>az group list</code>. Note: it's useful to go through the <code>-h</code> help of <code>az</code> to see what it has to offer :)</p>
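<p>Putting it together, a sketch of the full flow:</p>
<pre><code># list every resource group in the current subscription
az group list -o table

# list the AKS clusters in a given group and pull their credentials
az aks list -g &lt;resource-group&gt; -o table
az aks get-credentials -g &lt;resource-group&gt; -n &lt;cluster-name&gt;
</code></pre>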
<p>Made my way into kubernetes through GKE, currently trying out via kubeadm on bare metal.</p> <p>In the later environment, there is no need of any specific load balancer; using nginx-ingress and ingresses let one serve service to the www.</p> <p>Oppositely, on gke, using the same nginx-ingress, or using the gke provided l7, you always end up with a billed load balancer.</p> <p>What's the reason about that, as it seemed not to be ultimately needed ?</p>
<p>(Reposting my comment above) </p> <p>In general, when one is receiving traffic from the outside world, that traffic is being sent to one or more non-ACLd public IP addresses. </p> <p>If you run k8s on bare metals, those BMs can have public IPs, and you can just run ingress on one or more of them. </p> <p>A managed k8s environment, however, for security reasons, will not permit nodes to have public IPs. </p> <p>Instead, managed load balancers are allowed to have public IPs. Those are configured to know the private node IPs hosting ingress for your cluster and will direct traffic accordingly. </p>
<p>Do you guys know if a <code>ClusterIP</code> service distributes the workload between the target deployment replicas?</p> <p>I have 5 replicas of a backend with a ClusterIP service selecting them. I also have another 5 replicas of nginx pod pointing to the this back end deployment. But when I run a heavy request the back end stops responding other requests until it finishes the heavy one.</p> <h1>Update</h1> <p>Here is my configuration:</p> <blockquote> <p>Note: I've replaced some info that are related to the company.</p> </blockquote> <h2>Content provider deployment:</h2> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: webapp tier: frontend spec: containers: - name: python-gunicorn image: &lt;my-user&gt;/webapp:1.1.2 command: ["/env/bin/gunicorn", "--bind", "0.0.0.0:8000", "main:app", "--chdir", "/deploy/app", "--error-logfile", "/var/log/gunicorn/error.log", "--timeout", "7200"] resources: requests: # memory: "64Mi" cpu: "0.25" limits: # memory: "128Mi" cpu: "0.4" ports: - containerPort: 8000 imagePullPolicy: Always livenessProbe: httpGet: path: /login port: 8000 initialDelaySeconds: 30 timeoutSeconds: 1200 imagePullSecrets: # NOTE: the secret has to be created at the same namespace level on which this deployment was created - name: dockerhub </code></pre> <h2>Content provider service:</h2> <pre><code>apiVersion: v1 kind: Service metadata: name: frontend labels: app: webapp tier: frontend spec: # type: LoadBalancer ports: - port: 8000 targetPort: 8000 selector: app: webapp tier: frontend </code></pre> <h2>Nginx deployment:</h2> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-deployment spec: replicas: 5 template: metadata: labels: app: nginx spec: volumes: - name: secret-volume secret: secretName: nginxsecret - name: configmap-volume configMap: name: nginxconfigmap containers: - name: nginxhttps image: ymqytw/nginxhttps:1.5 command: ["/home/auto-reload-nginx.sh"] ports: - containerPort: 443 - containerPort: 80 livenessProbe: httpGet: path: /index.html port: 80 initialDelaySeconds: 30 timeoutSeconds: 1200 resources: requests: # memory: "64Mi" cpu: "0.1" limits: # memory: "128Mi" cpu: "0.25" volumeMounts: - mountPath: /etc/nginx/ssl name: secret-volume - mountPath: /etc/nginx/conf.d name: configmap-volume </code></pre> <h2>Nginx service:</h2> <pre><code>apiVersion: v1 kind: Service metadata: name: nginxsvc labels: app: nginxsvc spec: type: LoadBalancer ports: - port: 80 protocol: TCP name: http - port: 443 protocol: TCP name: https selector: app: nginx </code></pre> <h2>Nginx config file:</h2> <pre><code>server { server_name local.mydomain.com; rewrite ^(.*) https://local.mydomain.com$1 permanent; } server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; listen 443 ssl; root /usr/share/nginx/html; index index.html; keepalive_timeout 70; server_name www.local.mydomain.com local.mydomain.com; ssl_certificate /etc/nginx/ssl/tls.crt; ssl_certificate_key /etc/nginx/ssl/tls.key; location / { proxy_pass http://localhost:8000; proxy_connect_timeout 7200; proxy_send_timeout 7200; proxy_read_timeout 7200; send_timeout 7200; } } </code></pre>
<p>Yes, service type <code>ClusterIP</code> uses <code>kube-proxy</code>'s <code>iptables</code> rules to distribute the requests roughly evenly in a <code>round robin</code> manner.</p> <p>The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace" rel="noreferrer">documentation</a> says:</p> <blockquote> <p>By default, the choice of backend is round robin.</p> </blockquote> <p>Although, the <code>round robin</code> distribution of requests may be affected by things like:</p> <ol> <li>Busy backends</li> <li>Sticky Sessions</li> <li>Connection-based (if a backend pod has established a TCP session or a secure tunnel with the user hitting the <code>ClusterIP</code> multiple times)</li> <li>Custom host-level / node-level <code>iptables</code> rules outside kubernetes</li> </ol>
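<p>If you want to see this for yourself, the NAT rules kube-proxy writes on each node list one rule per backend pod, each picked with a random probability (sketch, assuming the <code>frontend</code> service from your manifest lives in the <code>default</code> namespace):</p>
<pre><code># on any node
sudo iptables -t nat -S | grep 'default/frontend'
</code></pre>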
<p>i'm trying to test the <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume" rel="noreferrer">local persistent volume</a> in kubernetes v1.9.2.</p> <p>from what i gather (and i may be wrong!) i cannot use <code>kubeadm</code> to add these feature gates:</p> <pre><code>$ sudo kubeadm version kubeadm version: &amp;version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} $ kubeadm init --help ... --feature-gates string A set of key=value pairs that describe feature gates for various features. Options are: CoreDNS=true|false (ALPHA - default=false) DynamicKubeletConfig=true|false (ALPHA - default=false) SelfHosting=true|false (ALPHA - default=false) StoreCertsInSecrets=true|false (ALPHA - default=false) ... </code></pre> <p>sooo... i do a normal <code>kubeadm</code> init and then proceed to hack:</p> <pre><code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf </code></pre> <p>with </p> <pre><code>Environment="KUBELET_FEATURE_GATES_ARGS=--feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true" ExecStart= ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS $KUBELET_FEATURE_GATES_ARGS </code></pre> <p>and reload/restart <code>kubelet</code>.</p> <p>okay... let try creating the pv:</p> <pre><code>$ cat local-pv.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-local-pv annotations: "volume.alpha.kubernetes.io/node-affinity": '{ "requiredDuringSchedulingIgnoredDuringExecution": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "kubernetes.io/hostname", "operator": "In", "values": ["dhcp-nebula-129-230"] } ]} ]} }' spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage local: path: /mnt/disks/fs2 $ kubectl create -f local-pv.yaml The PersistentVolume "example-local-pv" is invalid: * metadata.annotations: Forbidden: Storage node affinity is disabled by </code></pre> <p>feature-gate * spec.local: Forbidden: Local volumes are disabled by feature-gate</p> <p>aha! i say... i have to change the <code>kube-apiserver</code> too!</p> <p>so i edit <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> and append the following to the <code>Command</code>:</p> <pre><code>--feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true </code></pre> <p>and the api server dies and i'm stuck without <code>kubectl</code> as the <code>kubeapi-server</code> dies. :( </p> <p>help?</p>
<p>You need to set the feature gates on api, scheduler and controller in a kubeadm config:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha1 kind: MasterConfiguration apiServerExtraArgs: service-node-port-range: 80-32767 feature-gates: "PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true" controllerManagerExtraArgs: feature-gates: "PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true" schedulerExtraArgs: feature-gates: "PersistentLocalVolumes=true,VolumeScheduling=true,MountPropagation=true" </code></pre> <p>Storage class example:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: mysql-data provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer </code></pre> <p>PVC example:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: labels: app: mariadb name: mysql-mariadb-0 namespace: mysql spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi storageClassName: mysql-data selector: matchLabels: pod-name: mariadb-0 </code></pre> <p>PV example:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mysql-mariadb-0 labels: pod-name: mariadb-0 annotations: "volume.alpha.kubernetes.io/node-affinity": '{ "requiredDuringSchedulingIgnoredDuringExecution": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "kubernetes.io/hostname", "operator": "In", "values": ["prod-mysql-0"] } ]} ]}}' spec: capacity: storage: 2Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: mysql-data local: path: /mnt/local-storage/mysql-data-0 </code></pre>
<p>I have created pods using below yaml.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: kubia-liveness spec: containers: - image: luksa/kubia-unhealthy name: kubia livenessProbe: httpGet: path: / port: 8080 </code></pre> <p>Then I created pods using the below command.</p> <pre><code>$ kubectl create -f kubia-liveness-probe.yaml </code></pre> <p>It created a pod successfully.</p> <p>Then I'm trying to create load balancer service to access from the external world. For that I'm using the below command.</p> <pre><code>$ kubectl expose rc kubia-liveness --type=LoadBalancer --name kubia-liveness-http </code></pre> <p>For this, I'm getting below error.</p> <pre><code>Error from server (NotFound): replicationcontrollers "kubia-liveness" not found </code></pre> <p>I'm not sure how to create replicationControllers. Could anybody please give me the command to do the same.</p>
<p>You are mixing two approaches here. One is creating objects from a YAML definition, which is fine by itself (but bear in mind that it is quite rare to create a bare Pod rather than a Deployment or ReplicationController). The other is exposing via the CLI, which makes some assumptions (i.e. <code>kubectl expose rc</code> expects a ReplicationController with that name to exist) and creates the appropriate Service based on them. My suggestion would be to create the Service from a YAML manifest as well, so you can tailor it to fit your case.</p>
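<p>A sketch of such a manifest for your case; note that the pod as defined above has no labels, so you would first add one (for example <code>app: kubia-liveness</code>) to its metadata so the Service selector has something to match:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kubia-liveness-http
spec:
  type: LoadBalancer
  selector:
    app: kubia-liveness   # the label you add to the pod
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>Then create it the same way you created the pod: <code>kubectl create -f kubia-liveness-service.yaml</code>.</p>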
<p>I'm running Kubernetes in a GKE cluster and need to run a DB migration script on every deploy. For staging this is easy: we have a permanent, separate MySQL service with its own volume. For production however we make use of GCE SQL, resulting in the job having two containers - one more for the migration, and the other for Cloud Proxy.</p> <p>Because of this new container, the job always shows as 1 active when running <code>kubectl describe jobs/migration</code> and I'm at a complete loss. I have tried re-ordering the containers to see if it checks one by default but that made no difference and I cannot see a way to either a) kill a container or b) check the status of just one container inside the Job.</p> <p>Any ideas? </p>
<p>I know it's a year too late, but best practice would be to run a single Cloud SQL proxy service for all of the app's purposes, and then configure DB access in the app's image to use this service as the DB hostname.</p>
<p>This way you do not need to put a Cloud SQL proxy container into every pod that uses the DB.</p>
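<p>A rough sketch of that shared proxy (image tag, instance connection string and secret name are placeholders you would replace with your own):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=MY_PROJECT:MY_REGION:MY_INSTANCE=tcp:0.0.0.0:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-credentials
        secret:
          secretName: cloudsql-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy
spec:
  selector:
    app: cloudsql-proxy
  ports:
  - port: 3306
    targetPort: 3306
</code></pre>
<p>Both the application pods and the migration job then use <code>cloudsql-proxy:3306</code> as the DB host, the job goes back to being a single-container pod, and its completion status behaves as expected again.</p>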
<p>I would like to be able to automate deployments to my Kubernetes cluster using Helm charts executed by Jenkins (as part of the build cycle). The Jenkins machine is on a separate network to the Kubernetes cluster (rather than part of it as documented in numerous blogs).</p> <p>I have a chart repo hosted inside a private GitHub account. I followed the process here: <a href="https://hackernoon.com/using-a-private-github-repo-as-helm-chart-repo-https-access-95629b2af27c" rel="noreferrer">https://hackernoon.com/using-a-private-github-repo-as-helm-chart-repo-https-access-95629b2af27c</a> and was able to add it as a repo in Helm on an Azure server using a command of the format:</p> <pre><code>helm repo add sample 'https://[email protected]/kmzfs/helm-repo-in-github/master/' </code></pre> <p>I've been trying to get the ElasticBox Kubernetes CI/CD (v1.3) plugin inside Jenkins to connect to this chart repo, but whenever I press "Test Connection", I get a 400 Bad Request error. I have tried to enter the details in a variety of ways:</p> <ol> <li>Using the same format (and token) as above and no credentials</li> <li>Using the private token (same as in the query above) in the credentials, and the url of <a href="https://raw.githubusercontent.com/kmzfs/helm-repo-in-github/master/" rel="noreferrer">https://raw.githubusercontent.com/kmzfs/helm-repo-in-github/master/</a></li> <li>Using my username and password in the credentials, and the url of <a href="https://raw.githubusercontent.com/kmzfs/helm-repo-in-github/master/" rel="noreferrer">https://raw.githubusercontent.com/kmzfs/helm-repo-in-github/master/</a> </li> </ol> <p>I have this plugin to connect to the Kubernetes Cloud and it can connect to the repository at <a href="https://github.com/helm/charts" rel="noreferrer">https://github.com/helm/charts</a> and deploy a RabbitMQ container.</p> <p>Is it possible to get this plugin to connect to a private Github repository as a chart repository, and if so, how do I go about doing so?</p> <p>If not, is there an alternative means of deploying Helm charts (in a private repo) from Jenkins? I couldn't find any other plugins that used Helm.</p> <p>Thanks Duncan</p>
<p>What we do in our CI is skip any funky Jenkins plugins entirely and just go for the native tooling. We bake kubectl/helm into the Jenkins worker image and provide credentials to them so they can speak to the cluster, then we take the private Helm chart not from a published chart repository but directly from the private git repo holding that chart. Then we simply run <code>helm</code> against this locally cloned chart with a usual script step.</p>
<p>Example of the kubeconfig part defining the CA cert (related to the comment below):</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: &lt;base64 ca cert&gt;
    server: https://cluster_api_url
  name: mycluster
</code></pre>
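<p>The script step itself then stays very plain; a sketch (repository URL, release, chart and namespace names are placeholders):</p>
<pre><code># clone the private repo that holds the chart (using Jenkins credentials / a deploy key)
git clone git@github.com:myorg/helm-charts.git

# point helm/kubectl at the cluster and install or upgrade the release
export KUBECONFIG=$WORKSPACE/kubeconfig
helm upgrade --install my-release ./helm-charts/mychart \
  --namespace my-namespace \
  --set image.tag=$BUILD_NUMBER
</code></pre>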
<p>I am new to kubernetes and have started building a cluster on RHEL 7.3. I have kubernetes cluster setup on the master. When i try to use</p> <pre><code>kubeadm join --token ancdbs.askdcvasdckasd x.x.x.:6443 </code></pre> <p>the node is not able to join with following error message:</p> <pre><code>[discovery] Trying to connect to API Server "10.26.24.37:6443" [discovery] Created cluster-info discovery client, requesting info from "https://10.26.24.37:6443" [discovery] Failed to connect to API Server "10.26.24.37:6443": there is no JWS signed token in the cluster-info ConfigMap. This token id "a7d815" is invalid for this cluster, can't connect </code></pre> <p>Did notice this being listed as an issue in version 1.7 I am currently on versions shown below.</p> <pre><code>Kubeadm version node: kubeadm version: &amp;version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T22:55:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Kubeadm version master: kubeadm version: &amp;version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T22:55:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I also have seen some mention about workarounds...would like to know.</p> <p>How the work around can be implemeted, if possible.</p>
<p>In my case, the token had <strong>expired</strong>.</p>
<p>By default, tokens expire 24 hours after creation, unless you explicitly set the TTL value for the token when it is created.</p>
<p>If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the master node:</p>
<pre><code>kubeadm token create
</code></pre>
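<p>A quick way to check and fix it on the master (sketch):</p>
<pre><code># see existing tokens and their expiry
kubeadm token list

# create a fresh token and use it on the node
kubeadm token create
kubeadm join --token &lt;new-token&gt; 10.26.24.37:6443
</code></pre>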
<p>I am getting the following error when I run</p> <pre><code>kubectl create -f pod.yaml </code></pre> <p>error</p> <pre><code>Error from server (BadRequest): error when creating &quot;pod.yaml&quot;: Pod in version &quot;applicant:v1&quot; cannot be handled as a Pod: no kind &quot;Pod&quot; is registered for version &quot;applicant:v1&quot; </code></pre> <p>minikube is up and running and I even tried to change it to <code>kind: Deployment</code> but I got another error saying:</p> <pre><code>error: unable to recognize &quot;pod.yaml&quot;: no matches for /, Kind=Deployment </code></pre> <p>yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: customer-applicant labels: app: applicant-vue spec: replicas: 1 selector: matchLabels: app: applicant-vue template: metadata: labels: app: applicant-vue spec: containers: - name: api-applicant image: api-applicant ports: - containerPort: 8080 - containerPort: 8000 resources: {} volumeMounts: - mountPath: /usr/local/tomcat/logs name: api-applicant-claim # import - name: applicant-import image: applicant-import resources: {} # cache - name: cache image: cache resources: limits: memory: &quot;536870912&quot; # storage - name: storage image: storage ports: - containerPort: 7000 - containerPort: 7001 - containerPort: 7199 - containerPort: 9042 - containerPort: 9160 resources: {} # view - name: view image: view ports: - containerPort: 3000 resources: {} volumes: - name: applicant-claim persistentVolumeClaim: claimName: applicant-claim # tomcat - name: tomcat image: tomcat # node - name: node image: node resources: {} # openJdk - name: node - image: node resources: {} </code></pre>
<p>You have a lot of issues here. I've described some of them:</p> <p><strong>1.</strong> <code>pod.yaml</code> file's structure is a structure of a <code>Deployment</code> object.</p> <p><strong>2.</strong> <code>apiVersion</code> for <code>Deployment</code> depends of kubernetes version: </p> <ul> <li><code>apps/v1beta1</code> for versions before 1.8.0</li> <li><code>apps/v1beta2</code> for versions starting from 1.8.0 before 1.9.0</li> <li><code>apps/v1</code> for versions starting from 1.9.0</li> </ul> <p>So, if you deploy your <code>pod.yaml</code> on the latest kubernetes cluster it should be started from:</p> <pre><code>apiVersion: apps/v1 kind: Deployment </code></pre> <p><strong>3.</strong> The part:</p> <pre><code>spec: replicas: 1 accessModes: - ReadWriteOnce resources: requests: storage: 100Mi template: metadata: labels: app: product-ratings-vue </code></pre> <p>should be changed to:</p> <pre><code>spec: replicas: 1 template: metadata: labels: app: product-ratings-vue </code></pre> <p><strong>4.</strong> Second <code>spec</code> block should be moved on the same level as <code>spec.template.metadata</code>:</p> <pre><code> spec: replicas: 1 template: metadata: labels: app: product-ratings-vue spec: containers: </code></pre> <p>The final <code>deployment.yaml</code> is:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: customer-ratings labels: app: product-ratings-vue spec: replicas: 1 selector: matchLabels: app: product-ratings-vue template: metadata: labels: app: product-ratings-vue spec: containers: - name: api-service image: api-service ports: - containerPort: 8080 - containerPort: 8000 resources: {} volumeMounts: - mountPath: /usr/local/tomcat/logs name: api-service-claim # ekomi-import - name: ekomi-import image: ekomi-import resources: {} # cache - name: cache image: cache resources: limits: memory: "536870912" # storage - name: storage image: storage ports: - containerPort: 7000 - containerPort: 7001 - containerPort: 7199 - containerPort: 9042 - containerPort: 9160 resources: {} # view - name: view image: view ports: - containerPort: 3000 resources: {} # tomcat - name: tomcat image: tomcat # node - name: node image: node resources: {} # openJdk - name: node image: node resources: {} volumes: - name: api-service-claim persistentVolumeClaim: claimName: api-service-claim </code></pre>
<p>Has someone successfully run Flink jobs with this kind of setup (Github CI CD and Kubernetes)? Since Flink jobs can’t be dockerized and deployed in a natural way as part of the container I am not very sure of how is the best way of doing this.</p> <p>Thanks</p>
<p>Yes, this can be done. For the dockerizing portion, see the the docs about <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/docker.html" rel="nofollow noreferrer">running Flink on Docker</a> and <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/kubernetes.html" rel="nofollow noreferrer">running Flink on Kubernetes</a>, as well as <a href="https://berlin.flink-forward.org/kb_sessions/flink-in-containerland/" rel="nofollow noreferrer">Patrick Lukas' Flink Forward talk on "Flink in Containerland"</a>. You'll find links to docker hub, github, slideshare, and youtube behind these links.</p> <p><a href="https://data-artisans.com/da-platform-2" rel="nofollow noreferrer">dA Platform 2</a> is a commercial offering from data Artisans that supports CI/CD integrations for Flink on Kubernetes. The <a href="https://www.youtube.com/watch?v=lOZqh5pQcKM&amp;t=15m32s" rel="nofollow noreferrer">demo video from the product announcement at Flink Forward Berlin 2017</a> illustrates this.</p>
<p>I recently updated minishift on mac:</p> <pre><code>minishift update: ... Starting OpenShift using openshift/origin:v3.6.1 ... .. -- Finding server IP ... ... FAIL Error: could not start OpenShift container "origin" Details: No log available from "origin" container Error during 'cluster up' execution: Error starting the cluster. $ minishift version minishift v1.12.0+daa0943 </code></pre> <p>There is an issue with v3.6.1.</p> <p>If I revert back to v3.6.0 with:</p> <blockquote> <p>brew cask reinstall minishift </p> </blockquote> <p>OR </p> <blockquote> <p>minishift start --openshift-version=v3.6.0</p> </blockquote> <p>With version v3.6.0, it works it and does not do Finding server IP ...</p> <p>Please advise how to fix v3.6.1 issue with:</p> <blockquote> <p>No log available from "origin" container</p> </blockquote>
<p>It seems you already had a Minishift instance which was in a stopped state when you ran the update command. Because the newer version has v3.6.1 as the default OpenShift version, you are seeing this issue. Minishift recommends [1] deleting the current instance and then starting a fresh one, which will fix the issue.</p> <p>[1] <a href="https://docs.openshift.org/latest/minishift/getting-started/updating.html#update-instructions" rel="nofollow noreferrer">https://docs.openshift.org/latest/minishift/getting-started/updating.html#update-instructions</a> </p>
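<p>In practice that amounts to something like:</p> <pre><code>minishift delete
minishift start
</code></pre>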
<p>I want to run an MPI job on my Kubernetes cluster. The context is that I'm actually running a modern, nicely containerised app but part of the workload is a legacy MPI job which isn't going to be re-written anytime soon, and I'd like to fit it into a kubernetes "worldview" as much as possible.</p> <p>One initial question: has anyone had any success in running MPI jobs on a kube cluster? I've seen <a href="https://github.com/ChristianKniep" rel="noreferrer">Christian Kniep's</a> work in getting MPI jobs to run in docker containers, but he's going down the docker swarm path (with peer discovery using consul running in each container) and I want to stick to kubernetes (which already knows the info of all the peers) and inject this information into the container from the outside. I do have full control over all the parts of the application, e.g. I can choose which MPI implementation to use.</p> <p>I have a couple of ideas about how to proceed:</p> <ol> <li><p>fat containers containing slurm and the application code -> populate the slurm.conf with appropriate info about the peers at container startup -> use srun as the container entrypoint to start the jobs</p></li> <li><p>slimmer containers with only OpenMPI (no slurm) -> populate a rankfile in the container with info from outside (provided by kubernetes) -> use mpirun as the container entrypoint</p></li> <li><p>an even slimmer approach, where I basically "fake" the MPI runtime by setting a few environment variables (e.g. the OpenMPI ORTE ones) -> run the mpicc'd binary directly (where it'll find out about its peers through the env vars)</p></li> <li><p>some other option</p></li> <li><p>give up in despair</p></li> </ol> <p>I know trying to mix "established" workflows like MPI with the "new hotness" of kubernetes and containers is a bit of an impedance mismatch, but I'm just looking for pointers/gotchas before I go too far down the wrong path. If nothing exists I'm happy to hack on some stuff and push it back upstream.</p>
<p>I tried MPI Jobs on Kubernetes for a few days and solved it by using <code>dnsPolicy:None</code> and <code>dnsConfig</code> (the <code>CustomDNS=true</code> feature gate will be needed).</p> <p>I pushed my manifests (as a Helm chart) here.</p> <p><a href="https://github.com/everpeace/kube-openmpi" rel="nofollow noreferrer">https://github.com/everpeace/kube-openmpi</a></p> <p>I hope it helps.</p>
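<p>For orientation, the relevant fields look roughly like this in a pod spec (a sketch only — the image, nameserver, and search domains below are placeholders; the kube-openmpi chart generates the real values for its headless service):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mpi-worker-0
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - 10.96.0.10                                # cluster DNS service IP (placeholder)
    searches:
    - mpi-workers.default.svc.cluster.local     # headless service domain (placeholder)
    - svc.cluster.local
    - cluster.local
  containers:
  - name: mpi
    image: my-openmpi-image                     # your MPI image
</code></pre>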
<p>I've got prometheus running on top of kubernetes with the following scrape config, as described by the documentation, where the .pem files are located on disk within the prometheus container.</p> <p><a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/configuration/configuration/#</a></p> <pre><code>scrape_configs:
 - job_name: etcd
   static_configs:
   - targets: ['10.0.0.222:2379','10.0.0.221:2379','10.0.0.220:2379']
   tls_config:
     # CA certificate to validate API server certificate with.
     ca_file: /prometheus/ca.pem
     cert_file: /prometheus/cert.pem
     key_file: /prometheus/key.pem
</code></pre> <p>I see etcd as a target in prometheus; however, it's returning garbage.</p> <p><a href="https://i.imgur.com/rdRI4V7.png" rel="nofollow noreferrer">https://i.imgur.com/rdRI4V7.png</a></p> <p>I am able to hit the metrics endpoint with a local curl by passing in the client certificate information like so.</p> <p>What am I doing wrong?</p> <pre><code>sudo curl --cacert /etc/ssl/etcd/ssl/ca.pem https://127.0.0.1:2379/metrics -L --cert /etc/ssl/etcd/ssl/node-kubemaster-rwva1-prod-2.pem --key /etc/ssl/etcd/ssl/node-kubemaster-rwva1-prod-2-key.pem^C
</code></pre>
<p>You need to add <code>scheme: https</code> for HTTPS scraping.</p>
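<p>Applied to the scrape config from the question, that would look roughly like this (same files and targets, only the scheme is new — Prometheus defaults to plain HTTP otherwise):</p> <pre><code>scrape_configs:
 - job_name: etcd
   scheme: https
   static_configs:
   - targets: ['10.0.0.222:2379','10.0.0.221:2379','10.0.0.220:2379']
   tls_config:
     ca_file: /prometheus/ca.pem
     cert_file: /prometheus/cert.pem
     key_file: /prometheus/key.pem
</code></pre>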
<p>When creating a service I see in the list of internal endpoints an entry the looks like <code>myservice.sandbox-cluster:0 TCP</code> along with the ports I explicitly opened. What is the port 0 entry for?</p>
<p>If you are using the GCE Load Balancer, <code>port 0</code> may be created. <code>port 0</code> usually means <em>"use any random free port"</em>.</p> <p>Here is the <a href="https://github.com/kubernetes/ingress-gce/issues/75" rel="noreferrer">GitHub issue</a>. <a href="https://stackoverflow.com/questions/45738404/gce-loadbalancer-invalid-value-for-field-namedports0-port-0-must-be-gr/45974827#45974827">Here</a> is more info. Email thread about this is <a href="https://groups.google.com/forum/#!topic/kubernetes-users/5a-F38hGePs" rel="noreferrer">here</a>.</p>
<p>I am using docker and kubernetes on Google Cloud Platform, with the Kubernetes Engine. I have secrets configured in an app.yaml file like so:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: $CI_COMMIT_REF_SLUG
  labels:
    app: app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: gcr.io/engagement-org/app:$CI_COMMIT_SHA
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        env:
          - name: MAILJET_APIKEY_PUBLIC
            valueFrom:
              secretKeyRef:
                name: mailjet
                key: apikey_public
          - name: MAILJET_APIKEY_PRIVATE
            valueFrom:
              secretKeyRef:
                name: mailjet
                key: apikey_private
</code></pre> <p>Each time I push to a new branch, a new namespace is created through a deploy in my gitlab-ci file. Secrets are created like so:</p> <pre><code>- kubectl create secret generic mailjet --namespace=$CI_COMMIT_REF_SLUG --from-literal=apikey_public=$MAILJET_APIKEY_PUBLIC --from-literal=apikey_private=$MAILJET_APIKEY_PRIVATE || echo 'Secret already exist';
</code></pre> <p>Now, I have updated my mailjet api keys and want to make the change to all namespaces. I can edit the secret in each namespace by getting a shell on the pods and running <code>kubectl edit secret mailjet --namespace=&lt;namespace_name&gt;</code></p> <p>What I want is to send the new secret values to the new pods that will be created in the future. When I deploy a new one, it still uses the old values.</p> <p>From what I understand, the gitlab-ci file uses the app.yaml file to replace the environment variables with values. But I don't understand where app.yaml finds the original values.</p> <p>Thank you for your help.</p>
<p>In general, Kubernetes namespaces are designed to provide isolation for components running inside them. For this reason, the Kubernetes API is not really designed to perform update operations across namespaces, or make secrets usable across namespaces.</p> <p>That being said, there are a few things to solve this issue.</p> <h3>1. Use a single namespace &amp; Helm releases instead of separate namespaces</h3> <p>From the looks of it, you are using Gitlab CI to deploy individual branches to review environments (presumably using Gitlab's <a href="https://docs.gitlab.com/ee/ci/review_apps/" rel="noreferrer">Review App</a> feature?). The same outcome can be achieved by deploying all Review Apps into the same namespace, and using <a href="https://helm.sh/" rel="noreferrer">Helm</a> to manage multiple deployments ("releases" in Helm-speak) of the same application within a single namespace.</p> <p>Within the <code>gitlab-ci.yml</code>, creating a Helm release for a new branch might look similar to this:</p> <pre><code>script: - helm upgrade --namespace default --install review-$CI_COMMIT_REF_SLUG ./path/to/chart </code></pre> <p>Of course, this requires that you have defined a <a href="https://docs.helm.sh/developing_charts/#charts" rel="noreferrer">Helm chart</a> for your application (which, in essence is just a set of YAML templates with a set of default variables that can then be overridden for individual releases). Refer to the documentation (linked above) for more information on creating Helm charts.</p> <h3>2. Keep secrets in sync across namespaces</h3> <p>We have had a similar issue a while ago and resorted to writing a custom Kubernetes controller that keeps secrets in sync across namespaces. It's open source and you can <a href="https://github.com/mittwald/kubernetes-replicator" rel="noreferrer">find it on GitHub</a> (use with caution, though). It is based on annotations and provides unidirectional propagation of changes from a single, authoritative parent secret:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: mailjet namespace: some-kubernetes-namespace annotations: replicator.v1.mittwald.de/replicate-from: default/mailjet </code></pre> <p>With the secret replicator deployed in your cluster, using this annotation will propagate all changes made to the <code>mailjet</code> secret in the <code>default</code> namespace to all secrets in any namespaced annotated like show above.</p>
<p>I'm trying to run kubernetes using minikube. I have some docker images in an Azure container registry.</p> <p>I created a secret like this:</p> <pre><code>kubectl create secret docker-registry private-repo-secret --docker-server=myregistry.azurecr.io --docker-username=myusername --docker-password=mypassword [email protected]
</code></pre> <p>And I can see it's there:</p> <pre><code>$ kubectl get secret
NAME                  TYPE                                  DATA      AGE
default-token-x5xxh   kubernetes.io/service-account-token   3         17m
private-repo-secret   kubernetes.io/dockercfg               1         4m
</code></pre> <p>And in my pods:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: myregistry.azurecr.io/myproject/frontend:0.0.5
      imagePullSecrets:
      - name: private-repo-secret
</code></pre> <p>Then why do I get an <code>ErrImagePull</code> saying that authentication is required?</p> <p>EDIT: Those credentials are correct and they work locally. I've used one of the 2 passwords obtained with <code>az acr credential show -n myregistry</code>. The only difference is that in my local json I just have the auth property, not username, password and email.</p> <p>If I describe one of my pods I get:</p> <pre><code>  Warning  Failed                 5m (x4 over 6m)  kubelet, ip-172-20-49-180.eu-central-1.compute.internal  Failed to pull image "myregistry.azurecr.io/myproject/frontend:0.0.5": rpc error: code = Unknown desc = Error response from daemon: Get https://myregistry.azurecr.io/v2/myproject/frontend/manifests/0.0.5: unauthorized: authentication required
  Normal   BackOff                5m (x6 over 6m)  kubelet, ip-172-20-49-180.eu-central-1.compute.internal  Back-off pulling image "myregistry.azurecr.io/myproject/frontend:0.0.5"
  Warning  FailedSync             1m (x25 over 6m)  kubelet, ip-172-20-49-180.eu-central-1.compute.internal  Error syncing pod
</code></pre> <p>kubectl version:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>First, use the following command to log in to your docker registry.</p> <pre><code>$ docker login DOCKER_REGISTRY_SERVER --username=DOCKER_USER --password=DOCKER_PASSWORD --email=DOCKER_EMAIL
</code></pre> <p>This will generate a config file.</p> <pre><code>$ ls -la ~/.docker/config.json
/home/shahriar/.docker/config.json
</code></pre> <p>Now, use this command to create the secret.</p> <pre><code>kubectl create secret generic my-secret-name --type=kubernetes.io/dockerconfigjson --from-file .dockerconfigjson=/home/shahriar/.docker/config.json
</code></pre> <p>Your Secret will look like this</p> <pre><code>apiVersion: v1
data:
  .dockerconfigjson: ewoJImF1dGh...l9Cn0=
kind: Secret
metadata:
  creationTimestamp: 2018-02-17T10:06:56Z
  name: my-secret-name
  namespace: default
  resourceVersion: "269"
  selfLink: /api/v1/namespaces/default/secrets/my-secret-name
  uid: 48f9f398-13ca-11e8-89c4-0800276cd577
type: kubernetes.io/dockerconfigjson
</code></pre> <p>And the <code>.dockerconfigjson</code> will look like this</p> <pre><code>{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "YWVyb2tp..XRlMDMzIw=="
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/17.11.0-ce (linux)"
  }
}
</code></pre>
<p>The <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/cluster-intro/" rel="noreferrer">first section</a> of the official Kubernetes tutorial states that,</p> <blockquote> <p>A Kubernetes cluster that handles production traffic should have a minimum of three nodes.</p> </blockquote> <p>but gives no rationale for why three is preferred. Is three desirable over two in order to avoid a split-brain scenario, to merely allow for greater availability, or to cater to something specific to the internals of Kubernetes? I would have thought a split-brain scenario would only happen with multiple Kubernetes clusters (each having distinct masters) whereas a single cluster should be able to handle at least two nodes, each, perhaps, in their own availability-zone. </p>
<p>That means a minimum of 3 master nodes per cluster. </p> <p>Kubernetes keeps all critical cluster state in etcd, which needs a majority (quorum) of its members to agree in order to stay writable and to recover from a fault. An instance of etcd runs on each master node. Three is the minimum number of etcd members one wants backing a prod-level cluster. Hence, three is the minimum number of masters per cluster. </p>
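<p>For reference, etcd only keeps working while a majority (quorum) of its members are healthy, which is also why even member counts don't buy you anything:</p> <pre><code>cluster size   quorum   failures tolerated
1              1        0
2              2        0
3              2        1
5              3        2
</code></pre>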
<p>Trying to set up the jupyterhub server on Google Kubernetes following this <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/create-k8s-cluster.html" rel="nofollow noreferrer">tutorial</a>. Everything went through fine. But when I install the jupyterhub/jupyterhub image with helm, the pods are always stuck in Pending:</p> <pre><code>kubectl --namespace=jupyter-server get pod
NAME                     READY     STATUS    RESTARTS   AGE
hub-6dbd4df8b8-nqvnf     0/1       Pending   0          17h
proxy-7bb666576c-fx726   0/2       Pending   0          17h
</code></pre> <p>Even after 17 hours.</p> <p>The <code>helm</code> version is <code>2.6.2</code> as suggested in the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-helm.html" rel="nofollow noreferrer">tutorial</a>. And I am using 3 <code>f1-micro</code> instances in the kubernetes cluster. Are these instances too small? Thanks for any advice.</p>
<p>Try describing the pods, and then describing the nodes in the cluster, to get more info about why exactly they're still pending:</p> <pre><code>kubectl describe po/hub-6dbd4df8b8-nqvnf -n jupyter-server
kubectl describe po/proxy-7bb666576c-fx726 -n jupyter-server
kubectl describe nodes
</code></pre>
<p>So we maintain a number of Kubernetes dashboards for different environments, and they're kept as similar as possible so our environments stay relatively consistent. Is there any way to modify the dashboard to make it clear exactly which Kubernetes cluster I'm currently connected to without needing to look back at which gcloud credentials are currently activated?</p>
<p>Try the system-banner dashboard argument:</p> <p><a href="https://github.com/kubernetes/dashboard/wiki/Dashboard-arguments" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard/wiki/Dashboard-arguments</a></p>
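<p>As a rough sketch (the image tag and exact args depend on how you deployed the dashboard), the banner is passed as a container argument in the dashboard Deployment, so each cluster can get its own text:</p> <pre><code>containers:
- name: kubernetes-dashboard
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3   # example tag
  args:
  - --system-banner=You are connected to the STAGING cluster
  - --system-banner-severity=WARNING
</code></pre>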
<p>In k8s <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-job-limitations" rel="noreferrer">Cron Job Limitations</a> mentioned that there is no guarantee that a job will executed exactly once:</p> <blockquote> <p>A cron job creates a job object about once per execution time of its schedule. We say “about” because there are certain circumstances where two jobs might be created, or no job might be created. We attempt to make these rare, but do not completely prevent them. Therefore, jobs should be idempotent</p> </blockquote> <p>Could anyone explain:</p> <ul> <li>why this could happen?</li> <li>what are the probabilities/statistic this could happen?</li> <li>will it be fixed in some reasonable future in k8s?</li> <li>are there any workarounds to prevent such a behavior (if the running job can't be implemented as idempotent)?</li> <li>do other <em>cron related</em> services suffer with the same issue? Maybe it is a core cron problem?</li> </ul>
<p>The controller:</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cronjob/cronjob_controller.go" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cronjob/cronjob_controller.go</a></p> <p>starts with a comment that lays the groundwork for an explanation:</p> <pre><code>I did not use watch or expectations. Those add a lot of corner cases, and we aren't expecting a large volume of jobs or scheduledJobs. (We are favoring correctness over scalability.) If we find a single controller thread is too slow because there are a lot of Jobs or CronJobs, we we can parallelize by Namespace. If we find the load on the API server is too high, we can use a watch and UndeltaStore.) Just periodically list jobs and SJs, and then reconcile them. </code></pre> <p>Periodically means every 10 seconds:</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cronjob/cronjob_controller.go#L105" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/cronjob/cronjob_controller.go#L105</a></p> <p>The documentation following the quoted limitations also has some useful color on some of the circumstances under which 2 jobs or no jobs may be launched on a particular schedule:</p> <pre><code>If startingDeadlineSeconds is set to a large value or left unset (the default) and if concurrentPolicy is set to AllowConcurrent, the jobs will always run at least once. Jobs may fail to run if the CronJob controller is not running or broken for a span of time from before the start time of the CronJob to start time plus startingDeadlineSeconds, or if the span covers multiple start times and concurrencyPolicy does not allow concurrency. For example, suppose a cron job is set to start at exactly 08:30:00 and its startingDeadlineSeconds is set to 10, if the CronJob controller happens to be down from 08:29:00 to 08:42:00, the job will not start. Set a longer startingDeadlineSeconds if starting later is better than not starting at all. </code></pre> <p>Higher level, solving for only-once in a distributed system is hard:</p> <p><a href="https://bravenewgeek.com/you-cannot-have-exactly-once-delivery/" rel="noreferrer">https://bravenewgeek.com/you-cannot-have-exactly-once-delivery/</a></p> <p>Clocks and time synchronization in a distributed system is also hard:</p> <p><a href="https://8thlight.com/blog/rylan-dirksen/2013/10/04/synchronization-in-a-distributed-system.html" rel="noreferrer">https://8thlight.com/blog/rylan-dirksen/2013/10/04/synchronization-in-a-distributed-system.html</a></p> <p>To the questions:</p> <ul> <li><p>why this could happen?</p> <p>For instance- the node hosting the CronJobController fails at the time a job is supposed to run. </p></li> <li><p>what are the probabilities/statistic this could happen?</p> <p>Very unlikely for any given run. For a large enough number of runs, very unlikely to escape having to face this issue. </p></li> <li><p>will it be fixed in some reasonable future in k8s?</p> <p>There are no idemopotency-related issues under the area/batch label in the k8s repo, so one would guess not. 
</p> <p><a href="https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Aarea%2Fbatch" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Aarea%2Fbatch</a></p></li> <li><p>are there any workarounds to prevent such a behavior (if the running job can't be implemented as idempotent)?</p> <p>Think more about the specific definition of idempotent, and the particular points in the job where there are commits. For instance, jobs can be made to support more-than-once execution if they save state to staging areas, and then there is an election process to determine whose work wins.</p></li> <li><p>do other cron related services suffer with the same issue? Maybe it is a core cron problem?</p> <p>Yes, it's a core distributed systems problem. </p> <p>For most users, the k8s documentation gives perhaps a more precise and nuanced answer than is necessary. If your scheduled job is controlling some critical medical procedure, it's really important to plan for failure cases. If it's just doing some system cleanup, missing a scheduled run doesn't much matter. By definition, nearly all users of k8s CronJobs fall into the latter category. </p></li> </ul>
<p>I am running a traefik ingress controller on GKE (1.8.5-gke.0) deployed with Helm. What I'm observing is that requests are frequently getting a 404 response.</p> <p>It looks like traefik is reloading its config constantly, even when nothing is changing in its config or in the ingresses which are defined.</p> <p>The versions of traefik I've used for this are 1.3.8 and 1.5.2; both are doing the same thing.</p>
<p>The flipping behavior was being caused by the same ingress being loaded into two different namespaces; I'm not sure whether that's because of the conflicting endpoints or the naming. </p> <p>One namespace didn't have the service defined, which is why it returned 404 from one and 200 OK from the other.</p>
<p>Hi, I am new to Kubernetes. I am using a minikube single-node cluster for local development and testing.</p> <pre><code>Host: Ubuntu 16.04 LTS.
Minikube: Virtual box running minikube cluster
</code></pre> <p>My requirement is that I need to deploy Kafka and ZooKeeper on minikube and use them to produce and consume messages. I followed <a href="https://github.com/ramhiser/kafka-kubernetes" rel="nofollow noreferrer">this link</a> and successfully deployed it on minikube; its details are below</p> <pre><code>$ kubectl get services
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
kafka-service   10.0.0.15    &lt;pending&gt;     9092:30244/TCP               46m
kubernetes      10.0.0.1     &lt;none&gt;        443/TCP                      53m
zoo1            10.0.0.43    &lt;none&gt;        2181/TCP,2888/TCP,3888/TCP   50m
zoo2            10.0.0.226   &lt;none&gt;        2181/TCP,2888/TCP,3888/TCP   50m
zoo3            10.0.0.6     &lt;none&gt;        2181/TCP,2888/TCP,3888/TCP   50m

$ kubectl get pods
NAME                                      READY     STATUS    RESTARTS   AGE
kafka-deployment-3583985961-f2301         1/1       Running   0          48m
zookeeper-deployment-1-1598963595-vgx1l   1/1       Running   0          52m
zookeeper-deployment-2-2038841231-tdsff   1/1       Running   0          52m
zookeeper-deployment-3-2478718867-5vjcj   1/1       Running   0          52m

$ kubectl describe service kafka-service
Name:                   kafka-service
Namespace:              default
Labels:                 app=kafka
Annotations:            &lt;none&gt;
Selector:               app=kafka
Type:                   LoadBalancer
IP:                     10.0.0.15
Port:                   kafka-port  9092/TCP
NodePort:               kafka-port  30244/TCP
Endpoints:              172.17.0.7:9092
Session Affinity:       None
Events:                 &lt;none&gt;
</code></pre> <p>and I have set KAFKA_ADVERTISED_HOST_NAME to the minikube IP (192.168.99.100). Now as the message producer I am using <code>$cat textfile.log | kafkacat -b $(minikube ip):30244 -t mytopic</code>, but it's not publishing the message, giving the message below </p> <pre><code>% Auto-selecting Producer mode (use -P or -C to override)
% Delivery failed for message: Local: Message timed out
</code></pre> <p>Can anyone help me with how to publish and consume messages?</p>
<p>I know that this is quite an old post. Were you able to resolve and run kafka + zookeeper within minikube? I was able to run a simple single cluster kafka and zookeeper deployment successfully using minikube v0.17.1 and produce and consume messages using kafkacat producer and consumer respectively. I was able to run these successfully on uBuntu and Mac OSX. The deployment and service yamls are as below:</p> <p><code>zookeeper-deployment.yml</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper spec: replicas: 1 template: metadata: labels: app: zookeeper spec: containers: - image: wurstmeister/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 </code></pre> <p><code>zookeeper-service.yml</code></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30181 targetPort: 2181 selector: app: zookeeper </code></pre> <p><code>kafka-deployment.yml</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: kafka name: kafka spec: replicas: 1 template: metadata: labels: app: kafka spec: containers: - env: - name: KAFKA_ADVERTISED_HOST_NAME value: "192.168.99.100" - name: KAFKA_ADVERTISED_PORT value: "30092" - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: 192.168.99.100:30181 - name: KAFKA_CREATE_TOPICS value: "test-topic:1:1" image: wurstmeister/kafka imagePullPolicy: IfNotPresent name: kafka ports: - containerPort: 9092 </code></pre> <p><code>kafka-service.yml</code></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: kafka-service name: kafka-service spec: type: NodePort ports: - name: kafka-port port: 9092 nodePort: 30092 targetPort: 9092 selector: app: kafka </code></pre> <p>You can test your deployment by installing kafkacat client and running the following commands on separate terminal windows:</p> <p><code>echo "Am I receiving this message?" | kafkacat -P -b 192.168.99.100:30092 -t test-topic</code></p> <p><code>kafkacat -C -b 192.168.99.100:30092 -t test-topic</code></p> <p><code>% Reached end of topic test-topic [0] at offset 0</code></p> <p><code>Am I receiving this message?</code></p> <p>I was able to successfully run this on minikube versions v0.17.1 and v0.19.0. If you want to run this on minikube versions v0.21.1 and v0.23.0, please refer to my reply to the post here: <a href="https://stackoverflow.com/questions/45748536/kafka-inaccessible-once-inside-kubernetes-minikube/48856311#48856311">Kafka inaccessible once inside Kubernetes/Minikube</a></p> <p>Thanks.</p>
<p>In <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/run.go#L59" rel="nofollow noreferrer"><code>kubectl/run.go</code></a> in Kubernetes code, the <code>Generate</code> function has a result list of these two types:</p> <pre><code>runtime.Object, error </code></pre> <p>The last line of the function is:</p> <pre><code>return &amp;deployment, nil </code></pre> <p><code>runtime</code> is imported:</p> <pre><code>k8s.io/apimachinery/pkg/runtime </code></pre> <p>I got <code>runtime</code> by running <code>go get</code> on that import statement, and Object is defined in <code>interfaces.go</code>:</p> <pre><code>type Object interface { GetObjectKind() schema.ObjectKind DeepCopyObject() Object } </code></pre> <p>(And I found the same code on the web <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/runtime/interfaces.go" rel="nofollow noreferrer">here</a>.)</p> <p>The address operator creates a pointer... more specifically, the Go spec states:</p> <blockquote> <p>For an operand x of type T, the address operation &amp;x generates a pointer of type *T to x.</p> </blockquote> <p>and pointers <a href="https://golang.org/ref/spec#Pointer_types" rel="nofollow noreferrer">have a type distinct from their base type</a>:</p> <blockquote> <p>A pointer type denotes the set of all pointers to variables of a given type, called the base type of the pointer.</p> </blockquote> <p>How does <code>&amp;deployment</code> satisfy the <code>runtime.Object</code> type?</p> <p>My best guess so far is that <code>deployment</code> implements the <code>runtime.Object</code> interface, and mapping <code>&amp;deployment</code> to <code>runtime.Object</code> satisfies <a href="https://golang.org/ref/spec#Assignability" rel="nofollow noreferrer">this rule of assignability</a>:</p> <blockquote> <p>T is an interface type and x implements T.</p> </blockquote> <p>and that a return statement mapping to a result list type is equivalent to assignment in this respect. Is this correct? If not, is there another part of the specification or documentation that explains it?</p>
<p><code>deployment</code> is a local variable, its declaration:</p> <pre><code>deployment := extensionsv1beta1.Deployment{ // ... } </code></pre> <p>Where <code>extensionsv1beta1</code> from the imports:</p> <pre><code>import ( // ... extensionsv1beta1 &quot;k8s.io/api/extensions/v1beta1&quot; // ... ) </code></pre> <p>Doc of <a href="https://godoc.org/k8s.io/api/extensions/v1beta1#Deployment" rel="nofollow noreferrer"><code>extensionsv1beta1.Deployment</code></a>. Its definition is:</p> <pre><code>type Deployment struct { metav1.TypeMeta `json:&quot;,inline&quot;` // ...other fields... } </code></pre> <p>It embeds <a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#TypeMeta" rel="nofollow noreferrer"><code>metav1.TypeMeta</code></a>, which has a method <a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#TypeMeta.GetObjectKind" rel="nofollow noreferrer"><code>GetObjectKind()</code></a> method with pointer receiver. This means a pointer to <code>Deployment</code> also has this method, because <a href="https://golang.org/ref/spec#Struct_types" rel="nofollow noreferrer">Spec: Struct types:</a></p> <blockquote> <p>Given a struct type <code>S</code> and a defined type <code>T</code>, promoted methods are included in the method set of the struct as follows:</p> <ul> <li>If <code>S</code> contains an embedded field <code>T</code>, the method sets of <code>S</code> and <code>*S</code> both include promoted methods with receiver <code>T</code>. <strong>The method set of <code>*S</code> also includes promoted methods with receiver <code>*T</code>.</strong></li> </ul> </blockquote> <p>And <code>Deployment</code> has a &quot;direct&quot; <a href="https://godoc.org/k8s.io/api/extensions/v1beta1#Deployment.DeepCopyObject" rel="nofollow noreferrer"><code>DeepCopyObject()</code></a> method, again with pointer receiver. So the <a href="https://golang.org/ref/spec#Method_sets" rel="nofollow noreferrer">method set</a> of <code>*Deployment</code> contains this method.</p> <p>And last quoting <a href="https://golang.org/ref/spec#Interface_types" rel="nofollow noreferrer">Spec: Interface types:</a></p> <blockquote> <p>An interface type specifies a <a href="https://golang.org/ref/spec#Method_sets" rel="nofollow noreferrer">method set</a> called its <em>interface</em>. <strong>A variable of interface type can store a value of any type with a method set that is any superset of the interface.</strong> Such a type is said to <em>implement the interface</em>.</p> </blockquote> <p>So this means the method set of <code>*Deployment</code> has all the methods defined by <code>Object</code>, or in other words: the method set of <code>*Deployment</code> is a superset of the method set of <code>Object</code>, so <code>*Deployment</code> implements <code>Object</code>.</p> <p><code>deployment</code> is of type <code>extensionsv1beta1.Deployment</code>, which means <code>&amp;deployment</code> is of type <code>*extensionsv1beta1.Deployment</code>, which we showed above that it implements <code>Object</code>; so the value <code>&amp;deployment</code> can be assigned to or be stored in a variable of type <code>Object</code>.</p>
<p>I have below config in my ingress service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx-ingress namespace: nginx-ingress spec: externalTrafficPolicy: Local type: LoadBalancer loadBalancerIP: **.***.**.*** ports: - port: 80 targetPort: 80 protocol: TCP name: http - port: 443 targetPort: 443 protocol: TCP name: https selector: app: nginx-ingress </code></pre> <p>we have purchased this public ip from azure. If we remove this loadBalancerIP from service yaml and deploy and then use the ip listed by <code>kubectl get services -n nginx-ingress</code>, It works fine when we access the service end points. But with this public ip nothing seems to be working.</p> <p>Please find below service description (<code>kubectl describe service nginx-ingress -n nginx-ingress-os</code>):</p> <pre><code>[openapianil@LHGOPENAPIDEV001 github]$ kubectl describe service nginx-ingress -n nginx-ingress-os Name: nginx-ingress Namespace: nginx-ingress-os Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-ingress","namespace":"nginx-ingress-os"},"spec":{"externalTrafficPolicy"... Selector: app=nginx-ingress Type: LoadBalancer IP: 10.0.0.44 IP: **.**.**.*** LoadBalancer Ingress: **.**.**.*** Port: http 80/TCP TargetPort: 80/TCP NodePort: http 31247/TCP Endpoints: **.**.**.***:80 Port: https 443/TCP TargetPort: 443/TCP NodePort: https 32241/TCP Endpoints: **.**.**.***:443 Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 30880 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 1m service-controller Ensuring load balancer Normal EnsuredLoadBalancer 43s service-controller Ensured load balancer </code></pre> <p>Please help!!</p>
<p>As we know, after an AKS cluster is created, Azure will create two resource groups.</p> <p>If you want to create a service with a static IP address, you should create the static IP address in the second resource group, named like <code>MC_myResourceGRoup_myAKSCluster_eastus</code>.</p> <p>You can use the Azure CLI to create the public IP address:</p> <pre><code>az network public-ip create --resource-group MC_myResourceGRoup_myAKSCluster_eastus --name myAKSPublicIP --allocation-method static
</code></pre> <p>You can also use the Azure portal to create it, but you <strong>can't</strong> add a DNS name to it.</p> <p>Then you can use the static IP address like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front1
spec:
  type: LoadBalancer
  loadBalancerIP: 52.224.235.119
  ports:
  - port: 80
  selector:
    app: azure-vote-front1
</code></pre> <p>Here is the result:</p> <p><a href="https://i.stack.imgur.com/W7Z01.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W7Z01.png" alt="enter image description here"></a></p> <p><strong>Note:</strong></p> <p>1. Create the Azure public IP address <strong>without</strong> a DNS name.<br> 2. Create the Azure public IP address in the resource group <code>MC_myResourceGRoup_myAKSCluster_eastus</code>.<br> 3. You can use <code>kubectl describe service</code> to check the status, like this:</p> <pre><code>[root@jasoncli@jasonye jason]# kubectl describe service azure-vote-front1
Name:                     azure-vote-front1
Namespace:                default
Labels:                   &lt;none&gt;
Annotations:              &lt;none&gt;
Selector:                 app=azure-vote-front1
Type:                     LoadBalancer
IP:                       10.0.76.241
IP:                       52.224.235.119
LoadBalancer Ingress:     52.224.235.119
Port:                     &lt;unset&gt;  80/TCP
TargetPort:               80/TCP
NodePort:                 &lt;unset&gt;  30416/TCP
Endpoints:                10.244.0.11:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  CreatingLoadBalancer  45m   service-controller  Creating load balancer
  Normal  CreatedLoadBalancer   44m   service-controller  Created load balancer
</code></pre> <p><strong>Update:</strong></p> <p>Here is my yaml file:</p> <pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  loadBalancerIP: 40.71.3.119
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
</code></pre>
<p>I'm executing <code>kubectl create -f nginx.yaml</code> which creates the pods successfully. But the pods aren't exposed on the public IP of my instance. Following is the YAML used by me, with service type as NodePort:</p> <pre><code> apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: nginx-deployment
   labels:
     app: nginx
 spec:
   replicas: 3
   selector:
     matchLabels:
       app: nginx
   template:
     metadata:
       labels:
         app: nginx
     spec:
       containers:
       - name: nginx
         image: nginx:1.7.9
         ports:
         - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
    - port: 443
      nodePort: 30443
      name: https
  selector:
    name: nginx
</code></pre> <p>What could be incorrect in my approach or the above YAML file that prevents the deployment's pods from being exposed on the public IP?</p> <p>PS: Firewall and ACLs are open to the internet on all TCP</p>
<p>The endpoint was not getting added. On debugging, I found that the labels used by the Deployment and the Service didn't match. Hence I changed the label key from "app" to "name" and it worked.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
  selector:
    name: nginx
</code></pre>
<p>I am receiving the "is not a valid repository/tag: invalid reference format" error when building an image on a jenkins agent.</p> <ul> <li><p>This error is generally known to occur when docker versions &lt; 17.05 attempt to build a modern multi-stage dockerfile.</p></li> <li><p>The agent is running on a Kubernetes cluster (server and nodes running 1.9.2-gke.1) and was provisioned by the below Jenkinsfile.</p></li> </ul> <p>Is it because I am binding <code>/var/run/docker.sock</code> from the client to the server that this is executing on the 17.03 version of docker?</p> <p><strong>The JenkinsFile:</strong></p> <pre><code>#!/usr/bin/groovy podTemplate(label: 'jenkins-pipeline', containers: [ containerTemplate(name: 'jnlp', image: 'jenkinsci/jnlp-slave:latest', args: '${computer.jnlpmac} ${computer.name}'), containerTemplate(name: 'docker', image: 'docker:latest', command: 'cat', ttyEnabled: true), containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true) ], volumes:[ hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'), ]) { node ('jenkins-pipeline') { stage('build') { container('docker') { dir ('src') { sh "docker version" sh "docker build -t ${tag} ." } } } } } </code></pre> <p><strong>Check the version of docker:</strong></p> <pre><code># docker version Client: Version: 18.02.0-ce API version: 1.27 (downgraded from 1.36) Go version: go1.9.3 Git commit: fc4de44 Built: Wed Feb 7 21:12:37 2018 OS/Arch: linux/amd64 Experimental: false Orchestrator: swarm Server: Engine: Version: 17.03.2-ce API version: 1.27 (minimum version 1.12) Go version: go1.9.1 Git commit: f5ec1e2 Built: Thu Dec 7 20:13:20 2017 OS/Arch: linux/amd64 Experimental: false </code></pre> <p><strong>The dockerfile in question:</strong></p> <pre><code>FROM microsoft/aspnetcore:2.0 AS base WORKDIR /app EXPOSE 80 FROM microsoft/aspnetcore-build:2.0 AS build WORKDIR /src COPY XXXXXX.API.csproj ./ RUN dotnet restore COPY . . WORKDIR /src RUN dotnet build -c Release -o /app FROM build AS publish RUN dotnet publish -c Release -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . ENTRYPOINT ["dotnet", "XXXXXX.API.dll"] </code></pre>
<p>Docker builds run on the server (daemon) side, and <a href="https://blog.docker.com/2017/07/multi-stage-builds/" rel="noreferrer">multi-stage builds were introduced in 17.06</a>. You'll need to run the build against a newer daemon version to support that syntax.</p>
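<p>If upgrading the Docker engine on the GKE nodes isn't practical, one common workaround — shown here only as a rough sketch, so image tags and plugin syntax may need adjusting for your setup — is to stop mounting the node's <code>/var/run/docker.sock</code> and instead run a newer daemon as a docker-in-docker sidecar inside the agent pod:</p> <pre><code>podTemplate(label: 'jenkins-pipeline', containers: [
    containerTemplate(name: 'jnlp', image: 'jenkinsci/jnlp-slave:latest', args: '${computer.jnlpmac} ${computer.name}'),
    // docker CLI talks to the dind sidecar below instead of the node's 17.03 daemon
    containerTemplate(name: 'docker', image: 'docker:latest', command: 'cat', ttyEnabled: true,
        envVars: [envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')]),
    // newer daemon that understands multi-stage Dockerfiles; dind must run privileged
    containerTemplate(name: 'dind', image: 'docker:dind', privileged: true),
    containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
  ]) {
    // pipeline body unchanged; no hostPathVolume for docker.sock needed
}
</code></pre>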
<p>I run a <code>v1.9.2</code> custom setup of Kubernetes and scrape various metrics with Prometheus <code>v2.1.0</code>. Among others, I scrape the <code>kubelet</code> and <code>cAdvisor</code> metrics.</p> <p>I want to answer the question: "<em>How much of the CPU resources defined by <code>requests</code> and <code>limits</code> in my deployment are actually used by a pod (and its containers) in terms of (milli)cores?</em>"</p> <p>There are a lot of scraped metrics available, but nothing like that. Maybe it could be calculated by the CPU usage time in seconds, but I don't know how.</p> <p>I was considering it's not possible - until a friend told me she runs Heapster in her cluster which has a graph in the built-in Grafana that tells <em>exactly</em> that: It shows the indivual CPU usage of a pod and its containers in (milli)cores.</p> <p>Since Heapster also uses <code>kubelet</code> and <code>cAdvisor</code> metrics, I wonder: how can I calculate the same? The metric in InfluxDB is named <code>cpu/usage_rate</code> but even with Heapster's code, I couldn't figure out how they calculate it.</p> <p>Any help is appreciated, thanks!</p>
<p>We're using the <code>container_cpu_usage_seconds_total</code> metric to calculate Pod CPU usage. This metrics contains the total amount of CPU seconds consumed <em>by container by core</em> (this is important, as a Pod may consist of multiple containers, each of which can be scheduled across multiple cores; however, the metric has a <code>pod_name</code> annotation that we can use for aggregation). Of special interest is the <strong>change rate</strong> of that metric (which can be calculated with <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#rate()" rel="noreferrer">PromQL's <code>rate()</code> function</a>). If it increases by 1 within one second, the Pod consumes 1 CPU core (or 1000 milli-cores) in that second.</p> <p>The following PromQL query does just that: Compute the CPU usage of all Pods (using the <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators" rel="noreferrer"><code>sum(...) by (pod_name)</code></a> operation) over a five minute average:</p> <pre><code>sum(rate(container_cpu_usage_seconds_total[5m])) by (pod_name) </code></pre>
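<p>To get the figure in millicores for a single Pod (the unit used by <code>requests</code> and <code>limits</code>), you can filter on the Pod name and multiply by 1000, for example:</p> <pre><code>sum(rate(container_cpu_usage_seconds_total{pod_name="my-pod"}[5m])) * 1000
</code></pre> <p>(<code>my-pod</code> is a placeholder — substitute your Pod's name.)</p>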
<p>I want to set up port forwarding from my local ports (nodes) to the pod redis-master after the online guide from <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">kubernetes</a>.</p> <p>At the moment my prompt is frozen for more than 5 minutes at the <code>port-forward</code> command.</p> <pre><code>[root@k8s-master deployments]# kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/redis-master.yaml pod "redis-master" created [root@k8s-master deployments]# kubectl get pods NAME READY STATUS RESTARTS AGE redis-master 2/2 Running 0 1m [root@k8s-master deployments]# kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}' 6379 [root@k8s-master deployments]# kubectl port-forward redis-master 6379:6379 Forwarding from 127.0.0.1:6379 -&gt; 6379 ^C </code></pre> <p>I don't know why my prompt is frozen. In my logs aren't some error or warn entries.</p> <pre><code>journalctl -u kubelet.service -f --since "2018-02-19 10:30:00" --priority 0 -- Logs begin at Sa 2018-02-03 21:21:50 CET. -- </code></pre> <p>kubectl version</p> <pre><code>[root@k8s-master deployments]# kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>os-release</p> <pre><code>[root@k8s-master deployments]# cat /etc/os-release NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7" </code></pre> <p>Ports</p> <pre><code>[root@k8s-master deployments]# ss -atun | grep 6379 [root@k8s-master deployments]# </code></pre>
<p>The behaviour you see is expected. This command does not get daemonized by default. It will be forwarding the port until you kill the command with CTRL-C or other similar methods.</p> <p>You could try using <code>&amp;</code> at the end of the command if you want to continue using that prompt. Personally I would use a terminal multiplexer like tmux or screen.</p>
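<p>For example, assuming you just want the prompt back in the same shell:</p> <pre><code># run the forwarder in the background
kubectl port-forward redis-master 6379:6379 &amp;

# the local port is now usable, e.g. (if redis-cli is installed locally):
redis-cli -p 6379 ping

# stop the forwarder again
kill %1
</code></pre>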
<p>Of all the concepts in Kubernetes, I find the way services work the most difficult to understand </p> <p>Here is what I imagine right now:</p> <ul> <li>kube-proxy on each node listens for any new service/endpoint in the master API controller</li> <li>If there is any new service/endpoint, it adds a rule to that node's iptables</li> <li>For a NodePort service, an external client has to access the new service through one of the nodes' IPs and the NodePort. The node will forward the request to the new service IP</li> </ul> <p>Is this correct? There are still a few things I'm not clear on:</p> <ul> <li>Are services lying within nodes? If so, can we ssh into nodes and inspect how services work? </li> <li>Are service IPs virtual IPs and only accessible within nodes?</li> </ul> <p>Most of the diagrams that I see online draw services as crossing all nodes, which makes it even more difficult to imagine</p>
<blockquote> <p>kube-proxy in each node listen to any new service/endpoint in master API controller</p> </blockquote> <p>Kubernetes uses etcd to share the current cluster configuration information across all nodes (including pods, services, deployments, etc.).</p> <blockquote> <p>If there is any new service/endpoint, it adds a rule to that node's iptables</p> </blockquote> <p>Internally kubernetes has a so called Endpoint Controller that is responsible for modifying the DNS configuration of the virtual cluster network to make service endpoints available via DNS (and environment variables). </p> <blockquote> <p>For NodePort service, external client has to access new service through one of the node's ip and NodePort. The node will forward the request to the new service IP</p> </blockquote> <p>Depending on the service type additional action is taken, e.g. to make a port available on the nodes through an automatically created clusterIP service for type nodePort. Or an external load balancer is created with the cloud provider, etc.</p> <blockquote> <p>Are services lying within nodes? If so, can we ssh into nodes and inspect how services work?</p> </blockquote> <p>As explained, services are manifested in the cluster configuration, the endpoint controller as well as additional things, like the clusterIP services, load balancers, etc. I cannot see a need to ssh into nodes to inspect services. Typically interacting with the cluster api should be sufficient to investigate/update the service configuration.</p> <blockquote> <p>Are service IPs virtual IPs and only accessible within nodes?</p> </blockquote> <p>Service IPs, like POD IPs are virtual and accessible from within the cluster network. There is a global allocation map in etcd that maintains the complete list that allows allocating unique new ones. For more information on the networking model read <a href="https://cloudnativelabs.github.io/post/2017-04-18-kubernetes-networking/" rel="nofollow noreferrer">this blog</a>.</p> <p>For more detailed information see the docs for <a href="https://kubernetes.io/docs/concepts/overview/components/#etcd" rel="nofollow noreferrer">kubernetes components</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a>.</p>
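<p>For example, instead of ssh-ing into a node, you can inspect a service and the pod endpoints it maps to straight from the API:</p> <pre><code>kubectl describe service my-service
kubectl get endpoints my-service -o yaml
</code></pre> <p>(<code>my-service</code> is a placeholder for your service name.)</p>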
<p>When processing a rolling update with database migrations, how does kubernetes handle this? </p> <p>For instance, I have an app that gets updated from app-v1 to app-v2, which includes a migration step to alter an existing table. So this would mean it requires me to run something like <code>db:migrate</code> for a rails app once deployed.</p> <p>When a rolling deployment takes place on a 3-replica set, it will update one pod after another, potentially leaving pods that don't have the new version of the app to break. </p> <p>Although this scenario is not something that happens very often, it's quite possible that it would. I would like to learn about the best/recommended approaches for this scenario.</p>
<p>One way to prevent an old version from breaking is to split a migration into multiple steps. </p> <p>E.g. you want to rename a column in the database. Renaming the column directly would break old versions of the app. This can be split into multiple steps:</p> <ul> <li>Add a db migration that inserts the new column</li> <li>Change the app so that all writes go to the old and new column</li> <li>Run a task that copies all values from the old to the new column</li> <li>Change the app that it reads from the new column</li> <li>Add a migration that remove the old column</li> </ul> <p>This is unfortunately quite a hassle, but prevents having a downtime with a maintenance page up.</p>
<p>I've bootstrapped with kubeadm Kubernetes 1.9 RBAC cluster and I've started inside a POD Jenkins based on jenkins/jenkins:lts. I would like to try out <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">https://github.com/jenkinsci/kubernetes-plugin</a> . I have already created a serviceaccount based on the proposal in <a href="https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2" rel="noreferrer">https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2</a> </p> <pre><code>&gt; kubectl -n dev-infra create sa jenkins &gt; kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=dev-infra:jenkins &gt; kubectl -n dev-infra get sa jenkins -o yaml apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: 2018-02-16T12:06:26Z name: jenkins namespace: dev-infra resourceVersion: "1295580" selfLink: /api/v1/namespaces/dev-infra/serviceaccounts/jenkins uid: d040041c-1311-11e8-a4f8-005056039a14 secrets: - name: jenkins-token-vmt79 &gt; kubectl -n dev-infra get secret jenkins-token-vmt79 -o yaml apiVersion: v1 data: ca.crt: LS0tL...0tLQo= namespace: ZGV2LWluZnJh token: ZXlK...tdVE= kind: Secret metadata: annotations: kubernetes.io/service-account.name: jenkins kubernetes.io/service-account.uid: d040041c-1311-11e8-a4f8-005056039a14 creationTimestamp: 2018-02-16T12:06:26Z name: jenkins-token-vmt79 namespace: dev-infra resourceVersion: "1295579" selfLink: /api/v1/namespaces/dev-infra/secrets/jenkins-token-vmt79 uid: d041fa6c-1311-11e8-a4f8-005056039a14 type: kubernetes.io/service-account-token </code></pre> <p>After that I go to Manage Jenkins -> Configure System -> Cloud -> Kubernetes and set the Kubernetes URL to the Cluster API that I use also in my kubectl KUBECONFIG server: url:port.</p> <p>When I hit test connection I get "Error testing connection <a href="https://url:port" rel="noreferrer">https://url:port</a>: Failure executing: GET at: <a href="https://url:port/api/v1/namespaces/dev-infra/pods" rel="noreferrer">https://url:port/api/v1/namespaces/dev-infra/pods</a>. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:dev-infra:default" cannot list pods in the namespace "dev-infra".</p> <p>I don't want to give to the dev-infra:default user a cluster-admin role and I want to use the jenkins sa I created. I can't understand how to configure the credentials in Jenkins. When I hit add credentials on the <a href="https://github.com/jenkinsci/kubernetes-plugin/blob/master/configuration.png" rel="noreferrer">https://github.com/jenkinsci/kubernetes-plugin/blob/master/configuration.png</a> I get </p> <pre><code>&lt;select class="setting-input dropdownList"&gt; &lt;option value="0"&gt;Username with password&lt;/option&gt; &lt;option value="1"&gt;Docker Host Certificate Authentication&lt;/option&gt; &lt;option value="2"&gt;Kubernetes Service Account&lt;/option&gt; &lt;option value="3"&gt;OpenShift OAuth token&lt;/option&gt; &lt;option value="4"&gt;OpenShift Username and Password&lt;/option&gt; &lt;option value="5"&gt;SSH Username with private key&lt;/option&gt; &lt;option value="6"&gt;Secret file&lt;/option&gt; &lt;option value="7"&gt;Secret text&lt;/option&gt; &lt;option value="8"&gt;Certificate&lt;/option&gt;&lt;/select&gt; </code></pre> <p>I could not find a clear example how to configure Jenkins Kubernetes Cloud connector to use my Jenkins to authenticate with service account jenkins. 
Could you please help me find a step-by-step guide - what kind of credentials do I need? </p> <p>Regards, Pavel</p>
<p>The best practice is to launch your Jenkins master pod with the <code>serviceaccount</code> you created, instead of creating credentials in Jenkins.</p> <p><a href="https://github.com/jenkinsci/kubernetes-plugin/blob/fc40c869edfd9e3904a9a56b0f80c5a25e988fa1/src/main/kubernetes/jenkins.yml#L22" rel="noreferrer">See example yaml</a></p>
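<p>As a minimal sketch, only the relevant field of the Jenkins master Deployment/StatefulSet pod template is shown below; with this in place, the Kubernetes plugin should use the mounted token of the <code>jenkins</code> service account instead of <code>dev-infra:default</code> (either via the "Kubernetes Service Account" credential type or by leaving the credential empty):</p> <pre><code>spec:
  template:
    spec:
      serviceAccountName: jenkins
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
</code></pre>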
<p>I have added HPA support for my containers and they are scaling out and in as expected. But I'm not sure about the internal state of the docker containers during the scaling. </p> <p>Let's say I have an ongoing process and the number of replicas is 1. If the cpu usage goes above the threshold, replicas scale out to 2 or 3. I understand the new replicas are ready to serve new requests, but what happens to the ongoing process? Also, how would it be impacted in the case of a stateless vs. a stateful process?</p>
<p>Your ongoing process shouldn't be affected by the scaling operation: existing requests will keep being processed in the existing container, while new requests will be routed to the new containers the cluster provisioned.</p> <p>In the case of stateless processes the scaling shouldn't affect the response, as the service doesn't hold any state in any of its containers.<br> The case for stateful services is a lot more complicated and kinda goes beyond the scope of an SO question. You can examine the k8s <code>statefulset</code> object to see how they tackle this issue, and consider it for stateful processes running in a cluster. Check out the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">documentation</a> for more info.</p>
<p>I'm trying (for testing purposes) to expose to kubernetes a very simple pong HTTP image:</p> <pre><code>FROM golang:onbuild
EXPOSE 8000
</code></pre> <p>I built the docker image: </p> <pre><code>docker build -t pong .
</code></pre> <p>I started a private registry (with certificates):</p> <pre><code>docker run -d --restart=always --name registry -v `pwd`/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key -p 443:443 registry:2.6.2
</code></pre> <p>I created a secret: </p> <pre><code>kubectl create secret docker-registry regsecret --docker-server=localhost --docker-username=johndoe --docker-password=johndoe [email protected]
</code></pre> <p>I uploaded the image: </p> <pre><code>docker tag 9c0bb659fea1 localhost/pong
docker push localhost/pong
</code></pre> <p>I had an insecure registry configuration </p> <pre><code>{
  "storage-driver" : "aufs",
  "insecure-registries" : [
    "localhost"
  ],
  "debug" : true,
  "experimental" : true
}
</code></pre> <p>So I tried to create my kubernetes pods with: </p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pong
spec:
  containers:
  - name: pong
    image: localhost/pong:latest
    imagePullPolicy: Always
  imagePullSecrets:
  - name: regsecret
</code></pre> <p>I'm on MacOS with docker Version 17.12.0-ce-mac49 (21995).</p> <p>If I use <code>image: localhost/pong:latest</code> I get:</p> <pre><code>waiting:
  message: 'rpc error: code = Unknown desc = Error response from daemon: error parsing HTTP 404 response body: invalid character ''d'' looking for beginning of value: "default backend - 404"'
  reason: ErrImagePull
</code></pre> <p>I've been stuck on this since the beginning of the week, without success.</p>
<p>It was not a problem of registry configuration. I forgot to mention that I used minikube.</p> <p>For the flags to be taken into account, I had to delete the minikube configuration and recreate it</p> <p><code>minikube delete</code></p> <p><code>minikube start --insecure-registry="10.0.4.0/24"</code></p>
<p>Consider a Statefulset (Cassandra using the official K8S example) across 3 Availability zones:</p> <ul> <li>cassandra-0 -> zone a</li> <li>cassandra-1 -> zone b</li> <li>cassandra-2 -> zone c</li> </ul> <p>Each Cassandra pod uses an EBS volume. So there is automatically an affinity. For instance, cassandra-0 cannot move to "zone-b" because its volume is in "zone-a". All good.</p> <p>If some Kubernetes nodes/workers fail, they will be replaced. The pods will start again on the new node and have their EBS volume re-attached, looking like nothing happened.</p> <p>Now if the entire AZ "zone-a" goes down and is unavailable for some time (meaning cassandra-0 cannot start anymore due to the affinity for its EBS volume in the same zone), you are left with:</p> <ul> <li>cassandra-1 -> zone b</li> <li>cassandra-2 -> zone c</li> </ul> <p>Kubernetes will never be able to start cassandra-0 for as long as "zone-a" is unavailable. That's all good because cassandra-1 and cassandra-2 can serve requests.</p> <p>Now if, on top of that, another K8S node goes down or you have set up auto-scaling of your infrastructure, you could end up with cassandra-1 or cassandra-2 needing to move to another K8S node. It shouldn't be a problem.</p> <p><strong>However</strong>, from my testing, K8S will not do that because the pod cassandra-0 is offline. It will never self-heal cassandra-1 or cassandra-2 (or any cassandra-X) because it wants cassandra-0 back first. And cassandra-0 cannot start because its volume is in a zone which is down and not recovering.</p> <p>So if you use <strong>Statefulset + VolumeClaim + across zones</strong> AND you experience an <strong>entire AZ failure</strong> AND you experience an EC2 failure in another AZ or have auto-scaling of your infrastructure,</p> <p>=> then you will lose all your Cassandra pods until zone-a is back online.</p> <p>This seems like a dangerous situation. Is there a way for a stateful set to not care about the order and still self-heal or start more pods (cassandra-3, 4, 5, X)?</p>
<p>Two options: </p> <p>Option 1: <strong>use podManagementPolicy</strong> and set it to Parallel (a minimal spec sketch is shown at the end of this answer). Pod-1 and pod-2 will crash a few times until the seed node (pod-0) is available. This happens when creating the statefulset the first time. Also note that the Cassandra documentation used to recommend NOT creating multiple nodes in parallel, but it seems recent updates make this not true. Multiple nodes can be added to the cluster at the same time.</p> <p>Issue found: if using 2 seed nodes, you will get a split brain scenario. Each seed node will be created at the same time and create 2 separate logical Cassandra clusters.</p> <p>Option 1b: <strong>use podManagementPolicy</strong>, set it to Parallel and use an init container <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a>. Same as option 1, but the init container is a short-lived container whose role is to check that the seed node is available before starting the actual container. This is not required if we are happy for the pod to crash until the seed node is available again. The problem is that the init container will always run, which is not required. We want to ensure the Cassandra cluster was well formed the first time it was created. After that it does not matter.</p> <p>Option 2: <strong>create 3 different statefulsets</strong>.</p> <p>1 statefulset per AZ/Rack. Each statefulset has constraints so it can run only on nodes in the specific AZ. I've also got 3 storage classes (again constrained to a particular zone), to make sure the statefulset does not provision EBS in the wrong zone (statefulset does not handle that dynamically yet). In each statefulset I've got a Cassandra seed node (defined as the environment variable CASSANDRA_SEEDS which populates SEED_PROVIDER at run time). That makes 3 seeds which is plenty. My setup can survive a complete zone outage thanks to replication-factor=3.</p> <p>Tips:</p> <ul> <li>the list of seed nodes contains all 3 nodes separated by commas: "cassandra-a-0.cassandra.MYNAMESPACE.svc.cluster.local, cassandra-b-0.cassandra.MYNAMESPACE.svc.cluster.local, cassandra-c-0.cassandra.MYNAMESPACE.svc.cluster.local"</li> <li>Wait until the first seed (cassandra-a-0) is ready before creating the other 2 statefulsets. Otherwise you get a split brain. This is only an issue when you create the cluster. After that, you can lose one or two seed nodes without impact, as the third one is aware of all the others.</li> </ul>
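<p>For reference, a minimal sketch of the option 1 change -- the statefulset name, image and seed address are placeholders, the only important line is <code>podManagementPolicy</code>:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  podManagementPolicy: Parallel   # launch/replace pods without waiting for lower ordinals to be Running
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra           # placeholder; use the image from the official example
        env:
        - name: CASSANDRA_SEEDS    # seed list as described above; namespace is a placeholder
          value: "cassandra-0.cassandra.default.svc.cluster.local"
</code></pre>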
<p>I'm trying to run the <a href="https://github.com/kubernetes/charts/tree/master/stable/jenkins" rel="noreferrer">Jenkins Helm chart</a>. As part of this setup, I'd like to pass in a persistent volume that I provisioned ahead of time (or perhaps exported from another cluster during a migration). </p> <p>I'm trying to get my persistent volume (PV) and persistent volume claim (PVC) set up in such a way that when Jenkins starts, it uses my predefined PV and PVC.</p> <p>I think the problem originates from the fact that the persistent storage definition for the Azure disk points to a VHD in my storage account. Is there any way to point it to an existing managed disk - and not a blob?</p> <p>This is how I set up my persistent storage using an Azure Managed Disk:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-home spec: capacity: storage: 10Gi storageClassName: default azureDisk: diskName: jenkins-home diskURI: https://&lt;storageaccount&gt;.blob.core.windows.net/jenkins-data/jenkins-home.vhd fsType: ext4 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain claimRef: name: jenkins-home-pvc namespace: default --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-home-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: default </code></pre> <p>I then start helm like this... </p> <pre><code>helm install --name jenkins stable/jenkins --values=values.yaml </code></pre> <p>Where my <code>values.yaml</code> file looks like</p> <pre><code>Persistence: ExistingClaim: jenkins-home-pvc </code></pre> <p>Here is the error I receive when the Jenkins pod starts.</p> <blockquote> <p>AttachVolume.Attach failed for volume "jenkins-home" : Attach volume "jenkins-home" to instance "aks-agentpool-40897452-0" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="OperationNotAllowed" Message="Addition of a blob based disk to VM with managed disks is not supported."</p> </blockquote>
<p>I posed this question to the Azure team <a href="https://github.com/Azure/acs-engine/issues/2299" rel="noreferrer">here</a>.</p> <p>Through their help I arrived at the following solution...</p> <p>I had tried to use the managed disk resource ID before but it yelled at me saying it expected a .vhd file. But after adding 'kind: Managed', it was perfectly happy to take the managed disk resource id. </p> <p>Creating an empty and formatted managed disk is of course a pre-requisite for this to work. Copying the managed disk into the same resource group as the AKS cluster was also required.</p> <p>So now my PV and PVC look like this and it's working...</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-home spec: capacity: storage: 10Gi storageClassName: default azureDisk: kind: Managed diskName: jenkins-home diskURI: /subscriptions/{subscription-id}/resourceGroups/{aks-controlled-resource-group-name}/providers/Microsoft.Compute/disks/jenkins-home fsType: ext4 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain claimRef: name: jenkins-home-pvc namespace: default --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-home-pvc annotations: volume.beta.kubernetes.io/storage-class: default spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: default </code></pre>
<p>I have installed JFrog Artifactory Pro Trial (v5.8.3) on a Kubernetes cluster on AWS using the official Helm chart:</p> <pre><code>helm install --name artifactory stable/artifactory </code></pre> <p>I have mapped the ELB launched by the Artifactory chart to the Route53 DNS entry jfrogartifactory.qdatalabs.com</p> <p>I'm able to access the Jfrog application on the URL: <a href="http://jfrogartifactory.qdatalabs.com/" rel="nofollow noreferrer">jfrogartifactory.qdatalabs.com</a></p> <p>Now, I want to push Docker images from my local machine to Artifactory. I have set up a Docker repository on Artifactory and want to expose it without using a reverse proxy. I have chosen 'Repository Path' as the Docker Access Method in the HTTP Settings Configuration:</p> <p><a href="https://i.stack.imgur.com/NPu3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NPu3R.png" alt="enter image description here"></a></p> <p>I have chosen 'Embedded Tomcat' in the reverse proxy settings:</p> <p><a href="https://i.stack.imgur.com/PKvFX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PKvFX.png" alt="enter image description here"></a></p> <p>The HTTP Settings in the Advanced settings page of docker, docker-local and docker-remote are auto-populated like this:</p> <p><a href="https://i.stack.imgur.com/e7lJ9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e7lJ9.png" alt="enter image description here"></a></p> <p>I'm getting a 200 response on the following URL: <a href="http://jfrogartifactory.qdatalabs.com/artifactory/api/docker/docker/v2" rel="nofollow noreferrer">http://jfrogartifactory.qdatalabs.com/artifactory/api/docker/docker/v2</a></p> <p>However, I'm getting a 404 response on the following URL: <a href="http://jfrogartifactory.qdatalabs.com/v2/" rel="nofollow noreferrer">http://jfrogartifactory.qdatalabs.com/v2/</a></p> <p><a href="https://i.stack.imgur.com/IknBq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IknBq.png" alt="enter image description here"></a></p> <p>On my local machine, I have added jfrogartifactory.qdatalabs.com to the insecure registries by editing the /etc/docker/daemon.json:</p> <p><a href="https://i.stack.imgur.com/KellV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KellV.png" alt="enter image description here"></a></p> <p>After reloading the Docker daemon, I can see the URL added to the insecure registries:</p> <p><a href="https://i.stack.imgur.com/lxOda.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lxOda.png" alt="enter image description here"></a></p> <p>I have also enabled anonymous access to the registry. But when I try to pull an image or log in to the registry, I get a 500 Internal Server error: <a href="https://i.stack.imgur.com/4KhNS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4KhNS.png" alt="enter image description here"></a></p> <p>Please help with any pointers, or point out any obvious mistakes I may be making.</p>
<ul> <li><p>When adding jfrogartifactory.qdatalabs.com to the insecure registries, you also need to specify the port Artifactory is running on, e.g. <strong>jfrogartifactory.qdatalabs.com:8081</strong>. Then reload the Docker daemon.</p></li> <li><p>When you log in through the Docker client, run the following: <strong>docker login -u {USER_NAME} -p {USER_PASSWORD} jfrogartifactory.qdatalabs.com:8081</strong></p></li> <li><p>To pull an image, you need to specify the Docker virtual repo name as part of the request. If you're using a virtual repository named docker-virtual, run the following: <strong>docker pull jfrogartifactory.qdatalabs.com:8081/docker-virtual/hello-world</strong></p></li> </ul> <p>Hope that helps. </p>
<p>Is there any way I can exec into the container, then edit some code (e.g. add some logging, edit some configuration file, etc.) and restart the container to see what happens?</p> <p>I tried to search for this but found nothing helpful.</p> <p>The point is, I want to do a quick debug, not a full cluster deployment.</p>
<p>Some programs (i.e. nginx) support configuration reload without restarting their process; with these you can just <code>kubectl exec</code> in, change the config and send a signal to the master process (e.g. <code>kubectl exec &lt;nginx_pod&gt; kill -HUP 1</code>). It is a feature of the software though, so many programs will not take that into account.</p> <p>Containers are immutable by design, so they restart with a clean state each time. That said, with no simple way of doing this, there are hackish ways to achieve it.</p> <p>One I can think of involves modifying the image on the node that will then restart the container. If you can ssh into the node and access docker directly, you can exec into the container, modify a file and commit these changes with <code>docker commit</code> under the same tag. At that point the local image with that tag has your changes baked in, so if you restart the container (not reschedule it, as it could start on a different node), it will come up with your changes (assuming you do not use <code>imagePullPolicy: Always</code>).</p> <p>Again, not the way it's meant to be used, but achievable.</p>
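<p>A rough sketch of that hack, with purely placeholder names, assuming you have SSH and Docker access on the node running the pod:</p> <pre><code># on the node where the pod is scheduled; container id, image and tag are placeholders
docker ps --filter name=my-app                            # find the container backing the pod
docker exec -it &lt;container_id&gt; sh                         # edit config / add logging inside the container
docker commit &lt;container_id&gt; my-registry/my-app:latest    # bake the changes into the tag the pod uses
# now restart the container on the same node (e.g. kubectl exec &lt;pod&gt; kill 1);
# it comes back with your changes as long as the pod is not rescheduled
# and imagePullPolicy is not Always
</code></pre>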
<p>As I understand it, all of these tools - <a href="https://draft.sh/" rel="noreferrer">Draft</a>, <a href="https://helm.sh/" rel="noreferrer">Helm</a> and <a href="https://ksonnet.io/" rel="noreferrer">Ksonnet</a> - have overlapping functionality such as creating a chart as well as deploying kubernetes configurations. </p> <p><em>I understand that the purpose of these tools would be to describe and configure the application as well as the k8s environments.</em> </p> <blockquote> <p>By using Draft we can create a Dockerfile and a Chart. Nevertheless, we can do the same thing with Helm and Ksonnet. </p> </blockquote> <p><em>My question is: if these components create a pipeline in CI/CD, then what will be the order?</em> </p> <p>for example, </p> <p><strong>draft -> ksonnet -> helm</strong> or</p> <p><strong>draft -> helm -> ksonnet</strong> </p>
<p>In short, <strong>draft</strong> and <strong>helm</strong> are more or less complementary and <strong>ksonnet</strong> is orthogonal, specifically providing an alternative to <strong>helm</strong>.</p> <p>In elaborating I will split my answer up into three major sections, the first of which describes how <strong>draft</strong> and <strong>helm</strong> interact, the second describing how <strong>ksonnet</strong> is orthogonal to the others, and finally a section explaining how I understand these with respect to CI/CD.</p> <h1>Helm &amp; Draft</h1> <p>Helm and Draft are complementary in the sense that Helm, which can be considered to be a package management system for Kubernetes, provides a portion of the functionality offered by Draft, which itself is essentially a Kubernetes application development tool.</p> <p>The relationship between Draft and Helm can be summarized by pointing out that in pursuit of its goal of simplifying Kubernetes application development, Draft produces a Helm chart using metadata inferred from your current application type (more about that below) if one does not already exist, or uses an existing one, in order to deploy/update a development version of your application without you having to know anything about how it does that.</p> <h2>Helm for Kubernetes Package Management</h2> <p>As mentioned previously, Helm is a package management system for Kubernetes-based applications. It provides the following features:</p> <ul> <li>A templating approach for defining Kubernetes manifests (called "charts")</li> <li>Package management, including a basic package repository service to host released packages.</li> <li>Application lifecycle management including deploy, update, and purging of Helm applications</li> <li>Package dependencies</li> </ul> <p>Helm takes a templated YAML approach to parameterizing Kubernetes manifests and allows values to be shared and overridden between dependent packages. I.e., suppose Package A depends on Package B; Package A can re-use configuration values set on Package B and it can override those parameters with values of its own. 
Values for all packages in a given deployment can also be overridden using the Helm command line tool.</p> <p>Also worth mentioning is the fact that Helm depends on the availability of its cluster-side component named "Tiller" to actually do the work of reifying templates and deploying the generated Kubernetes manifests to the cluster.</p> <h2>Draft for Kubernetes Application Development</h2> <p>The aim of Draft is to <a href="https://github.com/Azure/draft/blob/master/docs/reference/dep-002.md#user-personas-and-stories" rel="noreferrer">dramatically simplify development of Kubernetes applications</a> by quickly building and deploying the Helm charts/packages and corresponding docker images necessary to run a project -- provided that the following exist:</p> <ul> <li>A Kubernetes cluster</li> <li>Helm's Tiller pod installed in the Kubernetes cluster</li> <li>A Docker registry</li> </ul> <p>The <a href="https://github.com/Azure/draft/blob/master/docs/install.md" rel="noreferrer">draft installation guide</a> provides details for getting these pieces set up to try it out yourself.</p> <p>Draft also builds on Helm by providing a high-level "packaging" format that includes both the application helm chart and the Dockerfile, the latter giving it the ability to build docker images.</p> <p>Finally, it has built-in support for specific programming languages and will to a limited extent attempt to infer which programming language and framework(s) you are using when initially creating a new Draft project using <code>draft create</code>.</p> <h1>Ksonnet for Kubernetes Package Management</h1> <p>As mentioned previously, Ksonnet is orthogonal in many ways to Helm, providing essentially the same features with respect to package management wrapped in different terminology -- see its <a href="https://ksonnet.io/docs/concepts" rel="noreferrer">core concepts documentation</a>. It's worth noting that it is not compatible with nor does it address the same concerns as Draft.</p> <p>I say that Ksonnet and Helm are orthogonal because they take mutually incompatible approaches to generating and deploying Kubernetes manifests. Whereas Helm uses templated YAML, Ksonnet generates Kubernetes manifests using a <a href="http://jsonnet.org/" rel="noreferrer">"data templating" language called Jsonnet</a>. Also, rather than conceiving of "dependent" packages as is the case with Helm, Ksonnet blurs the line between dependent services by representing them as composable "prototypes". Finally, rather than depending on a cluster-side application that reifies and deploys manifest templates, Ksonnet has an <code>apply</code> subcommand analogous to <code>kubectl apply</code>.</p> <h1>CI/CD</h1> <p>So where do these pieces fit into a CI/CD workflow? 
Well, since there are essentially two mutually incompatible toolsets, let's consider them on a case-by-case basis:</p> <h2>Draft + Helm</h2> <p>According to <a href="https://github.com/Azure/draft/blob/master/docs/reference/dep-002.md#qa" rel="noreferrer">the Draft design Q&amp;A section</a>, it is meant only as a developer tool intended to abstract much of the complexity of dealing with kubernetes, helm, and docker from developers primarily interested in seeing their application run in a development cluster.</p> <p>With this in mind, any CD approach involving this set of tools would have to do the following (a simplified shell sketch of these steps is given at the end of this answer):</p> <ul> <li>Build docker image(s) using the <code>docker</code> CLI if necessary</li> <li>Build Helm package(s) using the <code>helm</code> CLI</li> <li>Deploy Helm package(s) to a Helm repository using the <code>helm</code> CLI</li> <li>Install/update Helm package(s) on the appropriate staging/prod Kubernetes cluster(s) using the <code>helm</code> CLI</li> </ul> <h2>Ksonnet</h2> <p>The Ksonnet CD workflow is somewhat abbreviated compared to the <code>helm</code> workflow above:</p> <ul> <li>Build docker image(s) using the <code>docker</code> CLI if necessary</li> <li>Apply the Ksonnet manifest using the <code>ks</code> CLI</li> </ul> <p>Whereas with Helm you would deploy your application's package to a Helm registry for re-use, if your Ksonnet manifest contains <a href="https://ksonnet.io/docs/concepts#prototype" rel="noreferrer">re-usable prototypes</a> that might be of use to another Ksonnet-based application you would want to ensure it is available in a git repo as described in the <a href="https://ksonnet.io/docs/concepts#registry" rel="noreferrer">Ksonnet registry documentation.</a></p> <p>This means that how Ksonnet definitions are dealt with in CI/CD is largely dependent on which git repo(s) you decide to store them in and how they are structured.</p>
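<p>Returning to the Draft + Helm list above, here is a deliberately simplified sketch of the shell steps a CI job might run -- the chart name, image name and chart repository are all placeholders, and <code>image.tag</code> is assumed to be a value exposed by the hypothetical chart:</p> <pre><code># build and push the application image
docker build -t my-registry/my-app:$BUILD_TAG .
docker push my-registry/my-app:$BUILD_TAG

# package the chart and publish it to your chart repository
helm package ./charts/my-app --version $BUILD_TAG
# (upload my-app-$BUILD_TAG.tgz to the repository, e.g. via ChartMuseum or a static file server)

# install or upgrade the release on the target cluster
helm upgrade --install my-app my-repo/my-app \
  --version $BUILD_TAG \
  --set image.tag=$BUILD_TAG
</code></pre>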
<p>I'm using Google cloud platform and Kubernetes.</p> <p>I'm trying to find out which token should I use in order to login to the dashboard and have enough permissions to do as I please.</p> <p>I created a 3-node Kubernetes 1.8.6 cluster on Google Cloud Platform</p> <p>my developer desktop is a Mac Pro (late 2013) on macos high sierra 10.13.2 with google-cloud-sdk and kubernetes-cli installed from homebrew.</p> <pre><code>~ ❯❯❯ kubectl version ✘ 1 Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;9&quot;, GitVersion:&quot;v1.9.1&quot;, GitCommit:&quot;3a1c9449a956b6026f075fa3134ff92f7d55f812&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2018-01-04T20:00:41Z&quot;, GoVersion:&quot;go1.9.2&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;8+&quot;, GitVersion:&quot;v1.8.6-gke.0&quot;, GitCommit:&quot;ee9a97661f14ee0b1ca31d6edd30480c89347c79&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2018-01-05T03:36:42Z&quot;, GoVersion:&quot;go1.8.3b4&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>and</p> <pre><code>~ ❯❯❯ gcloud version Google Cloud SDK 184.0.0 bq 2.0.28 core 2018.01.05 gsutil 4.28 </code></pre> <p>I read in the docs that it's not safe to create an admin user for the dashboard, unfortunately the all permissions to the dashboard pod confuses me a bit.</p> <p>when I execute <code>kubectl get secrets -n kube-system</code> and decode one of the tokens with <code>kubectl get secret &lt;TOKEN_NAME&gt; -n=kube-system -o json | jq -r '.data[&quot;token&quot;]' | base64 -D &gt; user_token.txt</code></p> <p>and use that to login using the kubectl web proxy I started with the command <code>kubectl proxy</code>, I get lots of permissions errors when I try to view any of the pages in the dashboard web interface. I probably don't use the proper token.. 
or I need to create a new token.</p> <p>Is there a way to view the permissions of the tokens so I'll know before hand what I'm actually trying to login with?</p> <h1>update</h1> <p>so I ran kubectl to get all the secret tokens in kube-system namespace:</p> <pre><code>~ ❯❯❯ kubectl get secrets -n kube-system NAME TYPE DATA AGE attachdetach-controller-token-4pp92 kubernetes.io/service-account-token 3 10m certificate-controller-token-bqnjp kubernetes.io/service-account-token 3 10m cloud-provider-token-ltbnh kubernetes.io/service-account-token 3 10m cronjob-controller-token-84cl9 kubernetes.io/service-account-token 3 10m daemon-set-controller-token-ncz5r kubernetes.io/service-account-token 3 10m default-token-fpmht kubernetes.io/service-account-token 3 10m deployment-controller-token-4xc8k kubernetes.io/service-account-token 3 10m disruption-controller-token-9gdqg kubernetes.io/service-account-token 3 10m endpoint-controller-token-gr29m kubernetes.io/service-account-token 3 10m event-exporter-sa-token-6klz5 kubernetes.io/service-account-token 3 10m fluentd-gcp-token-s2kk4 kubernetes.io/service-account-token 3 10m generic-garbage-collector-token-tqbqz kubernetes.io/service-account-token 3 10m heapster-token-7pgmr kubernetes.io/service-account-token 3 10m horizontal-pod-autoscaler-token-74v57 kubernetes.io/service-account-token 3 10m job-controller-token-2skhj kubernetes.io/service-account-token 3 10m kube-dns-autoscaler-token-wc9gz kubernetes.io/service-account-token 3 10m kube-dns-token-nx2tf kubernetes.io/service-account-token 3 10m kubernetes-dashboard-certs Opaque 0 10m kubernetes-dashboard-key-holder Opaque 2 9m kubernetes-dashboard-token-zxp7n kubernetes.io/service-account-token 3 10m namespace-controller-token-tz54r kubernetes.io/service-account-token 3 10m node-controller-token-m2w7k kubernetes.io/service-account-token 3 10m persistent-volume-binder-token-6sfkt kubernetes.io/service-account-token 3 10m pod-garbage-collector-token-zqxhd kubernetes.io/service-account-token 3 10m replicaset-controller-token-8n6b7 kubernetes.io/service-account-token 3 10m replication-controller-token-nb2tw kubernetes.io/service-account-token 3 10m resourcequota-controller-token-blhfg kubernetes.io/service-account-token 3 10m route-controller-token-c5ns6 kubernetes.io/service-account-token 3 10m service-account-controller-token-zptxc kubernetes.io/service-account-token 3 10m service-controller-token-75hht kubernetes.io/service-account-token 3 10m statefulset-controller-token-fhpk8 kubernetes.io/service-account-token 3 10m ttl-controller-token-5vwln kubernetes.io/service-account-token 3 10m </code></pre> <p>then I executed</p> <pre><code>kubectl get secret kubernetes-dashboard-token-zxp7n -n=kube-system -o json | jq -r '.data[&quot;token&quot;]' | base64 -D &gt; user_token.txt </code></pre> <p>and used that token to login.</p> <p>after login I get the following messages:</p> <pre><code>warning configmaps is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list configmaps in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning persistentvolumeclaims is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list persistentvolumeclaims in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning secrets is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; 
cannot list secrets in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning services is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list services in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning ingresses.extensions is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list ingresses.extensions in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning daemonsets.apps is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list daemonsets.apps in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning pods is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list pods in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning events is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list events in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning deployments.apps is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list deployments.apps in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning replicasets.apps is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list replicasets.apps in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning jobs.batch is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list jobs.batch in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning cronjobs.batch is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list cronjobs.batch in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning replicationcontrollers is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list replicationcontrollers in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; close warning statefulsets.apps is forbidden: User &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; cannot list statefulsets.apps in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:kube-system:kubernetes-dashboard&quot; </code></pre> <p>any ideas why ?</p>
<p>After you connect to the cluster with <strong>gcloud container clusters get-credentials</strong>, use the following command to get the access token of the current context:</p> <pre><code>kubectl config view | grep -A10 "name: $(kubectl config current-context)" | awk '$1=="access-token:"{print $2}' </code></pre>
<p>Our application landscape consists of Spring Boot apps hosted in docker containers managed by Kubernetes. In Spring Boot, we use the property "spring.config.location" to specify the external location of the property files. The java command is as follows: java -jar myproject.jar --spring.config.location={file://some file path}</p> <p>Now, instead of using the local file path, can I create a Kube persistent volume and give that path in the above command? What Kube volume type should I use to allow for the same semantics of file://{file path}?</p>
<p>We could successfully read the application.properties file from a Kubernetes Persistent Volume using the "spring.config.location" command line argument. The steps we followed were: </p> <ol> <li><p>Created a persistent volume on Kubernetes (the PVC was set to the ReadWriteMany access mode, as multiple microservices will use the same properties file). </p></li> <li><p>Mounted the volume to the Pod of each microservice (changes in the pod spec file); the mounted file system is accessible via the '/shared/folder/properties' path (see the pod spec sketch below). </p></li> <li><p>Added the "spring.config.location" parameter to the "java -jar" command - e.g. java -Dspring.profiles.active=$spring_profile -Xmx4096m -XX:+HeapDumpOnOutOfMemoryError -jar myJar.jar --spring.config.location=file:/shared/folder/properties</p></li> </ol>
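<p>As an illustration of step 2, the relevant part of the pod spec could look roughly like this (claim, volume and image names are placeholders, and the <code>args</code> line assumes the image's entrypoint passes arguments through to the jar):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-spring-service
spec:
  containers:
  - name: my-spring-service
    image: my-registry/my-spring-service:latest
    args:
    - "--spring.config.location=file:/shared/folder/properties/"
    volumeMounts:
    - name: shared-properties
      mountPath: /shared/folder/properties
  volumes:
  - name: shared-properties
    persistentVolumeClaim:
      claimName: shared-properties-pvc   # the ReadWriteMany claim shared by the microservices
</code></pre>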
<p>I want to restart a pod (that is part of a deployment), without having to restart/replace the entire deployment.</p> <p>I have tried <code>kubectl replace --force -f file.yaml</code> but that restarts the whole deployment. I want to just restart the current pod that is live. </p> <p>Any thoughts are appreciated.</p>
<p>Why not just <code>kubectl delete pod &lt;pod&gt;</code>? It will remove your single pod and schedule a new one in its place.</p> <p>If rescheduling is a problem, you could try to kill the process running inside the container in the pod with something like <code>kubectl exec &lt;pod&gt; &lt;container&gt; kill 1</code>, but some processes might not be willing to surrender easily :)</p>
<p>I am new to Kubernetes networking.</p> <p>We have separated a Kubernetes cluster into a set of namespaces (e.g. <code>namespace-a</code>, <code>namespace-b</code>). Every namespace has a set of Kubernetes pods. Every pod has a service that is available at <code>my-svc.namespace-x.svc.cluster.local</code>.</p> <p>Now, we want to <strong>prevent</strong> pods of namespace <code>namespace-a</code> to talk with services or pods that are part of <code>namespace-b</code> and <strong>vice versa</strong>. Communication <strong>within</strong> a namespace should be unrestricted. </p> <p>This is what I found as an example in the network policies documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource</a></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny spec: podSelector: {} policyTypes: - Ingress - Egress </code></pre> <p>As far as I understand, this prevents network communication completely, for all pods across a namespace. </p> <ul> <li>How can I allow all network traffic, but only <strong>within</strong> a particular namespace?</li> <li>Do I need a networking plugin, such as Calico, Flannel or Weave? Which one should I choose?</li> </ul>
<blockquote> <p>Do I need a networking plugin, such as Calico, Flannel or Weave?</p> </blockquote> <p>No matter what, you need a networking plugin, but not all plugins support the <code>NetworkPolicy</code> API object. According to the <a href="https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/" rel="noreferrer">Declare Network Policy walkthrough</a>, the following is a (probably non-exhaustive) list of plugins that do support <code>NetworkPolicy</code>:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/calico-network-policy/" rel="noreferrer">Calico</a></li> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/cilium-network-policy/" rel="noreferrer">Cilium</a></li> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/kube-router-network-policy/" rel="noreferrer">Kube-router</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/romana-network-policy/" rel="noreferrer">Romana</a></li> <li><a href="https://kubernetes.io/docs/tasks/administer-cluster/weave-network-policy/" rel="noreferrer">Weave Net</a></li> </ul> <p>Without a plugin that supports <code>NetworkPolicy</code>, creating the resource would have no effect.</p> <blockquote> <p>Which one should I choose?</p> </blockquote> <p>As for which one you should choose, stackoverflow is not the place for soliciting that kind of advice. What I can recommend is reading the overview/features documentation for the various options available. Maybe try one or two different plugins in a local development cluster to get a feel for how difficult or easy they are to install, maintain, and update.</p> <blockquote> <p>How can I allow all network traffic, but only within a particular namespace?</p> </blockquote> <p>Given your example setup, I think the following <code>NetworkPolicy</code> resources would address your need:</p> <p>For pods in <code>namespace-a</code>, only allow ingress from <code>namespace-a</code> pods, denying ingress from any other source. Egress is unrestricted:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: namespace-a spec: policyTypes: - Ingress podSelector: {} ingress: - from: - namespaceSelector: matchLabels: name: namespace-a </code></pre> <p>For pods in <code>namespace-b</code>, only allow ingress from <code>namespace-b</code> pods, denying ingress from any other source. Egress is unrestricted:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: namespace-b spec: policyTypes: - Ingress podSelector: {} ingress: - from: - namespaceSelector: matchLabels: name: namespace-b </code></pre> <p>Note that this assumes you have set the <code>name: namespace-a</code> and <code>name: namespace-b</code> labels on your namespaces, similar to this:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: namespace-a labels: name: namespace-a other: labelname </code></pre> <p>I only point this out to avoid confusing you with regard to the fact that the labels I showed above happen to match up with your hypothetical namespace names. The labels can be arbitrary and potentially inclusive of multiple namespaces -- for example you might have <code>namespace-a</code> and <code>namespace-c</code> both with a label called <code>other: labelname</code>, which would allow you to select multiple namespaces using a single <code>namespaceSelector</code> in your <code>NetworkPolicy</code> resource.</p>
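<p>If the namespaces already exist, you could also apply those labels directly with kubectl rather than editing the Namespace manifests:</p> <pre><code>kubectl label namespace namespace-a name=namespace-a
kubectl label namespace namespace-b name=namespace-b
</code></pre>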
<p>When I try to run a container in the cluster, I get the message "<code>deployment test created</code>", but when I look at the dashboard I can see that it's in an error state (<code>Failed to pull image...</code>; it was not able to pull the image from the local minikube docker env due to authorization issues).</p> <p>My steps were:</p> <ol> <li>Start minikube using hyperv and set the <code>--insecure-registry</code> switch to 10.0.0.0/8 (also tried 0.0.0.0/0) - Kubernetes version 1.9.0, also tried 1.8.0</li> <li>Set the <code>docker env</code> to the minikube docker via <code>minikube docker-env | Invoke-Expression</code></li> <li>Build the docker image - the image builds and exists in the minikube local docker</li> <li><code>kubectl run test --image test-service --port 1101</code></li> </ol> <p>This is the result:</p> <p><a href="https://i.stack.imgur.com/IsEXO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IsEXO.jpg" alt="Failed Deployment"></a></p> <p>What am I missing?</p>
<p>As discussed in the comments, <code>openfaas/faas-netes</code> issue 135 illustrates a similar issue, and mentions as a possible solution:</p> <blockquote> <p><code>imagePullPolicy</code> if not mentioned should have defaulted to <code>Never</code> instead of <code>Always</code>.</p> </blockquote> <p>The <a href="https://stackoverflow.com/users/10387/tim-jarvis">OP Tim Jarvis</a> then realized:</p> <blockquote> <p>I realized it was not an auth issue, but that it was always wanting to pull from an external repo.<br> The fix for me was to use the <code>imagePullPolicy</code> of <code>IfNotPresent</code>.</p> </blockquote>
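<p>For example, instead of relying on the <code>kubectl run</code> defaults you could declare the policy explicitly in a small deployment manifest (using the image and port from the question; everything else is a minimal sketch):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: test-service              # built against the minikube Docker daemon
        imagePullPolicy: IfNotPresent    # use the local image instead of trying to pull it
        ports:
        - containerPort: 1101
</code></pre> <p>The same policy can also be passed to <code>kubectl run</code> via its <code>--image-pull-policy</code> flag.</p>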
<p>I am really new to kubernetes and nginx. I am able to use it as a reverse proxy by setting up an ingress resource; however, I am not sure how I should use it to forward requests from kubernetes to a particular host. </p> <p>My case is as follows: I have a container running in a kubernetes pod which accesses an external api url (for example www.xxx.com) with some parameters; however, because I have blocked outgoing requests for all the pods, it cannot access that api url. </p> <p>To solve this I want to set up an nginx proxy which will forward my request to the actual api url. </p> <p>Being new to this, and with no proper steps documented anywhere to achieve this, I am really stuck. How can I do this?</p>
<p>What you could do is define a Service object that points to your external API endpoint. This is done by creating an Endpoints object and a Service object, both with the same name.</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors</a></p> <p>Once you have your service, you could create an Ingress rule that would forward the traffic to that service. Make sure that the Ingress controller can access your API endpoint.</p>
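<p>A minimal sketch of that pattern, assuming the external API resolves to the placeholder address 203.0.113.10 on port 443 (substitute the real IP and port of www.xxx.com):</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: external-api
spec:
  ports:
  - protocol: TCP
    port: 443
    targetPort: 443
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-api          # must match the Service name
subsets:
  - addresses:
      - ip: 203.0.113.10      # Endpoints take IP addresses, not hostnames
    ports:
      - port: 443
</code></pre> <p>Pods can then call the API via <code>external-api.&lt;namespace&gt;.svc.cluster.local</code>, and an Ingress rule can point at the <code>external-api</code> service if the traffic should go through the ingress controller.</p>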
<p>When I try any kubectl command, it always returns:</p> <pre class="lang-text prettyprint-override"><code>Unable to connect to the server: EOF </code></pre> <p>I followed these tutorials:</p> <blockquote> <p><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a></p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</a></p> </blockquote> <p>But they have not helped me. According to the first link, by default, kubectl configuration is located at</p> <pre class="lang-text prettyprint-override"><code>~/.kube/config </code></pre> <p>But in that path I don't have anything. I don't know if this is causing the issue.</p> <p>Other thing is when I try to check the kubectl configuration:</p> <pre class="lang-text prettyprint-override"><code>M:.kube candres$ kubectl cluster-info Kubernetes master is running at http://localhost:8080 To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. Unable to connect to the server: EOF M:.kube candres$ kubectl cluster-info dump Unable to connect to the server: EOF </code></pre> <p>The versions I have installed are:</p> <p>Kubernetes - kubectl</p> <pre class="lang-text prettyprint-override"><code>M:.kube candres$ kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;9&quot;, GitVersion:&quot;v1.9.3&quot;, GitCommit:&quot;X&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2018-02-09T21:51:06Z&quot;, GoVersion:&quot;go1.9.4&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Unable to connect to the server: EOF </code></pre> <p>Minikube</p> <pre class="lang-text prettyprint-override"><code>M:.kube candres$ minikube version minikube version: v0.25.0 </code></pre> <p>Docker:</p> <pre class="lang-text prettyprint-override"><code>M:.kube candres$ docker version Client: Version: 17.12.0-ce API version: 1.35 Go version: go1.9.2 Git commit: X Built: Wed Dec 27 20:03:51 2017 OS/Arch: darwin/amd64 Server: Engine: Version: 17.12.0-ce API version: 1.35 (minimum version 1.12) Go version: go1.9.2 Git commit: X Built: Wed Dec 27 20:12:29 2017 OS/Arch: linux/amd64 Experimental: true </code></pre> <p>Does anyone know how to resolve this?</p>
<p>After Minikube is started, <strong>kubectl</strong> is configured automatically.</p> <pre><code>minikube start Starting local Kubernetes cluster... Kubernetes is available at https://192.168.99.100:8443. Kubectl is now configured to use the cluster. </code></pre> <p>You can verify and validate the cluster and context with the following command:</p> <pre><code>kubectl config view </code></pre>
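<p>As a quick sanity check once minikube is up, you can also confirm that the context points at the local cluster and that the API server is reachable:</p> <pre><code>kubectl config current-context   # should show the minikube context
kubectl cluster-info
kubectl get nodes
</code></pre>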