<p>I read in many books and documentation[1][2] that a Docker container or a pod is considered to be disposable and to have a short lifetime. Why are they considered so ephemeral? In that case, how can one run a containerized application in production?</p> <p>Besides, do the two terms disposable container and immutable container mean the same thing?</p> <p>[1] <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/</a> <br> [2] <a href="https://developers.redhat.com/blog/2016/02/24/10-things-to-avoid-in-docker-containers/" rel="nofollow noreferrer">https://developers.redhat.com/blog/2016/02/24/10-things-to-avoid-in-docker-containers/</a></p>
<blockquote> <p>Besides, do the two terms disposable-container and immutable-container mean the same thing?</p> </blockquote> <p>Immutable means once it's created, it cannot be changed. Disposable, in the context of your question, means it can be destroyed and replaced with little consequence. </p> <p>These things are not the same, but they operate together in a typical containerized application. You will be running an immutable container, and when you want to change the behavior of the container you would replace it with a new container instead of trying to change the existing container. </p> <p>This is different from something like a VM, where you would deploy code updates and reload the service (or similar) if you wanted to change the behavior of your app.</p> <blockquote> <p>why are they considered so ephemeral?</p> </blockquote> <p>A container is a process. A process is ephemeral. Containers are ephemeral. <a href="https://docs.docker.com/engine/admin/volumes/volumes/" rel="nofollow noreferrer">Containers are able to persist data separately though</a>. </p> <blockquote> <p>how can one run a containerized application in production</p> </blockquote> <p>If your hangup with using containers in production can be rephrased "how can you run a containerized application in production <em>with no state</em>?", then I'd first say not all applications are stateful. A basic web server, or a great many well-designed microservices, for example.</p> <p>For stateful applications, nothing stops you from using a common database to back your containerized applications. You can also use volumes, as described above. You could combine the two and run a containerized database, using volumes underneath the database container. State isn't really a problem.</p>
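<p>As a small illustration of the volume approach on plain Docker (the image and paths here are just examples): the named volume survives even though the container itself is disposable and gets replaced.</p> <pre><code># create a named volume and attach it to a disposable container
docker volume create app-data
docker run -d --name app -v app-data:/var/lib/app myorg/myapp:1.0

# replace the container; the data in app-data is still there
docker rm -f app
docker run -d --name app -v app-data:/var/lib/app myorg/myapp:1.1
</code></pre>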
<p>I started minikube with k8s version 1.5.2 and I would like to downgrade my kubectl so that it is also 1.5.2. Currently when I run <code>kubectl version</code> I get:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:12Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I would like to use kubectl to fetch <code>PetSets</code> but in later versions this was updated to <code>StatefulSets</code> so I cannot use the commands with my current kubectl version </p> <pre><code>kubectl get petsets the server doesn't have a resource type "petsets" </code></pre> <p>Thanks!</p>
<p>You can just download the previous version's binary and replace the one you have now.</p> <p>Linux:</p> <pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
</code></pre> <p>macOS:</p> <pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
</code></pre> <p>Windows:</p> <pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/windows/amd64/kubectl.exe
</code></pre> <p>And add it to PATH.</p> <p>Otherwise, follow the instructions for other operating systems here: <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl" rel="noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl</a></p>
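<p>Afterwards you can confirm the client version with:</p> <pre><code>kubectl version --client
</code></pre>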
<p>I followed the experimental k8s install and it seems to work until I open the portal. Then applications and projects have the title bar but the main page body is just a spinning gear. How can I debug this?</p> <p>Install instructions: <a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple" rel="nofollow noreferrer">https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple</a></p> <p>Here is the only error in a the logs that I've found:</p> <pre><code>2017-09-12 19:29:35.764 INFO 1 --- [x-credentials-1] c.n.s.g.s.internal.ClouddriverService : ---- ERROR http://spin-clouddriver.spinnaker:7002/credentials 2017-09-12 19:29:35.765 INFO 1 --- [x-credentials-1] c.n.s.g.s.internal.ClouddriverService : java.net.SocketTimeoutException: connect timed out </code></pre> <p>Other info:</p> <pre><code>kubectl describe svc --namespace spinnaker spin-clouddriver Name: spin-clouddriver Namespace: spinnaker Labels: app=spin stack=clouddriver Annotations: &lt;none&gt; Selector: load-balancer-spin-clouddriver=true Type: ClusterIP IP: 100.70.137.138 Port: &lt;unset&gt; 7002/TCP Endpoints: 100.96.2.4:7002 Session Affinity: None Events: &lt;none&gt; kubectl describe pod --namespace spinnaker spin-clouddriver-v000-fmwhr Name: spin-clouddriver-v000-fmwhr Namespace: spinnaker Node: ip-172-20-61-85.ca-central-1.compute.internal/172.20.61.85 Start Time: Wed, 13 Sep 2017 08:11:05 -0400 Labels: load-balancer-spin-clouddriver=true replication-controller=spin-clouddriver-v000 Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"spinnaker","name":"spin-clouddriver-v000","uid":"9df7c363-987c-11e7-90ae-02f58db8... Status: Running IP: 100.96.2.4 Created By: ReplicaSet/spin-clouddriver-v000 Controlled By: ReplicaSet/spin-clouddriver-v000 Containers: clouddriver: Container ID: docker://d7c7ba2611186a248f6910c605c71045e0f7300f3ab4857df30ef28b9f9c7f54 Image: quay.io/spinnaker/clouddriver:master Image ID: docker-pullable://quay.io/spinnaker/clouddriver@sha256:98be0ee63e040a2bcd8ba6ca6a67d23bb8aab457f4a86882b3da65f043dc895f Port: 7002/TCP State: Running Started: Wed, 13 Sep 2017 08:12:03 -0400 Ready: True Restart Count: 0 Readiness: http-get http://:7002/credentials delay=20s timeout=1s period=10s #success=1 #failure=3 Environment: &lt;none&gt; Mounts: /opt/spinnaker/config from spinnaker-config (rw) /root/.kube from creds-config (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-hpql5 (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: spinnaker-config: Type: Secret (a volume populated by a Secret) SecretName: spinnaker-config Optional: false creds-config: Type: Secret (a volume populated by a Secret) SecretName: creds-config Optional: false default-token-hpql5: Type: Secret (a volume populated by a Secret) SecretName: default-token-hpql5 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s node.alpha.kubernetes.io/unreachable:NoExecute for 300s Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 19m 19m 1 default-scheduler Normal Scheduled Successfully assigned spin-clouddriver-v000-fmwhr to ip-172-20-61-85.ca-central-1.compute.internal 19m 19m 1 kubelet, ip-172-20-61-85.ca-central-1.compute.internal Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume 
"creds-config" 19m 19m 1 kubelet, ip-172-20-61-85.ca-central-1.compute.internal Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-hpql5" 19m 19m 1 kubelet, ip-172-20-61-85.ca-central-1.compute.internal Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "spinnaker-config" 19m 19m 1 kubelet, ip-172-20-61-85.ca-central-1.compute.internal spec.containers{clouddriver} Normal Pulling pulling image "quay.io/spinnaker/clouddriver:master" 18m 18m 1 kubelet, ip-172-20-61-85.ca-central-1.compute.internal spec.containers{clouddriver} Normal Pulled Successfully pulled image "quay.io/spinnaker/clouddriver:master" 18m 18m 1 kubelet, ip-172-20-61-85.ca-central-1.compute.internal spec.containers{clouddriver} Normal Created Created container 18m 18m 1 kubelet, ip-172-20-61-85.ca-central-1.compute.internal spec.containers{clouddriver} Normal Started Started container 18m 18m 2 kubelet, ip-172-20-61-85.ca-central-1.compute.internal spec.containers{clouddriver} Warning Unhealthy Readiness probe failed: Get http://100.96.2.4:7002/credentials: dial tcp 100.96.2.4:7002: getsockopt: connection refused </code></pre>
<p>Spinnaker's installation has improved dramatically over the past year. They now have a tool called <a href="https://github.com/spinnaker/halyard" rel="nofollow noreferrer">Halyard</a> that is designed to configure and deploy Spinnaker. </p> <p>It can deploy directly into a Kubernetes cluster.</p> <p>Installation instructions can be found <a href="https://www.spinnaker.io/setup/install/halyard/" rel="nofollow noreferrer">here</a>.</p>
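<p>If it helps as a starting point, a typical Halyard flow for a distributed install onto an existing Kubernetes cluster looks roughly like this (the account name is a placeholder, and steps such as configuring a Docker registry and persistent storage are omitted; see the linked docs for the full walkthrough):</p> <pre><code>hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
  --context $(kubectl config current-context)

hal config deploy edit --type distributed --account-name my-k8s-account

hal config version edit --version $(hal version latest -q)
hal deploy apply
</code></pre>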
<p>I am getting this error from <code>kubectl describe nodes nodename</code>. I did a Google search but found nothing useful. What does this mean?</p> <p><code>Failed to update Node Allocatable Limits "": failed to set supported cgroup subsystems for cgroup : Failed to set config for supported subsystems : failed to write 3783778304 to memory.limit_in_bytes: write /rootfs/sys/fs/cgroup/memory/memory.limit_in_bytes: invalid argument</code></p> <p>Do I need to change some kernel settings using sysctl?</p> <pre><code>[iahmad@lxplus000 ~]$ kubectl --version
Kubernetes v1.5.2
[iahmad@lxplus000 ~]$
[iahmad@lxplus000 ~]$
[iahmad@lxplus000 ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-18T08:52:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"linux/amd64"}
[iahmad@lxplus000 ~]$ </code></pre>
<p>It depends on the <code>kubectl/kubernetes</code> version: this was seen (and fixed) in <a href="https://github.com/kubernetes/kubernetes/issues/42701" rel="nofollow noreferrer">kubernetes issue 42701</a></p> <p>1.6 should have been patched. If this is not the same bug, the error message was also seen in <a href="https://github.com/kubernetes/kubernetes/issues/29166#issuecomment-315787325" rel="nofollow noreferrer">issue 29166</a>:</p> <blockquote> <p>I just forgot to active back <code>disk.uuid</code> after creating back my new VMs!</p> </blockquote>
<blockquote> <p>cat configmap.yaml</p> </blockquote> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: product-api-configmap data: myvalue: "Hello World" </code></pre> <blockquote> <p>helm install ./configmap.yaml</p> </blockquote> <p><code>Error: gzip: invalid header</code></p>
<p><code>helm install</code> expects a chart directory (or a packaged chart archive), not a single manifest file; when you point it at <code>configmap.yaml</code> it tries to read the file as a gzipped chart, hence the <code>gzip: invalid header</code> error. Put <code>configmap.yaml</code> inside the chart's <code>templates</code> directory, change directory out of <code>templates</code> and the chart directory, and then run</p> <blockquote> <p>helm install ./CHARTNAME</p> </blockquote>
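<p>A minimal chart layout would look something like this (the chart name here is just an example):</p> <pre><code>product-api/
  Chart.yaml
  values.yaml
  templates/
    configmap.yaml
</code></pre> <p>Then install it from the parent directory with <code>helm install ./product-api</code>.</p>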
<p>I'm pretty new in kubernetes, I just install kubernetes via kubeadm and run dashboard UI but can't config access to it. Following docs I add line <code>--basic-auth-file=/etc/kubernetes/auth.csv</code> to /etc/kubernetes/manifests/kube-apiserver.yaml, create file and put in one string like <code>pass,admin,admin</code>. But after that api server crashed and back to normal after deleting this string and reboot the server. How I can pass this parametr to api server without api server crashing, and maybe something else need add or remove from this file? Here is my </p> <p><strong>kube-apiserver.yaml</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: null labels: component: kube-apiserver tier: control-plane name: kube-apiserver namespace: kube-system spec: containers: - command: - kube-apiserver - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key - --secure-port=6443 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --requestheader-allowed-names=front-proxy-client - --service-account-key-file=/etc/kubernetes/pki/sa.pub - --client-ca-file=/etc/kubernetes/pki/ca.crt - --enable-bootstrap-token-auth=true - --allow-privileged=true - --requestheader-username-headers=X-Remote-User - --advertise-address=236.273.51.124 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key - --insecure-port=0 - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- - --service-cluster-ip-range=10.96.0.0/12 - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt - --authorization-mode=Node,RBAC - --etcd-servers=http://127.0.0.1:2379 image: gcr.io/google_containers/kube-apiserver-amd64:v1.8.0 livenessProbe: failureThreshold: 8 httpGet: host: 127.0.0.1 path: /healthz port: 6443 scheme: HTTPS initialDelaySeconds: 15 timeoutSeconds: 15 name: kube-apiserver resources: requests: cpu: 250m volumeMounts: - mountPath: /etc/kubernetes/pki name: k8s-certs readOnly: true - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /etc/pki name: ca-certs-etc-pki </code></pre>
<p>Your basic authentication file <code>/etc/kubernetes/auth.csv</code> is not available inside the kube-apiserver pod's container. It has to be mounted into the container, just like the certificate directories already are. Add it to the volumes and volumeMounts sections:</p> <pre><code>volumeMounts:
- mountPath: /etc/kubernetes/auth.csv
  name: kubernetes-dashboard
  readOnly: true
volumes:
- hostPath:
    path: /etc/kubernetes/auth.csv
  name: kubernetes-dashboard
</code></pre>
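<p>Putting it together, the relevant pieces of <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> would end up looking roughly like this (the kubelet recreates the static pod automatically once the manifest changes):</p> <pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/auth.csv
    # ... the existing flags ...
    volumeMounts:
    - mountPath: /etc/kubernetes/auth.csv
      name: kubernetes-dashboard
      readOnly: true
    # ... the existing mounts ...
  volumes:
  - hostPath:
      path: /etc/kubernetes/auth.csv
    name: kubernetes-dashboard
  # ... the existing volumes ...
</code></pre>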
<p>I'm trying to exec into haproxy-ingress pod created from <a href="https://quay.io/jcmoraisjr/haproxy-ingress" rel="nofollow noreferrer">this</a> image, with this command:</p> <p><code>kubectl -n kube-system exec -it haproxy-ingress-4122301161-bcd94 /bin/bash</code> </p> <p>Then I get this message </p> <blockquote> <p>rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\"/bin/bash\\": stat /bin/bash: no such file or directory\"\n"</p> </blockquote> <p>Is there a way to exec into a container that is created using an image that does not have bash pre-installed?</p>
<p>Yes, it's not that uncommon for a container not to have bash available. Often you will find that when bash is not there, <code>/bin/sh</code> still is, as is the case for the image you mention. Thus using <code>kubectl -n kube-system exec -it haproxy-ingress-4122301161-bcd94 /bin/sh</code> should suffice.</p> <p>That aside, for the sake of clarity: you do not SSH into a container, you execute a process within it.</p>
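<p>If you are ever unsure which shells an image ships with, you can check first without an interactive session (the pod name is the one from your question; missing paths will simply be reported by <code>ls</code>):</p> <pre><code>kubectl -n kube-system exec haproxy-ingress-4122301161-bcd94 -- ls -l /bin/sh /bin/ash /bin/bash
</code></pre>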
<p>I'm a Kubernetes newbie trying to follow along with the Udacity tutorial class linked on the Kubernetes website.</p> <p>I execute</p> <pre><code>kubectl create -f pods/secure-monolith.yaml </code></pre> <p>That is referencing this official yaml file: <a href="https://github.com/udacity/ud615/blob/master/kubernetes/pods/secure-monolith.yaml" rel="nofollow noreferrer">https://github.com/udacity/ud615/blob/master/kubernetes/pods/secure-monolith.yaml</a></p> <p>I get this error:</p> <pre><code>error: error validating "pods/secure-monolith.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>FYI, the official lesson link is here: <a href="https://classroom.udacity.com/courses/ud615/lessons/7824962412/concepts/81991020770923" rel="nofollow noreferrer">https://classroom.udacity.com/courses/ud615/lessons/7824962412/concepts/81991020770923</a></p> <p>My first guess is that the provided yaml is out of date and incompatible with the current Kubernetes. Is this right? How can I fix/update?</p>
<p>I ran into the exact same problem but with a much simpler example.</p> <p>Here's my yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    ports:
    - containerPort: 80
</code></pre> <p>The command <code>kubectl create -f pod-nginx.yaml</code> returns:</p> <p><code>error: error validating "pod-nginx.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false</code></p> <p>As the error says, I am able to override it but I am still at a loss as to the cause of the original issue.</p> <p>Local versions:</p> <ul> <li><p><code>Ubuntu 16.04</code></p></li> <li><p><code>minikube version: v0.22.2</code></p></li> <li><p><code>kubectl version: 1.8</code></p></li> </ul> <p>Thanks in advance!</p>
<p>I try to run on any Kube slave node:</p> <pre><code>$ kubectl top nodes </code></pre> <p>And get an error:</p> <pre><code>Error from server (Forbidden): User "system:node:ip-10-43-0-13" cannot get services/proxy in the namespace "kube-system". (get services http:heapster:) </code></pre> <p>On master node it works:</p> <pre><code>$ kubectl top nodes NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-43-0-10 95m 4% 2144Mi 58% ip-10-43-0-11 656m 32% 1736Mi 47% ip-10-43-0-12 362m 18% 2030Mi 55% ip-10-43-0-13 256m 12% 2412Mi 65% ip-10-43-0-14 254m 12% 2512Mi 68% </code></pre> <p>Ok, what I should do? give permissions to the <code>system:node</code> group I suppose</p> <pre><code>kubectl create clusterrolebinding bu-node-admin-binding --clusterrole=cluster-admin --group=system:node </code></pre> <p>It doesn't help</p> <p>Ok, inspecting cluster role:</p> <pre><code>$ kubectl describe clusterrole system:node Name: system:node Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate=true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- configmaps [] [] [get] endpoints [] [] [get] events [] [] [create patch update] localsubjectaccessreviews.authorization.k8s.io [] [] [create] nodes [] [] [create get list watch delete patch update] nodes/status [] [] [patch update] persistentvolumeclaims [] [] [get] persistentvolumes [] [] [get] pods [] [] [get list watch create delete] pods/eviction [] [] [create] pods/status [] [] [update] secrets [] [] [get] services [] [] [get list watch] subjectaccessreviews.authorization.k8s.io [] [] [create] tokenreviews.authentication.k8s.io [] [] [create] </code></pre> <p>Trying to patch rules:</p> <pre><code>kubectl patch clusterrole system:node --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{"apiGroups": [""], "resources": ["services/proxy"], "verbs": ["get", "list", "watch"]}}]' </code></pre> <p>Now:</p> <pre><code>$ kubectl describe clusterrole system:node Name: system:node Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate=true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- ... services/proxy [] [] [get list watch] ... </code></pre> <p><code>top nodes</code> still doesn't work</p> <p>Only way that it works is:</p> <pre><code>kubectl create clusterrolebinding bu-node-admin-binding --clusterrole=cluster-admin --user=system:node:ip-10-43-0-13 </code></pre> <p>This also works, but it node-specific too:</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: # "namespace" omitted since ClusterRoles are not namespaced name: top-nodes-watcher rules: - apiGroups: [""] resources: ["services/proxy"] verbs: ["get", "watch", "list"] --- # This cluster role binding allows anyone in the "manager" group to read secrets in any namespace. kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: top-nodes-watcher-binding subjects: - kind: User name: system:node:ip-10-43-0-13 apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: top-nodes-watcher apiGroup: rbac.authorization.k8s.io </code></pre> <p>And I should apply it for each slave node. Can I do it only for one group or role? What I'm doing wrong? 
</p> <p>More details:</p> <pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>What I really need is the physical node memory and CPU usage in %.</p> <p>Thanks for the attention.</p>
<p>To simply solve this problem (using <code>kubectl top nodes</code> on all slave nodes), you can copy the kubeconfig your kubectl is using on the master to all the other slaves.</p> <p>As for why you hit this problem: I think you are using the kubelet's kubeconfig for kubectl on the slave nodes (correct me if not).</p> <p>In k8s v1.7+, Kubernetes has deprecated the <strong>system:node</strong> role, using the Node authorizer and the NodeRestriction admission plugin by default instead. You can read the docs about <strong>system:node</strong> <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">here</a>. So when you try to patch <strong>system:node</strong>, it won't take effect. The kubelet uses its specific <em>system:node:[node_name]</em> user to constrain each node's behavior.</p>
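<p>A minimal way to copy the admin kubeconfig (assuming a kubeadm-style layout where it lives at <code>/etc/kubernetes/admin.conf</code> on the master; adjust paths, users and hosts to your setup):</p> <pre><code># on the master
scp /etc/kubernetes/admin.conf user@ip-10-43-0-13:/tmp/admin.conf

# on the slave node
mkdir -p ~/.kube
mv /tmp/admin.conf ~/.kube/config
kubectl top nodes
</code></pre>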
<p>I have installed helm 2.6.2 on a Kubernetes 1.8 cluster. <code>helm init</code> worked fine, but when I run <code>helm list</code> it gives this error:</p> <pre><code>helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system" </code></pre> <p>How do I fix this RBAC error message?</p>
<p>Once these commands were run, the issue was solved:</p> <pre><code>kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
</code></pre>
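<p>You can verify that the Tiller deployment picked up the new service account with the following command, which should show <code>serviceAccount: tiller</code>:</p> <pre><code>kubectl -n kube-system get deploy tiller-deploy -o yaml | grep serviceAccount
</code></pre>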
<p>I've just set up a single node Kubernetes cluster following the kubeadm guide to the letter. The cluster itself looks good, and all pods are running correctly:</p> <pre><code>will@kubemaster:~$ sudo kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-etcd-w6dkj 1/1 Running 0 16m kube-system calico-node-mjsnr 2/2 Running 0 16m kube-system calico-policy-controller-59fc4f7888-vc6x6 1/1 Running 0 16m kube-system etcd-kubemaster 1/1 Running 0 16m kube-system kube-apiserver-kubemaster 1/1 Running 1 16m kube-system kube-controller-manager-kubemaster 1/1 Running 0 16m kube-system kube-dns-545bc4bfd4-mbbrl 3/3 Running 0 16m kube-system kube-proxy-wkmlj 1/1 Running 0 16m kube-system kube-scheduler-kubemaster 1/1 Running 0 16m kube-system kubernetes-dashboard-7f9dbb8685-rxwfw 1/1 Running 0 4m </code></pre> <p>I installed the dashboard using:</p> <pre><code>sudo kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml </code></pre> <p>I've tried serving up the kubrnetes dashboard locally by running "sudo kubectl proxy".</p> <p>When I load "<a href="http://127.0.0.1:8001" rel="nofollow noreferrer">http://127.0.0.1:8001</a>" I get the API endpoint listing, and all looks well. But when I add the /ui to load the dashboard (<a href="http://127.0.0.1:8001/ui" rel="nofollow noreferrer">http://127.0.0.1:8001/ui</a>), I get the following response:</p> <pre><code>Error: 'malformed HTTP response "\x15\x03\x01\x00\x02\x02"' Trying to reach: 'http://192.167.141.3:8443/' </code></pre> <p>Also note, the above URL gets redirected to the API:</p> <pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/ </code></pre> <p>If I replace the HTTP with HTTPS, I get a "Secure connection failed, SSL recieved a record that exceeded the maximum permisslbe length".</p> <p>If I try loading the dashboard without using the kubectl proxy, e.g. using the master IP, I get a connection refused.</p> <p>I'm running on Ubuntu 16.04, my kubectl version details are as follows:</p> <pre><code>will@kubemaster:~$ sudo kubectl version Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<h3>Since v1.7, Dashboard can only be accessed over HTTPS by default.</h3> <p>It is available at <a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a> with <code>kubectl proxy</code>.</p> <h3>To deploy dashboard with HTTP (Not recommended for Production)</h3> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml </code></pre> <p>Dashboard can be loaded at <a href="http://localhost:8001/ui" rel="noreferrer">http://localhost:8001/ui</a> with <code>kubectl proxy</code>.</p>
<p>I am really new to kubernetes and have testing app with redis and mongodb running in GCE. I would like to grap my log files with fluentd and send them to logz:</p> <p>I use the following fluentd config file. I tested a similar version on my local machine.</p> <pre><code>&lt;source&gt; @type tail path /var/log/containers/squidex*.log pos_file /var/log/squidex.log.pos tag squidex.logs format json &lt;/source&gt; &lt;match squidex.logs&gt; @type copy &lt;store&gt; @type logzio_buffered endpoint_url https://listener.logz.io:8071?token=... output_include_time true output_include_tags true buffer_type file buffer_path /fluentd/log/squidex.log.buffer flush_interval 10s buffer_chunk_limit 1m &lt;/store&gt; &lt;store&gt; @type stdout &lt;/store&gt; &lt;/match&gt; </code></pre> <p>My kubernetes configuration is:</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: fluentd-logging labels: app: fluentd-logging spec: template: metadata: labels: app: fluentd-logging spec: containers: - name: fluentd image: gcr.io/squidex-157415/squidex-fluentd:latest resources: limits: memory: 200Mi requests: cpu: 40m volumeMounts: - name: varlog mountPath: /var/log terminationGracePeriodSeconds: 30 volumes: - name: varlog hostPath: path: /var/log </code></pre> <p>Almost everything works, but when I run the fluentd pods I see the following entries in the log output from these pods:</p> <pre><code>2017-04-22T09:49:22.286740784Z 2017-04-22 09:49:22 +0000 [warn]: /var/log/containers/squidex-282724611-3nhtw_default_squidex-ed7c437e677d3438c137cdc80110d106339999a6ba8e495a5752fe6d5da9e70d.log unreadable. It is excluded and would be examined next time </code></pre> <p>How can I get permissions to those log files?</p>
<p>This is not a permission issue; it is a problem of broken symlinks. Kubernetes uses symbolic links from <code>/var/log/containers</code> to <code>/var/log/pods</code> to <code>/var/lib/docker/containers</code>. You can confirm this from any node of your cluster using <code>ls -la</code>.</p> <p>Your DaemonSet configuration should include something like:</p> <pre><code>volumeMounts:
- name: varlog
  mountPath: /var/log/
  readOnly: true
- name: varlibdockercontainers
  mountPath: /var/lib/docker/containers
  readOnly: true
[...]
volumes:
- name: varlog
  hostPath:
    path: /var/log/
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers
</code></pre> <p>This way you mount both the log directory (with its symlinks) and the directory the symlinks ultimately point to, so fluentd can read everything.</p>
<p>I have created the ubuntu docker image with Nginx, PHP and php-fpm configured in it. It is working fine when I am running it on Docker instance.</p> <p>But when I run the same image in kubernetes, the php-fpm process receive the SIGKILL (9) signal and we are getting 502 Gateway errors.</p> <p>I guess it is kubernetes which send the SIGKILL signal to kubernetes pods. I am not using any readiness and liveliness probes in kubernetes templates.</p> <p>Appreciate any help. Thanks in advance.</p> <p>Find the docker file and php-fpm log below for details,</p> <h2>Dockerfile</h2> <pre><code>FROM ubuntu #install utilities tools RUN apt-get update \ &amp;&amp; apt-get install -y vim unzip curl python-software-properties software-properties-common locales supervisor # Update software list, install php-nginx &amp; clear cache RUN locale-gen en_US.UTF-8 &amp;&amp; \ export LANG=en_US.UTF-8 &amp;&amp; \ add-apt-repository -y ppa:ondrej/php &amp;&amp; \ apt-get update &amp;&amp; \ apt-get upgrade -y &amp;&amp; \ apt-get install -y --force-yes nginx \ php5.6 php5.6-zip php5.6-fpm php5.6-cli php5.6-mysql php5.6-mcrypt php5.6-xml\ php5.6-curl php5.6-gd &amp;&amp; \ apt-get clean &amp;&amp; \ rm -rf /var/lib/apt/lists/* \ /tmp/* \ /var/tmp/* # Configure nginx RUN echo "daemon off;" &gt;&gt; /etc/nginx/nginx.conf RUN sed -i "s/sendfile on/sendfile off/" /etc/nginx/nginx.conf RUN mkdir -p /var/www/html # Configure PHP RUN sed -i -e "s/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/" /etc/php/5.6/fpm/php.ini &amp;&amp; \ sed -i -e "s/;date.timezone =.*/date.timezone = America\/Argentina\/Buenos_Aires/" /etc/php/5.6/fpm/php.ini &amp;&amp; \ sed -i -e "s/upload_max_filesize\s*=\s*2M/upload_max_filesize = 100M/g" /etc/php/5.6/fpm/php.ini &amp;&amp; \ sed -i -e "s/post_max_size\s*=\s*8M/post_max_size = 100M/g" /etc/php/5.6/fpm/php.ini &amp;&amp; \ sed -i -e "s/variables_order = \"GPCS\"/variables_order = \"EGPCS\"/g" /etc/php/5.6/fpm/php.ini ##Updated for PHP 5.6 RUN sed -i -e "s/;daemonize\s*=\s*yes/daemonize = no/g" /etc/php/5.6/fpm/php-fpm.conf &amp;&amp; \ sed -i -e "s/pid =.*/pid = \/var\/run\/php-fpm.pid/" /etc/php/5.6/fpm/php-fpm.conf &amp;&amp; \ sed -i -e "s/listen =.*sock/listen = 127.0.0.1:9000/" /etc/php/5.6/fpm/pool.d/www.conf &amp;&amp; \ sed -i -e "s/;clear_env = no/clear_env = no/" /etc/php/5.6/fpm/pool.d/www.conf &amp;&amp; \ sed -i -e "s/;catch_workers_output\s*=\s*yes/catch_workers_output = yes/g" /etc/php/5.6/fpm/pool.d/www.conf &amp;&amp; \ sed -i -e "s/pm.max_children = 5/pm.max_children = 4/g" /etc/php/5.6/fpm/pool.d/www.conf &amp;&amp; \ sed -i -e "s/pm.start_servers = 2/pm.start_servers = 3/g" /etc/php/5.6/fpm/pool.d/www.conf &amp;&amp; \ sed -i -e "s/pm.min_spare_servers = 1/pm.min_spare_servers = 2/g" /etc/php/5.6/fpm/pool.d/www.conf &amp;&amp; \ sed -i -e "s/pm.max_spare_servers = 3/pm.max_spare_servers = 4/g" /etc/php/5.6/fpm/pool.d/www.conf &amp;&amp; \ sed -i -e "s/;pm.max_requests = 500/pm.max_requests = 200/g" /etc/php/5.6/fpm/pool.d/www.conf RUN sed -i -e "s/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/" /etc/php/5.6/cli/php.ini &amp;&amp; \ sed -i -e "s/;date.timezone =.*/date.timezone = America\/Argentina\/Buenos_Aires/" /etc/php/5.6/cli/php.ini &amp;&amp; \ sed -i -e "s/upload_max_filesize\s*=\s*2M/upload_max_filesize = 100M/g" /etc/php/5.6/cli/php.ini &amp;&amp; \ sed -i -e "s/post_max_size\s*=\s*8M/post_max_size = 100M/g" /etc/php/5.6/cli/php.ini &amp;&amp; \ sed -i -e "s/variables_order = \"GPCS\"/variables_order = \"EGPCS\"/g" /etc/php/5.6/cli/php.ini COPY 
opsconfig/default_server_config /etc/nginx/sites-available/default COPY opsconfig/supervisor.conf /etc/supervisor/conf.d/supervisor.conf RUN phpenmod -v 5.6 mcrypt &amp;&amp; \ phpenmod -v 5.6 xdebug &amp;&amp; \ phpenmod -v 5.6 zip #install composer RUN curl -O https://getcomposer.org/composer.phar &amp;&amp; \ mv composer.phar /usr/local/bin/composer &amp;&amp; \ chmod +x /usr/local/bin/composer # Workdir WORKDIR /var/www/html COPY src/ /var/www/html/ RUN chown -R www-data:www-data /var/www/html CMD ["/usr/bin/supervisord"] </code></pre> <h2>supervisor.conf</h2> <pre><code>[supervisord] nodaemon=true [program:php-fpm] command=/usr/sbin/php-fpm5.6 --nodaemonize [program:nginx] command=/usr/sbin/nginx autostart=true autorestart=true priority=10 stdout_events_enabled=true stderr_events_enabled=true stdout_logfile=/dev/stdout stdout_logfile_maxbytes=0 stderr_logfile=/dev/stderr </code></pre> <h2>php-fpm.log</h2> <pre><code>[10-Oct-2017 16:52:02] NOTICE: fpm is running, pid 56 [10-Oct-2017 16:52:02] NOTICE: ready to handle connections [10-Oct-2017 16:52:02] NOTICE: systemd monitor interval set to 10000ms [10-Oct-2017 16:52:30] WARNING: [pool www] child 57 exited on signal 9 (SIGKILL) after 28.399445 seconds from start [10-Oct-2017 16:52:30] NOTICE: [pool www] child 61 started [10-Oct-2017 16:52:38] WARNING: [pool www] child 59 exited on signal 9 (SIGKILL) after 36.796172 seconds from start [10-Oct-2017 16:52:38] NOTICE: [pool www] child 62 started [10-Oct-2017 16:53:15] WARNING: [pool www] child 58 exited on signal 9 (SIGKILL) after 73.299127 seconds from start [10-Oct-2017 16:53:15] NOTICE: [pool www] child 63 started [10-Oct-2017 17:45:02] WARNING: [pool www] child 62 exited on signal 9 (SIGKILL) after 3143.801344 seconds from start [10-Oct-2017 17:45:02] NOTICE: [pool www] child 64 started </code></pre> <h2>Nginx Log</h2> <pre><code>2017/10/10 16:53:15 [error] 11#11: *162 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.244.2.49, server: _, request: "GET /admin/index.php?route=common/dashboard&amp;token=V4iXjKHenn2ZOIldfn4pmIHcTIHiFoxk HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "shop9.homesourcesystems.net", referrer: "https://domain_name/admin/" 2017/10/10 17:45:02 [error] 11#11: *166 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.244.2.49, server: _, request: "GET /admin/index.php?route=common/dashboard&amp;token=sFiMAItAgX22BarBfcNNVuyin50ZauIa HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "shop9.homesourcesystems.net", referrer: "https://domain_name/admin/" </code></pre> <h2>Pod Definition</h2> <pre><code>Name: bumptious-beetle-3107682338-qlvcf Namespace: testns Node: k8s-agent-1/10.240.0.4 Start Time: Wed, 11 Oct 2017 09:26:21 +0000 Labels: app=testapp pod-template-hash=3107682338 release=bumptious-beetle Annotations: checksum/config=466a2fbe40164c0f5a10a06e26417c92a47422720e96c4fb51562eb8388d282f kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"testns","name":"bumptious-beetle-3107682338","uid":"3e2745b7-ae66... 
Status: Running IP: 10.244.0.205 Controllers: ReplicaSet/bumptious-beetle-3107682338 Containers: hstestns: Container ID: docker://31477927d3d0ac1b3f2fe662601d1d65d2a6d1fb442e580f9c1836e921a85f75 Image: phpnginx:1.9 Image ID: docker-pullable://phpnginx@sha256:7dfb96e283f0802e72249aeb252d3e6290dec00591c442da80bfa4727cef7ca5 Port: 80/TCP State: Running Started: Wed, 11 Oct 2017 09:26:42 +0000 Ready: True Restart Count: 0 Limits: cpu: 100m memory: 50Mi Requests: cpu: 10m memory: 40Mi Environment Variables from: bumptious-beetle ConfigMap Optional: false Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-ptkqw (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-ptkqw: Type: Secret (a volume populated by a Secret) SecretName: default-token-ptkqw Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: &lt;none&gt; Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 13m 13m 1 default-scheduler Normal Scheduled Successfully assigned bumptious-beetle-3107682338-qlvcf to k8s-agent-1 13m 13m 1 kubelet, k8s-agent-1 spec.containers{hsecommerce} Normal Pulling pulling image "phpnginx:1.9" 13m 13m 1 kubelet, k8s-agent-1 spec.containers{hsecommerce} Normal Pulled Successfully pulled image "phpnginx:1.9" 13m 13m 1 kubelet, k8s-agent-1 spec.containers{hsecommerce} Normal Created Created container with id 31477927d3d0ac1b3f2fe662601d1d65d2a6d1fb442e580f9c1836e921a85f75 13m 13m 1 kubelet, k8s-agent-1 spec.containers{hsecommerce} Normal Started Started container with id 31477927d3d0ac1b3f2fe662601d1d65d2a6d1fb442e580f9c1836e921a85f75 </code></pre> <p>Regards, Rahul</p>
<p>You have a very low memory limit (50Mi) considering you want to run nginx plus multiple PHP worker processes managed by php-fpm inside the container. It is very likely that your worker processes are being killed by the kernel OOM killer because of this memory constraint, which would explain the SIGKILL entries in the php-fpm log. I would advise raising the memory limit to something like 400Mi and checking again.</p>
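<p>For illustration, the <code>resources</code> block of the deployment could be adjusted along these lines (the numbers are just a starting point to test with, not a tuned recommendation):</p> <pre><code>resources:
  requests:
    cpu: 10m
    memory: 128Mi
  limits:
    cpu: 100m
    memory: 400Mi
</code></pre>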
<p>I have installed spinnaker and kubernetes as suggested in the manual <a href="https://www.spinnaker.io/guides/tutorials/codelabs/kubernetes-source-to-prod/" rel="nofollow noreferrer">https://www.spinnaker.io/guides/tutorials/codelabs/kubernetes-source-to-prod/</a> </p> <p>Thing is, I cannot seem to be able to access my docker containers on Docker Hub via Spinnaker in step 3 in the manual.</p> <p>Here is my spinnaker.yml (the relevant part):</p> <pre><code>kubernetes: # For more information on configuring Kubernetes clusters (kubernetes), see # http://www.spinnaker.io/v1.0/docs/target-deployment-setup#section-kubernetes-cluster-setup # NOTE: enabling kubernetes also requires enabling dockerRegistry. enabled: ${SPINNAKER_KUBERNETES_ENABLED:true} primaryCredentials: # These credentials use authentication information at ~/.kube/config # by default. name: euwest1.aws.crossense.io dockerRegistryAccount: ${providers.dockerRegistry.primaryCredentials.name} dockerRegistry: # For more information on configuring Docker registries, see # http://www.spinnaker.io/v1.0/docs/target-deployment-configuration#section-docker-registry # NOTE: Enabling dockerRegistry is independent of other providers. # However, for convienience, we tie docker and kubernetes together # since kubernetes (and only kubernetes) depends on this docker provider # configuration. enabled: ${SPINNAKER_KUBERNETES_ENABLED:true} primaryCredentials: name: crossense address: ${SPINNAKER_DOCKER_REGISTRY:https://index.docker.io/} repository: ${SPINNAKER_DOCKER_REPOSITORY:crossense/gator} username: crossense # A path to a plain text file containing the user's password password: password #${SPINNAKER_DOCKER_PASSWORD_FILE} </code></pre> <p><a href="https://i.stack.imgur.com/p6D1N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p6D1N.png" alt="enter image description here"></a></p> <p>Thank you guys, in advance, for any and all of the help :)</p>
<p>I believe the issue is that the Docker registry does not provide index (search) services, so you need to provide a list of all the images that you want to have available:</p> <pre><code>dockerRegistry:
  enabled: true
  accounts:
  - name: spinnaker-dockerhub
    requiredGroupMembership: []
    address: https://index.docker.io
    username: username
    password: password
    email: [email protected]
    cacheIntervalSeconds: 30
    repositories:
    - library/httpd
    - library/python
    - library/openjdk
    - your-org/your-image
  primaryAccount: spinnaker-dockerhub
</code></pre> <p>The halyard <a href="https://www.spinnaker.io/reference/halyard/commands/#hal-config-provider-docker-registry-account-edit" rel="nofollow noreferrer">commands</a> to do this are:</p> <pre><code>export ACCOUNT=spinnaker-dockerhub
hal config provider docker-registry account edit $ACCOUNT --repositories [library/httpd, library/python]
hal config provider docker-registry account edit $ACCOUNT --add-repository library/python
</code></pre> <p>This will update your Halyard config file, pending a deploy.</p> <p>Note that if you do not have access to one of the images, the command will likely fail.</p>
<p>I have a K8s deployment that mounts a secret into <code>/etc/google-cloud-account</code> containing the Google auth JSON file to use from the application. When I try to run the deployment, I get the following error from my pod:</p> <pre><code>1m 1m 1 kubelet, gke-development-cluster-default-pool-17f531d7-sj4x spec.containers{api} Normal Created Created container with docker id 36b85ec8415a; Security:[seccomp=unconfined] 1m 1m 1 kubelet, gke-development-cluster-default-pool-17f531d7-sj4x spec.containers{api} Warning Failed Failed to start container with docker id 36b85ec8415a with error: Error response from daemon: rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: mkdir /var/lib/docker/overlay/b4aa81194f72ccb54d88680e766a921ea26f7a4df0f4b32d6030123896b2b203/merged/etc/google-cloud-account: read-only file system" 1m 1m 1 kubelet, gke-development-cluster-default-pool-17f531d7-sj4x Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "api" with RunContainerError: "runContainer: Error response from daemon: rpc error: code = 2 desc = \"oci runtime error: could not synchronise with container process: mkdir /var/lib/docker/overlay/b4aa81194f72ccb54d88680e766a921ea26f7a4df0f4b32d6030123896b2b203/merged/etc/google-cloud-account: read-only file system\"" 2m 13s 11 kubelet, gke-development-cluster-default-pool-17f531d7-sj4x spec.containers{api} Warning BackOff Back-off restarting failed docker container </code></pre> <p>The deployment in question looks like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: # ... spec: replicas: {{ .Values.api.replicaCount }} template: # ... spec: containers: - name: {{ .Values.api.name }} # ... volumeMounts: - name: google-cloud-account mountPath: /etc/google-cloud-account volumes: - name: google-cloud-account secret: secretName: {{ template "fullname" . }} items: - key: google-cloud-credentials path: credentials.json </code></pre> <p>I don't know how <code>/etc</code> in the container would be a read only file system and don't know how to change that.</p>
<p>An alternative to <a href="https://stackoverflow.com/a/44557294/1845976">Dave Long's answer</a> is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#projected" rel="nofollow noreferrer">projected volumes</a>, which combine the secret and a config map into a single volume so only one mount point is needed:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # ...
spec:
  replicas: {{ .Values.api.replicaCount }}
  template:
    # ...
    spec:
      containers:
      - name: {{ .Values.api.name }}
        # ...
        volumeMounts:
        - name: config
          mountPath: /etc/config
          readOnly: true
      volumes:
      - name: config
        projected:
          sources:
          - secret:
              name: {{ template "fullname" . }}
              items:
              - key: google-cloud-credentials
                path: google-cloud-account/credentials.json
          - configMap:
              name: {{ template "fullname" . }}
              items:
              - key: odbc.ini
                path: odbc.ini
</code></pre> <p>With this layout the credentials end up at <code>/etc/config/google-cloud-account/credentials.json</code> and the ODBC configuration at <code>/etc/config/odbc.ini</code>.</p>
<p>I try to run on any Kube slave node:</p> <pre><code>$ kubectl top nodes </code></pre> <p>And get an error:</p> <pre><code>Error from server (Forbidden): User "system:node:ip-10-43-0-13" cannot get services/proxy in the namespace "kube-system". (get services http:heapster:) </code></pre> <p>On master node it works:</p> <pre><code>$ kubectl top nodes NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-43-0-10 95m 4% 2144Mi 58% ip-10-43-0-11 656m 32% 1736Mi 47% ip-10-43-0-12 362m 18% 2030Mi 55% ip-10-43-0-13 256m 12% 2412Mi 65% ip-10-43-0-14 254m 12% 2512Mi 68% </code></pre> <p>Ok, what I should do? give permissions to the <code>system:node</code> group I suppose</p> <pre><code>kubectl create clusterrolebinding bu-node-admin-binding --clusterrole=cluster-admin --group=system:node </code></pre> <p>It doesn't help</p> <p>Ok, inspecting cluster role:</p> <pre><code>$ kubectl describe clusterrole system:node Name: system:node Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate=true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- configmaps [] [] [get] endpoints [] [] [get] events [] [] [create patch update] localsubjectaccessreviews.authorization.k8s.io [] [] [create] nodes [] [] [create get list watch delete patch update] nodes/status [] [] [patch update] persistentvolumeclaims [] [] [get] persistentvolumes [] [] [get] pods [] [] [get list watch create delete] pods/eviction [] [] [create] pods/status [] [] [update] secrets [] [] [get] services [] [] [get list watch] subjectaccessreviews.authorization.k8s.io [] [] [create] tokenreviews.authentication.k8s.io [] [] [create] </code></pre> <p>Trying to patch rules:</p> <pre><code>kubectl patch clusterrole system:node --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{"apiGroups": [""], "resources": ["services/proxy"], "verbs": ["get", "list", "watch"]}}]' </code></pre> <p>Now:</p> <pre><code>$ kubectl describe clusterrole system:node Name: system:node Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate=true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- ... services/proxy [] [] [get list watch] ... </code></pre> <p><code>top nodes</code> still doesn't work</p> <p>Only way that it works is:</p> <pre><code>kubectl create clusterrolebinding bu-node-admin-binding --clusterrole=cluster-admin --user=system:node:ip-10-43-0-13 </code></pre> <p>This also works, but it node-specific too:</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: # "namespace" omitted since ClusterRoles are not namespaced name: top-nodes-watcher rules: - apiGroups: [""] resources: ["services/proxy"] verbs: ["get", "watch", "list"] --- # This cluster role binding allows anyone in the "manager" group to read secrets in any namespace. kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: top-nodes-watcher-binding subjects: - kind: User name: system:node:ip-10-43-0-13 apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: top-nodes-watcher apiGroup: rbac.authorization.k8s.io </code></pre> <p>And I should apply it for each slave node. Can I do it only for one group or role? What I'm doing wrong? 
</p> <p>More details:</p> <pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>What I really need is the physical node memory and CPU usage in %.</p> <p>Thanks for the attention.</p>
<p>I ended up with the following:</p> <ul> <li>removed <code>NodeRestriction</code> from the <code>kube-apiserver</code> <code>--admission-control</code> option</li> <li>removed <code>Node</code> from the <code>--authorization-mode</code> option, leaving only <code>RBAC</code></li> <li>patched the <code>system:node</code> role with <code>kubectl patch clusterrole system:node --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{"apiGroups": [""], "resources": ["services/proxy"], "verbs": ["get", "list", "watch"]}}]'</code></li> </ul>
<p>I still have a question about the Kubernetes NodePort service.</p> <p>NodePort: Exposes the service on <strong>each</strong> Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</p> <p>If I have two nodes, nodeA and nodeB, and I deploy an app only on nodeA and then create a NodePort service, can I use both nodeA's and nodeB's IPs to access this service?</p> <p>I did some tests, and the result is no....</p> <p>I did two kinds of tests:</p> <p><strong>Test 1:</strong></p> <p>I created a deployment with two pods, one on NodeA and another on NodeB, then created a NodePort service to access it. I can access the service using both NodeA's and NodeB's IPs. Then I deleted the pod on NodeA and tried again: I found that I cannot access the service using NodeA's IP, but I can using NodeB's IP. After the pod starts up again on NodeA, I can access it using NodeA's IP.</p> <p><strong>Test 2:</strong></p> <p>I created a deployment with only one pod, on NodeA, then created a NodePort service to access it. I can only access the service using NodeA's IP, not NodeB's.</p> <p>So my question is:</p> <p>Does the NodePort type only work from nodes that are running the pod(s)? If I use NodeA's IP, won't the service load balance the request to the pod on NodeB?</p> <p>Thanks a lot! :)</p>
<blockquote> <p>If I have two Node: nodeA and nodeB, and if I deploy an app only on nodeA then create a NodePort service, then can I use both nodeA and nodeB ips to access this service?</p> <p>I had some tests, the result is no....</p> </blockquote> <p>In that case, it sounds very much like one of three things is not happening: you do not have <code>kube-proxy</code> running on all the Nodes, or the Nodes are firewalled off from one another in a very restrictive way, or you are not using a software-defined network (such as flannel, calico, etc).</p> <p>That NodePort behavior is, to the best of my knowledge, implemented using <code>iptables</code> applied to all the machines, causing traffic received on port X of machine A to be effectively NAT-ed to one of the machine(s) where the actual Pods are running, back to machine A, back to the requester. It is <code>kube-proxy</code>'s job to install the initial <code>iptables</code> rules for doing that, and then subsequently keep them up to date as the Pods come and go from the cluster. One can observe the correct behavior by running <code>iptables -L -n -t nat</code> on a Node that <strong>is</strong> running <code>kube-proxy</code>, and observe the rules named for the various kubernetes services. They even helpfully include comments in the <code>iptables</code> rules, which I thought was nice</p> <p>The firewalling case I think speaks for itself</p> <p>I actually have never run kubernetes without a software-defined network, so I am not in a good place to offer troubleshooting steps (aside from: install flannel or calico and rejoice in their awesomeness). Perhaps others will be able to weigh in, if that is in fact your circumstance</p>
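<p>If you want to verify this on your nodes, a couple of quick checks (the label and chain name assume a fairly standard kubeadm/kube-proxy iptables-mode setup):</p> <pre><code># is kube-proxy running on every node?
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide

# on a node, look at the NAT rules kube-proxy programs for NodePort services
sudo iptables -t nat -L KUBE-NODEPORTS -n
</code></pre>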
<p>Using an <code>APIService</code> resource I registered an add-on API server with my core API server. Now, for whatever reason, this add-on API server has become unresponsive. The problem is that <code>kubectl</code> works by discovering all the versions of all the groups it can see (as with <code>kubectl get --raw "/apis"</code>), including my unresponsive <code>APIService</code>, so it hangs whenever I type any command that contacts the API server and I can no longer administer my cluster.</p> <p>Is there a good way to deal with this situation?</p>
<p>All Kubernetes object data is stored in the etcd backend, so you can delete the unresponsive <code>APIService</code> directly from there.</p> <p>With the etcd v3 API you can find it the following way:</p> <pre><code>$ ETCDCTL_API=3 etcdctl --endpoints=&lt;etcd_ip&gt;:2379 get / --prefix --keys-only | grep -i apiservice
/registry/apiregistration.k8s.io/apiservices/v1.
/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.autoscaling
/registry/apiregistration.k8s.io/apiservices/v1.batch
/registry/apiregistration.k8s.io/apiservices/v1.crd.projectcalico.org
/registry/apiregistration.k8s.io/apiservices/v1.networking.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1alpha1.monitoring.coreos.com
/registry/apiregistration.k8s.io/apiservices/v1alpha1.rbac.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1alpha1.settings.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.apiextensions.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.apps
/registry/apiregistration.k8s.io/apiservices/v1beta1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.certificates.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.extensions
/registry/apiregistration.k8s.io/apiservices/v1beta1.policy
/registry/apiregistration.k8s.io/apiservices/v1beta1.rbac.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.storage.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.voyager.appscode.com
/registry/apiregistration.k8s.io/apiservices/v2alpha1.batch
</code></pre> <p>And delete it after that:</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints=10.128.10.11:2379 del &lt;path&gt;
</code></pre>
<p>I am attempting to configure Fabric to work inside of a Kubernetes cluster, and while I have everything standing up, I am having difficulting deploying chaincode (using composer-cli) to the network. It appears that the chaincode containers are not able to see the peer that created them.</p> <pre><code>2017-10-10 20:51:12.590 UTC [ccprovider] NewCCContext -&gt; DEBU 437 NewCCCC (chain=lynnhurst,chaincode=lynnhurst-composer,version=0.13.2,txid=14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1,syscc=false,proposal=0xc42195c0f0,canname=lynnhurst-composer:0.13.2 2017-10-10 20:51:12.605 UTC [chaincode] Launch -&gt; DEBU 438 launchAndWaitForRegister fetched 2902002 bytes from file system 2017-10-10 20:51:12.605 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 439 chaincode lynnhurst-composer:0.13.2 is being launched 2017-10-10 20:51:12.605 UTC [chaincode] getArgsAndEnv -&gt; DEBU 43a Executable is chaincode 2017-10-10 20:51:12.605 UTC [chaincode] getArgsAndEnv -&gt; DEBU 43b Args [chaincode -peer.address=peer-0.peer:7052] 2017-10-10 20:51:12.605 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 43c start container: lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer) 2017-10-10 20:51:12.605 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 43d start container with args: chaincode -peer.address=peer-0.peer:7052 2017-10-10 20:51:12.605 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 43e start container with env: CORE_CHAINCODE_ID_NAME=lynnhurst-composer:0.13.2 CORE_PEER_TLS_ENABLED=true CORE_CHAINCODE_LOGGING_LEVEL=info CORE_CHAINCODE_LOGGING_SHIM=warning CORE_CHAINCODE_LOGGING_FORMAT=%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -&gt; %{level:.4s} %{id:03x}%{color:reset} %{message} 2017-10-10 20:51:12.605 UTC [container] lockContainer -&gt; DEBU 43f waiting for container(dev-peer-0.peer-lynnhurst-composer-0.13.2) lock 2017-10-10 20:51:12.605 UTC [container] lockContainer -&gt; DEBU 440 got container (dev-peer-0.peer-lynnhurst-composer-0.13.2) lock 2017-10-10 20:51:12.606 UTC [dockercontroller] Start -&gt; DEBU 441 Cleanup container dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:51:12.607 UTC [dockercontroller] stopInternal -&gt; DEBU 442 Stop container dev-peer-0.peer-lynnhurst-composer-0.13.2(No such container: dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:51:12.608 UTC [dockercontroller] stopInternal -&gt; DEBU 443 Kill container dev-peer-0.peer-lynnhurst-composer-0.13.2 (No such container: dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:51:12.609 UTC [dockercontroller] stopInternal -&gt; DEBU 444 Remove container dev-peer-0.peer-lynnhurst-composer-0.13.2 (No such container: dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:51:12.609 UTC [dockercontroller] Start -&gt; DEBU 445 Start container dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:51:12.609 UTC [dockercontroller] getDockerHostConfig -&gt; DEBU 446 docker container hostconfig NetworkMode: bridge 2017-10-10 20:51:12.610 UTC [dockercontroller] createContainer -&gt; DEBU 447 Create container: dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:51:12.648 UTC [dockercontroller] createContainer -&gt; DEBU 448 Created container: dev-peer-0.peer-lynnhurst-composer-0.13.2-49b014bf4f406b2b248c840a95c16be5a45845e67e4f7c1af9a5e7b7e69037bf 2017-10-10 20:51:12.836 UTC [dockercontroller] Start -&gt; DEBU 449 Started container dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:51:12.836 UTC [container] unlockContainer -&gt; DEBU 44a container 
lock deleted(dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:51:12.955 UTC [dev-peer-0.peer-lynnhurst-composer-0.13.2] func2 -&gt; INFO 44b 2017-10-10 20:51:12.955 UTC [Composer] Info -&gt; INFO 001 Setting the Composer pool size to 8 2017-10-10 20:51:15.957 UTC [dev-peer-0.peer-lynnhurst-composer-0.13.2] func2 -&gt; INFO 44c 2017-10-10 20:51:15.956 UTC [shim] userChaincodeStreamGetter -&gt; ERRO 002 Error trying to connect to local peer: context deadline exceeded 2017-10-10 20:51:16.001 UTC [dockercontroller] func2 -&gt; INFO 44d Container dev-peer-0.peer-lynnhurst-composer-0.13.2 has closed its IO channel 2017-10-10 20:56:12.567 UTC [eventhub_producer] validateEventMessage -&gt; DEBU 44e ValidateEventMessage starts for signed event 0xc421507680 2017-10-10 20:56:12.569 UTC [eventhub_producer] deRegisterHandler -&gt; DEBU 44f deregistering event type: BLOCK 2017-10-10 20:56:12.578 UTC [eventhub_producer] Chat -&gt; ERRO 450 error during Chat, stopping handler: rpc error: code = Canceled desc = context canceled 2017-10-10 20:56:12.836 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 451 stopping due to error while launching Timeout expired while starting chaincode lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer,tx:14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1) 2017-10-10 20:56:12.836 UTC [container] lockContainer -&gt; DEBU 452 waiting for container(dev-peer-0.peer-lynnhurst-composer-0.13.2) lock 2017-10-10 20:56:12.836 UTC [container] lockContainer -&gt; DEBU 453 got container (dev-peer-0.peer-lynnhurst-composer-0.13.2) lock 2017-10-10 20:56:12.837 UTC [dockercontroller] stopInternal -&gt; DEBU 454 Stop container dev-peer-0.peer-lynnhurst-composer-0.13.2(Container not running: dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:56:12.838 UTC [dockercontroller] stopInternal -&gt; DEBU 455 Kill container dev-peer-0.peer-lynnhurst-composer-0.13.2 (API error (500): {"message":"Cannot kill container dev-peer-0.peer-lynnhurst-composer-0.13.2: Container db3e259d3c98fbb97a10a100724b71f861a7ee6b60317686329e1e6439be1ebd is not running"} ) 2017-10-10 20:56:12.843 UTC [dockercontroller] stopInternal -&gt; DEBU 456 Removed container dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:56:12.843 UTC [container] unlockContainer -&gt; DEBU 457 container lock deleted(dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:56:12.844 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 458 error on stop Error stopping container: context canceled(Timeout expired while starting chaincode lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer,tx:14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1)) 2017-10-10 20:56:12.844 UTC [chaincode] func1 -&gt; DEBU 459 chaincode lynnhurst-composer:0.13.2 launch seq completed 2017-10-10 20:56:12.844 UTC [chaincode] Launch -&gt; ERRO 45a launchAndWaitForRegister failed Timeout expired while starting chaincode lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer,tx:14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1) 2017-10-10 20:56:12.844 UTC [endorser] callChaincode -&gt; DEBU 45b Exit 2017-10-10 20:56:12.844 UTC [endorser] simulateProposal -&gt; ERRO 45c failed to invoke chaincode name:"lscc" on transaction 14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1, error: Timeout expired while starting chaincode lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer,tx:14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1) 2017-10-10 20:56:12.844 UTC [endorser] 
simulateProposal -&gt; DEBU 45d Exit 2017-10-10 20:56:12.844 UTC [lockbasedtxmgr] Done -&gt; DEBU 45e Done with transaction simulation / query execution [f693a082-3404-4f01-98a1-bed5c3f6b24e] 2017-10-10 20:56:12.844 UTC [endorser] ProcessProposal -&gt; DEBU 45f Exit </code></pre> <p>My configuration currently mounts the host's <code>/var/run</code> folder as <code>/host/var/run</code> and sets <code>CORE_VM_DOCKER_ENDPOINT=unix:///host/var/run/docker.sock</code>. I currently have <code>CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=bridge</code>, but I'm not sure if that's the correct setting (I've tried <code>host</code>) to allow these chaincode containers to connect to the peer. Any tips would be appreciated.</p> <p>Fabric version 1.0.3, Composer version 0.13.2</p>
<p>I was able to get this working, through a combination of configuration parameters:</p> <pre><code>CORE_PEER_ADDRESSAUTODETECT=true CORE_PEER_TLS_SERVERHOSTOVERRIDE=%(hostname).peer </code></pre> <p>In addition, I needed to make sure NOT to set <code>CORE_PEER_CHAINCODELISTENADDRESS</code>, so that the peer's actual IP is passed to the chaincode containers.</p>
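<p>For reference, a minimal sketch of how those settings might appear in the peer's Kubernetes container spec; the workload/image names and the per-pod hostname value are assumptions taken from the log output above, not a definitive configuration:</p> <pre><code>containers:
  - name: peer
    image: hyperledger/fabric-peer:x86_64-1.0.3   # assumed image tag
    env:
      - name: CORE_PEER_ADDRESSAUTODETECT
        value: "true"
      - name: CORE_PEER_TLS_SERVERHOSTOVERRIDE
        value: "peer-0.peer"   # per-pod value, templated as %(hostname).peer above
      # CORE_PEER_CHAINCODELISTENADDRESS is intentionally NOT set, so the
      # chaincode containers receive the peer's real IP address.
</code></pre>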
<p>I just upgraded kubeadm and kubelet to v1.8.0. And install the dashboard following the official <a href="https://github.com/kubernetes/dashboard" rel="noreferrer">document</a>.</p> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml </code></pre> <p>After that, I started the dashboard by running</p> <pre><code>$ kubectl proxy --address="192.168.0.101" -p 8001 --accept-hosts='^*$' </code></pre> <p>Then fortunately, I was able to access the dashboard thru <a href="http://192.168.0.101:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="noreferrer">http://192.168.0.101:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a></p> <p>I was redirected to a login page like this which I had never met before. <a href="https://i.stack.imgur.com/5dy2F.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5dy2F.png" alt="enter image description here"></a> It looks like that there are two ways of authentication. </p> <p>I tried to upload the <code>/etc/kubernetes/admin.conf</code> as the kubeconfig but got failed. Then I tried to use the token I got from <code>kubeadm token list</code> to sign in but failed again. </p> <p>The question is how I can sign in the dashboard. It looks like they added a lot of security mechanism than before. Thanks. </p>
<blockquote> <p>As of release 1.7 Dashboard supports user authentication based on:</p> <ul> <li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#authorization-header" rel="noreferrer"><code>Authorization: Bearer &lt;token&gt;</code></a> header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, login view will not be shown.</li> <li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#bearer-token" rel="noreferrer">Bearer Token</a> that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="noreferrer">login view</a>.</li> <li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#basic" rel="noreferrer">Username/password</a> that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="noreferrer">login view</a>.</li> <li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#kubeconfig" rel="noreferrer">Kubeconfig</a> file that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="noreferrer">login view</a>.</li> </ul> </blockquote> <p>— <a href="https://github.com/kubernetes/dashboard" rel="noreferrer">Dashboard on Github</a></p> <h2>Token</h2> <p>Here <code>Token</code> can be <code>Static Token</code>, <code>Service Account Token</code>, <code>OpenID Connect Token</code> from <a href="https://kubernetes.io/docs/admin/authentication/" rel="noreferrer">Kubernetes Authenticating</a>, but not the kubeadm <code>Bootstrap Token</code>.</p> <p>With kubectl, we can get an service account (eg. deployment controller) created in kubernetes by default.</p> <pre><code>$ kubectl -n kube-system get secret # All secrets with type 'kubernetes.io/service-account-token' will allow to log in. # Note that they have different privileges. 
NAME TYPE DATA AGE deployment-controller-token-frsqj kubernetes.io/service-account-token 3 22h $ kubectl -n kube-system describe secret deployment-controller-token-frsqj Name: deployment-controller-token-frsqj Namespace: kube-system Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name=deployment-controller kubernetes.io/service-account.uid=64735958-ae9f-11e7-90d5-02420ac00002 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1025 bytes namespace: 11 bytes token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tZnJzcWoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjQ3MzU5NTgtYWU5Zi0xMWU3LTkwZDUtMDI0MjBhYzAwMDAyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.OqFc4CE1Kh6T3BTCR4XxDZR8gaF1MvH4M3ZHZeCGfO-sw-D0gp826vGPHr_0M66SkGaOmlsVHmP7zmTi-SJ3NCdVO5viHaVUwPJ62hx88_JPmSfD0KJJh6G5QokKfiO0WlGN7L1GgiZj18zgXVYaJShlBSz5qGRuGf0s1jy9KOBt9slAN5xQ9_b88amym2GIXoFyBsqymt5H-iMQaGP35tbRpewKKtly9LzIdrO23bDiZ1voc5QZeAZIWrizzjPY5HPM1qOqacaY9DcGc7akh98eBJG_4vZqH2gKy76fMf0yInFTeNKr45_6fWt8gRM77DQmPwb3hbrjWXe1VvXX_g </code></pre> <h2>Kubeconfig</h2> <p>The dashboard needs the user in the kubeconfig file to have either <code>username &amp; password</code> or <code>token</code>, but <code>admin.conf</code> only has <code>client-certificate</code>. You can edit the config file to add the token that was extracted using the method above.</p> <pre> $ kubectl config set-credentials cluster-admin --token=<i>bearer_token</i></pre> <h1>Alternative (Not recommended for Production)</h1> <p>Here are two ways to bypass the authentication, but use for caution. </p> <h2>Deploy dashboard with HTTP</h2> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml </code></pre> <p>Dashboard can be loaded at <a href="http://localhost:8001/ui" rel="noreferrer">http://localhost:8001/ui</a> with <code>kubectl proxy</code>.</p> <h2>Granting admin privileges to Dashboard's Service Account</h2> <pre><code>$ cat &lt;&lt;EOF | kubectl create -f - apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard labels: k8s-app: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kube-system EOF </code></pre> <p>Afterwards you can use <kbd>Skip</kbd> option on login page to access Dashboard.</p> <p>If you are using dashboard version v1.10.1 or later, you must also add <code>--enable-skip-login</code> to the deployment's command line arguments. You can do so by adding it to the <code>args</code> in <code>kubectl edit deployment/kubernetes-dashboard --namespace=kube-system</code>.</p> <p>Example:</p> <pre><code> containers: - args: - --auto-generate-certificates - --enable-skip-login # &lt;-- add this line image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 </code></pre>
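<p>Relatedly, if you prefer logging in with a token rather than skipping authentication, you can create a dedicated admin service account instead of reusing a controller's token. A sketch (the account name is just an example):</p> <pre><code># create an account and give it cluster-admin (adjust the role to taste)
kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin

# print its token for the Dashboard login view
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
</code></pre>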
<p>I have deployed a grpc service running on OpenShift Origin. And this backed by a OpenShift service. And the service is exposed with an OpenShift route. I am trying to make this pod available via a service and route that maps the container port (50051) to outside world on port 8080.</p> <p>The image that the service is trying to expose has, in its Dockerfile:</p> <pre><code>EXPOSE 50051 </code></pre> <p>The route has the following:</p> <ul> <li>Service Port: 8080/TCP </li> <li>Target Port: 50051</li> </ul> <p>In the DeploymentConfig I specify the port with:</p> <pre><code>ports: - containerPort: 50051 protocol: TCP </code></pre> <p>However, when I try to access the application via the route and port, I get (from Java)</p> <pre><code>java.net.NoRouteToHostException: No route to host </code></pre> <p>And when I try to telnet the service IP:</p> <pre><code>telnet 172.30.197.247 8080 </code></pre> <p>I am able to connect.</p> <p>However, when I try to connect via the route it doesnt work:</p> <pre><code>telnet my.route.com 8080 </code></pre> <p>Trying ... telnet: connect to address : Connection refused</p> <p>When I use:</p> <pre><code>curl -kv my-svc.myproject.svc.cluster.local:8080 </code></pre> <p>I can connect.</p> <p>So it seems the service is working but the route is not.</p> <p>I have been going through the troubleshooting guide on <a href="https://docs.openshift.org/3.6/admin_guide/sdn_troubleshooting.html#debugging-the-router" rel="nofollow noreferrer">https://docs.openshift.org/3.6/admin_guide/sdn_troubleshooting.html#debugging-the-router</a></p>
<p>The router setups in OpenShift focus on <a href="https://docs.openshift.org/3.6/dev_guide/getting_traffic_into_cluster.html#using-a-router" rel="nofollow noreferrer">HTTP/HTTPS(SNI)/TLS(SNI)</a>. However it appears that you can use an <a href="https://docs.openshift.org/3.6/dev_guide/getting_traffic_into_cluster.html#using-externalIP" rel="nofollow noreferrer">externalIP</a> to expose non-web application ports from the cluster. Because gRPC is an over the wire protocol, you might need to go this path.</p>
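<p>As a rough illustration of the externalIP path, the service could be declared along these lines; the IP must be an address you route to a cluster node yourself, and all names/addresses here are placeholders:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-grpc-svc
spec:
  selector:
    app: my-grpc-app
  ports:
    - port: 8080        # port exposed on the external IP
      targetPort: 50051 # gRPC port inside the container
      protocol: TCP
  externalIPs:
    - 192.0.2.10        # placeholder address routed to a node
</code></pre>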
<p>I am running <code>etcd</code>, <code>kube-apiserver</code>, <code>kube-scheduler</code>, and <code>kube-controllermanager</code> on a master node as well as <code>kubelet</code> and <code>kube-proxy</code> on a minion node as follows (all kube binaries are from kubernetes 1.7.4):</p> <pre><code># [master node] ./etcd ./kube-apiserver --logtostderr=true --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.10.10.0/24 --insecure-port 8080 --secure-port=0 --allow-privileged=true --insecure-bind-address 0.0.0.0 ./kube-scheduler --address=0.0.0.0 --master=http://127.0.0.1:8080 ./kube-controller-manager --address=0.0.0.0 --master=http://127.0.0.1:8080 # [minion node] ./kubelet --logtostderr=true --address=0.0.0.0 --api_servers=http://$MASTER_IP:8080 --allow-privileged=true ./kube-proxy --master=http://$MASTER_IP:8080 </code></pre> <p>After this, if I execute <code>kubectl get all --all-namespaces</code> and <code>kubectl get nodes</code>, I get</p> <pre><code>NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE default svc/kubernetes 10.10.10.1 &lt;none&gt; 443/TCP 27m NAME STATUS AGE VERSION minion-1 Ready 27m v1.7.4+793658f2d7ca7 </code></pre> <p>Then, I apply flannel as follows:</p> <pre><code>kubectl apply -f kube-flannel-rbac.yml -f kube-flannel.yml </code></pre> <p>Now, I see a pod is created, but with error:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-system kube-flannel-ds-p8tcb 1/2 CrashLoopBackOff 4 2m </code></pre> <p>When I check the logs inside the failed container in the minion node, I see the following error:</p> <pre><code>Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory </code></pre> <p>My question is: how to resolve this? Is this a SSL issue? What step am I missing in setting up my cluster?</p>
<p>Maybe there is something wrong with your flannel yaml file. You can try the following to reinstall flannel. First, check for an old flannel interface:</p> <p><code> ip link </code></p> <p>If it shows a flannel device, delete it:</p> <p><code> ip link delete flannel.1 </code></p> <p>Then install flannel again; its default pod network CIDR is 10.244.0.0/16:</p> <p><code> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml </code></p>
<p><strong>TL;DR</strong> for some reason every http service seems to have the path rewritten as <code>/</code></p> <p>I'm pretty new to Kubernetes and trying to set up an ingress load balancer.</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:55:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.6-gke.1", GitCommit:"407dbfe965f3de06b332cc22d2eb1ca07fb4d3fb", GitTreeState:"clean", BuildDate:"2017-09-27T21:21:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I'm setting everything up in Google Container Engine (GKE)</p> <p>It's partially working, which is a good thing. However, for some reason it's routing everything to my service as if the request is directed at <code>/</code>.</p> <p>What's happening here? My guess is poor configuration.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: main-ingress annotations: kubernetes.io/ingress.global-static-ip-name: "ingress-main-ip" spec: tls: - secretName: cert-main hosts: - api.example.com rules: - host: api.example.com http: paths: - path: /my_service/* backend: serviceName: myService servicePort: main-port - path: /my_service2/* backend: serviceName: myService2 servicePort: main-port </code></pre> <p>Whenever I make a request to <code>api.example.com/my_service/</code> then myService will register the request as coming from index--no problem.</p> <p>Same thing with <code>https://api.example.com/my_service2/</code> which goes to myService2</p> <p>But whenever I make a request to <code>https://api.example.com/my_service/bogus/path/that/should/return/404</code> then my service seems to think the request is being directed at the <code>/</code> path again just as before.</p> <p>Of course when I run myService locally (which happens to be a Go http server), then it works perfectly fine. <strong>To be especially clear:</strong> running this locally <code>127.0.0.1/bogus/path/that/should/return/404</code> <em>returns a 404 as expected</em> (and of course other API endpoints also work)</p> <p>Here's an extremely strange thing: whenever I visit the static IP address directly (aka ingress-main-ip, let's just say it's 1.2.3.4), for example <code>http://1.2.3.4/my_service/</code> or <code>http://1.2.3.4/my_service2/</code> it <em>always</em> returns a 404 from the GKE default backend, so my services aren't even getting routed to.</p> <p>Summary:</p> <ol> <li>Why are the domain versions of the requests all routing to <code>/</code></li> <li>Why do the direct IP requests not work at all?</li> </ol> <p>Thank you for your time and help!</p>
<p>I will answer the 2nd question.</p> <p>The <em>extremely strange thing</em> happens because the HTTP "Host" header matters: you specified it in the <strong>rules</strong> section:</p> <pre><code>rules:
- host: api.example.com    &lt;----- THIS
  http:
    paths:
    - path: /my_service/*
      backend:
        serviceName: myService
        servicePort: main-port
    - path: /my_service2/*
      backend:
        serviceName: myService2
        servicePort: main-port
</code></pre> <p>So the Host header is used to route the request to the proper service. If your <em>ingress controller</em> is nginx, that "host:" corresponds to its <a href="http://nginx.org/en/docs/http/server_names.html" rel="nofollow noreferrer"><em>server_name</em></a>. A request sent directly to the IP address carries no matching Host header, so none of the rules match and you get the default backend's 404.</p>
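<p>One quick way to confirm this is to send the expected Host header along with a request to the static IP (1.2.3.4 stands in for the address from the question):</p> <pre><code>curl -v -H "Host: api.example.com" http://1.2.3.4/my_service/
</code></pre>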
<p>cAdvisor reports 14GB for the memory used by Prometheus where as top reports 6.xGB.</p> <p>Can someone explain why this discrepancy? </p> <p>The documentatation of container_memory_usage_bytes says </p> <pre><code>Current memory usage in bytes, including all memory regardless of when it was accessed </code></pre> <p>but it's not clear what this refers to - I assume it's virtual memory size?</p> <p>As reported by cAdvisor:</p> <pre><code>core@ip-172-20-100-148 ~ $ curl -q localhost:4194/metrics | grep container_memory_usage_bytes | grep prometheus container_memory_usage_bytes{container_name="prometheus",id="/docker/d37e7503309e632265cb834095efa949da4dc3c72122cb290e626f1121d0ed6b",image="quay.io/prometheus/prometheus:v1.7.1",name="k8s_prometheus.71f5f8c6_prometheus-zwvhx_xx-system_9753de21-aaac-11e7-97e5-026a05f72f78_4682ea0b",namespace="xx-system",pod_name="prometheus-zwvhx"} 1.4178545664e+10 </code></pre> <p>As reported by top on the host machine:</p> <pre><code>Tasks: 272 total, 1 running, 271 sleeping, 0 stopped, 0 zombie %Cpu(s): 20.2 us, 2.7 sy, 0.0 ni, 75.9 id, 0.5 wa, 0.6 hi, 0.1 si, 0.0 st KiB Mem: 65964640 total, 64892592 used, 1072048 free, 5222520 buffers KiB Swap: 0 total, 0 used, 0 free. 4737420 cached Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 118446 root 20 0 6783108 6.154g 19864 S 316.7 9.8 8241:05 prometheus </code></pre>
<p>Per the discussion <a href="https://github.com/google/cadvisor/issues/913" rel="nofollow noreferrer">here</a>, cAdvisor reports a <em>total</em> memory usage in bytes. </p> <p>In your case, <em>cAdvisor</em> was reporting <code>1.4178545664e+10 Byte = 14.178545664 Gigabyte</code>. Note it was <code>e+10</code> so it is approximately 14G, not 1.4G.</p>
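<p>A quick sanity check of the unit conversion (decimal gigabytes):</p> <pre><code>$ python -c 'print(1.4178545664e10 / 1e9)'
14.178545664
</code></pre>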
<p>So running and scaling a deployment running <strong>multiple pods</strong> on a <strong>single node</strong> works nicely, and when exposing the service with a type "nodePort" nicely balances requests to the virtual IP between the multiple pods on that individual node. </p> <p>I've since added an additional node to my cluster, and when exposing the Service using nodePort and then running pods over 2 nodes, I of course need to specify each host specifically to hit the endpoints running in different pods on different nodes.</p> <p>I would like to send requests to a single VIP and load balance accross the different nodes. I am running this small cluster on my home network, so my question is, is there anyway to send requests to a single VIP, and load balance across the nodes / pods without using an external load-balancer? E.g., is there some config within kubernetes to handle this?</p> <p>I tried using a service type load balancer (instead of node port) but this didn't load balance accross nodes.</p>
<p>Take a look at <a href="https://github.com/kubernetes/contrib/blob/master/keepalived-vip/README.md" rel="nofollow noreferrer">Keepalived in Kubernetes</a>.</p> <blockquote> <p>The idea is to expose a Virtual IP (VIP) address per service, outside of the kubernetes cluster. keepalived then uses VRRP to sync this "mapping" in the local network. With 2 or more instance of the pod running in the cluster is possible to provide HA using a single VIP address.</p> </blockquote>
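<p>The gist of that setup, going by the keepalived-vip example, is a ConfigMap that maps your chosen VIP to a namespace/service pair, which the keepalived pods then advertise via VRRP. A sketch (the IP and service are placeholders; check the linked README for the exact deployment):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
data:
  10.4.0.50: default/nginx   # VIP to namespace/service
</code></pre>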
<p>I am using the <a href="https://github.com/kubernetes-incubator/client-python/" rel="noreferrer">kubernetes python client</a>. In the event that kubernetes isn't available when my code starts up, I would like to retry the connection. </p> <p>When the client is unable to connect, it throws what appears to be a <code>urllib3.exceptions.MaxRetryError</code> exception, so I started with something like this:</p> <pre><code>import time import urllib3 import kubernetes kubernetes.config.load_kube_config() api = kubernetes.client.CoreV1Api() while True: try: w = kubernetes.watch.Watch() for event in w.stream(api.list_pod_for_all_namespaces): print event except urllib3.exceptions.HTTPError: print('retrying in 1 second') time.sleep(1) </code></pre> <p>But that completely fails; it acts like there is no <code>except</code> statement and bails out with:</p> <pre><code>urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='192.168.122.140', port=8443): Max retries exceeded with url: /api/v1/pods?watch=True (Caused by NewConnectionError('&lt;urllib3.connection.VerifiedHTTPSConnection object at 0x2743110&gt;: Failed to establish a new connection: [Errno 111] Connection refused',)) </code></pre> <p>I thought maybe I didn't understand inheritance as well as I thought, so I replace the above with:</p> <pre><code>except urllib3.exceptions.MaxRetryError: print('retrying in 1 second') time.sleep(1) </code></pre> <p>Which fails in the same way. In an effort to figure out what was going on, I added a catch-all <code>except</code> and invoked pdb:</p> <pre><code>except Exception as err: import pdb; pdb.set_trace() </code></pre> <p>And from the <code>pdb</code> prompt, we can see:</p> <pre><code>(Pdb) type(err) &lt;class 'urllib3.exceptions.MaxRetryError'&gt; </code></pre> <p>...which looks fine, as does the mro:</p> <pre><code>(Pdb) import inspect (Pdb) inspect.getmro(err.__class__) (&lt;class 'urllib3.exceptions.MaxRetryError'&gt;, &lt;class 'urllib3.exceptions.RequestError'&gt;, &lt;class 'urllib3.exceptions.PoolError'&gt;, &lt;class 'urllib3.exceptions.HTTPError'&gt;, &lt;type 'exceptions.Exception'&gt;, &lt;type 'exceptions.BaseException'&gt;, &lt;type 'object'&gt;) </code></pre> <p>But despite all that:</p> <pre><code>(Pdb) isinstance(err, urllib3.exceptions.MaxRetryError) False </code></pre> <p>And all the paths look reasonable:</p> <pre><code>(Pdb) urllib3.__file__ '/usr/lib/python2.7/site-packages/urllib3/__init__.pyc' (Pdb) kubernetes.client.rest.urllib3.__file__ '/usr/lib/python2.7/site-packages/urllib3/__init__.pyc' </code></pre> <p>So...what the actual what is going on here?</p> <p><strong>Update</strong></p> <p>Here is the full stack trace:</p> <pre><code>Traceback (most recent call last): File "testkube.py", line 13, in &lt;module&gt; for event in w.stream(api.list_pod_for_all_namespaces): File "/usr/lib/python2.7/site-packages/kubernetes/watch/watch.py", line 116, in stream resp = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 14368, in list_pod_for_all_namespaces (data) = self.list_pod_for_all_namespaces_with_http_info(**kwargs) File "/usr/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 14464, in list_pod_for_all_namespaces_with_http_info collection_formats=collection_formats) File "/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 335, in call_api _preload_content, _request_timeout) File "/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 148, in __call_api 
_request_timeout=_request_timeout) File "/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 371, in request headers=headers) File "/usr/lib/python2.7/site-packages/kubernetes/client/rest.py", line 250, in GET query_params=query_params) File "/usr/lib/python2.7/site-packages/kubernetes/client/rest.py", line 223, in request headers=headers) File "/usr/lib/python2.7/site-packages/urllib3/request.py", line 66, in request **urlopen_kw) File "/usr/lib/python2.7/site-packages/urllib3/request.py", line 87, in request_encode_url return self.urlopen(method, url, **extra_kw) File "/usr/lib/python2.7/site-packages/urllib3/poolmanager.py", line 321, in urlopen response = conn.urlopen(method, u.request_uri, **kw) File "/usr/lib/python2.7/site-packages/urllib3/connectionpool.py", line 668, in urlopen **response_kw) File "/usr/lib/python2.7/site-packages/urllib3/connectionpool.py", line 668, in urlopen **response_kw) File "/usr/lib/python2.7/site-packages/urllib3/connectionpool.py", line 668, in urlopen **response_kw) File "/usr/lib/python2.7/site-packages/urllib3/connectionpool.py", line 639, in urlopen _stacktrace=sys.exc_info()[2]) File "/usr/lib/python2.7/site-packages/urllib3/util/retry.py", line 388, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='192.168.122.140', port=8443): Max retries exceeded with url: /api/v1/pods?watch=True (Caused by NewConnectionError('&lt;urllib3.connection.VerifiedHTTPSConnection object at 0x3d16110&gt;: Failed to establish a new connection: [Errno 111] Connection refused',)) </code></pre>
<p>Your code has an indirect dependency on the <code>requests</code> package, and the <code>requests</code> package has a strange submodule called <code>requests.packages</code>. This used to contain copied source code from a number of dependencies, including <code>urllib3</code>, but they stopped doing that. They wanted to keep <code>requests.packages</code> around for backward compatibility, though, so now they're doing something weird.</p> <p>Instead of <code>requests.packages</code> including a complete copy of the <code>urllib3</code> source code, it now imports <code>urllib3</code> and sets <code>sys.modules['requests.packages.urllib3'] = urllib3</code>. Depending on the <code>requests</code> version, it may set a number of other <code>sys.modules</code> entries, too; for example, as of requests <a href="https://github.com/requests/requests/blob/v2.18.4/requests/packages.py" rel="noreferrer">2.18.4</a>, the source code does</p> <pre><code>for package in ('urllib3', 'idna', 'chardet'): locals()[package] = __import__(package) # This traversal is apparently necessary such that the identities are # preserved (requests.packages.urllib3.* is urllib3.*) for mod in list(sys.modules): if mod == package or mod.startswith(package + '.'): sys.modules['requests.packages.' + mod] = sys.modules[mod] </code></pre> <p>but in <a href="https://github.com/requests/requests/blob/v2.17.0/requests/packages.py" rel="noreferrer">2.17.0</a>, it does</p> <pre><code>import urllib3 sys.modules['requests.packages.urllib3'] = urllib3 import idna sys.modules['requests.packages.idna'] = idna import chardet sys.modules['requests.packages.chardet'] = chardet </code></pre> <p>This code interacts badly with submodules of the imported packages. If some code tries to do <code>import requests.packages.urllib3.exceptions</code> and Python doesn't find a <code>sys.modules['requests.packages.urllib3.exceptions']</code> entry, Python will <em>recreate</em> the <code>urllib3.exceptions</code> module and set <code>urllib3.exceptions</code> and <code>sys.modules['requests.packages.urllib3.exceptions']</code> to the new module (but it won't touch <code>sys.modules['urllib3.exceptions']</code>. This will generate new copies of the classes involved, causing your error.</p> <p>A related problem with the same cause was <a href="https://github.com/requests/requests/issues/4102" rel="noreferrer">reported</a> back in May, leading to the new code shown in 2.18.4. 2.18.4 shouldn't cause the specific problems you're seeing, but it's still fragile, because if any submodules of <code>urllib3</code> aren't yet loaded at the time <code>requests.packages</code> screws with <code>sys.modules</code>, those submodules will exhibit the same problems you've seen today.</p>
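<p>If you just need a pragmatic workaround until the environment is fixed, one option is to stop relying on class identity altogether and match on the class name in the exception's MRO. A hedged sketch (the retry policy here is an example, not part of the client API):</p> <pre><code>import time
import kubernetes

kubernetes.config.load_kube_config()
api = kubernetes.client.CoreV1Api()

def is_retryable(err):
    # MaxRetryError derives from HTTPError in either copy of urllib3.exceptions,
    # so compare names instead of classes.
    return any(cls.__name__ == 'HTTPError' for cls in type(err).__mro__)

while True:
    try:
        w = kubernetes.watch.Watch()
        for event in w.stream(api.list_pod_for_all_namespaces):
            print(event)
    except Exception as err:
        if not is_retryable(err):
            raise
        print('retrying in 1 second')
        time.sleep(1)
</code></pre>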
<p>What is purpose of pod-infra-container-image in kubernetes?</p> <p>Official documentation says only: </p> <blockquote> <p>The image whose network/ipc namespaces containers in each pod will use. (default "gcr.io/google_containers/pause-amd64:3.0")</p> </blockquote> <p>but I don't understand exactly what it does and how it works in detail.</p>
<p>The pause container, which image the <code>--pod-infra-container</code> flag selects, is used so that multiple containers can be launched in a pod, while sharing resources. It mostly does nothing, and unless you have a very good reason to replace it with something custom, you shouldn't. It mostly invokes the <code>pause</code> system call (hence its name) but it also performs the important function of having PID 1 and making sure no zombie processes are kept around.</p> <p>An extremely complete article on the subject <a href="https://www.ianlewis.org/en/almighty-pause-container" rel="noreferrer">can be found here</a>, from where I also shamelessly stole the following picture which illustrates where the pause container lives:</p> <p><a href="https://i.stack.imgur.com/oaF5L.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oaF5L.png" alt="enter image description here"></a></p>
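<p>You can see these containers on any node running Docker; each pod has exactly one of them, named with a <code>k8s_POD_</code> prefix (the output below is illustrative):</p> <pre><code>$ docker ps --format '{{.Image}}\t{{.Names}}' | grep pause
gcr.io/google_containers/pause-amd64:3.0   k8s_POD_my-pod_default_...
</code></pre>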
<p>I setup a new kubernetes cluster on GKE using the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">nginx-ingress</a> controller. TLS is not working, it's using the fake certificates.</p> <p>There is a lot of configuration detail so I made a repo - <a href="https://github.com/jobevers/test_ssl_ingress" rel="noreferrer">https://github.com/jobevers/test_ssl_ingress</a></p> <p>In short the steps were</p> <ul> <li>create a new cluster without GKE's load balancer</li> <li>create a tls secret with my key and cert</li> <li>create an nginx-ingress deployment / pod</li> <li>create an ingress controller</li> </ul> <p>The nginx-ingress config comes from <a href="https://zihao.me/post/cheap-out-google-container-engine-load-balancer/" rel="noreferrer">https://zihao.me/post/cheap-out-google-container-engine-load-balancer/</a> (and looks very similar to a lot of the examples in the ingress-nginx repo).</p> <p>My ingress.yaml is nearly identical to <a href="https://github.com/kubernetes/ingress-nginx/blob/master/examples/tls-termination/nginx/nginx-tls-ingress.yaml" rel="noreferrer">the example one</a></p> <p>When I run curl, I get </p> <pre><code>$ curl -kv https://35.196.134.52 [...] * common name: Kubernetes Ingress Controller Fake Certificate (does not match '35.196.134.52') [...] * issuer: O=Acme Co,CN=Kubernetes Ingress Controller Fake Certificate [...] </code></pre> <p>which shows that I'm still using the default certificates.</p> <p>How am I supposed to get it using mine?</p> <hr> <p><a href="https://github.com/jobevers/test_ssl_ingress/blob/master/ingress.yaml" rel="noreferrer">Ingress definition</a></p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ssl-ingress annotations: kubernetes.io/ingress.class: "nginx" spec: tls: - secretName: tls-secret rules: - http: paths: - path: / backend: serviceName: demo-echo-service servicePort: 80 </code></pre> <hr> <p><a href="https://github.com/jobevers/test_ssl_ingress/blob/master/setup_cluster_with_ssl.sh#L22" rel="noreferrer">Creating the secret</a>:</p> <pre><code>kubectl create secret tls tls-secret --key tls/privkey.pem --cert tls/fullchain.pem </code></pre> <hr> <p>Debugging further, the certificate is being found and exist on the server:</p> <pre><code>$ kubectl -n kube-system exec -it $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ") -- ls -1 /ingress-controller/ssl/ default-fake-certificate-full-chain.pem default-fake-certificate.pem default-tls-secret-full-chain.pem default-tls-secret.pem </code></pre> <p>And, from the log, I see</p> <pre><code>kubectl -n kube-system log -f $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ") [...] I1013 17:21:45.423998 6 queue.go:111] syncing default/test-ssl-ingress I1013 17:21:45.424009 6 backend_ssl.go:40] starting syncing of secret default/tls-secret I1013 17:21:45.424135 6 ssl.go:60] Creating temp file /ingress-controller/ssl/default-tls-secret.pem236555242 for Keypair: default-tls-secret.pem I1013 17:21:45.424946 6 ssl.go:118] parsing ssl certificate extensions I1013 17:21:45.743635 6 backend_ssl.go:102] found 'tls.crt' and 'tls.key', configuring default/tls-secret as a TLS Secret (CN: [...]) [...] 
</code></pre> <p>But, looking at the nginx.conf, its still using the fake certs:</p> <pre><code>$ kubectl -n kube-system exec -it $(kubectl -n kube-system get pods | grep ingress | head -1 | cut -f 1 -d " ") -- cat /etc/nginx/nginx.conf | grep ssl_cert ssl_certificate /ingress-controller/ssl/default-fake-certificate.pem; ssl_certificate_key /ingress-controller/ssl/default-fake-certificate.pem; </code></pre>
<p>Turns out that the ingress definition needs to look like:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ssl-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-echo-service
          servicePort: 80
</code></pre> <p>The host entry under rules needs to match one of the hosts entries under tls.</p>
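<p>A quick way to confirm the ingress now serves your certificate (using the address from the question and the host from the tls section) is to pin the hostname to the IP so SNI is sent correctly:</p> <pre><code>curl -v --resolve app.example.com:443:35.196.134.52 https://app.example.com/
</code></pre>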
<p>I am using a Kubernetes service, but I am a little confused about the <strong>External IPs</strong> parameter of a service. The documentation says:</p> <p>If there are external IPs that route to one or more cluster nodes, Kubernetes services can be exposed on those external IPs. Traffic that ingresses into the cluster with the external IP (as destination IP), on the service port, will be routed to one of the service endpoints. External IPs are not managed by Kubernetes and are the responsibility of the cluster administrator.</p> <p>I am not sure of the difference between a <em>LoadBalancer-type service</em> and <em>a service that uses External IPs</em>.</p> <p>I can use a service of type LoadBalancer, and then the service is load balanced by the cloud provider.</p> <p>But when I use the ClusterIP type with the <strong>External IPs</strong> parameter, I also get load balancing.</p> <p>So what is the difference? Why have this <strong>External IPs</strong> parameter?</p>
<p>You are right that they are both ways of exposing services externally, and that they will both reach all of the pods connected by the service in question, but the infrastructure that allows it is different.</p> <p>With a LoadBalancer type, on cloud providers which support external load balancers, one will be provisioned for you. This comes with differences from simply attaching an IP address to a node. Within AWS it will be the difference between provisioning an ELB and attaching a EIP. Those will then imply differences with for example security group controls, encrypted traffic termination and so on.</p> <p>Another important difference is that while LoadBalancers are provisioned automatically by Kubernetes if you're in a supported cloud environment, External IPs are manually provisioned by you, which might mean more or less flexibility, depending on your setup. Please see the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="noreferrer">docs for more</a>.</p>
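<p>Side by side, the two variants look like this (names and the externally routed address are examples; the LoadBalancer variant only provisions something on supported cloud providers):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-svc-external-ip
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
  externalIPs:
    - 198.51.100.7   # you provision and route this address yourself
---
apiVersion: v1
kind: Service
metadata:
  name: my-svc-lb
spec:
  type: LoadBalancer   # the cloud provider provisions the balancer for you
  selector:
    app: my-app
  ports:
    - port: 80
</code></pre>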
<p>I have two applications deployed in different namespaces in Google Container Engine (GKE). I want to use a single ingress resource (Google Load Balancer) to point to both applications using path-based routing. Is it possible to have backend Kubernetes services in multiple namespaces behind a single ingress resource? If possible, how?</p>
<p>You'll have to setup an nginx ingress controller and deploy ingress resources per each namespace: <a href="https://github.com/kubernetes/kubernetes/issues/17088#issuecomment-221393102" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/17088#issuecomment-221393102</a></p>
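<p>In other words, you run one nginx ingress controller and create an Ingress in each namespace; the controller merges rules for the same host across them. A sketch, with names and the hostname as placeholders:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-a
  namespace: team-a
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: apps.example.com
    http:
      paths:
      - path: /a
        backend:
          serviceName: app-a-svc
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-b
  namespace: team-b
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: apps.example.com
    http:
      paths:
      - path: /b
        backend:
          serviceName: app-b-svc
          servicePort: 80
</code></pre>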
<p><strong>TL;DR</strong> for some reason every http service seems to have the path rewritten as <code>/</code></p> <p>I'm pretty new to Kubernetes and trying to set up an ingress load balancer.</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:55:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.6-gke.1", GitCommit:"407dbfe965f3de06b332cc22d2eb1ca07fb4d3fb", GitTreeState:"clean", BuildDate:"2017-09-27T21:21:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I'm setting everything up in Google Container Engine (GKE)</p> <p>It's partially working, which is a good thing. However, for some reason it's routing everything to my service as if the request is directed at <code>/</code>.</p> <p>What's happening here? My guess is poor configuration.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: main-ingress annotations: kubernetes.io/ingress.global-static-ip-name: "ingress-main-ip" spec: tls: - secretName: cert-main hosts: - api.example.com rules: - host: api.example.com http: paths: - path: /my_service/* backend: serviceName: myService servicePort: main-port - path: /my_service2/* backend: serviceName: myService2 servicePort: main-port </code></pre> <p>Whenever I make a request to <code>api.example.com/my_service/</code> then myService will register the request as coming from index--no problem.</p> <p>Same thing with <code>https://api.example.com/my_service2/</code> which goes to myService2</p> <p>But whenever I make a request to <code>https://api.example.com/my_service/bogus/path/that/should/return/404</code> then my service seems to think the request is being directed at the <code>/</code> path again just as before.</p> <p>Of course when I run myService locally (which happens to be a Go http server), then it works perfectly fine. <strong>To be especially clear:</strong> running this locally <code>127.0.0.1/bogus/path/that/should/return/404</code> <em>returns a 404 as expected</em> (and of course other API endpoints also work)</p> <p>Here's an extremely strange thing: whenever I visit the static IP address directly (aka ingress-main-ip, let's just say it's 1.2.3.4), for example <code>http://1.2.3.4/my_service/</code> or <code>http://1.2.3.4/my_service2/</code> it <em>always</em> returns a 404 from the GKE default backend, so my services aren't even getting routed to.</p> <p>Summary:</p> <ol> <li>Why are the domain versions of the requests all routing to <code>/</code></li> <li>Why do the direct IP requests not work at all?</li> </ol> <p>Thank you for your time and help!</p>
<p>Robert is right about the second question (Many thanks)</p> <p>I just figured out the main question, why all my paths seemed to be rewritten to <code>/</code></p> <p>It turns out that the path is sent to the server with the "path" as specified in the ingress configuration to the actual service. So for example, when trying to do a GET request to <code>api.example.com/my_service/my_endpoint</code>, I was expecting <code>my_service</code> to just get the path <code>/my_endpoint</code> but it was actually getting the full <code>/my_service/my_endpoint</code></p> <p>This seems like a pretty bad kubernetes side effect in my opinion, and the path should really be a relative rewrite.</p> <p>In any case it's fixed now. The reason I "thought" that they were all being rewritten to simply <code>/</code> is because for some reason (logic errors I wrote into my code) my multiplexer was capturing every unknown request and sending it to my index controller. Go figure!</p>
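<p>For anyone hitting the same thing with a Go server, the simplest fix on the application side is to strip the ingress path prefix before the mux sees it; a minimal sketch, assuming the prefix is <code>/my_service</code>:</p> <pre><code>package main

import "net/http"

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/my_endpoint", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// The ingress forwards /my_service/my_endpoint; strip the prefix so the
	// mux matches /my_endpoint and unknown paths still return 404.
	http.ListenAndServe(":8080", http.StripPrefix("/my_service", mux))
}
</code></pre>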
<p>I have the following persistent volume and volume claim:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: kloud spec: capacity: storage: 100Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain nfs: server: 172.21.51.42 path: / readOnly: false </code></pre> <p>and:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: kloud spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi </code></pre> <p>The nfs server is AWS EFS. I specifically ssh to k8s master and checked that I can manually mount the NFS volume. But when I create the volume and the claim with kubectl it indefinitely hangs there pending:</p> <pre><code>$ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE kloud Pending gp2 8s </code></pre> <p>If I change the mode to <code>ReadWriteOnce</code>, it works as expected and won't hang.</p> <pre><code>$ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE kloud Bound pvc-c9a01bff-94d0-11e7-8ed4-0aec4a0f734a 100Gi RWO gp2 </code></pre> <p>Is there something I missing? How can I create a RWX claim with k8s and EFS?</p>
<p>You will need to set up the EFS provisioner in your cluster. Mounting EFS is still not supported by the default Kubernetes distribution, so you need this extension: <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs" rel="noreferrer">https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs</a></p> <p>You'll need to set up its storage class:</p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
</code></pre> <p>And then write PVCs of the type:</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
</code></pre> <p>Don't mind the storage size: although it's not used by EFS, Kubernetes requires you to set something there for it to work.</p>
<p>We are trying to test whether we can have 2 or more k8s clusters in the same AWS VPC and subnets (public and private) using the kops solution <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">https://github.com/kubernetes/kops</a></p> <p>We built the clusters with no issues, but when we try to expose a service through an ELB, it fails.</p> <p>If we modify the subnet tag <code>KubernetesCluster</code> to the new cluster name, the ELB is published.</p> <p>Is it possible to have multiple k8s clusters built with kops in the same subnets?</p> <p>Does the <code>KubernetesCluster</code> tag support multiple clusters?</p> <p>Thanks in advance</p>
<p>Please refer to the kops <a href="https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md#shared-subnets" rel="nofollow noreferrer">documentation on shared subnets</a>:</p> <blockquote> <p>If you run in AWS private topology with shared subnets, and you would like Kubernetes to provision resources in these shared subnets, you must create tags on them with Key=value KubernetesCluster=&lt;clustername&gt;. This is important, for example, if your utility subnets are shared, you will not be able to launch any services that create Elastic Load Balancers (ELBs).</p> </blockquote> <p>Which means that yes, you will need to set the KubernetesCluster tag if you want resources that need ELBs.</p>
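<p>Tagging a shared subnet for a given cluster can be done like this (the subnet ID and cluster name are placeholders):</p> <pre><code>aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=cluster-a.example.com
</code></pre>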
<p>I am attempting to configure Fabric to work inside of a Kubernetes cluster, and while I have everything standing up, I am having difficulting deploying chaincode (using composer-cli) to the network. It appears that the chaincode containers are not able to see the peer that created them.</p> <pre><code>2017-10-10 20:51:12.590 UTC [ccprovider] NewCCContext -&gt; DEBU 437 NewCCCC (chain=lynnhurst,chaincode=lynnhurst-composer,version=0.13.2,txid=14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1,syscc=false,proposal=0xc42195c0f0,canname=lynnhurst-composer:0.13.2 2017-10-10 20:51:12.605 UTC [chaincode] Launch -&gt; DEBU 438 launchAndWaitForRegister fetched 2902002 bytes from file system 2017-10-10 20:51:12.605 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 439 chaincode lynnhurst-composer:0.13.2 is being launched 2017-10-10 20:51:12.605 UTC [chaincode] getArgsAndEnv -&gt; DEBU 43a Executable is chaincode 2017-10-10 20:51:12.605 UTC [chaincode] getArgsAndEnv -&gt; DEBU 43b Args [chaincode -peer.address=peer-0.peer:7052] 2017-10-10 20:51:12.605 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 43c start container: lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer) 2017-10-10 20:51:12.605 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 43d start container with args: chaincode -peer.address=peer-0.peer:7052 2017-10-10 20:51:12.605 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 43e start container with env: CORE_CHAINCODE_ID_NAME=lynnhurst-composer:0.13.2 CORE_PEER_TLS_ENABLED=true CORE_CHAINCODE_LOGGING_LEVEL=info CORE_CHAINCODE_LOGGING_SHIM=warning CORE_CHAINCODE_LOGGING_FORMAT=%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -&gt; %{level:.4s} %{id:03x}%{color:reset} %{message} 2017-10-10 20:51:12.605 UTC [container] lockContainer -&gt; DEBU 43f waiting for container(dev-peer-0.peer-lynnhurst-composer-0.13.2) lock 2017-10-10 20:51:12.605 UTC [container] lockContainer -&gt; DEBU 440 got container (dev-peer-0.peer-lynnhurst-composer-0.13.2) lock 2017-10-10 20:51:12.606 UTC [dockercontroller] Start -&gt; DEBU 441 Cleanup container dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:51:12.607 UTC [dockercontroller] stopInternal -&gt; DEBU 442 Stop container dev-peer-0.peer-lynnhurst-composer-0.13.2(No such container: dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:51:12.608 UTC [dockercontroller] stopInternal -&gt; DEBU 443 Kill container dev-peer-0.peer-lynnhurst-composer-0.13.2 (No such container: dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:51:12.609 UTC [dockercontroller] stopInternal -&gt; DEBU 444 Remove container dev-peer-0.peer-lynnhurst-composer-0.13.2 (No such container: dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:51:12.609 UTC [dockercontroller] Start -&gt; DEBU 445 Start container dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:51:12.609 UTC [dockercontroller] getDockerHostConfig -&gt; DEBU 446 docker container hostconfig NetworkMode: bridge 2017-10-10 20:51:12.610 UTC [dockercontroller] createContainer -&gt; DEBU 447 Create container: dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:51:12.648 UTC [dockercontroller] createContainer -&gt; DEBU 448 Created container: dev-peer-0.peer-lynnhurst-composer-0.13.2-49b014bf4f406b2b248c840a95c16be5a45845e67e4f7c1af9a5e7b7e69037bf 2017-10-10 20:51:12.836 UTC [dockercontroller] Start -&gt; DEBU 449 Started container dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:51:12.836 UTC [container] unlockContainer -&gt; DEBU 44a container 
lock deleted(dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:51:12.955 UTC [dev-peer-0.peer-lynnhurst-composer-0.13.2] func2 -&gt; INFO 44b 2017-10-10 20:51:12.955 UTC [Composer] Info -&gt; INFO 001 Setting the Composer pool size to 8 2017-10-10 20:51:15.957 UTC [dev-peer-0.peer-lynnhurst-composer-0.13.2] func2 -&gt; INFO 44c 2017-10-10 20:51:15.956 UTC [shim] userChaincodeStreamGetter -&gt; ERRO 002 Error trying to connect to local peer: context deadline exceeded 2017-10-10 20:51:16.001 UTC [dockercontroller] func2 -&gt; INFO 44d Container dev-peer-0.peer-lynnhurst-composer-0.13.2 has closed its IO channel 2017-10-10 20:56:12.567 UTC [eventhub_producer] validateEventMessage -&gt; DEBU 44e ValidateEventMessage starts for signed event 0xc421507680 2017-10-10 20:56:12.569 UTC [eventhub_producer] deRegisterHandler -&gt; DEBU 44f deregistering event type: BLOCK 2017-10-10 20:56:12.578 UTC [eventhub_producer] Chat -&gt; ERRO 450 error during Chat, stopping handler: rpc error: code = Canceled desc = context canceled 2017-10-10 20:56:12.836 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 451 stopping due to error while launching Timeout expired while starting chaincode lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer,tx:14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1) 2017-10-10 20:56:12.836 UTC [container] lockContainer -&gt; DEBU 452 waiting for container(dev-peer-0.peer-lynnhurst-composer-0.13.2) lock 2017-10-10 20:56:12.836 UTC [container] lockContainer -&gt; DEBU 453 got container (dev-peer-0.peer-lynnhurst-composer-0.13.2) lock 2017-10-10 20:56:12.837 UTC [dockercontroller] stopInternal -&gt; DEBU 454 Stop container dev-peer-0.peer-lynnhurst-composer-0.13.2(Container not running: dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:56:12.838 UTC [dockercontroller] stopInternal -&gt; DEBU 455 Kill container dev-peer-0.peer-lynnhurst-composer-0.13.2 (API error (500): {"message":"Cannot kill container dev-peer-0.peer-lynnhurst-composer-0.13.2: Container db3e259d3c98fbb97a10a100724b71f861a7ee6b60317686329e1e6439be1ebd is not running"} ) 2017-10-10 20:56:12.843 UTC [dockercontroller] stopInternal -&gt; DEBU 456 Removed container dev-peer-0.peer-lynnhurst-composer-0.13.2 2017-10-10 20:56:12.843 UTC [container] unlockContainer -&gt; DEBU 457 container lock deleted(dev-peer-0.peer-lynnhurst-composer-0.13.2) 2017-10-10 20:56:12.844 UTC [chaincode] launchAndWaitForRegister -&gt; DEBU 458 error on stop Error stopping container: context canceled(Timeout expired while starting chaincode lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer,tx:14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1)) 2017-10-10 20:56:12.844 UTC [chaincode] func1 -&gt; DEBU 459 chaincode lynnhurst-composer:0.13.2 launch seq completed 2017-10-10 20:56:12.844 UTC [chaincode] Launch -&gt; ERRO 45a launchAndWaitForRegister failed Timeout expired while starting chaincode lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer,tx:14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1) 2017-10-10 20:56:12.844 UTC [endorser] callChaincode -&gt; DEBU 45b Exit 2017-10-10 20:56:12.844 UTC [endorser] simulateProposal -&gt; ERRO 45c failed to invoke chaincode name:"lscc" on transaction 14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1, error: Timeout expired while starting chaincode lynnhurst-composer:0.13.2(networkid:dev,peerid:peer-0.peer,tx:14cc34b63f20838b904116a03d39fba2a0eabf8ab7076f51982c746a17c667f1) 2017-10-10 20:56:12.844 UTC [endorser] 
simulateProposal -&gt; DEBU 45d Exit 2017-10-10 20:56:12.844 UTC [lockbasedtxmgr] Done -&gt; DEBU 45e Done with transaction simulation / query execution [f693a082-3404-4f01-98a1-bed5c3f6b24e] 2017-10-10 20:56:12.844 UTC [endorser] ProcessProposal -&gt; DEBU 45f Exit </code></pre> <p>My configuration currently mounts the host's <code>/var/run</code> folder as <code>/host/var/run</code> and sets <code>CORE_VM_DOCKER_ENDPOINT=unix:///host/var/run/docker.sock</code>. I currently have <code>CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=bridge</code>, but I'm not sure if that's the correct setting (I've tried <code>host</code>) to allow these chaincode containers to connect to the peer. Any tips would be appreciated.</p> <p>Fabric version 1.0.3, Composer version 0.13.2</p>
<p>As @christo4ferris indicated, Kubernetes is the basis for the IBM Blockchain developer experience. </p> <p>All the scripts are available here:</p> <p><a href="https://github.com/IBM-Blockchain/ibm-container-service/tree/master/cs-offerings/kube-configs" rel="nofollow noreferrer">https://github.com/IBM-Blockchain/ibm-container-service/tree/master/cs-offerings/kube-configs</a></p>
<p>I'm running <strong>kubernetes</strong> on bare-metal <strong>Debian</strong> (3 masters, 2 workers, PoC for now). I followed k8s-the-hard-way, and I'm running into the following problem on my kubelet: </p> <blockquote> <p>Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"</p> </blockquote> <p>And I have the same message for kubelet.service.</p> <p>I have some <strong>files</strong> about those cgroups:</p> <pre><code>$ ls /sys/fs/cgroup/systemd/system.slice/docker.service cgroup.clone_children cgroup.procs notify_on_release tasks $ ls /sys/fs/cgroup/systemd/system.slice/kubelet.service/ cgroup.clone_children cgroup.procs notify_on_release tasks </code></pre> <p>And <strong>cadvisor</strong> tells me:</p> <pre><code>$ curl http://127.0.0.1:4194/validate cAdvisor version: OS version: Debian GNU/Linux 8 (jessie) Kernel version: [Supported and recommended] Kernel version is 3.16.0-4-amd64. Versions &gt;= 2.6 are supported. 3.0+ are recommended. Cgroup setup: [Supported and recommended] Available cgroups: map[cpu:1 memory:1 freezer:1 net_prio:1 cpuset:1 cpuacct:1 devices:1 net_cls:1 blkio:1 perf_event:1] Following cgroups are required: [cpu cpuacct] Following other cgroups are recommended: [memory blkio cpuset devices freezer] Hierarchical memory accounting enabled. Reported memory usage includes memory used by child containers. Cgroup mount setup: [Supported and recommended] Cgroups are mounted at /sys/fs/cgroup. Cgroup mount directories: blkio cpu cpu,cpuacct cpuacct cpuset devices freezer memory net_cls net_cls,net_prio net_prio perf_event systemd Any cgroup mount point that is detectible and accessible is supported. /sys/fs/cgroup is recommended as a standard location. Cgroup mounts: cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0 cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0 cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0 cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0 cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0 cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0 cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0 cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0 cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0 Managed containers: /kubepods/burstable/pod76099b4b-af57-11e7-9b82-fa163ea0076a /kubepods/besteffort/pod6ed4ee49-af53-11e7-9b82-fa163ea0076a/f9da6bf60a186c47bd704bbe3cc18b25d07d4e7034d185341a090dc3519c047a Namespace: docker Aliases: k8s_tiller_tiller-deploy-cffb976df-5s6np_kube-system_6ed4ee49-af53-11e7-9b82-fa163ea0076a_1 f9da6bf60a186c47bd704bbe3cc18b25d07d4e7034d185341a090dc3519c047a /kubepods/burstable/pod76099b4b-af57-11e7-9b82-fa163ea0076a/956911118c342375abfb7a07ec3bb37451bbc64a1e141321b6284cf5049e385f </code></pre> <p><strong>EDIT</strong></p> <p>Disabling the <strong>cadvisor</strong> port on kubelet (<code>--cadvisor-port=0</code>) doesn't fix that.</p>
<p>Try to start kubelet with</p> <pre><code>--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice </code></pre> <p>I'm using this solution on RHEL7 with Kubelet 1.8.0 and Docker 1.12</p>
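<p>If kubelet runs as a systemd unit (as it typically does in a kubernetes-the-hard-way setup), one way to add these flags is a drop-in file. The path and variable name below are only illustrative; the important part is that the two flags end up on the kubelet command line:</p> <pre><code># /etc/systemd/system/kubelet.service.d/20-cgroups.conf (illustrative path)
[Service]
Environment="KUBELET_CGROUP_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
</code></pre> <p>Then reference <code>$KUBELET_CGROUP_ARGS</code> at the end of the <code>ExecStart</code> line of your kubelet unit, run <code>systemctl daemon-reload</code> and restart kubelet.</p>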
<p>I've been pulling my hair out over this for too many hours... I'm pretty new to kubernetes so I know I must be missing something.</p> <p><strong>"ERROR: Job failed (system failure): the server does not allow access to the requested resource (post pods)"</strong></p> <p>We have a GitLab instance setup on a VM, and another VM with the GitLab runner installed. Both live in Google Cloud Compute Engine.</p> <p>We also have a Kubernetes cluster spun up on Google Cloud.</p> <p>When the runner attempts to run, it results in the following:</p> <pre><code>Running with gitlab-runner 10.0.2 (a9a76a50) on rd-002-optic-nexus (21590677) Using Kubernetes namespace: gitlab Using Kubernetes executor with image docker:git ... ERROR: Job failed (system failure): the server does not allow access to the requested resource (post pods) </code></pre> <p>Due to the Runner being "external" to the cluster, my only option is to authenticate to the API server via "client certificate" authentication.</p> <p>I'm using the cluster ca.crt provided from the Google Cloud Console, and have followed <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#overview" rel="nofollow noreferrer">Kubernetes guide</a> to create a client cert. However, I just can't get it to work.</p> <ul> <li>I can ping the Kubernetes server no problem from the (Runner) VM.</li> <li>GitLab is hooked up to the runner and does attempt to use it (hence the output above).</li> <li>In the kubernetes cluster, we've: <ul> <li>created the namespace "gitlab".</li> <li>created a service account "gitlab-sa" in the "gitlab" namespace.</li> <li>generated and approved the client certificate ok (csr details below).</li> </ul></li> <li>The certificates on the runner have read permissions for everyone.</li> </ul> <p>I must be missing something somewhere. </p> <h3>GitLab Runner Config</h3> <pre><code>concurrent = 1 check_interval = 0 [[runners]] name = "rd-002-optic-nexus" url = "https://our.gitlab.instance.com/" token = "21590677f31b57bce610ef3f4cb20d" executor = "kubernetes" [runners.kubernetes] host = "https://111.222.x.xxx" cert_file = "/usr/local/share/ca-certificates/kube-client.crt" key_file = "/usr/local/share/ca-certificates/kube-client.key" ca_file = "/usr/local/share/ca-certificates/kubernetes-ca.crt" namespace = "gitlab" namespace_overwrite_allowed = "" privileged = false cpu_limit = "1" memory_limit = "1Gi" service_cpu_limit = "1" service_memory_limit = "1Gi" helper_cpu_limit = "500m" helper_memory_limit = "100Mi" [runners.kubernetes.node_selector] gitlab = "true" </code></pre> <h3>Kubernetes Client CSR</h3> <pre><code>apiVersion: certificates.k8s.io/v1beta1 kind: CertificateSigningRequest metadata: name: gitlab-sa.gitlab spec: groups: - system:authenticated request: $(cat server.csr | base64 | tr -d '\n') usages: - digital signature - key encipherment - server auth </code></pre> <p>Any thoughts? Anything I'm missing?</p>
<p>Your client certificate has a usage for server auth instead of client auth. For your TLS client to use the certificate to authenticate to the Kubernetes apiserver it needs to provide a certificate with the client auth usage. </p>
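<p>As a concrete sketch of that fix, the CSR from the question only needs its usages list changed; everything else, including the base64-encoded request, stays as in the question:</p> <pre><code>apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: gitlab-sa.gitlab
spec:
  groups:
  - system:authenticated
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
</code></pre> <p>After approving it and downloading the issued certificate, point <code>cert_file</code> and <code>key_file</code> in the runner's <code>[runners.kubernetes]</code> section at the new files.</p>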
<p>Is there any kubectl command to bootstrap a YAML file for a certain object?</p> <p>For example: </p> <pre><code>kubectl generate deployment --yml </code></pre> <p>After which I would fill it with the values I want.</p> <p>Thanks</p>
<p>You could try playing with:</p> <pre><code>kubectl create &lt;resourcetype&gt; &lt;otheroptions&gt; --output=yaml --dry-run </code></pre> <p>See:</p> <ul> <li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create</a></li> </ul> <p>for different <code>&lt;resourcetype&gt;</code> you could use this with.</p>
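<p>For example, something along these lines (the resource names and image here are just placeholders, and the exact set of <code>kubectl create</code> subcommands depends on your client version):</p> <pre><code># print a Deployment manifest without creating anything, then save it to edit later
kubectl create deployment my-app --image=nginx --output=yaml --dry-run &gt; my-app-deployment.yaml

# the same idea works for other resource types, e.g. a ClusterIP service
kubectl create service clusterip my-app --tcp=80:8080 --output=yaml --dry-run
</code></pre>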
<p>I just created an image using a Dockerfile, and to change the user I used:</p> <pre><code>USER myuser </code></pre> <p>We are using a directory to store data, and we change that directory's permissions using:</p> <pre><code>chown -R myuser:myuser /data-dir </code></pre> <p>This Dockerfile is for etcd, where we want /data-dir to be used by etcd to store data. Now, we map /data-dir to an EFS volume using a Kubernetes yml file.</p> <p>With the below code:</p> <pre><code>volumeMounts: - name: etcdefs mountPath: /data-dir volumes: - name: etcdefs persistentVolumeClaim: claimName: efs-etcd </code></pre> <p>After this, I expect the mapped directory /data-dir to be owned by myuser:myuser, but it is being created as root:root.</p> <p>Can anyone suggest what I am doing wrong here?</p>
<p>This is because of Docker. It mounts volumes with root ownership only, and you can change that with <code>chmod</code> only after the container has started.</p> <p>You can read more about it here: <a href="https://github.com/moby/moby/issues/2259" rel="noreferrer">https://github.com/moby/moby/issues/2259</a>. This issue has been open for a long time.</p> <p>What you can do in Kubernetes is use <code>fsGroup</code> to <strong>force</strong> the volume to be writable by the specified GID. This is a working solution and it is documented as well. More information here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p> <p>Here is an example deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: alpine spec: replicas: 1 template: metadata: labels: app: alpine spec: securityContext: fsGroup: 1000 containers: - name: alpine image: alpine volumeMounts: - mountPath: /var/alpine name: alpine volumes: - name: alpine awsElasticBlockStore: volumeID: vol-1234567890 fsType: ext4 </code></pre>
<p>I have a small problem and need to know what is the best way to approach this/solve my issue.</p> <p>I have deployed few pods on Kubernetes and so far I have enjoyed learning about and working with Kubernetes. Did all the persistent volume, volume claim...etc. and can see my data on the host, as I need those files for further processing.</p> <p>Now the issue is 2 pods (2 replicas) sharing the same volume claim are writing to the same location on the host, expected, but unfortunately causing the data to be duplicated in the output file.</p> <p>What I need is:</p> <ul> <li>To have a unique output of each pod on the host. Is the only way to achieve this is by having two deployment files, in my case, and each to use a different volume claim/persistent volume ? At the same time not sure if this is an optimal approach for future updates, upgrades, availability of certain number of pods ... etc.</li> <li>Or can I still have one deployment file with 2 or more replicas and still avoid the output duplication when sharing the same pvc ? </li> </ul> <p>Please note that I have one node deployment and that's why I'm using hostpath at the moment.</p> <p>creating pv:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: ls-pv labels: type: local spec: storageClassName: manual capacity: storage: 100Gi accessModes: - ReadWriteOnce hostPath: path: "/ls-data/my-data2" </code></pre> <p>claim-pv:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ls-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 100Gi </code></pre> <p>How I use my pv inside my deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: logstash namespace: default labels: component: logstash spec: replicas: 2 selector: matchLabels: component: logstash #omitted ports: - containerPort: 5044 name: logstash-input protocol: TCP - containerPort: 9600 name: transport protocol: TCP volumeMounts: - name: ls-pv-store mountPath: "/logstash-data" volumes: - name: ls-pv-store persistentVolumeClaim: claimName: ls-pv-claim </code></pre>
<p>Depending on what exactly you are trying to achieve, you could use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSets</a> instead of Deployments. Each Pod spawned from the StatefulSet's Pod template can have its own separate PersistentVolumeClaim that is created from the volumeClaimTemplate (see the link for an example, and the sketch below). You will need a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses" rel="noreferrer">StorageClass</a> set up for this.</p> <p>If you are looking for something simpler, you can write to <code>/mnt/volume/$HOSTNAME</code> from each Pod. This will also ensure that they are using separate files, as the hostnames of the Pods are unique.</p>
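<p>As a rough illustration only (the names, image, sizes and storage class below are placeholders rather than values taken from the question), a StatefulSet fragment with a per-Pod claim template could look like this:</p> <pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: logstash
spec:
  serviceName: logstash
  replicas: 2
  template:
    metadata:
      labels:
        component: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:5.6.3
        volumeMounts:
        - name: ls-data
          mountPath: /logstash-data
  volumeClaimTemplates:
  - metadata:
      name: ls-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 10Gi
</code></pre> <p>Each replica then gets its own claim (<code>ls-data-logstash-0</code>, <code>ls-data-logstash-1</code>) instead of both replicas writing through one shared PVC.</p>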
<p>I have a running elasticsearch cluster and I am trying to connect kibana to this cluster (same node). Currently the page hangs when I try to open the service in my browser using the node IP and node port. In my kibana pod logs, the last few log messages in the pod are:</p> <pre><code>{"type":"log","@timestamp":"2017-10-13T17:23:46Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"} {"type":"log","@timestamp":"2017-10-13T17:23:46Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-10-13T17:23:49Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} </code></pre> <p>My kibana.yml file that is mounted into the kibana pod has the following config:</p> <pre><code>server.name: kibana-logging server.host: 0.0.0.0 elasticsearch.url: http://elasticsearch:9300 xpack.security.enabled: false xpack.monitoring.ui.container.elasticsearch.enabled: true </code></pre> <p>and my elasticsearch.yml file has the following config settings (I have 3 es pods):</p> <pre><code>cluster.name: elasticsearch-logs node.name: ${HOSTNAME} network.host: 0.0.0.0 bootstrap.memory_lock: false xpack.security.enabled: false discovery.zen.minimum_master_nodes: 2 discovery.zen.ping.unicast.hosts: ["172.17.0.3:9300", "172.17.0.4:9300", "172.17.0.4:9300"] </code></pre> <p>I feel like the issue is currently with the <code>network.host</code> field but I'm not sure. What fields am I missing/do I need to modify in order to connect a kibana pod to elasticsearch if they are in the same cluster/node? 
Thanks!</p> <p>ES Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: elasticsearch labels: component: elasticsearch role: master spec: type: NodePort selector: component: elasticsearch role: master ports: - name: http port: 9200 targetPort: 9200 nodePort: 30303 protocol: TCP </code></pre> <p>Kibana Svc</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kibana namespace: default labels: component: kibana spec: type: NodePort selector: component: kibana ports: - port: 80 targetPort: 5601 protocol: TCP </code></pre> <p>EDIT: After changing port to 9200 in kibana.yml here is what i see in the logs at the end when I try and access kibana:</p> <pre><code>{"type":"log","@timestamp":"2017-10-13T21:36:30Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"} {"type":"log","@timestamp":"2017-10-13T21:36:30Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-10-13T21:36:33Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - Request Timeout after 3000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} {"type":"log","@timestamp":"2017-10-13T21:37:02Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.reporting-*/esqueue/_search?version=true =&gt; getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"} {"type":"log","@timestamp":"2017-10-13T21:37:32Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"} {"type":"log","@timestamp":"2017-10-13T21:37:33Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"} {"type":"log","@timestamp":"2017-10-13T21:37:37Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"} {"type":"log","@timestamp":"2017-10-13T21:37:38Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"} {"type":"log","@timestamp":"2017-10-13T21:37:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"} </code></pre>
<p>The issue here is that you exposed Elasticsearch on port 9200 but are trying to connect to port 9300 in your kibana.yml file.</p> <p>You either need to edit your kibana.yml file to use:</p> <pre><code>elasticsearch.url: http://elasticsearch:9200 </code></pre> <p>Or change the port in the elasticsearch service to 9300.</p>
<p>I have had success before specifying the hosts in my elasticsearch.yaml file by IP (hardcoding address:port) but I was told this is bad practice. I am trying to switch to using just the pod names for my ES cluster and now the pods aren't discovered/used as master. I have a elasticsearch.yml configMap for all 3 pods that I mount which has the following specs:</p> <pre><code>cluster.name: elasticsearch-logs node.name: ${HOSTNAME} node.master: true node.data: true network.host: _local_ transport.tcp.port: 9300 http.port: 9200 bootstrap.memory_lock: false xpack.security.enabled: false discovery.zen.minimum_master_nodes: 2 discovery.zen.ping.unicast.hosts: ["es-0:9300", "es-1:9300", "es-2:9300"] </code></pre> <p>Along with this I have 2 services. One is a headless service and the other is a ClusterIP.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: elasticsearch-svc labels: component: elasticsearch role: master spec: selector: component: elasticsearch role: master ports: - name: transport port: 9300 targetPort: 9300 clusterIP: None apiVersion: v1 kind: Service metadata: name: elasticsearch-discovery labels: component: elasticsearch role: master spec: selector: component: elasticsearch role: master ports: - name: transport port: 9300 protocol: TCP </code></pre> <p>And in the main StatefulSet file that creates the ES pods I have the port specs:</p> <pre><code>ports: - containerPort: 9200 name: db protocol: TCP - containerPort: 9300 name: transport protocol: TCP </code></pre> <p>I am trying to get all 3 pods to act as master (and data/client). When I look at one of pod logs (here es-0) after creating my services/statefulsets I see the following repeating errors:</p> <pre><code>[2017-10-16T15:31:29,078][WARN ][o.e.d.z.UnicastZenPing ] [es-0] timed out after [5s] resolving host [es-1:9300] [2017-10-16T15:31:29,079][WARN ][o.e.d.z.UnicastZenPing ] [es-0] timed out after [5s] resolving host [es-2:9300] [2017-10-16T15:31:32,080][WARN ][o.e.d.z.ZenDiscovery ] [es-0] not enough master nodes discovered during pinging (found [[Candidate{node={es-0}{TUE-h8SNR6q7WbWUl2Pm-A}{XrTrBg3ATqSvlB3hTlezpg}{172.17.0.3}{172.17.0.3:9300}{ml.max_open_jobs=10, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again [2017-10-16T15:31:36,111][WARN ][o.e.d.z.UnicastZenPing ] [es-0] failed to resolve host [es-1:9300] java.net.UnknownHostException: es-1 at java.net.InetAddress.getAllByName0(InetAddress.java:1280) ~[?:1.8.0_141] at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_141] at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_141] at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:908) ~[elasticsearch-5.6.3.jar:5.6.3] at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:863) ~[elasticsearch-5.6.3.jar:5.6.3] at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:691) ~[elasticsearch-5.6.3.jar:5.6.3] at org.elasticsearch.discovery.zen.UnicastZenPing.lambda$null$0(UnicastZenPing.java:212) ~[elasticsearch-5.6.3.jar:5.6.3] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_141] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.3.jar:5.6.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141] at java.lang.Thread.run(Thread.java:748) 
[?:1.8.0_141] [2017-10-16T15:31:36,116][WARN ][o.e.d.z.UnicastZenPing ] [es-0] failed to resolve host [es-2:9300] java.net.UnknownHostException: es-2 at java.net.InetAddress.getAllByName0(InetAddress.java:1280) ~[?:1.8.0_141] at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_141] at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_141] at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:908) ~[elasticsearch-5.6.3.jar:5.6.3] at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:863) ~[elasticsearch-5.6.3.jar:5.6.3] at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:691) ~[elasticsearch-5.6.3.jar:5.6.3] at org.elasticsearch.discovery.zen.UnicastZenPing.lambda$null$0(UnicastZenPing.java:212) ~[elasticsearch-5.6.3.jar:5.6.3] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_141] at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.3.jar:5.6.3] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141] [2017-10-16T15:31:39,120][WARN ][o.e.d.z.ZenDiscovery ] [es-0] not enough master nodes discovered during pinging (found [[Candidate{node={es-0}{TUE-h8SNR6q7WbWUl2Pm-A}{XrTrBg3ATqSvlB3hTlezpg}{172.17.0.3}{172.17.0.3:9300}{ml.max_open_jobs=10, ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again </code></pre> <p>I am still able to reach elasticsearch through the browser at <code>node-ip:node-port</code> but I get 503 errors once I try and do <code>/_cluster/state</code></p> <p>I believe I have an error on the "networking" side with the ports but I'm not sure where exactly. What should I look into? Thanks!</p> <p>StatefulSet</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: es labels: component: elasticsearch role: master spec: serviceName: elasticsearch replicas: 3 template: metadata: labels: component: elasticsearch role: master annotations: pod.alpha.kubernetes.io/init-containers: '[ { "name": "init-sysctl", "image": "alpine:3.4", "imagePullPolicy": "IfNotPresent", "command": ["sysctl", "-w", "vm.max_map_count=262144"], "securityContext": { "privileged": true } } ]' spec: subdomain: elasticsearch-svc containers: - name: es-master securityContext: privileged: true capabilities: add: - IPC_LOCK image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3 imagePullPolicy: Always env: - name: "ES_JAVA_OPTS" value: "-Xms512m -Xmx512m" ports: - containerPort: 9200 name: http protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: storage mountPath: /data - name: config-volume mountPath: /usr/share/elasticsearch/config/elasticsearch.yml subPath: elasticsearch.yml volumes: - name: config-volume configMap: name: elasticsearch-config volumeClaimTemplates: - metadata: name: storage annotations: volume.beta.kubernetes.io/storage-class: standard spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 12Gi </code></pre>
<p>You need to connect with the full dns name:</p> <pre><code>es-0.elasticsearch-internal:9300 </code></pre>
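<p>The general pattern behind that name is <code>&lt;pod-name&gt;.&lt;serviceName&gt;</code> (optionally with <code>.&lt;namespace&gt;.svc.cluster.local</code> appended), where the StatefulSet's <code>serviceName</code> must match an existing headless Service. As a sketch only, assuming the headless Service is named to match the StatefulSet's <code>serviceName: elasticsearch</code> from the question, the unicast hosts would become:</p> <pre><code># elasticsearch.yml (illustrative; the service name is an assumption)
discovery.zen.ping.unicast.hosts: ["es-0.elasticsearch:9300", "es-1.elasticsearch:9300", "es-2.elasticsearch:9300"]
</code></pre>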
<p>I am running Zookeeper in an OpenShift/Kubernetes environment. I have setup zookeeper as a <code>StatefulSet</code> in order to reliably persist config data. </p> <p>I configured three servers in my <code>zoo.cfg</code> by hostname, but on startup, hostname resolution fails. I verified hostnames are indeed resolvable using nslookup inside my cluster.</p> <p>zoo.cfg:</p> <pre><code>clientPort=2181 dataDir=/var/lib/zookeeper/data dataLogDir=/var/lib/zookeeper/log tickTime=2000 initLimit=10 syncLimit=2000 maxClientCnxns=60 minSessionTimeout= 4000 maxSessionTimeout= 40000 autopurge.snapRetainCount=3 autopurge.purgeInteval=0 server.1=zookeeper-0.zookeeper-headless:2888:3888 server.2=zookeeper-1.zookeeper-headless:2888:3888 server.3=zookeeper-2.zookeeper-headless:2888:3888 </code></pre> <p>Relevant parts of my OpenShift / Kubernetes configuration:</p> <pre><code> # StatefulSet - apiVersion: apps/v1beta1 kind: StatefulSet metadata: labels: app: zookeeper name: zookeeper spec: serviceName: zookeeper-headless replicas: 3 template: metadata: labels: app: zookeeper spec: containers: - image: 172.30.158.156:5000/os-cloud-platform/zookeeper:latest name: zookeeper ports: - containerPort: 2181 protocol: TCP name: client - containerPort: 2888 protocol: TCP name: server - containerPort: 3888 protocol: TCP name: leader-election dnsPolicy: ClusterFirst schedulerName: default-scheduler # Service - apiVersion: v1 kind: Service metadata: labels: app: zookeeper name: zookeeper spec: ports: - name: client port: 2181 protocol: TCP targetPort: 2181 selector: app: zookeeper sessionAffinity: None type: ClusterIP - apiVersion: v1 kind: Service metadata: name: zookeeper-headless labels: app: zookeeper spec: ports: - port: 2888 name: server - port: 3888 name: leader-election clusterIP: None selector: app: zookeeper </code></pre> <p>OpenShift logs show <code>UnknownHostException</code>s, though:</p> <pre><code>2017-10-06 10:59:18,289 [myid:] - WARN [main:QuorumPeer$QuorumServer@155] - Failed to resolve address: zookeeper-2.zookeeper-headless java.net.UnknownHostException: zookeeper-2.zookeeper-headless: No address associated with hostname at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) at java.net.InetAddress.getAllByName0(InetAddress.java:1276) at java.net.InetAddress.getAllByName(InetAddress.java:1192) at java.net.InetAddress.getAllByName(InetAddress.java:1126) at java.net.InetAddress.getByName(InetAddress.java:1076) at org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer.recreateSocketAddresses(QuorumPeer.java:148) at org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer.&lt;init&gt;(QuorumPeer.java:133) at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:228) at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:140) at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101) at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78) ... </code></pre> <p>What could be the cause? I verified that the hostname (e.g. <code>zookeeper-2.zookeeper-headless</code>) is available from other pods through <code>nslookup</code>.</p>
<p>I found a working solution for this issue. ZooKeeper reads the list of servers in the ensemble on startup and looks for its "own" entry. It then uses this entry to determine which port and interface to listen on.</p> <pre><code>server.1=zookeeper-0.zookeeper-headless:2888:3888 server.2=zookeeper-1.zookeeper-headless:2888:3888 server.3=zookeeper-2.zookeeper-headless:2888:3888 </code></pre> <p>Since the provided hostname will resolve to 127.0.0.1 on this machine, ZooKeeper will listen on the local loopback interface and therefore does not accept connections from the other ZooKeeper servers.</p> <pre><code>server.1=0.0.0.0:2888:3888 server.2=zookeeper-1.zookeeper-headless:2888:3888 server.3=zookeeper-2.zookeeper-headless:2888:3888 </code></pre> <p>To automate things in the cluster, I wrote a bash script that will replace the one "own" entry on container startup.</p> <p><strong>EDIT:</strong> As asked in the comments, here is my <code>ENTRYPOINT</code> script that takes care of placing the myid file and setting the appropriate hostname for each <code>zoo.cfg</code>:</p> <pre><code>#!/bin/bash # This script extracts the number out of the pod's hostname and sets it as zookeepers id. # Exact paths may vary according to your setup MYID_FILE="/var/lib/zookeeper/data/myid" ZOOCFG_FILE="/conf/zoo.cfg" # Create myid-file # Extract only numbers from hostname id=$(hostname | tr -d -c 0-9) echo $id &gt; "${MYID_FILE}" # change own hostname to 0.0.0.0 # otherwise, the own hostname will resolve to 127.0.0.1 # https://stackoverflow.com/a/40750900/5764665 fullHostname="$(hostname).zookeeper-headless" sed -i -e "s/${fullHostname}/0.0.0.0/g" "${ZOOCFG_FILE}" echo "Executing $@" exec "$@" </code></pre>
<p>How can I enable the batch/v2alpha1 API for a Google Container Engine cluster? Normally this is done by passing</p> <p><code>--runtime-config=batch/v2alpha1=true</code> </p> <p>to the API server.</p> <p>I'm using Kubernetes version 1.7.6. Where should I go to enable that?</p>
<p>You cannot change the runtime configuration of the Kubernetes apiserver in Google Container Engine and by policy alpha APIs are not enabled because they have no official support policy. From <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/kubernetes-api/</a>:</p> <blockquote> <p>Alpha level:</p> <ul> <li>The version names contain alpha (e.g. v1alpha1).</li> <li>May be buggy. </li> <li>Enabling the feature may expose bugs.</li> <li>Disabled by default.</li> <li>Support for feature may be dropped at any time without notice.</li> <li>The API may change in incompatible ways in a later software release without notice.</li> <li>Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.</li> </ul> </blockquote> <p>If you want that particular alpha API enabled in Google Container Engine, you can create an <a href="https://cloud.google.com/container-engine/docs/alpha-clusters" rel="nofollow noreferrer">alpha cluster</a>.</p>
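<p>If you do go the alpha-cluster route, creation looks roughly like this (the cluster name and zone are placeholders; check the current gcloud reference for the exact flag):</p> <pre><code>gcloud container clusters create my-alpha-cluster \
    --enable-kubernetes-alpha \
    --zone us-central1-a
</code></pre> <p>Keep in mind that alpha clusters are intended for short-lived testing only and are not upgraded or auto-repaired like regular clusters.</p>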
<p>My Helm chart depends on another chart from a public repository. I've installed it manually and put the command into the documentation, but I'd like to do it automatically. </p> <p>Is there some way to declare such dependencies?</p>
<p>You may take a look on helm <a href="https://docs.helm.sh/developing_charts/#chart-dependencies" rel="noreferrer">chart dependencies</a> with <a href="https://docs.helm.sh/chart_best_practices/#requirements-files" rel="noreferrer">requirements.yaml</a><br> And as an example - <a href="https://github.com/kubernetes/charts/blob/master/incubator/kafka/requirements.yaml" rel="noreferrer">Kafka helm chart</a> with zookeeper dependency. </p>
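<p>As a small illustration (the chart name, version and repository URL below are placeholders for whatever public chart you depend on), a <code>requirements.yaml</code> placed next to your <code>Chart.yaml</code> could look like:</p> <pre><code>dependencies:
  - name: zookeeper
    version: "0.5.0"
    repository: "https://kubernetes-charts-incubator.storage.googleapis.com/"
</code></pre> <p>Running <code>helm dependency update</code> then fetches the dependency into your chart's <code>charts/</code> directory before you install or package it.</p>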
<p>I have been trying to play around with creating secrets for Kubernetes cluster using the python client. I keep getting an error that says</p> <pre><code>Traceback (most recent call last): File "create_secrets.py", line 19, in &lt;module&gt; api_response = v1.create_namespaced_secret(namespace, body) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 7271, in create_namespaced_secret (data) = self.create_namespaced_secret_with_http_info(namespace, body, **kwargs) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 7361, in create_namespaced_secret_with_http_info collection_formats=collection_formats) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 335, in call_api _preload_content, _request_timeout) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 148, in __call_api _request_timeout=_request_timeout) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 393, in request body=body) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 287, in POST body=body) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 240, in request raise ApiException(http_resp=r) kubernetes.client.rest.ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Mon, 16 Oct 2017 04:17:35 GMT', 'Content-Length': '234'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"none in version \"v1\" cannot be handled as a Secret: no kind \"none\" is registered for version \"v1\"","reason":"BadRequest","code":400} </code></pre> <p>This is my code that I am trying to execute to create a secret.</p> <pre><code>from __future__ import print_function import time import kubernetes.client from pprint import pprint from kubernetes import client, config config.load_kube_config() v1 = client.CoreV1Api() namespace = 'kube-system' metadata = {'name': 'pk-test-tls', 'namespace': 'kube-system'} data= {'tls.crt': '###BASE64 encoded crt###', 'tls.key': '###BASE64 encoded Key###'} api_version = 'v1' kind = 'none' body = kubernetes.client.V1Secret(api_version, data , kind, metadata, type='kubernetes.io/tls') api_response = v1.create_namespaced_secret(namespace, body) pprint(api_response) </code></pre> <p>What am I missing here?</p>
<p>Almost everything that you have written is alright but pay attention to the message received from <code>kube-apiserver</code>:</p> <blockquote> <p>HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;none in version &quot;v1&quot; cannot be handled as a Secret: no kind &quot;none&quot; is registered for version &quot;v1&quot;&quot;,&quot;reason&quot;:&quot;BadRequest&quot;,&quot;code&quot;:400}</p> </blockquote> <p>Especially <strong>no kind &quot;none&quot;</strong>. Is it just typo or do you have something on your mind here?</p> <p>You have list of kinds here <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#types-kinds" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#types-kinds</a></p> <p>If you change kind to &quot;Secret&quot; then everything will be working fine.</p>
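<p>In other words, keeping the rest of the script from the question unchanged, the fix is a one-word change:</p> <pre><code>api_version = 'v1'
kind = 'Secret'
body = kubernetes.client.V1Secret(api_version, data, kind, metadata, type='kubernetes.io/tls')
api_response = v1.create_namespaced_secret(namespace, body)
</code></pre>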
<p>whilst 'hardening' the accounts - namely removing or toning down accounts with editor permissions on the projects I removed editor from what appears to be the kubernetes account that container engine uses on the back end of <code>gcloud</code> commands.</p> <p>Once you remove the last role from an account it vanishes - hard lesson to learn! <code> Removed editor serviceAccount:[email protected] </code></p> <ul> <li>It meant I initially couldn't deploy because it couldn't access container registry.</li> <li>So I deleted the cluster and recreated expecting the account to get recreated. That failed due to insufficient permissions.</li> <li>so I manually removed the compute instances (it wouldn't have permissions to recreate them), then templates and then the cluster.</li> <li><p>As the UI now thinks you have no clusters it looks like you are back to the beginning. So I ran my scripts and they failed.</p> <p>ERROR: (gcloud.container.clusters.create) Opetion [https://container.googleapis.com/v1/projects/xxxx/zones/europe-west2-b/operations/operation-xxxx' startTime: u'2017-10-17T17:59:41.515667863Z' status: StatusValueValuesEnum(DONE, 3) statusMessage: u'Deploy error: "Not all instances running in IGM. Expect 1. Current actions &amp;{Abandoning:0 Creating:0 CreatingWithoutRetries:0 Deleting:0 None:0 Recreating:1 Refreshing:0 Restarting:0 Verifying:0 ForceSendFields:[] NullFields:[]}. Errors [<a href="https://www.googleapis.com/compute/beta/projects/xxxx/zones/europe-west2-b/instances/gke-xxxx-default-pool-xxxx:PERMISSIONS_ERROR]" rel="nofollow noreferrer">https://www.googleapis.com/compute/beta/projects/xxxx/zones/europe-west2-b/instances/gke-xxxx-default-pool-xxxx:PERMISSIONS_ERROR]</a>".' targetLink: u'<a href="https://container.googleapis.com/v1/projects/xxxx/zones/europe-west2-b/clusters/xxxx" rel="nofollow noreferrer">https://container.googleapis.com/v1/projects/xxxx/zones/europe-west2-b/clusters/xxxx</a>' zone: u'europe-west2-b'>] finished with error: Deploy error: "Not all instances running in IGM. Expect 1. Current actions &amp;{Abandoning:0 Creating:0 CreatingWithoutRetries:0 Deleting:0 None:0 Recreating:1 Refreshing:0 Restarting:0 Verifying:0 ForceSendFields:[] NullFields:[]}. Errors [<a href="https://www.googleapis.com/compute/beta/projects/xxxx/zones/europe-west2-b/instances/xxxx:PERMISSIONS_ERROR]" rel="nofollow noreferrer">https://www.googleapis.com/compute/beta/projects/xxxx/zones/europe-west2-b/instances/xxxx:PERMISSIONS_ERROR]</a>". Updated property [container/cluster].</p></li> </ul> <p>when I try to create through UI I get this</p> <p><code> Permission denied (HTTP 403): Google Compute Engine: Required 'compute.zones.get' permission for 'projects/xxxx/zones/us-central1-a' </code></p> <p>Have done a number on it! My problem is that I don't see a way of giving permissions back to whatever account it is trying to use (as I cannot see that account <em>if</em> it exists) nor can I see how to attach a new service account with permissions that are needed to whatever is doing the work under the hood.</p> <p>UPDATE:</p> <p>So ...</p> <p>I recreated the account at the organisation level. Gave it service account role there because you cannot modify the domain of the accounts at project level.</p> <p>I have then modified that at the project level to have editor permissions.</p> <p>This means i can deploy a cluster but ... 
still cannot create load balancer - insufficient permissions</p> <pre><code>Error creating load balancer (will retry): Error getting LB for service default/bot: googleapi: Error 403: Required 'compute.forwardingRules.get' permission for 'projects/xxxx/regions/europe-west2/forwardingRules/xxxx', forbidden </code></pre> <p>the user having the problem this time is: <code>[email protected]</code></p>
<p>So ...</p> <p>I played with recreating accounts etc. and eventually got Kubernetes working again. A week later I tried to use Datastore and discovered that App Engine was dead beyond dead.</p> <p>The only recourse was to start a new project from scratch.</p> <p>The answer to this question is (some may laugh at its self-evidence, but we are all in a rush at some point):</p> <h1>DO NOT CREATE USER ACCOUNTS OR GIVE THEM PERMISSIONS BEYOND WHAT THEY NEED, BECAUSE DELETING THEM LATER IS REALLY NOT WORTH THE RISK.</h1> <p>Thank you for listening :D</p>
<p>We are interested in running certain commands when pods and services start or stop. Using the life-cycle hooks in the yml files does not work for us, since these commands are not optional. We have considered running a watcher pod that uses the watch API to run these commands, but we can't figure out how to use the watch API so that it does not keep sending the same events again and again. Is there a way to tell the watch API to only send new events since the connection was opened? If expecting a stateful watch API is unreasonable, is it possible to pass it a timestamp or a monotonically increasing ID to avoid getting already-seen events?</p> <p>Basically, what we are doing now is running a pod with a daemon process that communicates with the API. We can see the events as a stream, but we want to run some task when a pod is created or deleted.</p>
<p>Run kube proxy to use curl without authentication</p> <pre><code>kubectl proxy </code></pre> <p>List all events with a watch;</p> <pre><code>curl -s 127.0.0.1:8001/api/v1/watch/events </code></pre> <p>Run the curl to watch the events and filter it with jq for pod starts and stops.</p> <pre><code>curl -s 127.0.0.1:8001/api/v1/watch/events | jq --raw-output \ 'if .object.reason == "Started" then . elif .object.reason == "Killing" then . else empty end | [.object.firstTimestamp, .object.reason, .object.metadata.namespace, .object.metadata.name] | @csv' </code></pre> <p><a href="http://thilina.piyasundara.org/2017/10/watch-kubernetes-pod-events-stopstart.html" rel="noreferrer">More details</a></p>
<p>I've created a JOB pod with 2 INIT containers At pod creation, my job completed successfully but no sign of the init containers</p> <p>To me, the job should have wait for the completion of the 2 init containers before starting</p> <p>My job blueprint (taken from Kubernetes documentation) in case you'd like to reproduce the problem</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "init-myservice", "image": "busybox", "command": ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done;"] }, { "name": "init-mydb", "image": "busybox", "command": ["sh", "-c", "until nslookup mydb; do echo waiting for mydb; sleep 2; done;"] } ]' spec: template: metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never </code></pre> <p>When checking my job pod</p> <pre><code>$ kubectl describe pod pi-v2dn9 Name: pi-v2dn9 Namespace: default Security Policy: anyuid Node: 192.168.111.4/192.168.111.4 Start Time: Thu, 19 Oct 2017 08:58:39 +0000 Labels: controller-uid=b3091c77-b4ab-11e7-a3ea-fa163ea1c70b job-name=pi Status: Succeeded IP: 10.131.0.46 Controllers: Job/pi Containers: pi: Container ID: docker://4bc5bb4c9fc65c1aa1999c3bdc09b01e54043dcdd464410edd0c9cad334c9c67 Image: perl Image ID: docker-pullable://docker.io/perl@sha256:80bd8136a0f3e2c7d202236fc5d8f1192dbfa9ec661ecdd5e96a446e9c7913a8 Port: Command: perl -Mbignum=bpi -wle print bpi(2000) State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 19 Oct 2017 08:58:53 +0000 Finished: Thu, 19 Oct 2017 08:58:58 +0000 Ready: False Restart Count: 0 Volume Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-fxdf1 (ro) Environment Variables: Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-fxdf1: Type: Secret (a volume populated by a Secret) SecretName: default-token-fxdf1 QoS Class: BestEffort Tolerations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message 26s 26s 1 {default-scheduler } Normal Scheduled Successfully assigned pi-v2dn9 to 192.168.111.4 25s 25s 1 {kubelet 192.168.111.4} spec.containers{pi} Normal Pulling pulling image "perl" 12s 12s 1 {kubelet 192.168.111.4} spec.containers{pi} Normal Pulled Successfully pulled image "perl" 12s 12s 1 {kubelet 192.168.111.4} spec.containers{pi} Normal Created Created container with docker id 4bc5bb4c9fc6; Security:[seccomp=unconfined] 12s 12s 1 {kubelet 192.168.111.4} spec.containers{pi} Normal Started Started container with docker id 4bc5bb4c9fc6 </code></pre> <p>NO SIGN OF THE INIT CONTAINERS !!!</p> <p>My Environment: --- Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2+43a9be4", GitCommit:"43a9be4", GitTreeState:"clean", BuildDate:"2017-04-20T15:38:11Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2+43a9be4", GitCommit:"43a9be4", GitTreeState:"clean", BuildDate:"2017-04-20T15:38:11Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}</p> <p>I'm working in a Cluster from OpenStack</p> <p>OS is Red Hat Enterprise Linux Server 7.3 (Maipo)</p> <p>Thanks in advance for any help.</p>
<p>Try adding the annotation to the spec template rather than the Job object:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi spec: template: metadata: name: pi annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "init-myservice", "image": "busybox", "command": ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done;"] }, { "name": "init-mydb", "image": "busybox", "command": ["sh", "-c", "until nslookup mydb; do echo waiting for mydb; sleep 2; done;"] } ]' spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never </code></pre>
<p>I am trying to get my app running on minikube, reachable at the minikube ip address. Below are the commands I am using to get it running. It all works as I want it to (if I try to log in, it returns the expected 401 since the mongo DB is empty).</p> <p>I would like to actually include a database in my app with, say, an existing user. I cannot find a clear way with how to include data (say in a tar file that works with mongo) with what I am doing. How can I accomplish this?</p> <pre><code>### Get mongo up docker pull mongo:3.2 kubectl run mongo --image=mongo:3.2 --port=27017 kubectl expose deployment mongo ### Get API up cd /api docker build -t api:v1 . kubectl run api --image=api:v1 --port=3000 --env="MONGODB_URI=mongodb://mongo:27017/myapp-dev" kubectl apply -f api-deployment.yml kubectl expose deployment api --type=LoadBalancer ### Get dashboard up docker build -t myapp-dashboard:v1 . kubectl apply -f ./tooling/minikube/myapp-dashboard-qa-config-map.yml kubectl apply -f ./tooling/minikube/myapp-dashboard-deployment.yml kubectl expose deployment myapp-dashboard --type=LoadBalancer ### Ingress setup minikube addons enable ingress kubectl create -f ingress.yml kubectl apply -f ./tooling/minikube/ingress.yml </code></pre>
<p>The nice way of doing it making use of the <code>initdb</code> infrastructure in the <code>mongo</code> image. The <code>mongo:3.2</code> includes an <a href="https://github.com/docker-library/mongo/blob/master/3.2/docker-entrypoint.sh" rel="noreferrer"><code>entrypoint</code> shell script</a> that iterates trough <code>/docker-entrypoint-initdb.d/*.{sh,js}</code> (in the container).</p> <p>Depending on the type of data you need to insert into the newly made database using a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer"><code>ConfigMap</code></a> or a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer"><code>Secret</code></a> as a volume is the way to go. But you can't do that with <code>kubectl run</code>.</p> <p><strong>1.</strong> Create a <code>mongo.yaml</code> like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: mongo spec: template: metadata: labels: app: mongo spec: containers: - image: mongo:3.2 name: mongo ports: - containerPort: 27017 volumeMounts: - name: mongo-initdb mountPath: /docker-entrypoint-initdb.d volumes: - name: mongo-initdb configMap: name: mongo-initdb </code></pre> <p><strong>2.</strong> Create <code>createuser.sh</code> like this (just an <a href="https://docs.mongodb.com/manual/tutorial/create-users/" rel="noreferrer">example</a>):</p> <pre><code>mongo &lt;&lt;EOF use reporting db.createUser( { user: "reportsUser", pwd: "12345678", roles: [ { role: "read", db: "reporting" }, { role: "read", db: "products" }, { role: "read", db: "sales" }, { role: "readWrite", db: "accounts" } ] } ) EOF </code></pre> <p><strong>3.</strong> Create the <a href="https://docs.mongodb.com/manual/tutorial/create-users/" rel="noreferrer"><code>ConfigMap</code></a>:</p> <pre><code>$ kubectl create configmap mongo-initdb --from-file=createuser.sh configmap "mongo-initdb" created </code></pre> <p><strong>4.</strong> Create the mongo deployment (instead of <code>kubectl run mongo --image=mongo:3.2 --port=27017</code>):</p> <pre><code>$ kubectl apply -f mongo.yaml deployment "mongo" created </code></pre> <p><strong>5.</strong> Check the logs:</p> <pre><code>$ kubectl logs -f deploy/mongo [...] /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/createuser.sh MongoDB shell version: 3.2.17 [...] Successfully added user: { [...] </code></pre> <p>As I mentioned a very similar solution can be done using <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer"><code>Secrets</code></a> for sensitive data. The <code>kubectl create configmap --from-file=...</code> command also accepts a name of a directory instead of a file and will work as expected.</p>
<p>I've created a JOB pod with 2 INIT containers At pod creation, my job completed successfully but no sign of the init containers</p> <p>To me, the job should have wait for the completion of the 2 init containers before starting</p> <p>My job blueprint (taken from Kubernetes documentation) in case you'd like to reproduce the problem</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "init-myservice", "image": "busybox", "command": ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done;"] }, { "name": "init-mydb", "image": "busybox", "command": ["sh", "-c", "until nslookup mydb; do echo waiting for mydb; sleep 2; done;"] } ]' spec: template: metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never </code></pre> <p>When checking my job pod</p> <pre><code>$ kubectl describe pod pi-v2dn9 Name: pi-v2dn9 Namespace: default Security Policy: anyuid Node: 192.168.111.4/192.168.111.4 Start Time: Thu, 19 Oct 2017 08:58:39 +0000 Labels: controller-uid=b3091c77-b4ab-11e7-a3ea-fa163ea1c70b job-name=pi Status: Succeeded IP: 10.131.0.46 Controllers: Job/pi Containers: pi: Container ID: docker://4bc5bb4c9fc65c1aa1999c3bdc09b01e54043dcdd464410edd0c9cad334c9c67 Image: perl Image ID: docker-pullable://docker.io/perl@sha256:80bd8136a0f3e2c7d202236fc5d8f1192dbfa9ec661ecdd5e96a446e9c7913a8 Port: Command: perl -Mbignum=bpi -wle print bpi(2000) State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 19 Oct 2017 08:58:53 +0000 Finished: Thu, 19 Oct 2017 08:58:58 +0000 Ready: False Restart Count: 0 Volume Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-fxdf1 (ro) Environment Variables: Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-fxdf1: Type: Secret (a volume populated by a Secret) SecretName: default-token-fxdf1 QoS Class: BestEffort Tolerations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message 26s 26s 1 {default-scheduler } Normal Scheduled Successfully assigned pi-v2dn9 to 192.168.111.4 25s 25s 1 {kubelet 192.168.111.4} spec.containers{pi} Normal Pulling pulling image "perl" 12s 12s 1 {kubelet 192.168.111.4} spec.containers{pi} Normal Pulled Successfully pulled image "perl" 12s 12s 1 {kubelet 192.168.111.4} spec.containers{pi} Normal Created Created container with docker id 4bc5bb4c9fc6; Security:[seccomp=unconfined] 12s 12s 1 {kubelet 192.168.111.4} spec.containers{pi} Normal Started Started container with docker id 4bc5bb4c9fc6 </code></pre> <p>NO SIGN OF THE INIT CONTAINERS !!!</p> <p>My Environment: --- Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2+43a9be4", GitCommit:"43a9be4", GitTreeState:"clean", BuildDate:"2017-04-20T15:38:11Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2+43a9be4", GitCommit:"43a9be4", GitTreeState:"clean", BuildDate:"2017-04-20T15:38:11Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}</p> <p>I'm working in a Cluster from OpenStack</p> <p>OS is Red Hat Enterprise Linux Server 7.3 (Maipo)</p> <p>Thanks in advance for any help.</p>
<p>Your annotations are in the wrong place. Specifically for init containers, they should be defined in the pod metadata, whereas you have added it to the job metadata.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi spec: template: metadata: name: pi annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "init-myservice", "image": "busybox", "command": ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done;"] }, { "name": "init-mydb", "image": "busybox", "command": ["sh", "-c", "until nslookup mydb; do echo waiting for mydb; sleep 2; done;"] } ]' spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: Never </code></pre>
<h2>Background</h2> <p>I recently migrated my Spring app to Spring Boot (w/ embedded Tomcat), and I am currently moving it into Kubernetes. As a part of the move into Kubernetes, I am separating out the Apache config into its own service and deployment in Kubernetes to act as a proxy for my Spring Boot app.</p> <p>My current setup is Apache with a LoadBalancer service that accepts requests from the world. This takes those requests and forwards them to my Spring Boot app, which has a ClusterIP service.</p> <p>Also important to note: my Apache redirects all http to https.</p> <h2>Issue</h2> <p>Whenever my Spring Boot app returns a redirect to the client, the location header in the response is http instead of https (only requests made over https will get through the Apache proxy to my app).</p> <p><em>Example</em>:</p> <p>Non-logged in user goes to:</p> <pre><code>https://example.com/admin </code></pre> <p>If not authenticated, the admin page will redirect the user to the login page. This SHOULD be one redirect to:</p> <pre><code>https://example.com/login </code></pre> <p>However, as is, my app redirects the user to:</p> <pre><code>http://example.com/login </code></pre> <p>And then the user is redirected again by Apache to:</p> <pre><code>https://example.com/login </code></pre> <h2>What I've Tried</h2> <p>I've checked my logs to make sure that the requests my app receives include the <code>X-Forwarded-Proto: https</code> header, which from what I understand should make the location header in the redirect response https.</p> <p>As mentioned in several Stack Overflow posts, I tried adding <code>server.use-forward-headers=true</code> to my <code>application.properties</code> file, but this did nothing. I also tried adding <code>server.tomcat.protocol-header=X-Forwarded-Proto</code> with it, but this did nothing as well (and from what I read, is the default anyway).</p> <h2>Other Notes</h2> <ul> <li>My cluster's <code>networkCIDR</code> is included in Tomcat's list of internal proxies for <code>RemoteIPValve</code></li> <li><code>X-Forwarded-For</code> doesn't seem to have an impact either, so I think the issue is with all forward headers</li> </ul>
<p>I figured it out. My mistake was assuming that <code>networkCIDR</code> was the IP address returned by <code>request.getRemoteAddr()</code>, when it was actually my k8s internal cluster IP. This makes sense given that the request is coming from Apache, which is also inside the cluster. The internal cluster IP was NOT in the list of internal proxies, and thus <code>RemoteIPValve</code> was not utilizing the forward headers.</p> <p>By adding the <code>server.tomcat.internal-proxies</code> property to my <code>application.properties</code> with the default values specified in <code>RemoteIpValve</code> AND my internal cluster IP, everything worked as it should.</p> <p>See <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/howto-embedded-servlet-containers.html#howto-customize-tomcat-behind-a-proxy-server" rel="noreferrer">https://docs.spring.io/spring-boot/docs/current/reference/html/howto-embedded-servlet-containers.html#howto-customize-tomcat-behind-a-proxy-server</a></p>
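<p>For anyone hitting the same thing, the end result looks roughly like the sketch below. This is illustrative only: the 10.x range stands in for the cluster-internal range and must be replaced with your own cluster CIDR, the default ranges should be checked against your Tomcat version's <code>RemoteIpValve</code>, and backslashes have to be doubled in a <code>.properties</code> file (see the Spring Boot how-to linked above for the exact syntax):</p> <pre><code>server.use-forward-headers=true
server.tomcat.internal-proxies=10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|192\\.168\\.\\d{1,3}\\.\\d{1,3}|169\\.254\\.\\d{1,3}\\.\\d{1,3}|127\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}
</code></pre>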
<p>I really hope somebody would help me out with this question, as I have been trying to figure it out for days.</p> <p>I have container running in kubernetes in GKE. In /var/log/containers/my_container.log, I have something like this (among some other logs with different formats):</p> <pre><code>{"log":"17-Oct-2017;04:36:29.744 : [main] [server:] [id:] [yt:] ERROR no.myproject.service.Server - call failed for some reason\n","stream":"stdout","time":"2017-10-17T04:36:29.750702216Z"} </code></pre> <p>This log appears on Stackdriver (Fluentd output in GKE) as INFO log and like:</p> <pre><code>23:02:32.000 17-Oct-2017;04:36:29.744 : [main] [server:] [id:] [yt:] ERROR no.myproject.service.Server - call failed for some reason </code></pre> <p>So</p> <pre><code>23:02:32.000 </code></pre> <p>is added to it (which is the normal behavior of Stackdriver). I will refer to this format as format 2. </p> <p>As this log message actually is an ERROR log message (based on its payload content) I want it to appear as ERROR in Stackdriver (Fluentd).</p> <p>I am trying:</p> <pre><code>&lt;filter reform.**&gt; type parser format /^(?&lt;time&gt;\d{2} [^\s]*) : (?&lt;message2&gt;[^ \]]*)\] (?&lt;message3&gt;[^ \]]*)\] (?&lt;message4&gt;[^ \]]*)\] (?&lt;message5&gt;[^ \]]*)\] (?&lt;severity&gt;\w)\s+(?&lt;log2222&gt;.*)/ reserve_data true suppress_parse_error_log false key_name log &lt;/filter&gt; </code></pre> <p>hoping to get the severity of the message change to ERROR and also getting the content of the [..] fields in the log as some new key/values (in this case message2: main, etc).</p> <p>But after adding this filter to my config file, the output logs are still as before and I don't see any change.</p> <p>What am I missing? When I am writting my Regex pattern, I am not sure if I actually should consider the "log" field of the message in the log file in kubernetes or the one that I called format 2 (with the time added to it - on Stackdriver).</p> <p>I would really appreciate any advice, it would be a great help.</p>
<p>I don't know the tool you're using, but it looks like first part of your regex does not match the exact format of the prefixed time in your log string.</p> <p>Actually <code>\d{2}</code> will only match 2 digits.</p> <p>To match the entire time prefix you may use <code>(?:\d{2}:){2}\d{2}\.\d{3}</code> instead.</p> <p>One additional point regarding severity: you wrote <code>(?&lt;severity&gt;\w)</code> which captures only one word character. You may use <code>(?&lt;severity&gt;\w+)</code> to match several characters.</p> <p>Your regex would then become:</p> <pre><code>^(?&lt;time&gt;(?:\d{2}:){2}\d{2}\.\d{3} [^\s]*) : (?&lt;message2&gt;[^ \]]*)\] (?&lt;message3&gt;[^ \]]*)\] (?&lt;message4&gt;[^ \]]*)\] (?&lt;message5&gt;[^ \]]*)\] (?&lt;severity&gt;\w+)\s+(?&lt;log2222&gt;.*) </code></pre> <p>That <a href="https://regex101.com/r/o5lu2j/1" rel="nofollow noreferrer">demo</a> shoes a match.</p>
<p>I just heard of the native Kubernetes support in the future Docker version. I never used Kubernetes before, so I started reading about it. But I got a little bit confused: Kubernetes is described as <strong>orchestration</strong> tool and also as an alternative to Dockers <strong>swarm mode</strong>. </p> <p>So if Kubernetes does orchestration, is it also an alternative to docker-compose? Or can compose <em>and</em> Kubernetes be used together?</p> <p>Some specific questions: Let's say I want (or have) to use Kubernetes:</p> <ul> <li>I have a docker-compose file containing multiple microservices, but they are running as a standalone app on a single machine. Can (or should) it be replaced by Kubernetes?</li> <li>I have a docker-compose file with multiple services configured in swarm mode (running on multiple machines). Which part has to be replaced by Kubernetes? The whole compose file? Or is it somehow possible to define basic configuration (env_var, volumes, command, ...) within compose file and use Kubernetes only to orchestrate the clustering?</li> </ul>
<blockquote> <p>So if Kubernetes does orchestration, is it also an alternative to docker-compose? </p> </blockquote> <p>Short Answer: NO </p> <p>It's not just orchestration, essentially <code>Kubernetes</code> is a production grade container orchestration and scheduling engine. It is far more advanced than <code>docker-compose</code> itself. I would say <code>docker swarm</code>, <code>kubernetes</code> and <code>amazon ecs</code> belong in the same category. </p> <blockquote> <p>Or can compose and Kubernetes be used together?</p> </blockquote> <p>In the next version of docker engine you will be able to use docker-compose to create <code>kubernetes</code> objects. But as of now you can't.</p> <blockquote> <p>I have a docker-compose file containing multiple microservices, but they are running as a standalone app on a single machine. Can (or should) it be replaced by Kubernetes?</p> </blockquote> <p>Ok, in the context of running it in production, I would say <strong>absolutely</strong>, you should definitely look to host your applications on a <code>kubernetes</code> cluster because it provides </p> <ul> <li>resilience (rescheduling of pods if they die)</li> <li>scaling (scale pods based on cpu or any other metrics)</li> <li>load-balancing (provides VIP knows service and attaches all pods to it)</li> <li>secrets and config management</li> <li>namespaces (logical grouping of kubernetes objects)</li> <li>network polices (custom policies to control traffic flow between pods)</li> </ul> <p>and many more features out of the box. And when you declare a state <code>kubernetes</code> would always try to achieve and maintain that state.</p> <blockquote> <p>I have a docker-compose file with multiple services configured in swarm mode (running on multiple machines). Which part has to be replaced by Kubernetes? The whole compose file? Or is it somehow possible to define basic configuration (env_var, volumes, command, ...) within compose file and use Kubernetes only to orchestrate the clustering?</p> </blockquote> <p>I would replace the whole swarm cluster and compose files constructs with a <code>kubernetes</code> cluster and object definition <code>yaml</code>s. Having said that from my experience those <code>yamls</code> can get bit verbose so, if you are keen have a look at <a href="https://docs.helm.sh/using_helm/#quickstart-guide" rel="noreferrer">Helm</a>. It is a package manager for <code>kubernetes</code> which, you don't <strong>have</strong> to use but I think it is one of the best tools in <code>kubernetes</code> ecosystem at the moment and there are plenty of open source charts readily available.</p> <p>I would heavily recommend playing around with <code>kubernetes</code> using <code>minikube</code> on your local system just to get familiar with the general concepts. And then you will be able to answer the above questions for yourselves.</p>
<p>I want to set up a pre-defined PostgreSQL cluster on a bare metal Kubernetes 1.7 cluster with <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local PVs</a> enabled. I have three worker nodes. I create a local PV on each node and deploy the stateful set successfully (with some complex scripting to set up Postgres replication).</p> <p>However, I noticed that there's a kind of naming convention between the volumeClaimTemplates and the PersistentVolumeClaims. For example</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: postgres volumeClaimTemplates: - metadata: name: pgvolume </code></pre> <p>The created PVCs are <code>pgvolume-postgres-0</code>, <code>pgvolume-postgres-1</code>, <code>pgvolume-postgres-2</code>.</p> <p>With some <a href="https://docs.openshift.org/3.11/install_config/persistent_storage/selector_label_binding.html" rel="nofollow noreferrer">tricks</a>, I manually create PVCs and bind them to the target PVs by selector. I test the stateful set again. It seems the stateful set is very happy to use these PVCs.</p> <p>I finish my test successfully but I still have this question. Can I rely on the volumeClaimTemplates naming convention? Is this an undocumented feature?</p>
<p>Based on the statefulset <a href="https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#statefulset-v1beta1-apps" rel="noreferrer">API reference</a> </p> <blockquote> <p>volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. A claim in this list takes precedence over any volumes in the template, with the same name.</p> </blockquote> <p>So I guess you can rely on it. </p> <p>Moreover, you can define a storage class to leverage dynamic provisioning of persistent volumes, so you won't have to create them manually.</p> <pre><code> volumeClaimTemplates: - metadata: name: www spec: accessModes: [ "ReadWriteOnce" ] storageClassName: my-storage-class resources: requests: storage: 1Gi </code></pre> <p>Please refer to <a href="http://blog.kubernetes.io/2017/03/dynamic-provisioning-and-storage-classes-kubernetes.html" rel="noreferrer">Dynamic Provisioning and Storage Classes in Kubernetes</a> for more details.</p>
<p>I build docker images on one server and then load them onto the nodes (using the docker save and docker load commands).</p> <p>Then I make a change to the dep_config file to pull the image locally instead of from the repo (by setting imagePullPolicy).</p> <p>After that step I do "kubectl apply -f dep_config.yml".</p> <p>Now, the pods restart only the first time; on subsequent image loads and config applies, the pods don't restart.</p> <p>Is there a way I can restart the pods with "kubectl apply" even if there is no change in the config_deploy.yml file?</p>
<p>A common practice is to change an unused label of the deployment using the <code>patch</code> command</p> <pre><code>kubectl patch deployment your-deployment -p \ "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"$(date +%s)\"}}}}}" </code></pre> <p>Basically you can change anything inside the pod template, even things that no one notices, like the date label in this example. Any change to the pod template triggers a new rollout, so the pods are recreated even though the deployment file itself did not change.</p>
<p>I'm trying to replace my legacy job scheduler with Kubernetes job and wondering how to write sequential jobs as a Kubernetes job.</p> <p>First, I wrote the following script to execute <code>job1</code> and <code>job2</code> in the written order but it didn't work as I expected.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: sequential spec: activeDeadlineSeconds: 100 template: metadata: name: sequential_jobs spec: containers: - name: job1 image: image1 - name: job2 image: image2 restartPolicy: Never </code></pre> <p>The job described above seems to run <code>job1</code> and <code>job2</code> in parallel. Is there any good way to run <code>job1</code> and <code>job2</code> in the written order?</p> <p>Appended.</p> <p>I recently found <a href="https://github.com/argoproj/argo" rel="noreferrer">https://github.com/argoproj/argo</a> very good for my usecase.</p>
<p>After a few attempts, I did this and that solved the basic problem (similar to what the OP has posted). This configuration ensures that <code>job-1</code> completes before <code>job-2</code> begins. If <code>job-1</code> fails, <code>job-2</code> container is not started. I still need to work on the retries and failure handling, but the basics works. Hopefully, this will help others:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: sequential-job spec: initContainers: - name: job-1 image: busybox # runs for 15 seconds; echoes job name and timestamp command: ['sh', '-c', 'for i in 1 2 3; do echo "job-1 `date`" &amp;&amp; sleep 5s; done;'] - name: job-2 image: busybox # runs for 15 seconds; echoes job name and timestamp command: ['sh', '-c', 'for i in 1 2 3; do echo "job-2 `date`" &amp;&amp; sleep 5s; done;'] # I don't really need the 'containers', but syntax requires # it so, I'm using it as a place where I can report the # completion status containers: - name: job-done image: busybox command: ['sh', '-c', 'echo "job-1 and job-2 completed"'] restartPolicy: Never </code></pre> <p><strong>Update</strong></p> <p>The same configuration as above also works inside a Job spec:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: sequential-jobs spec: template: metadata: name: sequential-job spec: initContainers: - name: job-1 image: busybox command: ['sh', '-c', 'for i in 1 2 3; do echo "job-1 `date`" &amp;&amp; sleep 5s; done;'] - name: job-2 image: busybox command: ['sh', '-c', 'for i in 1 2 3; do echo "job-2 `date`" &amp;&amp; sleep 5s; done;'] containers: - name: job-done image: busybox command: ['sh', '-c', 'echo "job-1 and job-2 completed"'] restartPolicy: Never </code></pre>
<p>In kubernetes I can use a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="noreferrer">PersistentVolumeClaim</a> to create some storage, which I can later mount in some container.</p> <p>However if the user in the container is not root, that user will not be able to access that directory because it is owned by root.</p> <p>What is the right way to access such a volume? (I did not find any user/permission options both when creating and mounting that volume.)</p>
<p>First, find out the UID number your process is running as.</p> <p>Then you can tell Kubernetes to chown (sort of) the mount point of the volume for your pod by adding <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context" rel="nofollow noreferrer"><code>.spec.securityContext.fsGroup</code></a>:</p> <pre><code>spec: ... securityContext: fsGroup: 2000 </code></pre> <blockquote> <p><code>fsGroup</code>: integer: A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume.</p> </blockquote>
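<p>For context, here is a minimal self-contained sketch (the image, user/group IDs and claim name are placeholders) showing where <code>fsGroup</code> sits relative to the volume mount:</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: non-root-example
spec:
  securityContext:
    runAsUser: 1000       # run the container process as a non-root user
    fsGroup: 2000         # volume is group-owned by this GID, process gets it as a supplemental group
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'id &amp;&amp; touch /data/test &amp;&amp; ls -l /data &amp;&amp; sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim   # hypothetical existing PVC
</code></pre>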
<p>I'm trying to start a dashboard inside <a href="http://labs.play-with-k8s.com/" rel="nofollow noreferrer">play-with-kubernetes</a></p> <p>Commands I'm running:</p> <p>Start admin node:</p> <pre><code>kubeadm init --apiserver-advertise-address $(hostname -i) </code></pre> <p>Start network:</p> <pre><code>kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" </code></pre> <p>Allow master to hold nodes(?):</p> <pre><code>kubectl taint nodes --all node-role.kubernetes.io/master- </code></pre> <p>Wait until dns is up:</p> <pre><code>kubectl get pods --all-namespaces </code></pre> <p>Join node (copy from admin startup, not from here):</p> <pre><code>kubeadm join --token 43d52c.d72308004d523ac4 10.0.21.3:6443 </code></pre> <p>Download and run dashboard:</p> <pre><code>curl -L -s https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml | sed 's/targetPort: 8443/targetPort: 8443\n type: NodePort/' | \ kubectl apply -f - </code></pre> <p>Unfortunately the dashboard is not available. What should I do to correctly deploy it inside play-with-kubernetes?</p>
<p>You need heapster for dashboard to work. So execute these as well:</p> <pre><code>kubectl apply -f https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/rbac/heapster-rbac.yaml kubectl apply -f https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/influxdb/heapster.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml </code></pre> <p>Also, unless you want to fiddle with authentication you need to grant dashboard admin privileges with something like this:</p> <pre><code>kubectl create clusterrolebinding insecure-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard </code></pre> <p>Eventually a port link will appear (<code>30xxx</code>) but you will need to change the url scheme to https from http - and convince your browser that you don't care about the insecure certificate.</p> <p>You should have a working dashboard now. Piece of cake ;)</p>
<p>I'm trying to create an ingress within minikube. I have already enabled the ingress add on and checked all the associated services and pods have been added and are running.</p> <p>When I create the ingress I point it to a service.NodePort that is in the same namespace as the ingress. But when I describe the ingress the backend IP address is <code>&lt;none&gt;</code></p> <p>This is my deployment yaml</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: proxy labels: name: proxy --- apiVersion: apps/v1beta1 kind: Deployment metadata: name: deployment namespace: proxy labels: app: proxy spec: replicas: 1 template: metadata: labels: app: proxy spec: containers: - name: proxy image: wildapplications/proxy:latest imagePullPolicy: IfNotPresent ports: - containerPort: 8080 imagePullSecrets: - name: regsecret --- apiVersion: v1 kind: Service metadata: name: service namespace: proxy spec: type: NodePort ports: - port: 8080 targetPort: 8080 selector: app: proxy externalName: proxy --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress namespace: proxy annotations: ingress.kubernetes.io/rewrite-target: / spec: rules: - host: echo.example.com http: paths: - path: /test backend: serviceName: service servicePort: 8080 </code></pre> <p>when I describe the ingress I get</p> <pre><code>Name: ingress Namespace: proxy Address: 192.168.99.100 Default backend: default-http-backend:80 (172.17.0.14:8080) Rules: Host Path Backends ---- ---- -------- echo.example.com /test service:8080 (&lt;none&gt;) Annotations: rewrite-target: / Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 16m ingress-controller Ingress proxy/ingress Normal CREATE 15m ingress-controller Ingress proxy/ingress Normal UPDATE 15m ingress-controller Ingress proxy/ingress </code></pre> <p>Is there anything glaringly obvious as to why the ingress isnt resolving the backend specified to the service created directly above it?</p>
<p>I found the solution to my question so i'll post just in case someone else comes across something similar.</p> <p>I was trying to access the ingress through my minikube ip address (<code>minikube ip</code> to get the ip), this was providing a 404 because I was not using the host to navigate to it. </p> <p>To solve the 404 I executed</p> <pre><code>echo "$(minikube ip) echo.example.com" | sudo tee -a /etc/hosts </code></pre> <p>and then from there navigating to the host url in my browser.</p>
<p>This would be similar to putting a computer into a sleep mode. FYI: I'm asking this in the context of managing containers using Kubernetes.</p> <p>The reason for asking is that we would like to run many interactive jobs and want to suspend these jobs when users are not actively working on them so that resources can be released and used by other users.</p>
<p>Yes, you can use the <code>docker stop</code> and <code>docker start</code> commands. It is similar to suspending a PC.</p> <p>All data produced inside the container will be saved, but the main process will receive a <code>SIGTERM</code> signal. After <code>docker start</code>, the process will be started again.</p> <p>Also take a look at the <code>docker pause</code> command.</p> <blockquote> <p>The docker pause command suspends all processes in the specified containers. On Linux, this uses the cgroups freezer. Traditionally, when suspending a process the SIGSTOP signal is used, which is observable by the process being suspended. With the cgroups freezer the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. On Windows, only Hyper-V containers can be paused.</p> </blockquote>
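<p>A quick sketch of both options (the container name is made up):</p>

<pre><code># stop and later restart a container (main process receives SIGTERM, then is started again)
docker stop my-container
docker start my-container

# freeze and unfreeze all processes in place (cgroups freezer, no signal is delivered)
docker pause my-container
docker unpause my-container
</code></pre>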
<p>I have a Kubernetes cluster with two web apps deployed. I can't understand how to serve both apps on the same ports 80 and 443 so that each one is reachable through its own domain: web1.com and web2.com should each redirect to their specific service. Looking on the web I found topics like an Ingress Controller with an Nginx reverse proxy, and Traefik, to manage requests and routing.</p> <p>How can I do this?</p> <p>Thank you</p>
<p>I will assume you already have 2 <code>Service</code>s defined for your apps (<code>s1</code> and <code>s2</code> below).</p> <p>Kubernetes <code>Ingress</code> supports <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting" rel="nofollow noreferrer">name-based virtual hosting</a> (and much more):</p> <blockquote> <p>The following Ingress tells the backing loadbalancer to route requests based on the Host header.</p> </blockquote> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test spec: rules: - host: foo.bar.com http: paths: - backend: serviceName: s1 servicePort: 80 - host: bar.foo.com http: paths: - backend: serviceName: s2 servicePort: 80 </code></pre>
<p>Attempting to create a set of resources based on a file via <code>oc create</code> fails if they already exist.</p> <p>According to the <a href="https://docs.openshift.com/enterprise/3.2/cli_reference/basic_cli_operations.html#create" rel="noreferrer">docs here</a> <code>oc create</code> should:</p> <blockquote> <p>Parse a configuration file and create one or more OpenShift Enterprise objects ... <strong><em>Any existing resources are ignored.</em></strong></p> </blockquote> <p>(emphasis mine).</p> <p>I can't see any config options for this command or globally that would alter this behaviour and it seems to me to be counter to the docs.</p> <p>The command I ran is <code>oc create -f some.file</code></p> <p>The output is:</p> <pre><code>Error from server: services 'my-app' already exists Error from server: buildconfigs 'my-app' already exists Error from server: imagestreams 'my-app' already exists Error from server: deploymentconfigs 'my-app' already exists Error from server: routes 'my-app' already exists Error from server: secrets 'my-app' already exists </code></pre> <p>It also exits with a non-zero exit code, so it's not just a warning. Am I missing something obvious here or misunderstanding what the documentation is saying?</p> <p>I just want to be able to apply this file and ensure the state of the OpenShift project afterwards.</p>
<p>The documentation is perhaps a bit badly worded. What it is saying is that if you try and create an object of a specific type where an object of that type with that name already exists, your attempt to create the new one will be ignored.</p> <p>The situation you have will occur if you tried to create multiple instances of an application from the same raw resource definitions. There is no way using <code>oc create -f</code> from a set of raw resource definitions to override names from the command line so a second deployment is distinct.</p> <p>What you would need to do if you want to create multiple instances from the same resource definitions, is to convert the definitions into a template and parameterise on the name so that you can pass in a different name for different instances. That way there will not be a clash.</p> <p>Also, when you do create a set of resources, it is usually better to use the one name across all the resource types and not use a different name for each, eg., use just 'my-app-name' for all of them, and not separately, 'my-buildconfig', 'my-imagestream'.</p> <p>More importantly, you need to ensure you add a label with same key/value on them all so it is easy then to work with them together, including deleting them all in one go.</p> <p>What is the type of application you are trying to deploy? I can possibly point you at example templates you can use as a guide.</p> <hr> <p>UPDATE 1</p> <p>If you want to be able run <code>oc create -f</code> and have it not complain if the resources already exist, but create them if they don't, use <code>oc apply -f</code> instead.</p>
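<p>As a rough sketch of that approach (object bodies trimmed, all names hypothetical), you would wrap the definitions in a template with a <code>NAME</code> parameter and process it once per instance:</p>

<pre><code>apiVersion: v1
kind: Template
metadata:
  name: my-app-template
parameters:
- name: NAME
  description: Name applied to all objects of this instance
  required: true
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${NAME}
    labels:
      app: ${NAME}
  spec:
    ports:
    - port: 8080
    selector:
      app: ${NAME}
# ... BuildConfig, ImageStream, DeploymentConfig and Route parameterised the same way ...
</code></pre>

<p>Then each instance gets its own distinct, consistently labelled set of objects, for example:</p>

<pre><code># on older oc clients the parameter flag is -v rather than -p
oc process -f my-app-template.yaml -p NAME=my-app-blue | oc create -f -
</code></pre>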
<p>Referring to the kubernetes docs, under "Using the API -> Accessing the API -> Authenticating -> Authentication strategies -> Service Account Tokens", it says the following;</p> <pre><code> --service-account-key-file A file containing a PEM encoded key for signing bearer tokens. If unspecified, the API server’s TLS private key will be used. </code></pre> <p>and under "Using the API -> Accessing the API -> Managing Service Accounts -> Service account automation -> Token Controller", it says the following;</p> <pre><code>You must pass a service account private key file to the token controller in the controller-manager by using the --service-account-private-key-file option. The private key will be used to sign generated service account tokens. Similarly, you must pass the corresponding public key to the kube-apiserver using the --service-account-key-file option. The public key will be used to verify the tokens during authentication. </code></pre> <p>I am a bit confused, the former says that the flag (for the admission controller running as part of the apiserver, right?) will be used to sign the token, but the latter says that it will be used to verify the token and that the token will be signed by the controller manager. </p> <p>Please help!</p>
<p>The controller manager creates the tokens, signing them with the private key and storing them in Secret API objects. </p> <p>When the tokens are presented to the API server, the API server verifies the signature using the public key(s) set via flags. </p> <p>Admission is unrelated to signing or verifying the tokens. It is used to add a Secret volume mount to pod specs as pods are created, in order to mount a service account token into the pod for use by the application to speak to the Kubernetes API</p>
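<p>In other words, under a typical setup the two components are started with matching halves of the same key pair (the file paths below are only illustrative):</p>

<pre><code># controller-manager signs newly created service account tokens with the private key
kube-controller-manager --service-account-private-key-file=/etc/kubernetes/pki/sa.key ...

# apiserver verifies presented tokens with the corresponding public key
kube-apiserver --service-account-key-file=/etc/kubernetes/pki/sa.pub ...
</code></pre>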
<p>I have a multi-container pod in my kubernetes deployment:</p> <ul> <li>java</li> <li>redis</li> <li>nginx</li> </ul> <p>For every of those containers, there's a container with Prometheus exporter as well. </p> <p>The question is how can I expose those ports to Prometheus if annotations section supports only one port per pod?</p> <pre><code>annotations: prometheus.io/scrape: 'true' prometheus.io/port: 'xxxx' </code></pre> <p>but I need something like this:</p> <pre><code>annotations: prometheus.io/scrape: 'true' prometheus.io/port_1: 'xxxx' prometheus.io/port_2: 'yyyy' prometheus.io/port_3: 'zzzz' </code></pre> <p>Maybe there's some other method to scrape all metrics from my multi-container pods? Thanks in advance for any kind of help.</p>
<p>Here's an <a href="https://gist.github.com/bakins/5bf7d4e719f36c1c555d81134d8887eb" rel="noreferrer">example</a> job for Prometheus. Put it in your own config.</p> <p>Next, add:</p> <pre><code>annotations: prometheus.io/scrape: 'true' </code></pre> <p>to your pod metadata.</p> <p>And on every container that exposes /metrics to Prometheus, declare the appropriate port and name it <code>metrics</code>.</p> <p>That's it. Prometheus will scrape only those ports, so you avoid situations like your Redis instance receiving HTTP scrape requests on its 6379 port.</p>
<p>I have nginx and php-fpm containers in Google Container Engine and I would like to run it through unix socket instead of network.</p> <p>I also have the Google SQL Proxy container and it uses empty volume to provide connection to Google SQL databases through sockets placed on that volume that is shared with other containers. So I was thinking of using the same approach for PHP but it is not working.</p> <p>I have empty volume of emptyDir type mounted to php and nginx containers. The volume is mounted, I can see it in both containers. But if I try to use nginx I will get (socket)file not found in logs.</p> <p>I have also added the www-data(same uid as in php container) user into nginx container and set it in nginx conf but it didn't help.</p> <p>Is Google SQL Proxy special in some way and I have to use network for PHP or am I missing something here?</p>
<p>I managed to get the approach you described working. I use an <code>emptyDir</code> in a <code>deployment</code> so a shared <code>/sock</code> folder is mounted to both nginx and fpm:</p> <pre><code> - name: fpm ... volumeMounts: - name: php-socket mountPath: /sock ... - name: nginx ... volumeMounts: - name: php-socket mountPath: /sock ... volumes: - name: php-socket emptyDir: {} </code></pre> <p>I found this article helpful for the actual nginx and fpm configuration: <a href="https://medium.com/@shrikeh/setting-up-nginx-and-php-fpm-in-docker-with-unix-sockets-6fdfbdc19f91" rel="noreferrer">https://medium.com/@shrikeh/setting-up-nginx-and-php-fpm-in-docker-with-unix-sockets-6fdfbdc19f91</a></p> <p>The snag I hit was a file in my php-fpm docker container config called <code>zz-docker.conf</code> which contains <code>listen = [::]:9000</code>. Since that file sorts last alphabetically, it was clobbering my other config file, stopping the unix socket communication from working (i.e. <code>listen = /sock/php.sock</code>). I removed the problem line using this in the <code>Dockerfile</code>:</p> <pre><code>sed -i -e '/listen/d' /usr/local/etc/php-fpm.d/zz-docker.conf </code></pre>
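<p>For reference, a minimal sketch of the matching configuration on both sides, assuming the shared <code>/sock</code> mount path used above:</p>

<pre><code>; php-fpm pool config (e.g. www.conf): listen on the shared socket
listen = /sock/php.sock
listen.mode = 0666

# nginx server block: hand PHP requests to the same socket
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/sock/php.sock;
}
</code></pre>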
<p>We've setup the most recent versions of</p> <ul> <li>API Manager</li> <li>Identity Server</li> <li>Data Analytics Server</li> </ul> <p>in separate pods on a kubernetes cluster (we've used the existing scripts as a basis for creating the docker images), all sharing relevant MySQL databases hosted in another pod on the same cluster. After a lot of configuring everything works as expected, we can add users/roles, create/publish/invoke APIs and collect analytics on these calls. The last issue we're facing however is, that once an API got published, it's possible to invoke it in the store. The problem appears once the API-manager pod gets restarted (for example by scaling the relevant deployment down/up, not by using the carbon dashboard), once the API-Manager is up and running again after such a (hard) restart, the published APIs are still visible in the store, however, if one tries to make a call to one of the (previously working) endpoints, an 404 message is returned and the API-manager logs the following error:</p> <blockquote> <p>INFO {org.apache.synapse.mediators.builtin.LogMediator} - STATUS = Message dispatched to the main sequence. Invalid URL, RESOURCE = /"context"/"endpoint being called"</p> </blockquote> <p>where "context" is the context set for a given API while publishing it and "endpoint being called" the REST endpoint of the service which should receive the call. Further, if we try to make a change to the API in the publisher (i.e. re-publish it), there seems to be a 50/50 chance that it works and fixes all issues till the next restart of the pod or an error is thrown when clicking the "Next: Implement" button on the first step, which has been recorded <a href="https://wso2.org/jira/browse/APIMANAGER-3022" rel="nofollow noreferrer">here</a> already (although it says that it has been resolved in a previous version already).</p> <p>The remaining setup is the following:</p> <p><strong>Users/Roles</strong></p> <ul> <li><p>User: prod, used for creating and publishing an API in the publisher</p></li> <li><p>Roles: prod-role (empty role with no priviledges), internal/creator, internal/publisher</p></li> <li><p>User: cons, used for invoking an API in the store</p></li> <li><p>Roles: cons-role (empty role, no priviledges, just for providing access to the API in the store), internal/subscriber</p></li> </ul> <p><strong>Databases</strong></p> <ul> <li><p>WSO2_CARBON_DB referees to a database on the MySQL pod, used only by the API-Manager</p></li> <li><p>WSO2REG_DB is supposed to be the shared registry database between the three modules (am, das, is)</p></li> </ul> <p>excerpt from master-datasources.xml from the am:</p> <pre><code> &lt;datasource&gt; &lt;name&gt;WSO2_CARBON_DB&lt;/name&gt; &lt;description&gt;The datasource used by the registry&lt;/description&gt; &lt;jndiConfig&gt; &lt;name&gt;jdbc/WSO2CarbonDB&lt;/name&gt; &lt;/jndiConfig&gt; &lt;definition type="RDBMS"&gt; &lt;configuration&gt; &lt;url&gt;jdbc:mysql://mysql-apimdb:3306/amcarbondb?autoReconnect=true&amp;amp;useSSL=false&lt;/url&gt; &lt;username&gt;wso2carbon&lt;/username&gt; &lt;password&gt;abc&lt;/password&gt; &lt;driverClassName&gt;com.mysql.jdbc.Driver&lt;/driverClassName&gt; &lt;maxActive&gt;50&lt;/maxActive&gt; &lt;maxWait&gt;60000&lt;/maxWait&gt; &lt;testOnBorrow&gt;true&lt;/testOnBorrow&gt; &lt;validationQuery&gt;SELECT 1&lt;/validationQuery&gt; &lt;validationInterval&gt;30000&lt;/validationInterval&gt; &lt;/configuration&gt; &lt;/definition&gt; &lt;/datasource&gt; &lt;datasource&gt; 
&lt;name&gt;WSO2AM_DB&lt;/name&gt; &lt;description&gt;The datasource used for API Manager database&lt;/description&gt; &lt;jndiConfig&gt; &lt;name&gt;jdbc/WSO2AM_DB&lt;/name&gt; &lt;/jndiConfig&gt; &lt;definition type="RDBMS"&gt; &lt;configuration&gt; &lt;url&gt;jdbc:mysql://mysql-apimdb:3306/apimgtdb?autoReconnect=true&amp;amp;useSSL=false&lt;/url&gt; &lt;username&gt;wso2carbon&lt;/username&gt; &lt;password&gt;abc&lt;/password&gt; &lt;defaultAutoCommit&gt;false&lt;/defaultAutoCommit&gt; &lt;driverClassName&gt;com.mysql.jdbc.Driver&lt;/driverClassName&gt; &lt;maxActive&gt;50&lt;/maxActive&gt; &lt;maxWait&gt;60000&lt;/maxWait&gt; &lt;testOnBorrow&gt;true&lt;/testOnBorrow&gt; &lt;validationQuery&gt;SELECT 1&lt;/validationQuery&gt; &lt;validationInterval&gt;30000&lt;/validationInterval&gt; &lt;/configuration&gt; &lt;/definition&gt; &lt;/datasource&gt; &lt;datasource&gt; &lt;name&gt;WSO2UM_DB&lt;/name&gt; &lt;description&gt;The datasource used by user manager&lt;/description&gt; &lt;jndiConfig&gt; &lt;name&gt;jdbc/WSO2UM_DB&lt;/name&gt; &lt;/jndiConfig&gt; &lt;definition type="RDBMS"&gt; &lt;configuration&gt; &lt;url&gt;jdbc:mysql://mysql-apimdb:3306/userdb?autoReconnect=true&amp;amp;useSSL=false&lt;/url&gt; &lt;username&gt;wso2carbon&lt;/username&gt; &lt;password&gt;abc&lt;/password&gt; &lt;driverClassName&gt;com.mysql.jdbc.Driver&lt;/driverClassName&gt; &lt;maxActive&gt;50&lt;/maxActive&gt; &lt;maxWait&gt;60000&lt;/maxWait&gt; &lt;testOnBorrow&gt;true&lt;/testOnBorrow&gt; &lt;validationQuery&gt;SELECT 1&lt;/validationQuery&gt; &lt;validationInterval&gt;30000&lt;/validationInterval&gt; &lt;/configuration&gt; &lt;/definition&gt; &lt;/datasource&gt; &lt;datasource&gt; &lt;name&gt;WSO2REG_DB&lt;/name&gt; &lt;description&gt;The datasource used for registry&lt;/description&gt; &lt;jndiConfig&gt; &lt;name&gt;jdbc/WSO2REG_DB&lt;/name&gt; &lt;/jndiConfig&gt; &lt;definition type="RDBMS"&gt; &lt;configuration&gt; &lt;url&gt;jdbc:mysql://mysql-apimdb:3306/regdb?autoReconnect=true&amp;amp;useSSL=false&lt;/url&gt; &lt;username&gt;wso2carbon&lt;/username&gt; &lt;password&gt;abc&lt;/password&gt; &lt;driverClassName&gt;com.mysql.jdbc.Driver&lt;/driverClassName&gt; &lt;maxActive&gt;50&lt;/maxActive&gt; &lt;maxWait&gt;60000&lt;/maxWait&gt; &lt;testOnBorrow&gt;true&lt;/testOnBorrow&gt; &lt;validationQuery&gt;SELECT 1&lt;/validationQuery&gt; &lt;validationInterval&gt;30000&lt;/validationInterval&gt; &lt;/configuration&gt; &lt;/definition&gt; &lt;/datasource&gt; </code></pre> <p>All of these databases get populated upon usage and are initialized using the DBscripts from the different products for the initial setup. 
Further, the registry.xml is configured as follows (showing just the changes, the remainder of the file is as it is configured by default): </p> <pre><code>&lt;currentDBConfig&gt;wso2registry&lt;/currentDBConfig&gt; &lt;readOnly&gt;false&lt;/readOnly&gt; &lt;enableCache&gt;false&lt;/enableCache&gt; &lt;registryRoot&gt;/&lt;/registryRoot&gt; &lt;dbConfig name="wso2registry"&gt; &lt;dataSource&gt;jdbc/WSO2CarbonDB&lt;/dataSource&gt; &lt;/dbConfig&gt; &lt;dbConfig name="govregistry"&gt; &lt;dataSource&gt;jdbc/WSO2REG_DB&lt;/dataSource&gt; &lt;/dbConfig&gt; &lt;remoteInstance url="https://localhost:9943/registry"&gt; &lt;id&gt;gov&lt;/id&gt; &lt;dbConfig&gt;govregistry&lt;/dbConfig&gt; &lt;cacheId&gt;wso2carbon@jdbc:mysql://mysql-apimdb:3306/regdb&lt;/cacheId&gt; &lt;readOnly&gt;false&lt;/readOnly&gt; &lt;enableCache&gt;false&lt;/enableCache&gt; &lt;registryRoot&gt;/&lt;/registryRoot&gt; &lt;/remoteInstance&gt; &lt;mount path="/_system/governance" overwrite="true"&gt; &lt;instanceId&gt;gov&lt;/instanceId&gt; &lt;targetPath&gt;/_system/governance&lt;/targetPath&gt; &lt;/mount&gt; &lt;mount path="/_system/config" overwrite="true"&gt; &lt;instanceId&gt;gov&lt;/instanceId&gt; &lt;targetPath&gt;/_system/config&lt;/targetPath&gt; &lt;/mount&gt; </code></pre> <p>In the end it seems to be an issue with some data being lost upon restart as the memory on the am pod is not persistent, since we've moved all databases to a persistent storage on another pod however, this shouldn't be a problem -- unless we missed something. Is there another registry/data source that need to be added/changed? All other default data sources not shown above (STATS, MB_STORE, METRICS) are setup to use the MySQL server as well, so unless there is something else than the data sources provided in the datasource folder, all data should remain on the corresponding MySQL databases.</p>
<p>I guess you haven't used any persistent storage for API Manager. When you create an API, the API artifact is created on the file system and stored in the wso2am-2.1.0/repository/deployment/server/synapse-configs/default/api location. You need persistent storage for 'wso2am-2.1.0/repository/deployment/server/'. In your case, after the restart you don't have the relevant data because you are starting from a fresh pack.</p>
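<p>A minimal sketch of what that could look like in the Kubernetes deployment (the claim name, size and in-container path are assumptions based on the default product layout; adjust them to your image):</p>

<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: apim-server-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre>

<p>and in the API Manager pod spec:</p>

<pre><code>    # inside the API Manager container definition
    volumeMounts:
    - name: server-data
      mountPath: /home/wso2carbon/wso2am-2.1.0/repository/deployment/server
  volumes:
  - name: server-data
    persistentVolumeClaim:
      claimName: apim-server-data
</code></pre>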
<p>I'm running a kubernetes 1.6.2 cluster across three nodes in different zones in GKE and I'm trying to deploy my statefulset where each pod in the statefulset gets a PV attached to it. The problem is that kubernetes is creating the PVs in the one zone where I don't have a node!</p> <pre><code>$ kubectl describe node gke-multi-consul-default-pool-747c9378-zls3|grep 'zone=us-central1' failure-domain.beta.kubernetes.io/zone=us-central1-a $ kubectl describe node gke-multi-consul-default-pool-7e987593-qjtt|grep 'zone=us-central1' failure-domain.beta.kubernetes.io/zone=us-central1-f $ kubectl describe node gke-multi-consul-default-pool-8e9199ea-91pj|grep 'zone=us-central1' failure-domain.beta.kubernetes.io/zone=us-central1-c $ kubectl describe pv pvc-3f668058-2c2a-11e7-a7cd-42010a8001e2|grep 'zone=us-central1' failure-domain.beta.kubernetes.io/zone=us-central1-b </code></pre> <p>I'm using the standard storageclass which has no default zone set:</p> <pre><code>$ kubectl describe storageclass standard Name: standard IsDefaultClass: Yes Annotations: storageclass.beta.kubernetes.io/is-default-class=true Provisioner: kubernetes.io/gce-pd Parameters: type=pd-standard Events: &lt;none&gt; </code></pre> <p>So I thought that the scheduler would automatically provision the volumes in a zone where a cluster node existed, but it doesn't seem to be doing that.</p> <p>For reference, here is the yaml for my statefulset:</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: "{{ template "fullname" . }}" labels: heritage: {{.Release.Service | quote }} release: {{.Release.Name | quote }} chart: "{{.Chart.Name}}-{{.Chart.Version}}" component: "{{.Release.Name}}-{{.Values.Component}}" spec: serviceName: "{{ template "fullname" . }}" replicas: {{default 3 .Values.Replicas}} template: metadata: name: "{{ template "fullname" . }}" labels: heritage: {{.Release.Service | quote }} release: {{.Release.Name | quote }} chart: "{{.Chart.Name}}-{{.Chart.Version}}" component: "{{.Release.Name}}-{{.Values.Component}}" app: "consul" annotations: pod.alpha.kubernetes.io/initialized: "true" spec: securityContext: fsGroup: 1000 containers: - name: "{{ template "fullname" . }}" image: "{{.Values.Image}}:{{.Values.ImageTag}}" imagePullPolicy: "{{.Values.ImagePullPolicy}}" ports: - name: http containerPort: {{.Values.HttpPort}} - name: rpc containerPort: {{.Values.RpcPort}} - name: serflan-tcp protocol: "TCP" containerPort: {{.Values.SerflanPort}} - name: serflan-udp protocol: "UDP" containerPort: {{.Values.SerflanUdpPort}} - name: serfwan-tcp protocol: "TCP" containerPort: {{.Values.SerfwanPort}} - name: serfwan-udp protocol: "UDP" containerPort: {{.Values.SerfwanUdpPort}} - name: server containerPort: {{.Values.ServerPort}} - name: consuldns containerPort: {{.Values.ConsulDnsPort}} resources: requests: cpu: "{{.Values.Cpu}}" memory: "{{.Values.Memory}}" env: - name: INITIAL_CLUSTER_SIZE value: {{ default 3 .Values.Replicas | quote }} - name: STATEFULSET_NAME value: "{{ template "fullname" . 
}}" - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: STATEFULSET_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: datadir mountPath: /var/lib/consul - name: gossip-key mountPath: /etc/secrets readOnly: true - name: config mountPath: /etc/consul - name: tls mountPath: /etc/tls lifecycle: preStop: exec: command: - /bin/sh - -c - consul leave livenessProbe: exec: command: - consul - members initialDelaySeconds: 300 timeoutSeconds: 5 command: - "/bin/sh" - "-ec" - "/tmp/consul-start.sh" volumes: - name: config configMap: name: consul - name: gossip-key secret: secretName: {{ template "fullname" . }}-gossip-key - name: tls secret: secretName: consul volumeClaimTemplates: - metadata: name: datadir annotations: {{- if .Values.StorageClass }} volume.beta.kubernetes.io/storage-class: {{.Values.StorageClass | quote}} {{- else }} volume.alpha.kubernetes.io/storage-class: default {{- end }} spec: accessModes: - "ReadWriteOnce" resources: requests: # upstream recommended max is 700M storage: "{{.Values.Storage}}" </code></pre>
<p>There is a bug open for this issue <a href="https://github.com/kubernetes/kubernetes/issues/50115" rel="nofollow noreferrer">here</a>.</p> <p>The workaround in the meantime is to set the <code>zones</code> parameter in your <code>StorageClass</code> to specify the exact zones where your Kubernetes cluster has nodes. <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#gce" rel="nofollow noreferrer">Here</a> is an example.</p>
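<p>A sketch of such a <code>StorageClass</code>, restricted to the zones where the nodes from the question actually live:</p>

<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard-zoned
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a,us-central1-c,us-central1-f
</code></pre>

<p>Then point the <code>volume.beta.kubernetes.io/storage-class</code> annotation in your <code>volumeClaimTemplates</code> at <code>standard-zoned</code> (or make it the default class) so new volumes are only provisioned in those zones.</p>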
<p>I've setup a NodePort service using the following config:</p> <p><strong>wordpress-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: wordpress name: wordpress spec: type: NodePort ports: - port: 80 targetPort: 80 protocol: TCP selector: app: wordpress </code></pre> <p>Is this sufficient to access the service externally, if so how can I now access the service? What details do I need - and how do I determine them - for example node IP.</p>
<blockquote> <p>For Kubernetes on GCE:</p> </blockquote> <p>We had the same question regarding services of type NodePort: How do we access node port services from our own host?</p> <p>@ivan.sim 's answer (nodeIp:nodePort) is on the mark; however, you still wouldn't be able to access your service unless you add a firewall ingress (inbound to google cloud) traffic rule on the VPC network console to allow your host to be able to access your compute node</p> <p><a href="https://i.stack.imgur.com/8zFOt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8zFOt.png" alt="enter image description here"></a> the above rule is dangerous and should be used only during development</p> <p>You can find the node port using either the Google Cloud console or by running subsequent kubectl commands to find out the node running your pod which has your container. i.e <code>kubectl get pods , kubectl describe pod your-pod-name, kubectl describe node node-that-runs-your-pod</code> .status.addresses has your ExternalIP</p> <p>It would be great if we could extract the node ip running our container in the pod using only a <code>label/selector</code> and a few lines of commands, so here is what we did, in this case our selector is <code>app: your-label</code>:</p> <pre><code>$ nodename=$(kubectl get pods -o jsonpath='{.items[?(@.metadata.labels.app=="your-label")].spec.nodeName}') $ nodeIp=$(kubectl get nodes -o jsonpath='{.items[?(@.metadata.name=="'$(echo $nodename)'")].status.addresses[?(@.type=="ExternalIP")].address}') $ echo $nodeIp </code></pre> <p><strong>notice</strong>: we used json path to extract the information we desired, for more on json path see: <a href="https://kubernetes.io/docs/user-guide/jsonpath/" rel="noreferrer">json path</a></p> <p>You could certainly turn this into a script that takes a label/selector as input and outputs an external ip of the node running your container !!!</p> <p>To get the nodeport just type:</p> <pre><code>$ kubectl get services </code></pre> <p>under the <strong>PORT(S)</strong> columns you will see something like targetPort:nodePort. This nodePort is what you want.</p> <pre><code>nodeIp:nodePort </code></pre>
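<p>If you prefer the command line over the console for the firewall part, something along these lines should work (the rule name and source range are placeholders; as noted above, keep such rules tight and only for development):</p>

<pre><code>gcloud compute firewall-rules create allow-k8s-nodeports \
    --allow tcp:30000-32767 \
    --source-ranges 203.0.113.4/32   # replace with your own IP/CIDR
</code></pre>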
<p>I have to write a deployment file for my application which takes run time commands and arguments at runtime. e.g. <code>./foo bar -a=A -b=B</code> This is my deployment file :</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: foo spec: replicas: 1 template: metadata: labels: name: foo spec: containers: - name: foo image: username/image:tag # command that to be executed at run time command: ["bar"] args: # This is the flag to pass at runtime - -a=A - -b=B ports: - containerPort: 9500 </code></pre> <p>It says container command 'foo' not found or doesnot exist. i'm passing a script as entrypoint and it has <code>exec /usr/local/bin/foo</code> . What's wrong with it ?</p>
<p>If you want to start <code>./foo bar -a=A -b=B</code> in the container, your deployment should contain</p> <pre><code>- name: foo image: username/image:tag command: ["foo"] args: ["bar", "-a=A", "-b=B"] </code></pre> <p><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a></p> <p>Be sure the path to the <strong>foo</strong> binary is in your <strong>$PATH</strong> environment variable inside the <code>username/image:tag</code> image (or use the full path <code>/usr/local/bin/foo</code>).</p> <p>You can check <code>$PATH</code> with </p> <pre><code>kubectl run test -ti --rm --image=username/image:tag --command -- echo $PATH kubectl run test -ti --rm --image=username/image:tag --command -- foo </code></pre>
<p>The supported versions are listed here:</p> <p><a href="https://cloud.google.com/container-engine/supported-versions" rel="noreferrer">https://cloud.google.com/container-engine/supported-versions</a></p> <p>but I'm wondering if there is a way to programatically get this list (besides scraping that page, I guess) via <code>gcloud</code> or some similar tool?</p> <p>I find that hard-coding a single version breaks often because Google keeps updating the supported versions. At the same time, I /would/ like to specify at least the large version (e.g., 1.7.x) because it appears that 1.8.x introduces some breaking changes, for example.</p>
<p>The gcloud "get-server-config" will get you the data you want. Specifying the "--format" option can also return it in a way that's easy to parse:</p> <pre><code>gcloud container get-server-config --zone=us-central1-f --format=json </code></pre> <p>If you wish to control when updates happen, the maintenance window option may also help you control when you want them to occur. <a href="https://cloud.google.com/container-engine/docs/maintenance-window" rel="noreferrer">https://cloud.google.com/container-engine/docs/maintenance-window</a></p>
<p>When installing <code>cryptography</code> package I get the following error:</p> <pre><code>Invalid environment marker: platform_python_implementation != 'PyPy' </code></pre> <p>It seems upgrading setuptools would solve this. Is there a way I could edit Build Config YAML file so that <code>pip install --upgrade setuptools</code> runs before any package is built?</p>
<p>Run:</p> <pre><code>oc set env bc/yourappname UPGRADE_PIP_TO_LATEST=true </code></pre> <p>See:</p> <ul> <li><a href="https://github.com/sclorg/s2i-python-container/tree/master/2.7" rel="noreferrer">https://github.com/sclorg/s2i-python-container/tree/master/2.7</a></li> </ul> <p>When you do this it should update <code>pip</code>, <code>setuptools</code> and <code>wheel</code> packages.</p> <p>Only problem is that right at this minute, the changes that were made such that <code>setuptools</code> and <code>wheel</code> were also updated aren't yet in RHEL based Python S2I images. So if you are using OpenShift Container Platform (as used by OpenShift Online), it will not work as required.</p> <p>First option to workaround that is to use the CentOS based images instead for now:</p> <pre><code>oc new-app centos/python-27-centos7~https://url-to-your-repo </code></pre> <p>Second option is to add an executable shell script called <code>.s2i/bin/assemble</code> in your source code repo which contains:</p> <pre><code>#!/bin/bash set -eo pipefail pip install --upgrade pip setuptools wheel /usr/libexec/s2i/assemble </code></pre> <p>This will be executed instead of the normal <code>assemble</code> script, allowing you to install the updates. You then run the original <code>assemble</code> script.</p>
<p>My Kubernetes pods and containers are not starting. They are stuck in with the status <code>ContainerCreating</code>. </p> <p>I ran the command <code>kubectl describe po PODNAME</code>, which lists the events and I see the following error:</p> <pre><code>Type Reason Message Warning FailedSync Error syncing pod Normal SandboxChanged Pod sandbox changed, it will be killed and re-created. </code></pre> <p>The <code>Count</code> column indicates that these errors are being repeated over and over again, roughly once a second. The full output is below from this command is below, but how do I go about debugging this? I'm not even sure what these errors mean.</p> <pre><code>Name: ocr-extra-2939512459-3hkv1 Namespace: ocr-da-cluster Node: gke-da-ocr-api-gce-cluster-extra-pool-65029b63-6qs2/10.240.0.11 Start Time: Tue, 24 Oct 2017 21:05:01 -0400 Labels: component=ocr pod-template-hash=2939512459 role=extra Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"ocr-da-cluster","name":"ocr-extra-2939512459","uid":"d58bd050-b8f3-11e7-9f9e-4201... Status: Pending IP: Created By: ReplicaSet/ocr-extra-2939512459 Controlled By: ReplicaSet/ocr-extra-2939512459 Containers: ocr-node: Container ID: Image: us.gcr.io/ocr-api/ocr-image Image ID: Ports: 80/TCP, 443/TCP, 5555/TCP, 15672/TCP, 25672/TCP, 4369/TCP, 11211/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 31 memory: 10Gi Liveness: http-get http://:http/ocr/live delay=270s timeout=30s period=60s #success=1 #failure=5 Readiness: http-get http://:http/_ah/warmup delay=180s timeout=60s period=120s #success=1 #failure=3 Environment: NAMESPACE: ocr-da-cluster (v1:metadata.namespace) Mounts: /var/log/apache2 from apachelog (rw) /var/log/celery from cellog (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-dhjr5 (ro) log-apache2-error: Container ID: Image: busybox Image ID: Port: &lt;none&gt; Args: /bin/sh -c echo Apache2 Error &amp;&amp; sleep 90 &amp;&amp; tail -n+1 -F /var/log/apache2/error.log State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 20m Environment: &lt;none&gt; Mounts: /var/log/apache2 from apachelog (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-dhjr5 (ro) log-worker-1: Container ID: Image: busybox Image ID: Port: &lt;none&gt; Args: /bin/sh -c echo Celery Worker &amp;&amp; sleep 90 &amp;&amp; tail -n+1 -F /var/log/celery/worker*.log State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 20m Environment: &lt;none&gt; Mounts: /var/log/celery from cellog (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-dhjr5 (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: apachelog: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: cellog: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-dhjr5: Type: Secret (a volume populated by a Secret) SecretName: default-token-dhjr5 Optional: false QoS Class: Burstable Node-Selectors: beta.kubernetes.io/instance-type=n1-highcpu-32 Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s node.alpha.kubernetes.io/unreachable:NoExecute for 300s Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 10m 10m 2 default-scheduler Warning FailedScheduling No nodes are available that match all of 
the following predicates:: Insufficient cpu (10), Insufficient memory (2), MatchNodeSelector (2). 10m 10m 1 default-scheduler Normal Scheduled Successfully assigned ocr-extra-2939512459-3hkv1 to gke-da-ocr-api-gce-cluster-extra-pool-65029b63-6qs2 10m 10m 1 kubelet, gke-da-ocr-api-gce-cluster-extra-pool-65029b63-6qs2 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "apachelog" 10m 10m 1 kubelet, gke-da-ocr-api-gce-cluster-extra-pool-65029b63-6qs2 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "cellog" 10m 10m 1 kubelet, gke-da-ocr-api-gce-cluster-extra-pool-65029b63-6qs2 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-dhjr5" 10m 1s 382 kubelet, gke-da-ocr-api-gce-cluster-extra-pool-65029b63-6qs2 Warning FailedSync Error syncing pod 10m 0s 382 kubelet, gke-da-ocr-api-gce-cluster-extra-pool-65029b63-6qs2 Normal SandboxChanged Pod sandbox changed, it will be killed and re-created. </code></pre>
<p>Check your resource limits. I faced the same issue and the reason in my case was that I was using <code>m</code> instead of <code>Mi</code> for memory limits and memory requests.</p>
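<p>For reference, a resources block with the intended units looks like the sketch below (the values are arbitrary). A lowercase <code>m</code> suffix on memory means milli-units, i.e. fractions of a byte, which is almost never what you want, while <code>m</code> on CPU correctly means millicores.</p>

<pre><code>resources:
  requests:
    memory: "512Mi"   # Mi = mebibytes
    cpu: "250m"       # m = millicores, correct for CPU
  limits:
    memory: "1Gi"
    cpu: "500m"
</code></pre>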
<p>I set up Kubernetes on CoreOS on bare metal using the <a href="https://coreos.com/kubernetes/docs/latest/kubernetes-on-generic-platforms.html" rel="noreferrer">generic install scripts</a>. It's running the current stable release, 1298.6.0, with Kubernetes version 1.5.4.</p> <p>We'd like to have a highly available master setup, but we don't have enough hardware at this time to dedicate three servers to serving only as Kubernetes masters, so I would like to be able to allow user pods to be scheduled on the Kubernetes master. I set --register-schedulable=true in /etc/systemd/system/kubelet.service but it still showed up as SchedulingDisabled.</p> <p>I tried to add settings for including the node as a worker, including adding worker TLS certs to /etc/kubernetes/ssl, adding those settings to kubelet.service, adding an /etc/kubernetes/worker-kubeconfig.yaml that pointed to those certs, and added that information to the /etc/kubernetes/manifests/kube-proxy.yaml. I used my existing nodes as a template for what to add. This registered another node under the master's hostname and then both it and the original master node showed up as NotReady,SchedulingDisabled.</p> <p><a href="https://stackoverflow.com/questions/37873164/are-there-issues-with-running-user-pods-on-a-kubernetes-master-node">This question</a> indicates that scheduling pods on the master node should be possible, but there is barely anything else that I can find on the subject.</p>
<p>If you are using Kubernetes 1.7 and above:</p> <pre><code>kubectl taint node mymasternode node-role.kubernetes.io/master:NoSchedule- </code></pre> <p>The trailing <code>-</code> removes the <code>NoSchedule</code> taint from the master node, so regular pods can be scheduled on it.</p>
<p>Is there any way to deploy a particular version of Kubernetes to ACS in Azure? Using Azure resource manager, or <code>az</code> command.</p> <p>It doesn't seem like <a href="https://learn.microsoft.com/en-us/azure/templates/microsoft.containerservice/containerservices" rel="nofollow noreferrer">template format</a> for container service shows this info.</p>
<p>You can specify the version in ACS in selected regions. See the template example. <a href="https://github.com/weinong/azure-quickstart-templates/tree/master/101-acs-kubernetes-with-version" rel="nofollow noreferrer">https://github.com/weinong/azure-quickstart-templates/tree/master/101-acs-kubernetes-with-version</a></p> <p>We will be updating azure cli with this feature soon. </p>
<p>I want to create a kubernetes cluster.</p> <p>I want to have a single cluster with master, worker and proxy nodes in High Availability with nodes in both locations.</p> <p>Is it documented the maximum distance or latency supported between locations? Additionally I would appreciate links to official documentation, I couldn't find anything related.</p> <p>Thank you.</p>
<p>There is no hard latency limit between nodes in a Kubernetes cluster; the relevant health-check timeouts are configurable parameters.</p> <p>For the <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">kubelet</a> on a worker node it is:</p> <pre><code>--node-status-update-frequency duration Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in nodecontroller. (default 10s) </code></pre> <p>For the <a href="https://kubernetes.io/docs/admin/kube-controller-manager/" rel="nofollow noreferrer">controller-manager</a> on a master node they are:</p> <pre><code> --node-monitor-grace-period duration Amount of time which we allow running Node to be unresponsive before marking it unhealthy. Must be N times more than kubelet's nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet to post node status. (default 40s) --node-monitor-period duration The period for syncing NodeStatus in NodeController. (default 5s) --node-startup-grace-period duration Amount of time which we allow starting Node to be unresponsive before marking it unhealthy. (default 1m0s) </code></pre>
<p>The <code>kubectl cp</code> command only seems to work when the pod is still running. Is there a way to copy a directory of output files from a completed pod to my local machine?</p>
<p>By definition not a <em>completed</em> Pod, as those are ephemeral, however the answer to your question is to change the definition of what "completed" means.</p> <p>The most straightforward answer to your question is to either mount a network Volume into the Pod, so its files survive termination, or to have the Pod copy its own files out to some extra-cluster location (maybe s3, or an FTP site).</p> <p>But I suspect you don't mean under those circumstances, or you would have already done so.</p> <p>One other example might be to have the Pod wait some defined timeout period for the appearance of a sentinel file so you can inform the Pod that you have successfully copied the files out and that it is now free to terminate as expected. Or if it's more convenient, have the Pod listen on a socket and stream a tar (or zip) to a connection, enabling a more traditional request-response lifecycle, and at the end of the response the Pod shuts down.</p> <p>Implied in all those work-around steps is that you are notified of the "almost done"-ness of the Pod through another means. Without more information about your setup, it's hard to go into that, and might not even be necessary. So feel free to add clarifications as necessary.</p>
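<p>A rough sketch of the sentinel-file idea, with every name made up: the container produces its output, then blocks until a marker file appears, giving you a window to copy the results out with <code>kubectl cp</code>.</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: copy-me
spec:
  restartPolicy: Never
  containers:
  - name: job
    image: busybox
    command:
    - sh
    - -c
    - |
      mkdir -p /output
      date | tee /output/result.txt     # stand-in for the real workload writing to /output
      echo "done, waiting for pickup"
      timeout=600
      while [ ! -f /output/.copied ] &amp;&amp; [ $timeout -gt 0 ]; do
        sleep 5; timeout=$((timeout-5))
      done
</code></pre>

<p>While it waits you can run <code>kubectl cp copy-me:/output ./output</code> and then <code>kubectl exec copy-me -- touch /output/.copied</code> to let the pod finish.</p>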
<p>I am trying to follow this short doc about how to use Gitlab CI with a Kubernetes Cluster that I am creating with Openstack: <a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/install/kubernetes.html</a> </p> <p>I manage to create it but any time I create the ConfigMap and Deployment as specified in the previous link the pods it creates are stuck in a CrashLoopBackOff like this:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE gitlab gitlab-runner-3998042981-f8dlh 0/1 CrashLoopBackOff 36 2h gitlab gitlab-runner-3998042981-g9m5g 0/1 CrashLoopBackOff 36 2h gitlab gitlab-runner-3998042981-q0bth 0/1 CrashLoopBackOff 36 2h gitlab gitlab-runner-3998042981-rjztk 0/1 CrashLoopBackOff 36 2h kube-system coredns-1977636023-1q47s 1/1 Running 0 21h kube-system grafana-1173934969-vw49f 1/1 Running 0 21h kube-system node-exporter-gitlab-ci-hc6k3ffax54o-minion-0 1/1 Running 0 21h kube-system node-exporter-gitlab-ci-hc6k3ffax54o-minion-1 1/1 Running 0 21h kube-system prometheus-873144915-s9m6j 1/1 Running 0 21h </code></pre> <p>My problem is that I am not able to know why this happens since pod logs are not available when they are not created. </p> <p>Apart from that I just do not know what to do with the specified volumes since I just think this has some relation with the crashloops.</p> <p>Deployment specifies:</p> <pre><code>- configMap: name: gitlab-runner name: config - hostPath: path: /usr/share/ca-certificates/mozilla name: cacerts </code></pre> <p>I have found that:</p> <blockquote> <p>A hostPath volume mounts a file or directory from the host node’s filesystem into your pod</p> </blockquote> <p>After running the pods without the cacerts volume everything is created but afterwards no job will be executed.</p> <p>Log from any pod:</p> <pre><code>Starting multi-runner from /etc/gitlab-runner/config.toml ... builds=0 Running in system-mode. Configuration loaded builds=0 Metrics server disabled ERROR: Checking for jobs... forbidden runner=&lt;PARTOFTHETOKEN&gt; ERROR: Checking for jobs... forbidden runner=&lt;PARTOFTHETOKEN&gt; ERROR: Checking for jobs... forbidden runner=&lt;PARTOFTHETOKEN&gt; ERROR: Runner https://URL/ci&lt;TOKEN&gt; is not healthy and will be disabled! </code></pre>
<p>The current docs about running GitLab CI on a Kubernetes cluster are not clear enough.</p> <p>You need to run gitlab-runner register somewhere with the registration token you get from the Runners admin page of your GitLab instance, grab the resulting runner token from the generated config (cat /etc/gitlab-runner/config.toml | grep token), and paste it into your deployment config (the ConfigMap) so the runner can receive jobs from CI.</p> <blockquote> <p>UPDATE 2019: gitlab.com docs now make it clear: <a href="https://docs.gitlab.com/runner/register/#gnulinux" rel="nofollow noreferrer">https://docs.gitlab.com/runner/register/#gnulinux</a></p> </blockquote>
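<p>For example, the registration can be done once, non-interactively, from any machine that has the runner binary (the URL and token below are placeholders):</p>

<pre><code>gitlab-runner register --non-interactive \
    --url https://gitlab.example.com/ci \
    --registration-token YOUR_REGISTRATION_TOKEN \
    --executor kubernetes \
    --description k8s-runner

# the runner token the Kubernetes deployment/ConfigMap needs:
grep token /etc/gitlab-runner/config.toml
</code></pre>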
<p>The problem seems to have been <a href="https://stackoverflow.com/questions/33711700/how-to-prevent-two-volume-claims-to-claim-the-same-volume-on-kubernetes">solved a long time ago</a>, but as the answer and the comments do not provide real solutions, I would like to get some help from experienced users</p> <p>The error is the following (when describing the pod, which stays in the ContainerCreating state): </p> <pre><code>Multi-Attach error for volume "pvc-xxx" Volume is already exclusively attached to one node and can't be attached to another </code></pre> <p>This all runs on GKE. I had a previous cluster, and the problem never occurred. I have reused the same disk when creating this new cluster, which may or may not be related.</p> <p><a href="https://gist.github.com/deesx/cb2b4180cc066c2e08d5cd0fbad06323" rel="noreferrer">Here are the full yaml config files</a> (I'm leaving the relevant code part commented out to highlight it; it is not in effective use)</p> <p>Thanks in advance for any obvious workarounds</p>
<p>Based on your description what you are experiencing is exactly what is supposed to happen.</p> <p>You are using <code>gcePersistentDisk</code> in your <a href="https://gist.github.com/deesx/cb2b4180cc066c2e08d5cd0fbad06323#file-volume-yml" rel="noreferrer">PV/PVC definition</a>. The <code>accessMode</code> is <code>ReadWriteOnce</code> - this means that this PV can only be attached to a single <strong>Node</strong> (stressing <strong>Node</strong> here, there can be multiple Pods running on the same <strong>Node</strong> using the same PV). There is not much you can do about this; <code>gcePersistentDisk</code> is like a remote block device, it's not possible to mount it on multiple <strong>Nodes</strong> simultaneously (unless <em>read only</em>).</p> <p>There is a nice <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">table that shows which PVs support <code>ReadWriteMany</code></a> (that is, write access on multiple <strong>Nodes</strong> at the same time):</p> <blockquote> <p><strong>Important!</strong> A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.</p> </blockquote>
<p>I am currently using the Kubernetes Executor for GitLab CI, and according to <a href="https://docs.gitlab.com/runner/executors/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/executors/kubernetes.html</a>: "<em>At this time hostPath, PVC, configMap, and secret volume types are supported</em>".</p> <p>I was wondering if it is possible to have a Flex Volume behind a Persistent Volume Claim in Kubernetes.</p>
<p>Any type of PV can back a PVC. You need to create the PV by hand and then specify the name in <a href="https://kubernetes.io/docs/api-reference/v1.8/#persistentvolumeclaimspec-v1-core" rel="nofollow noreferrer"><code>.spec.volumeName</code></a> of the PVC (or use <code>.spec.selector</code> with labels). Like so:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: task-pv-claim spec: volumeName: task-pv-volume storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre> <p>As a reference I used this PV (but the type of the PV does not matter):</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: task-pv-volume spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/tmp/data" </code></pre> <p>(Alternatively, automatic provisioning with your own storageclass is also possible, but I guess this is not your use case.)</p>
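<p>Since you mentioned FlexVolume specifically: a PV backed by a FlexVolume driver should work the same way, as long as the driver is installed on your nodes. A hedged, untested sketch (the driver name and its options are placeholders for your actual driver):</p> <pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-flex
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: "example.com/example-driver"   # placeholder driver name
    fsType: "ext4"
    options:
      volumeID: "vol-0123456789abcdef"     # placeholder, driver-specific option
</code></pre> <p>You would then point the PVC at it with <code>volumeName: task-pv-flex</code> as above.</p>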
<p>I have a kubernetes cluster that is running in out network and have setup an NFS server on another machine in the same network. I am able to ssh to any of the nodes in the cluster and mount from the server by running <code>sudo mount -t nfs 10.17.10.190:/export/test /mnt</code> but whenever my test pod tries to use an nfs persistent volume that points at that server it fails with this message:</p> <pre><code>Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 19s 19s 1 default-scheduler Normal Scheduled Successfully assigned nfs-web-58z83 to wal-vm-newt02 19s 3s 6 kubelet, wal-vm-newt02 Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/nfs/bad55e9c-7303-11e7-9c2f-005056b40350-test-nfs" (spec.Name: "test-nfs") pod "bad55e9c-7303-11e7-9c2f-005056b40350" (UID: "bad55e9c-7303-11e7-9c2f-005056b40350") with: mount failed: exit status 32 Mounting command: mount Mounting arguments: 10.17.10.190:/exports/test /var/lib/kubelet/pods/bad55e9c-7303-11e7-9c2f-005056b40350/volumes/kubernetes.io~nfs/test-nfs nfs [] Output: mount.nfs: access denied by server while mounting 10.17.10.190:/exports/test </code></pre> <p>Does anyone know how I can fix this and make it so that I can mount from the external NFS server?</p> <p>The nodes of the cluster are running on <code>10.17.10.185 - 10.17.10.189</code> and all of the pods run with ips that start with <code>10.0.x.x</code>. All of the nodes on the cluster and the NFS server are running Ubuntu. The NFS server is running on <code>10.17.10.190</code> with this <code>/etc/exports</code>:</p> <pre><code>/export 10.17.10.185/255.0.0.0(rw,sync,no_subtree_check) </code></pre> <p>I set up a persistent volume and persistent volume claim and they both create successfully showing this output from running <code>kubectl get pv,pvc</code>:</p> <pre><code>NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pv/test-nfs 1Mi RWX Retain Bound staging/test-nfs 15m NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE pvc/test-nfs Bound test-nfs 1Mi RWX 15m </code></pre> <p>They were created like this:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: test-nfs spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: # FIXME: use the right IP server: 10.17.10.190 path: "/exports/test" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-nfs spec: accessModes: - ReadWriteMany resources: requests: storage: 1Mi </code></pre> <p>My test pod is using this configuration:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: nfs-web spec: replicas: 1 selector: role: web-frontend template: metadata: labels: role: web-frontend spec: containers: - name: web image: nginx ports: - name: web containerPort: 80 volumeMounts: # name must match the volume name below - name: test-nfs mountPath: "/usr/share/nginx/html" volumes: - name: test-nfs persistentVolumeClaim: claimName: test-nfs </code></pre>
<p>It's probably because the uid used in your pod/container does not have enough rights on the NFS server.</p> <p>You can use <code>runAsUser</code> as mentioned by @Giorgio, or edit the uid-range annotations of your namespace and fix a value (e.g. 666). That way every pod in your namespace will run with uid 666.</p> <p>Don't forget to <code>chown</code> your NFS export directory to that uid (666) as well.</p>
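<p>For illustration, a minimal sketch of the <code>runAsUser</code> route (666 is just the example uid from above; a throwaway busybox pod is used to test write access to the share):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nfs-uid-test
spec:
  securityContext:
    runAsUser: 666     # must be allowed to write on the NFS export
    fsGroup: 666
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "touch /mnt/ok &amp;&amp; sleep 3600"]
    volumeMounts:
    - name: test-nfs
      mountPath: /mnt
  volumes:
  - name: test-nfs
    persistentVolumeClaim:
      claimName: test-nfs
</code></pre> <p>On the NFS server side you would then <code>chown -R 666:666</code> the exported directory.</p>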
<p>I want to collect metrics for PODs in K8s. <code>kube-state-metrics</code> looks good. I was not able to follow the <a href="https://github.com/kubernetes/kube-state-metrics#setup" rel="noreferrer">README</a>. I struck after <code>make container</code>. Is there any simple way to deploy the <code>kube-state-metrics</code>. </p> <p><strong>UPDATE1</strong></p> <p>Ran <code>kubectl apply -f kubernetes</code>. Getting below errors. I thinks, version mismatch. Any idea how to overcome?</p> <pre><code>serviceaccount "kube-state-metrics" configured service "kube-state-metrics" configured Error from server (BadRequest): error when creating "kubernetes/kube-state-metrics-cluster-role-binding.yaml": ClusterRoleBinding in version "v1" cannot be handled as a ClusterRoleBinding: no kind "ClusterRoleBinding" is registered for version "rbac.authorization.k8s.io/v1" Error from server (BadRequest): error when creating "kubernetes/kube-state-metrics-cluster-role.yaml": ClusterRole in version "v1" cannot be handled as a ClusterRole: no kind "ClusterRole" is registered for version "rbac.authorization.k8s.io/v1" Error from server (BadRequest): error when creating "kubernetes/kube-state-metrics-deployment.yaml": Deployment in version "v1beta2" cannot be handled as a Deployment: no kind "Deployment" is registered for version "apps/v1beta2" Error from server (BadRequest): error when creating "kubernetes/kube-state-metrics-role-binding.yaml": RoleBinding in version "v1" cannot be handled as a RoleBinding: no kind "RoleBinding" is registered for version "rbac.authorization.k8s.io/v1" Error from server (BadRequest): error when creating "kubernetes/kube-state-metrics-role.yaml": Role in version "v1" cannot be handled as a Role: no kind "Role" is registered for version "rbac.authorization.k8s.io/v1" </code></pre>
<p>You're close.</p> <p>The same page has a <code>Kubernetes Deployment</code> section.</p> <p><a href="https://github.com/kubernetes/kube-state-metrics#kubernetes-deployment" rel="noreferrer">https://github.com/kubernetes/kube-state-metrics#kubernetes-deployment</a></p> <p>Once you clone the GitHub repository you simply run:</p> <p><code>kubectl apply -f kubernetes</code></p> <p>You can take a closer look at the deployment files here:</p> <p><a href="https://github.com/kubernetes/kube-state-metrics/tree/master/kubernetes" rel="noreferrer">https://github.com/kubernetes/kube-state-metrics/tree/master/kubernetes</a></p> <p>--- UPDATE ---</p> <p>If you're running an older version of K8s, which still uses Deployment version <code>extensions/v1beta1</code> and has no RBAC, try the following YAML example:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-state-metrics-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-state-metrics
        version: "v0.4.1"
    spec:
      containers:
      - name: kube-state-metrics
        image: gcr.io/google_containers/kube-state-metrics:v0.4.1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: kube-state-metrics
  labels:
    k8s-app: kube-state-metrics
spec:
  ports:
  - name: http-metrics
    port: 8080
    protocol: TCP
  selector:
    k8s-app: kube-state-metrics
</code></pre>
<p>I have kubeDNS set up on a bare metal kubernetes cluster. I thought that would allow me to access services as described <a href="https://fabric8.io/guide/services.html#service-discovery-via-dns" rel="nofollow noreferrer">here</a> (http:// for those who don't want to follow the link), but when I run</p> <pre><code>curl https://monitoring-influxdb:8083
</code></pre> <p>I get the error</p> <blockquote> <p>curl: (6) Could not resolve host: monitoring-influxdb</p> </blockquote> <p>This is true when I run curl on a service name in any namespace. Is this an error with my kubeDNS setup, or are there different steps I need to take in order to achieve this? I get the expected output when I run the test at the end of <a href="http://jamiehannaford.com/setting-up-dns-discovery-in-kubernetes/" rel="nofollow noreferrer">this article</a>.</p> <p>For reference:</p> <p><a href="https://pastebin.com/GJrpvSZw" rel="nofollow noreferrer">kubeDNS controller yaml files</a></p> <p><a href="https://pastebin.com/YXHsaZW9" rel="nofollow noreferrer">kubeDNS service yaml file</a></p> <p><a href="https://pastebin.com/vYmMgVF7" rel="nofollow noreferrer">kubelet flags</a></p> <p><a href="https://pastebin.com/acrFcq4i" rel="nofollow noreferrer">output of kubectl get svc in default and kube-system namespaces</a></p>
<p>The service discovery that you're trying to use is documented at <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a>, and it is for communication from <em>within</em> a pod talking to an existing service, not for the nodes (or the master) to speak to Kubernetes services.</p> <p>You will want to use the DNS name for the service in the form <code>&lt;servicename&gt;.&lt;namespace&gt;</code> or <code>&lt;servicename&gt;.&lt;namespace&gt;.svc.cluster.local</code>. To see this in operation, start an interactive pod with busybox (or use an existing pod of your own) with something like:</p> <ul> <li><code>kubectl run -i --tty alpine-interactive --image=alpine --restart=Never</code></li> </ul> <p>and within the shell that is provided there, run an nslookup command. From your example, I'm guessing you're trying to access influxDB from <a href="https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb" rel="noreferrer">https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb</a>, in which case it will be installed into the <code>kube-system</code> namespace, and the service name you'd use from another pod inside the cluster would be:</p> <ul> <li><code>monitoring-influxdb.kube-system.svc.cluster.local</code></li> </ul> <p>For example:</p> <pre><code>kubectl run -i --tty alpine --image=alpine --restart=Never
If you don't see a command prompt, try pressing enter.
/ # nslookup monitoring-influxdb.kube-system.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      monitoring-influxdb.kube-system.svc.cluster.local
Address 1: 10.102.27.233 monitoring-influxdb.kube-system.svc.cluster.local
</code></pre>
<p>I currently have GitLab omnibus set up on Docker. I plan to add HA by moving it to Kubernetes, with persistence using Gluster. I have played around with configuring Kubernetes with Gluster. Now it's time to bring GitLab into Kubernetes. GitLab uses PostgreSQL as the default db.</p> <p>My query is: to implement HA, should I<br> a) split GitLab into a GitLab application and a PostgreSQL container, and then run both (application and DB) in their own cluster of pods, i.e. separate <em>deployments</em> of replicas of the GitLab app and PostgreSQL?<br> OR<br> b) keep using the omnibus installer and just have replicas of this single, standalone container?</p> <p>Does it really make any difference whether<br> 1) writes happen to a db cluster exposed via a service to the GitLab app<br> OR<br> 2) writes happen directly to the omnibus GitLab container (which has the db within itself)</p> <p>Just want to make sure that I don't unnecessarily end up making the setup complex. Having GitLab in Kubernetes along with Gluster already makes things a little complex. So does splitting the app and db make sense, or will the omnibus setup suffice? I am concerned about concurrent writes to the db.</p>
<p>According to <a href="http://docs.gitlab.com/ce/install/kubernetes/gitlab_omnibus.html#introduction" rel="nofollow noreferrer">http://docs.gitlab.com/ce/install/kubernetes/gitlab_omnibus.html#introduction</a> <strong>you should use dedicated Redis and PostgreSQL</strong> HA clusters, i.e. option b) and 1).</p> <p>For less downtime it is better to use a <strong>PostgreSQL master-slave</strong> cluster (<a href="https://www.postgresql.org/docs/10/static/different-replication-solutions.html" rel="nofollow noreferrer">https://www.postgresql.org/docs/10/static/different-replication-solutions.html</a>) and a <strong>Redis Cluster master-slave</strong> setup (<a href="https://redis.io/topics/cluster-tutorial" rel="nofollow noreferrer">https://redis.io/topics/cluster-tutorial</a>). "Note that the minimal (Redis) cluster that works as expected requires to contain at least three master nodes".</p> <p>If you use only GlusterFS to bring failover to PostgreSQL, you can get errors that require manual repair when one DB instance crashes and another comes up. Like this: <a href="https://stackoverflow.com/questions/598200/how-do-i-fix-postgres-so-it-will-start-after-an-abrupt-shutdown">How do I fix Postgres so it will start after an abrupt shutdown?</a></p>
<p>I'm getting the following error while attaching an azureVhd volume to a Kubernetes pod:</p> <blockquote> <p>error validating "dev-dev-mongo-rc.yaml": error validating data: found invalid field azureVhd for v1.Volume; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p>kubernetes version: $ kubectl version</p> <blockquote> <p>Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}</p> </blockquote> <p>replicationController-rc.yaml</p> <pre><code>kind: "ReplicationController"
apiVersion: "v1"
metadata:
  name: "test"
spec:
  replicas: 1
  selector:
    name: "test"
  template:
    metadata:
      labels:
        name: "test"
    spec:
      volumes:
        - name: "mongo-disk"
          azureVhd:
            vhdUrl: "https://portalvhds1459021060022.blob.core.windows.net/vhd-store/stable-mongo-01.vhd"
            fsType: "ext4"
            readOnly: false
      containers:
        - name: "mongo"
          image: "docker.io/mongo:3.2"
          resources:
            limits:
              cpu: "0.2"
              memory: "0.5Gi"
          ports:
            - containerPort: 27017
              protocol: "TCP"
          resources: {}
          volumeMounts:
            - name: "mongo-disk"
              mountPath: "/data/db"
          imagePullPolicy: "IfNotPresent"
      restartPolicy: "Always"
</code></pre>
<p>You can find a complete k8s azure disk mount example here: <a href="https://github.com/andyzhangx/Demo/tree/master/linux/azuredisk" rel="nofollow noreferrer">https://github.com/andyzhangx/Demo/tree/master/linux/azuredisk</a></p> <p>The YAML config for azure disk static provisioning is as follows:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-azuredisk
spec:
  containers:
  - image: nginx
    name: nginx-azuredisk
    volumeMounts:
      - name: azure
        mountPath: /mnt/disk
  volumes:
      - name: azure
        azureDisk:
          diskName: test.vhd
          diskURI: https://YOURSTORAGEACCOUNT.blob.core.windows.net/vhds/test.vhd
</code></pre>
<p>I'm running Ubuntu 17.04 (zesty) on a Dell XPS 13 (3854 MB of RAM and Intel Core i5-5200U CPU @ 2.20GHz) and trying to start up Minikube, but I'm getting a couple of errors when I try to start it.</p> <pre><code>➜ minikube version
minikube version: v0.22.3
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre> <p>I have VM VirtualBox Version 5.2.0 r118431 (Qt5.7.1). I've checked the BIOS settings and have virtualization enabled.</p> <pre><code>➜ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
E1025 09:49:40.206594   22972 start.go:146] Error starting host:  Error starting stopped host: Unable to start the VM: /usr/bin/VBoxManage startvm minikube --type headless failed:
VBoxManage: error: The virtual machine 'minikube' has terminated unexpectedly during startup with exit code 1 (0x1)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine
.  Retrying.
E1025 09:49:40.207051   22972 start.go:152] Error starting host:  Error starting stopped host: Unable to start the VM: /usr/bin/VBoxManage startvm minikube --type headless failed:
VBoxManage: error: The virtual machine 'minikube' has terminated unexpectedly during startup with exit code 1 (0x1)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MachineWrap, interface IMachine
</code></pre> <p>I've tried some suggestions that I've found online, like running <code>rm -rf ~/.minikube/</code> and trying to start minikube again. I've tried running <code>minikube stop</code> followed by a <code>minikube delete</code> and then trying to start minikube again. I've also tried specifying the virtualbox driver when starting, with <code>minikube start --vm-driver=virtualbox</code>. None of these work; I still get the same error.</p>
<p>This looks like an issue with your VirtualBox installation; have you tried reinstalling it?</p> <pre><code>sudo apt-get purge virtualbox virtualbox-dkms
sudo apt-get install virtualbox-5.1
</code></pre>
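<p>Before reinstalling, it may be worth looking at the VM's own log, which usually pins down the cause of the exit code 1 (missing kernel module, leftover state, and so on). A rough sketch, assuming the default VirtualBox machine folder:</p> <pre><code>VBoxManage --version
VBoxManage list vms
tail -n 50 "$HOME/VirtualBox VMs/minikube/Logs/VBox.log"

# if the vboxdrv kernel module turns out to be the problem, rebuilding it may be enough
# (this helper ships with recent VirtualBox packages)
sudo /sbin/vboxconfig
</code></pre>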
<p>I am trying to set up a websocket connection to the Kubernetes Pod Exec API, based on the suggestions given in this SO post: <a href="https://stackoverflow.com/questions/34373472/how-to-execute-command-in-a-pod-kubernetes-using-api">How to execute command in a pod (kubernetes) using API?</a>. Here's what I have done so far -</p> <ol> <li>Installed the Simple Web Socket Client extension in Chrome.</li> <li>Started <code>kubectl proxy --disable-filter=true</code> to run the proxy with WS connections allowed. <code>kubectl.exe</code> version is 1.8.</li> <li>Used the address <code>ws://localhost:8001/api/v1/namespaces/default/pods/nginx-3580832997-26zcn/exec?container=nginx&amp;stdin=1&amp;stdout=1&amp;stderr=1&amp;tty=1&amp;command=%2Fbin%2Fsh</code> in the Chrome extension to connect to the <code>exec</code> api.</li> </ol> <p>When I click connect, Chrome reports back an error with the message -</p> <p><code>Error during WebSocket handshake: Response must not include 'Sec-WebSocket-Protocol' header if not present in request</code></p> <p>Apparently, kubectl is sending back an empty <code>Sec-WebSocket-Protocol</code> header in the response and Chrome is taking offense to that.</p> <p>I tried changing the code of the Simple Web Socket Client <code>open</code> method to pass an empty protocols parameter to the WebSocket constructor, like <code>ws = new WebSocket(url, []);</code>, to coax Chrome into sending an empty header in the request, but Chrome doesn't send the empty header.</p> <p>So what can be done to connect directly to <code>exec</code> from Chrome?</p>
<p>This is a known issue; <a href="https://github.com/kubernetes/kubernetes/issues/25126" rel="nofollow noreferrer"><code>kubectl proxy</code> does not support websockets</a>. (You can verify this easily by starting up <code>kubectl proxy</code> and then attempting <code>kubectl --server=http://127.0.0.1:8001 exec ...</code>; you will receive the message <code>error: unable to upgrade connection: &lt;h3&gt;Unauthorized&lt;/h3&gt;</code> if the filter is enabled and <code>Error from server (BadRequest): Upgrade request required</code> if the filter is disabled).</p> <p>The <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#so-many-proxies" rel="nofollow noreferrer">confusion</a> might come from the fact that the <code>kube-apiserver</code> proxy does support websockets, but that proxy is different from the <code>kubectl proxy</code>.</p> <p>As I see you have 3 options now (in order of difficulty):</p> <ul> <li>Access <code>kube-apiserver</code> directly. You will likely need authentication that <code>kubectl proxy</code> is handling for you now</li> <li>Use SockJS, this is what Kubernetes Dashboard does for the <a href="https://github.com/kubernetes/dashboard/pull/1939" rel="nofollow noreferrer">exec feature</a></li> <li>Fix <a href="https://github.com/kubernetes/kubernetes/issues/25126" rel="nofollow noreferrer">#25126</a></li> </ul>
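<p>For the first option, a rough shell sketch of what "directly" can look like; it assumes a service account token that is actually authorized to exec into the pod, and a websocket client that lets you set request headers (the Chrome extension from the question does not):</p> <pre><code>APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
SECRET=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)

# connect your websocket client to:
echo "wss://${APISERVER#https://}/api/v1/namespaces/default/pods/nginx-3580832997-26zcn/exec?container=nginx&amp;stdin=1&amp;stdout=1&amp;stderr=1&amp;tty=1&amp;command=%2Fbin%2Fsh"
# ...sending the header:  Authorization: Bearer $TOKEN
</code></pre>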
<p>When a container in a Pod is created in a Kubernetes cluster with a limit and request set, how aware can that container be of those limits and requests? Would an application running inside the container be able to get these limits and requests to, for example, reduce the amount of resources it uses if the limits and requests were particularly low?</p> <ul> <li>Kubernetes version: 1.8</li> <li>Container runtime: Docker</li> <li>Docker version: 1.12.6</li> </ul>
<p><a href="https://stackoverflow.com/questions/42187085/check-mem-limit-within-a-docker-container">Check mem_limit within a docker container</a> with the tl;dr of</p> <pre><code>cat /sys/fs/cgroup/memory/memory.limit_in_bytes </code></pre> <p>will show the limit, and then presumably the <code>requests</code> value is the allocated memory the container started with, but I would need to verify that assumption</p> <p>I personally don't even understand the unit when trying to apply <code>limits: cpu:</code> so I for sure wouldn't know how to verify that value</p>
<p>I have successfully configured a Kubernetes cluster using <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">Kops</a>.</p> <p>However, I cannot find where to change the auto-generated admin password - how can I do this?</p>
<p>At the moment there is no easy way to do that, since there is no way via the kops API to create a secret of type "Secret" (quite confusing, I know). The only workaround is to change the credentials, in this case your password, directly in your S3 state store, as explained here:</p> <p><a href="https://github.com/kubernetes/kops/blob/master/docs/secrets.md#workaround-for-changing-secrets-with-type-secret" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/secrets.md#workaround-for-changing-secrets-with-type-secret</a></p> <p>and then force a rolling update of your cluster, either by changing its configuration (I just updated to a new version of k8s, for example) or simply by running <code>kops rolling-update cluster --yes --force</code>.</p> <p>Apparently this will be addressed in one of the future releases.</p>
<p>I have recently installed minikube and kubectl. However, when I run <code>kubectl get pods</code> or any other kubectl command, I get the error</p> <p><code>Unable to connect to the server: unexpected EOF</code></p> <p>Does anyone know how to fix this? I am using Ubuntu server 16.04. Thanks in advance.</p>
<p>The following steps can be used for further debugging.<br/> </p> <ol> <li><p>Check the minikube local cluster status using the <code>minikube status</code> command.</p> <pre><code> $: minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 172.0.x.y
</code></pre></li> <li><p>If there is a problem with the kubectl configuration, then configure it using the <code>kubectl config use-context minikube</code> command.</p> <pre><code>$: kubectl config use-context minikube
Switched to context "minikube".
</code></pre></li> <li><p>Check the cluster status using the <code>kubectl cluster-info</code> command.</p> <pre><code>$: kubectl cluster-info
Kubernetes master is running at ...
Heapster is running at ...
KubeDNS is running at ...
...
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre></li> </ol> <p>Note: It can even be due to a very simple reason: internet speed (it happened to me just now).</p>
<p>I have an upstream server that simply starts a chunked encoding response and sends a small chunk every second for 100 chunks.</p> <p>When I hit the upstream server directly like so, all works fine.</p> <pre><code>curl --raw 10.244.7.248:4000/test/stream </code></pre> <p>However, when I go through nginx like so, the response gets cut off at random times.</p> <pre><code>curl --raw localhost/test/stream </code></pre> <p>Curl returns</p> <pre><code>curl: (18) transfer closed with outstanding read data remaining </code></pre> <p>and the nginx logs only show</p> <pre><code>::1 - [::1] - - [24/Oct/2017:11:09:02 +0000] "GET /test/stream HTTP/1.1" 200 440 "-" "curl/7.47.0" 95 44.783 [bar-bar-80] 10.244.7.248:4000 440 44.783 200 </code></pre> <p>Notice in this example, the response appears to complete successfully with a 200 but only after 44.783 seconds when it should have been roughly 100 seconds. Every time I try this I get a cut off at a different number of seconds.</p> <p>I am running Kubernetes Nginx Ingress Controller 0.9.0-beta.14 with <code>gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.14</code></p> <p>The nginx.conf is very long, but here are some snippets</p> <pre><code>daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 498976; worker_shutdown_timeout 10s ; events { multi_accept on; worker_connections 16384; use epoll; } http { real_ip_header X-Forwarded-For; real_ip_recursive on; set_real_ip_from 0.0.0.0/0; geoip_country /etc/nginx/GeoIP.dat; geoip_city /etc/nginx/GeoLiteCity.dat; geoip_proxy_recursive on; sendfile on; aio threads; aio_write on; tcp_nopush on; tcp_nodelay on; log_subrequest on; reset_timedout_connection on; keepalive_timeout 75s; keepalive_requests 100; client_header_buffer_size 1k; client_header_timeout 60s; large_client_header_buffers 4 8k; client_body_buffer_size 8k; client_body_timeout 60s; http2_max_field_size 4k; http2_max_header_size 16k; types_hash_max_size 2048; server_names_hash_max_size 8192; server_names_hash_bucket_size 128; map_hash_bucket_size 64; proxy_headers_hash_max_size 512; proxy_headers_hash_bucket_size 64; variables_hash_bucket_size 64; variables_hash_max_size 2048; underscores_in_headers off; ignore_invalid_headers on; include /etc/nginx/mime.types; default_type text/html; gzip on; gzip_comp_level 5; gzip_http_version 1.1; gzip_min_length 256; gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest +json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component; gzip_proxied any; # Custom headers for response server_tokens on; # disable warnings uninitialized_variable_warn off; # Additional available variables: # $namespace # $ingress_name # $service_name log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstrea m_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status'; map $request_uri $loggable { default 1; } access_log /var/log/nginx/access.log upstreaminfo if=$loggable; error_log /var/log/nginx/error.log notice; resolver 10.0.0.10 valid=30s; # Retain the default nginx handling of requests without a "Connection" header map $http_upgrade $connection_upgrade { default upgrade; '' close; } # Trust HTTP X-Forwarded-* Headers, but use 
direct values if they're missing. map $http_x_forwarded_for $the_real_ip { # Get IP address from X-Forwarded-For HTTP header default $realip_remote_addr; '' $remote_addr; } # trust http_x_forwarded_proto headers correctly indicate ssl offloading map $http_x_forwarded_proto $pass_access_scheme { default $http_x_forwarded_proto; '' $scheme; } map $http_x_forwarded_port $pass_server_port { default $http_x_forwarded_port; '' $server_port; } map $http_x_forwarded_host $best_http_host { default $http_x_forwarded_host; '' $this_host; } map $pass_server_port $pass_port { 443 443; default $pass_server_port; } # Map a response error watching the header Content-Type map $http_accept $httpAccept { default html; application/json json; application/xml xml; text/plain text; } map $httpAccept $httpReturnType { default text/html; json application/json; xml application/xml; text text/plain; } # Obtain best http host map $http_host $this_host { default $http_host; '' $host; } server_name_in_redirect off; port_in_redirect off; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # turn on session caching to drastically improve performance ssl_session_cache builtin:1000 shared:SSL:10m; ssl_session_timeout 10m; # allow configuring ssl session tickets ssl_session_tickets on; # slightly reduce the time-to-first-byte ssl_buffer_size 4k; # allow configuring custom ssl ciphers ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-R SA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:D HE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-C BC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; ssl_prefer_server_ciphers on; ssl_ecdh_curve auto; proxy_ssl_session_reuse on; upstream bar-bar-80 { # Load balance algorithm; empty for round robin, which is the default least_conn; keepalive 32; server 10.244.7.248:4000 max_fails=0 fail_timeout=0; } server { server_name bar.gigalixir.com; listen 80; listen [::]:80; set $proxy_upstream_name "-"; listen 443 ssl http2; listen [::]:443 ssl http2; ssl_certificate /ingress-controller/ssl/bar-bar-tls.pem; ssl_certificate_key /ingress-controller/ssl/bar-bar-tls.pem; more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains;"; location / { set $proxy_upstream_name "bar-bar-80"; set $namespace "bar"; set $ingress_name "bar"; set $service_name "bar"; # enforce ssl on server side if ($pass_access_scheme = http) { return 301 https://$best_http_host$request_uri; } port_in_redirect off; client_max_body_size "0"; proxy_set_header Host $best_http_host; # Pass the extracted client certificate to the backend # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Real-IP $the_real_ip; proxy_set_header X-Forwarded-For $the_real_ip; proxy_set_header X-Forwarded-Host $best_http_host; proxy_set_header X-Forwarded-Port $pass_port; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_set_header X-Original-URI $request_uri; proxy_set_header X-Scheme $pass_access_scheme; proxy_set_header 
X-Auth-Request-Redirect $request_uri; # mitigate HTTPoxy Vulnerability # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/ proxy_set_header Proxy ""; # Custom headers to proxied server proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_redirect off; proxy_buffering off; proxy_buffer_size "4k"; proxy_buffers 4 "4k"; proxy_request_buffering "on"; proxy_http_version 1.1; proxy_cookie_domain off; proxy_cookie_path off; # In case of errors try the next upstream server before returning an error proxy_next_upstream error timeout invalid_header http_502 http_503 http_504; proxy_pass http://bar-bar-80; } } </code></pre>
<p>I determined that the connection is closed when the nginx configuration is reloaded, which happens periodically with the Kubernetes Nginx Ingress Controller. The nginx workers drain connections for 10 seconds which is not enough in this case. I filed an issue here <a href="https://github.com/kubernetes/kubernetes/issues/54505" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/54505</a></p> <p>From the github issue above</p> <blockquote> <p>you can adjust the drain of connections in the workers adjusting the value of the setting <code>worker-shutdown-timeout: XXXs</code> in the configuration configmap. The default value is 10 seconds.</p> </blockquote>
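<p>For reference, a sketch of how that timeout can be raised through the controller's ConfigMap (the ConfigMap name and namespace below are assumptions; they must match whatever you pass to the controller's <code>--configmap</code> flag):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
data:
  worker-shutdown-timeout: "300s"
</code></pre>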
<p>Is there any tools/way to get CPU, MEM, NET metrics of <code>PODs</code>. Other than below links, is there any tools available</p> <ul> <li><a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer"><code>kube-state-metrics</code></a> - Able to deploy but no useful POD metrics. You can see the POD metrics <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/pod-metrics.md" rel="nofollow noreferrer">here</a>.</li> <li><a href="https://itnext.io/kubernetes-monitoring-with-prometheus-in-15-minutes-8e54d1de2e13" rel="nofollow noreferrer">kubernetes-monitoring-with-prometheus-in-15-minutes</a>- Installed kube-prometheus with "helm" tool, No POD metrics. Metrics List <a href="https://pastebin.com/1YfNrrmm" rel="nofollow noreferrer">here</a></li> <li><a href="https://github.com/camilb/prometheus-kubernetes" rel="nofollow noreferrer"><code>prometheus-kubernetes</code></a> - But it strucks at registering custom service forever. check <a href="https://stackoverflow.com/questions/46930397/struck-at-custom-resource-registration-k8s">here</a></li> <li><a href="https://coreos.com/blog/monitoring-kubernetes-with-prometheus.html" rel="nofollow noreferrer">Monitoring K8s with Prometheus </a> - In blog they mentioned <code>container_cpu metrics</code>, but I dont see any metrics like that</li> </ul> <p><strong>UPDATE1</strong></p> <p>Tried launch POD with <code>yaml file</code> as they mentioned in <a href="https://www.weave.works/blog/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/" rel="nofollow noreferrer">blog</a>. Installed <code>go lang</code> with <code>GOPATH</code> &amp; <code>GOROOT</code></p> <pre><code>ubuntu@ip-172-:~$ kubectl create -f prometheus.yaml panic: interface conversion: interface {} is []interface {}, not map[string]interface {} goroutine 1 [running]: k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation.getObjectKind(0x14dcb20, 0xc420c56480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xffffffffffffff01, 0xc420f6bca0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation/validation.go:111 +0x539 k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation.(*SchemaValidation).ValidateBytes(0xc4207b01d0, 0xc420b3ca80, 0x16c, 0x180, 0xc420b51628, 0x4ed384) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation/validation.go:49 +0x8f k8s.io/kubernetes/pkg/kubectl/validation.ConjunctiveSchema.ValidateBytes(0xc42073cba0, 0x2, 0x2, 0xc420b3ca80, 0x16c, 0x180, 0x4ed029, 0xc420b3ca80) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/validation/schema.go:130 +0x9a k8s.io/kubernetes/pkg/kubectl/validation.(*ConjunctiveSchema).ValidateBytes(0xc42073cbc0, 0xc420b3ca80, 0x16c, 0x180, 0xc420b51700, 0x443693) &lt;autogenerated&gt;:3 +0x7d k8s.io/kubernetes/pkg/kubectl/resource.ValidateSchema(0xc420b3ca80, 0x16c, 0x180, 0x2183f80, 0xc42073cbc0, 0x20, 0xc420b51700) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:222 +0x68 k8s.io/kubernetes/pkg/kubectl/resource.(*StreamVisitor).Visit(0xc420c2eb00, 0xc420c3d440, 0x218a000, 0xc420c3d4a0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:543 +0x269 k8s.io/kubernetes/pkg/kubectl/resource.(*FileVisitor).Visit(0xc420c3d2c0, 0xc420c3d440, 0x0, 0x0) 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:502 +0x181 k8s.io/kubernetes/pkg/kubectl/resource.EagerVisitorList.Visit(0xc420f6bc30, 0x1, 0x1, 0xc420903c50, 0x1, 0xc420903c50) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:211 +0x100 k8s.io/kubernetes/pkg/kubectl/resource.(*EagerVisitorList).Visit(0xc420c3d360, 0xc420903c50, 0x7ff854222000, 0x0) &lt;autogenerated&gt;:115 +0x69 k8s.io/kubernetes/pkg/kubectl/resource.FlattenListVisitor.Visit(0x2183d00, 0xc420c3d360, 0xc420c2eac0, 0xc420c2eb40, 0xc420c3d401, 0xc420c2eb40) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:417 +0xa3 k8s.io/kubernetes/pkg/kubectl/resource.(*FlattenListVisitor).Visit(0xc420c3d380, 0xc420c2eb40, 0x18, 0x18) &lt;autogenerated&gt;:130 +0x69 k8s.io/kubernetes/pkg/kubectl/resource.DecoratedVisitor.Visit(0x2183d80, 0xc420c3d380, 0xc420c3d3c0, 0x3, 0x4, 0xc420c3d400, 0xc420386901, 0xc420c3d400) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:325 +0xd8 k8s.io/kubernetes/pkg/kubectl/resource.(*DecoratedVisitor).Visit(0xc420903c20, 0xc420c3d400, 0x151b920, 0xc420f6bc60) &lt;autogenerated&gt;:153 +0x73 k8s.io/kubernetes/pkg/kubectl/resource.ContinueOnErrorVisitor.Visit(0x2183c80, 0xc420903c20, 0xc420c370e0, 0x7ff854222000, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:352 +0xf1 k8s.io/kubernetes/pkg/kubectl/resource.(*ContinueOnErrorVisitor).Visit(0xc420f6bc50, 0xc420c370e0, 0x40f3f8, 0x60) &lt;autogenerated&gt;:144 +0x60 k8s.io/kubernetes/pkg/kubectl/resource.(*Result).Visit(0xc4202c23f0, 0xc420c370e0, 0x6, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/result.go:95 +0x62 k8s.io/kubernetes/pkg/kubectl/cmd.RunCreate(0x21acd60, 0xc420320e40, 0xc42029d440, 0x2182e40, 0xc42000c018, 0x2182e40, 0xc42000c020, 0xc420173000, 0x176f608, 0x4) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/create.go:187 +0x4a8 k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdCreate.func1(0xc42029d440, 0xc4202aa580, 0x0, 0x2) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/create.go:73 +0x17f k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc42029d440, 0xc4202aa080, 0x2, 0x2, 0xc42029d440, 0xc4202aa080) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x22b k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420235b00, 0x8000102, 0x0, 0xffffffffffffffff) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x339 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420235b00, 0xc420320e40, 0x2182e00) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5 main.main() /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22 </code></pre> <p><code>prometheus.yaml</code></p> <pre><code># This scrape config scrapes kubelets - job_name: 'kubernetes-nodes' kubernetes_sd_configs: - role: node # couldn't get prometheus to validate 
the kublet cert for scraping, so don't bother for now tls_config: insecure_skip_verify: true bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token relabel_configs: - target_label: __scheme__ replacement: https - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname] target_label: instance </code></pre>
<p>You are looking for <a href="https://github.com/google/cadvisor" rel="nofollow noreferrer">cadvisor</a>.</p>
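<p>cAdvisor is built into the kubelet, so the <code>container_cpu_*</code>, <code>container_memory_*</code> and <code>container_network_*</code> per-pod metrics you were looking for come from scraping the kubelet's cAdvisor endpoint. A hedged sketch of a scrape job in the same style as your <code>kubernetes-nodes</code> job (the auth/TLS settings are assumptions and may need adjusting for your cluster; on older kubelets the cAdvisor metrics are served on the main <code>/metrics</code> path instead):</p> <pre><code>- job_name: 'kubernetes-cadvisor'
  kubernetes_sd_configs:
  - role: node
  scheme: https
  metrics_path: /metrics/cadvisor
  tls_config:
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
</code></pre>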
<p>Is it possible to configure a default namespace (other than "default") for a user in kubernetes?</p> <p>Thank you.</p>
<p>Yes, just set <code>.contexts[].context.namespace</code> in the kubeconfig file of the user.</p> <p>You can do that manually with a text editor (<code>~/.kube/config</code>):</p> <pre><code>apiVersion: v1 clusters: ... contexts: - context: cluster: development namespace: frontend user: developer name: dev-frontend current-context: dev-frontend kind: Config preferences: {} users: ... </code></pre> <p>Or with <code>kubectl config</code>. This example sets the namespace to <code>frontend</code> in the current context:</p> <pre><code>kubectl config set "contexts."`kubectl config current-context`".namespace" frontend </code></pre> <p>For more details and explanation see this <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">document</a>.</p>
<p>I am using <a href="https://github.com/cloudflare/cfssl" rel="nofollow noreferrer">cfssl</a> to generate CSR.</p> <p>I have below json format</p> <pre><code>{ "CN": "ambika", "key": { "algo": "ecdsa", "size": 256 }, "names": [ { "O": "system:masters" } ] } root@vagrant-xenial64:~/bin# cat csr.json | cfssl genkey - | cfssljson -bare server 2017/10/25 08:28:07 [INFO] generate received request 2017/10/25 08:28:07 [INFO] received CSR 2017/10/25 08:28:07 [INFO] generating key: ecdsa-256 2017/10/25 08:28:07 [INFO] encoded CSR </code></pre> <p>In the next step Generate a CSR yaml blob and send it to the apiserver by running the following command:</p> <pre><code>root@vagrant-xenial64:~/bin# cat csr.yaml apiVersion: certificates.k8s.io/v1beta1 kind: CertificateSigningRequest metadata: name: ambika spec: groups: - system:masters request: $(cat server.csr | base64 | tr -d "\n") usages: - digital signature - key encipherment - client auth root@vagrant-xenial64:~/bin# kubectl create -f csr.yaml Error from server (BadRequest): error when creating "STDIN": CertificateSigningRequest in version "v1beta1" cannot be handled as a CertificateSigningRequest: [pos 684]: json: error decoding base64 binary 'LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlIbE1JR01BZ0VBTUNveEZ6QVZCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVE4d0RRWURWUVFERXdaaApiV0pwYTJFd1dUQVRCZ2NxaGtqT1BRSUJCZ2dxaGtqT1BRTUJCd05DQUFTdXVIbWF4bpaDU1TkpHZHh1cmN1ClQ1OU9YU0hEWUQ0d1J2S055NjlMOEkwL3RxTjF3WmRMckpYdSsxSDN5ZGp2N0FMaUJoSUh6aHRMMHd6QUN4b3AKb0FBd0NWUlLb1pJemowRUF3SURTQUF3UlFJaEFOTk9xaXFDa1lSYWgxQUFoUDQ1bWNzb09QM2p4RjIvdTB6ZgpHZW9BamhEQkFpQU55MEZkSU9vREhlZDFGd3UvTWFBditmWGJxN3V6RUdCQm9zUUFSUWFYcEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K': illegal base64 data at input byte 512 </code></pre> <p>I am following this link <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">Manage TLS Certificates in a Cluster</a></p>
<p>Since you're running this from a YAML file, you need to include the base64-encoded value in the YAML file itself; the <code>$(cat server.csr | base64 | tr -d "\n")</code> substitution is shell syntax and is not expanded there.</p> <p>In the example page they are running it in the shell directly.</p> <p>Encode the CSR first, e.g. <code>cat server.csr | base64 | tr -d '\n' &gt; o</code>, then paste the resulting value into the YAML file and it will work:</p> <pre><code>apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: ambika
spec:
  groups:
  - system:masters
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlIbU1JR01BZ0VBTUNveEZ6QVZCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVE4d0RRWURWUVFERXdaaApiV0pwYTJFd1dUQVRCZ2NxaGtqT1BRSUJCZ2dxaGtqT1BRTUJCd05DQUFTVWVLYmdpKzZHVHlhQlhDdk40S29SClRqMmFmRlUvVVZXV1Z5WHhJSnlMczhraE5pZDlRQnp0bzZHcDZESnYwTDdST1JuZnNXR2M2N0VVeXUzdU5LZ00Kb0FBd0NnWUlLb1pJemowRUF3SURTUUF3UmdJaEFJekkxOWxuR3ZueG1la1JyM28vSDI1ZWYyVklOWURlMkJZRQpaZUJiaHN1c0FpRUEyRXNWUE0xV2huZUMyMVFEL3ZIbXNnVFZqWWdVcHRPU3dSQ2lHSWZQekhjPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K
  usages:
  - digital signature
  - key encipherment
  - server auth
</code></pre>
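<p>Alternatively, you can let the shell do the substitution and pipe the result straight into kubectl, which is essentially what the referenced docs page does:</p> <pre><code>cat &lt;&lt;EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: ambika
spec:
  groups:
  - system:masters
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
</code></pre>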
<p>I'm attempting to migrate over to Cloud SQL (Postgres). I have the following deployment in Kubernetes, having followed these instructions <a href="https://cloud.google.com/sql/docs/mysql/connect-container-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/mysql/connect-container-engine</a> : </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: menu-service spec: replicas: 1 template: metadata: labels: app: menu-service spec: volumes: - name: cloudsql-instance-credentials secret: secretName: cloudsql-instance-credentials - name: cloudsql emptyDir: - name: ssl-certs hostPath: path: /etc/ssl/certs containers: - image: gcr.io/cloudsql-docker/gce-proxy:1.11 name: cloudsql-proxy command: ["/cloud_sql_proxy", "--dir=/cloudsql", "-instances=tabb-168314:europe-west2:production=tcp:5432", "-credential_file=/secrets/cloudsql/credentials.json"] volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true - name: ssl-certs mountPath: /etc/ssl/certs - name: cloudsql mountPath: /cloudsql - name: menu-service image: eu.gcr.io/tabb-168314/menu-service:develop imagePullPolicy: Always env: - name: MICRO_BROKER value: "nats" - name: MICRO_BROKER_ADDRESS value: "nats.staging:4222" - name: MICRO_REGISTRY value: "kubernetes" - name: ENV value: "staging" - name: PORT value: "8080" - name: POSTGRES_HOST value: "127.0.0.1:5432" - name: POSTGRES_USER valueFrom: secretKeyRef: name: cloudsql-db-credentials key: username - name: POSTGRES_PASS valueFrom: secretKeyRef: name: cloudsql-db-credentials key: password - name: POSTGRES_DB value: "menus" ports: - containerPort: 8080 </code></pre> <p>But unfortunately I'm getting this error when trying to update the deployment: </p> <pre><code>MountVolume.SetUp failed for volume "kubernetes.io/secret/69b0ec99-baaf-11e7-82b8-42010a84010c-cloudsql-instance-credentials" (spec.Name: "cloudsql-instance-credentials") pod "69b0ec99-baaf-11e7-82b8-42010a84010c" (UID: "69b0ec99-baaf-11e7-82b8-42010a84010c") with: secrets "cloudsql-instance-credentials" not found Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "staging"/"menu-service-1982520680-qzwzn". list of unattached/unmounted volumes=[cloudsql-instance-credentials] </code></pre> <p>Have I missed something here? </p>
<p>You are missing (at least) one of the secrets needed to start up this Pod, namely <code>cloudsql-instance-credentials</code>.</p> <p>From <a href="https://cloud.google.com/sql/docs/mysql/connect-container-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/mysql/connect-container-engine</a>:</p> <blockquote> <p>You need two secrets to enable your Container Engine application to access the data in your Cloud SQL instance:</p> <ul> <li><strong>The <code>cloudsql-instance-credentials</code> secret contains the service account.</strong></li> <li>The <code>cloudsql-db-credentials</code> secret provides the proxy user account and password. (I think you have this created, I can't see an error message about this one)</li> </ul> <p>To create your secrets:</p> <p>Create the secret containing the Service Account which enables authentication to Cloud SQL:</p> <pre><code>kubectl create secret generic cloudsql-instance-credentials \ --from-file=credentials.json=[PROXY_KEY_FILE_PATH] </code></pre> <p>[...]</p> </blockquote> <p>The link above also describes how to create a GCP service account for this purpose, if you don't have one created already.</p>
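<p>Concretely, since the error shows the pod lives in the <code>staging</code> namespace, the secrets have to be created in that namespace too (secrets are namespaced). A short sketch, where <code>[PROXY_KEY_FILE_PATH]</code> is the path to your service account key file:</p> <pre><code>kubectl create secret generic cloudsql-instance-credentials \
    --namespace=staging \
    --from-file=credentials.json=[PROXY_KEY_FILE_PATH]

# verify both secrets are visible from that namespace
kubectl get secrets --namespace=staging | grep cloudsql
</code></pre>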
<p>I am trying to configure a persistent disk for Kafka in Kubernetes, I need to set the mount path to /kafka/POD_NAME/. </p> <p>I can get the pod name as explained <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">here</a> from: </p> <pre><code> env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name </code></pre> <p>I am asking how to use this <code>POD_NAME</code> inside the <code>mountPath</code> value, something like: `/</p>
<p>The Downward API, as the name indicates, is used to expose extra Kubernetes-related information to the containers. What you want is for your YAML definitions to have variable or dynamic values based on the pods being launched at runtime, and that is currently not possible.</p> <p>You might want to try something with <a href="https://github.com/kubernetes/helm" rel="nofollow noreferrer">Helm</a> templates, which together with a StatefulSet and a PVC would give you predictable pod and volume names and could maybe insert a dynamic mount path, but I haven't tried this myself.</p> <p>Further, if you're setting this up as recommended, which is in a StatefulSet, it is preferable that the inner spec template for the containers is exactly the same for every replica.</p>
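<p>A rough, untested sketch of the StatefulSet route (the image is a placeholder): each replica gets its own PVC named <code>&lt;template-name&gt;-&lt;pod-name&gt;</code>, so every pod can mount the same path while still writing to its own disk, which removes the need for the pod name inside <code>mountPath</code>:</p> <pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: confluentinc/cp-kafka   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /kafka            # same path in every pod, different PVC behind it
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
</code></pre>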
<p>I am a little unsure about how to ensure I do not have overlapping CIDRs when using kops to create multiple clusters.</p> <p>I know it's possible to specify the <code>--cluster-cidr</code> when using the <a href="https://kubernetes.io/docs/admin/kube-controller-manager/" rel="nofollow noreferrer">kube-control-manager</a>, but I can't seem to find a way of doing this when using kops.</p>
<p>First, check which CIDRs you are already using:</p> <pre><code>aws ec2 describe-vpcs | jq -cr '.Vpcs[] | [.CidrBlock, .VpcId, .Tags[0].Value]'
</code></pre> <p>Then, before you create the cluster, edit the main config:</p> <pre><code>kops edit cluster $NAME
</code></pre> <p>Change <code>networkCIDR</code>, <code>nonMasqueradeCIDR</code> and <code>subnets.cidr</code>.</p> <p>Note that <code>subnets.cidr</code> MUST be inside the <code>networkCIDR</code> range.</p> <p>Also note that <code>networkCIDR</code> and <code>nonMasqueradeCIDR</code> MUST NOT overlap.</p> <p>Example of a valid config:</p> <pre><code>  networkCIDR: 10.80.0.0/16
  nonMasqueradeCIDR: 10.81.0.0/16
  subnets:
  - cidr: 10.80.0.0/24
</code></pre>
<p>Is there any way to set per-volume permissions/ownership in Kubernetes declaratively?</p> <p>Usecase:</p> <ul> <li>a pod is composed of two containers, running as two distinct users/groups, both of them non-root, and are unable to <code>sudo</code></li> <li>the containers mount a volume each, and need to create files in these volumes (e.g. both of them want to write logs)</li> </ul> <p>We know that we can use <code>fsGroup</code>, however that is a pod-level declaration. So even if we pick fsGroup equal to user in first container, then we are going to have permission issues in the other one. (ref: <a href="https://stackoverflow.com/questions/43544370/kubernetes-how-to-set-volumemount-user-group-and-file-permissions">Kubernetes: how to set VolumeMount user group and file permissions</a>)</p>
<p>One solution is to <strong>use an init container to change the permissions of the mounted directories</strong>, as sketched below.</p> <p>The init container would need to mount both volumes (one from each container) and do the needed <code>chown</code>/<code>chmod</code> operations.</p> <p>Drawbacks:</p> <ul> <li>an extra container that needs to be aware of the other containers' specifics (i.e. uid/gid)</li> <li>the init container needs to run as root to perform <code>chown</code></li> </ul>
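<p>A rough, untested sketch of that pattern (uids 1000 and 2000 and the emptyDir volumes are stand-ins for your real users and volumes):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-writers
spec:
  initContainers:
  - name: fix-perms
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /data/first &amp;&amp; chown -R 2000:2000 /data/second"]
    volumeMounts:
    - name: first-logs
      mountPath: /data/first
    - name: second-logs
      mountPath: /data/second
  containers:
  - name: first
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      runAsUser: 1000
    volumeMounts:
    - name: first-logs
      mountPath: /var/log/app
  - name: second
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      runAsUser: 2000
    volumeMounts:
    - name: second-logs
      mountPath: /var/log/app
  volumes:
  - name: first-logs
    emptyDir: {}
  - name: second-logs
    emptyDir: {}
</code></pre>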
<p>I have a Node app that's composed of several independent modules, talking to each other over AMQP. This app is started by an index.js file that instatiates every other index.js in the different folders of my project, which in turn instantiate the actual modules.</p> <p>I've been reading about microservices and I'd like to convert my application to Docker containers. I've found examples of how to convert a simple Node app to Docker, but I want to separate my app so that every module is in an independent Node container (this is because each module is independent and doesn't depend on other modules to work. They receive work from the message queue, and put results on the message queue).</p> <p>What I can't find is information on how I should organize and deploy my code. Should I have a different Node project (with separate packages.json) for each of my modules? Or should I have a single Node project for all my modules and deploy each of them individually?</p> <p>After I have my project organized, is there a script tool that will generate (build* and deploy) each of my modules to its own container? All examples I've found so far are "hello world" level samples that just pack one app.</p> <p>During development, will I have to deploy new containers for each change I test?</p> <p>*: build because I use ES6 and I have to use Babel.</p>
<p>There are several layers that you need to get through.</p> <p>First, you need to containerize each microservice into its own container image. Usually you will use Docker for that.</p> <p>Each app will have a separate Dockerfile, which you will use to build a Docker image; you push the images to a Docker image registry, and whoever wants to run your app pulls them from there.</p> <p>As for the codebase organisation, you can have one repository with all your microservices, but you will need multiple Dockerfiles to create images for all of them.</p> <p>Then you need to orchestrate the containers created from your images (run them). If you want your app to run on one host, you may get away with Docker Compose, which lets you define which containers are run, and in what order, in a single YAML manifest file. Docs are here: <a href="https://docs.docker.com/compose/" rel="nofollow noreferrer">https://docs.docker.com/compose/</a></p> <p>If you want to run your app in a Kubernetes cluster, you will want to create a k8s Deployment (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a>) which will run your containers as pods on your cluster nodes.</p>
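<p>To make the Docker Compose option concrete, a hedged sketch of a compose file for two of the modules plus the AMQP broker; the service and folder names are placeholders for your actual modules:</p> <pre><code>version: "3"
services:
  rabbitmq:
    image: rabbitmq:3
  worker-a:
    build: ./worker-a        # folder containing that module's Dockerfile
    environment:
      AMQP_URL: amqp://rabbitmq
    depends_on:
      - rabbitmq
  worker-b:
    build: ./worker-b
    environment:
      AMQP_URL: amqp://rabbitmq
    depends_on:
      - rabbitmq
</code></pre>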
<p>When a container in a Pod is created in a Kubernetes cluster with a limit and request set, how aware can that container be of those limits and requests? Would an application running inside the container be able to get these limits and requests to, for example, reduce the amount of resources it uses if the limits and requests were particularly low?</p> <ul> <li>Kubernetes version: 1.8</li> <li>Container runtime: Docker</li> <li>Docker version: 1.12.6</li> </ul>
<p>The <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">Downwards API</a> can be used to pass the requests and limits to the container process as environment variables</p>
<p>I'm wondering how I can install a package inside the minikube VM. I need some tools. I have tried the /bin/toolbox container, but it does not have an internet connection.</p> <pre><code>[root@docker-fedora-24 ~]# dnf update --verbose
cachedir: /var/cache/dnf
DNF version: 1.1.9
Cannot download 'https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&amp;arch=x86_64': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=updates-released-f24&amp;arch=x86_64 [Could not resolve host: mirrors.fedoraproject.org].
Error: Failed to synchronize cache for repo 'updates'
</code></pre> <p>I have tried the same toolbox script on my computer and it works properly.</p> <p>What configuration parameters am I missing in minikube or systemd-nspawn? Or how can I cook a customized minikube VM?</p> <p>Thanks a lot</p>
<p>You can run minikube without a VM, on your local Docker (if you use Linux):</p> <p><code>minikube start --vm-driver=none</code></p> <p>As an alternative, run <strong>toolbox</strong> with <code>docker run --net=host ...</code> to make the container's network more <em>transparent</em>. Troubleshoot your internet connection with <code>nslookup</code>, <code>traceroute</code>/<code>tracepath</code>, <code>curl -v</code>, <code>ifconfig</code>. <a href="http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch04_:_Simple_Network_Troubleshooting#.WfY1xGi0OUk" rel="nofollow noreferrer">http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch04_:_Simple_Network_Troubleshooting#.WfY1xGi0OUk</a></p>