<p>Is there any known Java API for using Kubernetes Helm (something like the great KubernetesClient from fabric8)? I am trying to issue Helm commands from Java but I couldn't find anything that actually works.</p> <p>Thanks, Nir</p>
<p>I've written <a href="https://microbean.github.io/microbean-helm" rel="noreferrer">microbean-helm</a>. This project:</p> <ol> <li>Checks out the <code>.proto</code> files from the Helm project…</li> <li>…generates their gRPC Java bindings…</li> <li>…and adds a couple of utility classes.</li> </ol> <p>The end result is that Tiller, the server-side component of Helm that does all the heavy lifting like chart installation, etc., is addressable and drivable from Java. I am guessing that that is what you really want.</p> <p>You may follow along at its <a href="https://microbean.github.io/microbean-helm/" rel="noreferrer">Github repository</a>.</p>
<p>I am running a Kubernetes 1.5.3 cluster on IBM Bluemix and I would like to get the pods' resource utilization (memory and CPU) as raw data points. Does Kubernetes expose such an API?</p> <blockquote> <p>➜ bluemix git:(master) ✗ k cluster-info Kubernetes master is running at <a href="https://x:x" rel="nofollow noreferrer">https://x:x</a></p> <p>Heapster is running at <a href="https://x:x/api/v1/proxy/namespaces/kube-system/services/heapster" rel="nofollow noreferrer">https://x:x/api/v1/proxy/namespaces/kube-system/services/heapster</a></p> <p>KubeDNS is running at <a href="https://x:x/api/v1/proxy/namespaces/kube-system/services/kube-dns" rel="nofollow noreferrer">https://x:x/api/v1/proxy/namespaces/kube-system/services/kube-dns</a></p> <p>kubernetes-dashboard is running at <a href="https://x:x/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard" rel="nofollow noreferrer">https://x:x/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard</a></p> </blockquote>
<p>You can use <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer">heapster</a> or <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> to achieve this. In many Kubernetes deployments <code>heapster</code> is already installed. Both can be easily deployed in-cluster.</p>
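<p>Since Heapster is already running in the cluster above, one way to pull raw data points is through the API server proxy and Heapster's model API. This is only a sketch: the <code>default</code> namespace, <code>&lt;pod-name&gt;</code> placeholder and local proxy port are assumptions, and the model API paths may differ between Heapster versions.</p> <pre><code># Open a local proxy to the API server
kubectl proxy --port=8001 &amp;

# List the metrics Heapster exposes for a pod
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/default/pods/&lt;pod-name&gt;/metrics/

# Raw CPU and memory samples (timestamped data points) for that pod
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/default/pods/&lt;pod-name&gt;/metrics/cpu/usage_rate
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/default/pods/&lt;pod-name&gt;/metrics/memory/usage
</code></pre>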
<p>I'm deploying traefik to my kubernetes cluster using helm. Here's what I have at the moment:</p> <pre><code>helm upgrade --install load-balancer --wait --set ssl.enabled=true,ssl.enforced=true,acme.enabled=true,[email protected] stable/traefik </code></pre> <p>I'm trying to configure letsencrypt. According to this <a href="https://docs.traefik.io/user-guide/examples/#lets-encrypt-support" rel="nofollow noreferrer">documentation</a> - you add the domains to the bottom of the .toml file.</p> <p>Looking at the code for the helm <a href="https://github.com/kubernetes/charts/blob/master/stable/traefik/templates/configmap.yaml" rel="nofollow noreferrer">chart</a>, there's no provision for such configuration.</p> <p>Is there another way to do this or do I need to fork the chart to create my own variation of the .toml file?</p>
<p>Turns out this is the chicken-and-egg problem described <a href="https://github.com/kubernetes/charts/issues/889" rel="noreferrer">here</a>.</p> <p>For the helm chart, if <code>acme.enabled</code> is set to <code>true</code>, then Traefik will automatically generate and serve certificates for domains configured in Kubernetes ingress rules. This is the purpose of the <code>onHostRule = true</code> <a href="https://github.com/kubernetes/charts/blob/master/stable/traefik/templates/configmap.yaml#L55" rel="noreferrer">line</a> in the yaml file (referenced above).</p> <p>To use Traefik with Let's Encrypt, we have to create an A record in our DNS server that points to the IP address of our load balancer, which we can't do until Traefik is up and running. However, this configuration needs to exist <strong>before</strong> Traefik starts.</p> <p>The only solution (at this stage) is to kill the first Pod after the A record configuration has propagated.</p>
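<p>As a rough sketch of that last step: the label selector below assumes the default labels the stable/traefik chart applies to its pods and the <code>load-balancer</code> release name from the question, so check the actual labels first with <code>kubectl get pods --show-labels</code>.</p> <pre><code># Find the Traefik pod created by the release
kubectl get pods -l app=traefik,release=load-balancer

# Once the A record has propagated, delete it; the Deployment recreates it
# immediately and the ACME registration should succeed on the new pod
kubectl delete pod -l app=traefik,release=load-balancer
</code></pre>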
<p>I have a question regarding Kubernetes YAML string operations.</p> <p>I need to set an env variable based on the hostname of the container that is deployed and append a port number to this variable.</p> <pre><code>env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
</code></pre> <blockquote> <p>How do I create another env variable that uses MY_POD_NAME and makes it look like this: uri://$MY_POD_NAME:9099/ ?</p> </blockquote> <p>This has to be defined as an env variable. Are there string operations allowed in Kubernetes YAML files?</p>
<p>You can do something like</p> <pre><code>- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_URI
  value: "uri://$(MY_POD_NAME):9099/"
</code></pre> <p>We have been using this since K8s 1.4.</p> <p><code>$()</code> is processed by k8s itself; it does not work everywhere, but it works for env variables.</p> <p>If your container contains bash, you can also leverage bash variable expansion:</p> <pre><code>"command": ["/bin/bash"],
"args": [
  "-c",
  "MY_POD_URI_BASH=uri://${MY_POD_NAME}:9099/ originalEntryPoint.sh"
],
</code></pre> <p><code>${}</code> is not touched by k8s, but evaluated later in the container by bash. If you have a chance, prefer the first option with <code>$()</code>.</p> <p><strong>note</strong>: order matters in the declaration. In the example above, if <code>MY_POD_NAME</code> is defined later in the env array, the expansion will not work.</p>
<p>I have an image running a node.js application. Inside this application I have the method /kill that kills the process ("process.exit();").</p> <p>I have a deployment with three replicas running the above image. I expose the deployment through a service of type NodePort.</p> <p>When I call /kill against one replica it exits and the request fails; then Kubernetes retries the request against the other replicas and all of them exit.</p> <p>This is my service:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: hellokube-service
  labels:
    app: hellokube
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111
  selector:
    app: hellokube
</code></pre> <p>Is it possible to avoid this retry?</p>
<p>It was not Kubernetes that was retrying. It was the browser.</p> <p>My node.js method implementation is:</p> <pre><code>app.get("/kill", function(req, res, next){
  logger.info("Somebody kill me.");
  process.exit();
});
</code></pre> <p>It is a GET and it returns nothing. The browser detects an unexpected loss of connection on an idempotent operation and retries it: <a href="https://mnot.github.io/I-D/httpbis-retry/" rel="nofollow noreferrer">https://mnot.github.io/I-D/httpbis-retry/</a></p> <p>If I return something, or if I change the operation to a POST, it does not happen.</p> <p>I didn't know about this browser behavior.</p> <p>If you invoke the original operation using curl instead of a browser, it does not retry. SoapUI does retry.</p>
<p>I am using the ansible scripts from <code>kargo</code> to build my cluster. I am unable to find where the data is stored in etcd3, despite looking over the verbose logs from the apiserver.</p> <p>Here is what I see the hyperkube apiserver logs:</p> <pre><code>$ docker logs k8s_kube-apiserver.fd19548d_kube-apiserver-kube-master-01_kube-system_2f6ad6b0bf81ca6a0e2b4d499a25fc89_aa25196e [[ SNIP ]] I0127 23:31:55.871267 1 storage_factory.go:242] storing { podtemplates} in v1, reading as __internal from { /registry [https://10.60.68.11:2379 https://10.60.68.39:2379 https://10.60.68.35:2379] /etc/ssl/etcd/ssl/node-kube-master-01-key.pem /etc/ssl/etcd/ssl/node-kube-master-01.pem /etc/ssl/etcd/ssl/ca.pem true 1000 &lt;nil&gt;} I0127 23:31:55.875975 1 storage_factory.go:242] storing { events} in v1, reading as __internal from { /registry [https://10.60.68.11:2379 https://10.60.68.39:2379 https://10.60.68.35:2379] /etc/ssl/etcd/ssl/node-kube-master-01-key.pem /etc/ssl/etcd/ssl/node-kube-master-01.pem /etc/ssl/etcd/ssl/ca.pem true 1000 &lt;nil&gt;} I0127 23:31:55.876169 1 reflector.go:234] Listing and watching *api.PodTemplate from k8s.io/kubernetes/pkg/storage/cacher.go:215 I0127 23:31:55.877950 1 compact.go:55] compactor already exists for endpoints [https://10.60.68.11:2379 https://10.60.68.39:2379 https://10.60.68.35:2379] I0127 23:31:55.878148 1 storage_factory.go:242] storing { limitranges} in v1, reading as __internal from { /registry [https://10.60.68.11:2379 https://10.60.68.39:2379 https://10.60.68.35:2379] /etc/ssl/etcd/ssl/node-kube-master-01-key.pem /etc/ssl/etcd/ssl/node-kube-master-01.pem /etc/ssl/etcd/ssl/ca.pem true 1000 &lt;nil&gt;} I0127 23:31:55.879372 1 compact.go:55] compactor already exists for endpoints [https://10.60.68.11:2379 https://10.60.68.39:2379 https://10.60.68.35:2379] </code></pre> <p>the <code>hyperkube apiserver</code> is started with these arguments:</p> <pre><code>$ docker inspect k8s_kube-apiserver.b6395694_kube-apiserver-kube-master-01_kube-system_2f6ad6b0bf81ca6a0e2b4d499a25fc89_4338b366 [ { "Id": "33c76fa64bbd5d5a656e329cf87ed3707077659c69dc281127751f594460242b", "Created": "2017-01-27T23:35:10.691147667Z", "Path": "/hyperkube", "Args": [ "apiserver", "--advertise-address=10.60.68.23", "--etcd-servers=https://10.60.68.11:2379,https://10.60.68.39:2379,https://10.60.68.35:2379", "--etcd-quorum-read=true", "--etcd-cafile=/etc/ssl/etcd/ssl/ca.pem", "--etcd-certfile=/etc/ssl/etcd/ssl/node-kube-master-01.pem", "--etcd-keyfile=/etc/ssl/etcd/ssl/node-kube-master-01-key.pem", "--insecure-bind-address=127.0.0.1", "--apiserver-count=3", "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota", "--service-cluster-ip-range=10.233.0.0/18", "--service-node-port-range=30000-32767", "--client-ca-file=/etc/kubernetes/ssl/ca.pem", "--basic-auth-file=/etc/kubernetes/users/known_users.csv", "--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem", "--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem", "--token-auth-file=/etc/kubernetes/tokens/known_tokens.csv", "--service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem", "--secure-port=443", "--insecure-port=8080", "--v=4", "--allow-privileged=true", "--cloud-provider=openstack", "--cloud-config=/etc/kubernetes/cloud_config", "--anonymous-auth=False" ], </code></pre> <p>No where does it override the default etcd prefix of <code>/registry</code>. 
I have no idea where apiserver is storing data.</p> <pre><code>$ docker exec -it etcd3 etcdctl --peers https://10.60.68.11:2379 ls /registry Error: 100: Key not found (/registry) [163] </code></pre>
<p>To get keys and values stored in etcd v3 by kubernetes:</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints=http://localhost:2379 get --prefix /registry </code></pre> <p>To get 1 specified key and value from etcd v3, e.g.:</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints=http://localhost:2379 get /registry/services/specs/default/kubernetes </code></pre> <p>Based on: <a href="https://github.com/coreos/etcd/blob/master/Documentation/dev-guide/interacting_v3.md" rel="nofollow noreferrer">https://github.com/coreos/etcd/blob/master/Documentation/dev-guide/interacting_v3.md</a></p>
<p>Is there a way to assign the ingress to a specific node?</p> <p>I know that it is possible to assign a pod to a specific node using <code>nodeSelectors</code> but that is not a valid option for ingress pods according to the spec.</p>
<p>An <code>Ingress</code> is just a logical way to represent how traffic should be routed to a service/pod. Regarding the question, the <code>Ingress Controller</code> is the thing to look at instead.</p> <p>Read more here: <a href="https://github.com/kubernetes/ingress/tree/master/controllers" rel="nofollow noreferrer">ingress controller</a>. An <code>Ingress Controller</code> pod runs like any other pod, so it can be assigned to a specific node, as shown below.</p>
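<p>A minimal sketch, assuming an nginx ingress controller and a node labeled <code>role=edge</code>; the image tag, flags and the <code>default-http-backend</code> service name are illustrative, so take the real manifest from the controller's own docs and just add the <code>nodeSelector</code>:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      nodeSelector:
        role: edge            # schedule the controller only on nodes carrying this label
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.8
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
</code></pre>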
<p>I installed Kubernetes on my Ubuntu machine. For some debugging purposes I need to look at the kubelet log file (if there is any such file). </p> <p>I have looked in <code>/var/logs</code> but I couldn't find a such file. Where could that be?</p>
<p>If you run kubelet using <code>systemd</code>, then you could use the following method to see kubelet's logs:</p> <pre><code># journalctl -u kubelet </code></pre>
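<p>A few commonly useful variants, assuming the same systemd setup:</p> <pre><code>journalctl -u kubelet -f                           # follow the kubelet log live
journalctl -u kubelet --since today                # only today's entries
journalctl -u kubelet --no-pager | grep -i error   # quick scan for errors
</code></pre>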
<p>We need to get error events (like pod x is stuck in a crash loop, etc.) from Kubernetes itself. On Google Container Engine we cannot find those logs anywhere and therefore cannot add monitoring to them.</p> <p>Those logs are usually provided by the API server etc., which is not included in Google Logging. Is there a way of achieving what we need? Additionally, it would be good to have those K8s errors in the GCE Error Reporting.</p>
<p>Mmm... <code>kubectl describe pod</code> and <code>kubectl logs pod</code> should work for you. What I learned about using them came from</p> <p><a href="https://kukulinski.com/10-most-common-reasons-kubernetes-deployments-fail-part-1/" rel="nofollow noreferrer">https://kukulinski.com/10-most-common-reasons-kubernetes-deployments-fail-part-1/</a></p> <p>and references therein</p>
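<p>You can also pull the cluster-level events the question refers to (crash loops, failed scheduling, etc.) straight from the API server with <code>kubectl</code> and feed them into your own monitoring as a stopgap; a sketch:</p> <pre><code># All recent events in all namespaces, sorted by creation time
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp

# Watch events as they happen (e.g. pipe into your own alerting script)
kubectl get events --all-namespaces -w

# Just the events section for a single pod
kubectl describe pod &lt;pod-name&gt; | sed -n '/Events:/,$p'
</code></pre>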
<p>I'm trying to put some code from my repo using Openshift online. My build compiles fine, but the deployment is failing:</p> <pre><code>error: update acceptor rejected nodejs-mongo-persistent-7: pods for rc "nodejs-mongo-persistent-7" took longer than 600 seconds to become ready </code></pre> <p>Looking at the event monitor, I see these errors:</p> <pre><code>Liveness probe failed: Get http://10.129.127.123:8080/pagecount: dial tcp 10.129.127.123:8080: getsockopt: connection refused Readiness probe failed: Get http://10.129.127.123:8080/pagecount: dial tcp 10.129.127.123:8080: getsockopt: connection refused </code></pre> <p>This error happens about 50 times until everything times out and my pod is killed.</p> <p>I'm not really a server guy, and have never worked with the environment before, so most help resources I just don't understand.</p> <p>Here's a screen cap of the event log: <a href="https://imgur.com/a/yv3fA" rel="nofollow noreferrer">http://imgur.com/a/yv3fA</a></p> <pre><code>From : $ sudo docker pull registry/nodejs-mongo-persistent:latest architecture=x86_64 authoritative-source-url=registry.access.redhat.com build-date=2017-04-21T09:41:19.146364 com.redhat.build-host=ip-10-29-120-133.ec2.internal com.redhat.component=rh-nodejs4-docker com.redhat.deployments-dir=/opt/app-root/src com.redhat.dev-mode=DEV_MODE:false com.redhat.dev-mode.port=DEBUG_PORT:5858 distribution-scope=public io.k8s.description=Platform for building and running Node.js 4 applications io.k8s.display-name=springstead-portfolio/nodejs-mongo-persistent-8:ff0aacc1 io.openshift.build.commit.author=Shawn Springstead &lt;[email protected]&gt; io.openshift.build.commit.date=Mon Jun 19 15:35:17 2017 -0400 io.openshift.build.commit.id=409c93610f0b2b264c84429106dc8bbcf0f3fee0 io.openshift.build.commit.message=correct server info io.openshift.build.commit.ref=master io.openshift.build.image=registry.access.redhat.com/rhscl/nodejs-4-rhel7@sha256:c5b21dc08cf5da8b6b0485147d946d8202f2be211c17bcef3a0fc26570217dd3 io.openshift.build.source-location=https://github.com/SpringsTea/Big-Mac-Index io.openshift.expose-services=8080:http io.openshift.s2i.scripts-url=image:///usr/libexec/s2i io.openshift.tags=builder,nodejs,nodejs4 io.s2i.scripts-url=image:///usr/libexec/s2i release=11.16 summary=Platform for building and running Node.js 4 applications vcs-ref=e688e26c75b1418982bef6a87b9bbacd6d47604c vcs-type=git vendor=Red Hat, Inc. version=4 </code></pre>
<p>First of all, your Pod has to expose the ports that the liveness and readiness probes need; this is done in the Pod configuration.</p> <blockquote> <p>Liveness probes are executed by the kubelet, so all requests are made in the kubelet network namespace.</p> </blockquote> <p>Be sure to run the probes against the local container ports, not the service ports.</p>
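<p>A hedged sketch of what that looks like in the container spec of your deployment config. Port 8080 and the <code>/pagecount</code> path are taken from the error above; the <code>initialDelaySeconds</code> values are assumptions and should match however long the Node.js app actually needs to start:</p> <pre><code>containers:
- name: nodejs-mongo-persistent
  image: registry/nodejs-mongo-persistent:latest
  ports:
  - containerPort: 8080          # probes hit this container port directly, not the service port
  readinessProbe:
    httpGet:
      path: /pagecount
      port: 8080
    initialDelaySeconds: 30      # give the app time to boot before the first check
    timeoutSeconds: 3
  livenessProbe:
    httpGet:
      path: /pagecount
      port: 8080
    initialDelaySeconds: 60
    timeoutSeconds: 3
</code></pre>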
<p>I'm trying to mount a vSphere volume in a pod but I keep getting:</p> <pre><code>vsphere_volume_util.go:123] Cloud provider not initialized properly </code></pre> <p>/etc/kubernetes/environment/vsphere.conf</p> <pre><code>[Global] user="xxxxxx" password="xxxxxx" server="xxxxxx" port="443" insecure-flag="1" datacenter="Frankfurt" datastore="dfrclupoc01-001" #working-dir="dockvols" [Disk] scsicontrollertype=pvscsi </code></pre> <p>In the "vmWare vSphere Web Client" I see:</p> <pre><code>&lt;mltdfrd01.xx.com&gt; &lt;Frankfurt&gt; &lt;dfrclupoc01-001&gt; </code></pre> <p>And under that store I have a folder "dockvols" with a subdirectory "11111111-1111-1111-1111-111111111111".</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: volumePath: "[Frankfurt/dfrclupoc01-001] dockvols/11111111-1111-1111-1111-111111111111/MyVolume.vmdk" fsType: ext4 kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvcmilo1 spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi apiVersion: v1 kind: Pod metadata: name: pod0001 spec: containers: - image: busybox name: pod0001 volumeMounts: - mountPath: /data name: pod-volume volumes: - name: pod-volume persistentVolumeClaim: claimName: pvcmilo1 </code></pre> <p>I tried different volume paths but I think the problem is earlier in the process.</p> <p>Log of the node starting at the moment I create the pod:</p> <pre><code>I0602 05:43:20.781563 84854 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/224c6b51-24fc-11e7-adcd-005056890fa6-default-token-j6vgj" (spec.Name: "default- token-j6vgj") pod "224c6b51-24fc-11e7-adcd-005056890fa6" (UID: "224c6b51-24fc-11e7-adcd-005056890fa6"). I0602 05:43:24.279729 84854 kubelet.go:1781] SyncLoop (ADD, "api"): "pod0001_default(ebe97189-4777-11e7-8979-005056890fa6)" E0602 05:43:24.378657 84854 vsphere_volume_util.go:123] Cloud provider not initialized properly I0602 05:43:24.382952 84854 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/ebe97189-4777-11e7-8979-005056890fa6-pv0001" (spec.Name: " pv0001") pod "ebe97189-4777-11e7-8979-005056890fa6" (UID: "ebe97189-4777-11e7-8979-005056890fa6") I0602 05:43:24.382985 84854 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/ebe97189-4777-11e7-8979-005056890fa6-default-token-zsrfn" (spec.Na me: "default-token-zsrfn") pod "ebe97189-4777-11e7-8979-005056890fa6" (UID: "ebe97189-4777-11e7-8979-005056890fa6") I0602 05:43:24.483237 84854 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/secret/ebe97189-4777-11e7-8979-005056890fa6-default-token-zsrfn" (spec.Name: "default-token- zsrfn") to pod "ebe97189-4777-11e7-8979-005056890fa6" (UID: "ebe97189-4777-11e7-8979-005056890fa6"). E0602 05:43:24.483265 84854 vsphere_volume_util.go:123] Cloud provider not initialized properly I0602 05:43:24.483296 84854 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/vsphere-volume/ebe97189-4777-11e7-8979-005056890fa6-pv0001" (spec.Name: "pv0001") to pod "eb e97189-4777-11e7-8979-005056890fa6" (UID: "ebe97189-4777-11e7-8979-005056890fa6"). 
E0602 05:43:24.492507 84854 mount_linux.go:119] Mount failed: exit status 32 Mounting command: mount Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[dfrclupoc01-001] dockvols/11111111-1111-1111-1111-111111111111/MyVolume.vmdk /var/lib/kubelet/pods/ebe97189-4777-11 e7-8979-005056890fa6/volumes/kubernetes.io~vsphere-volume/pv0001 [bind] Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[dfrclupoc01-001] dockvols/11111111-1111-1111-1111-111111111111/MyVolume.vmdk does not exist </code></pre> <p>Kubernete version: 1.5.2</p> <p>Thanks for any help, Milo</p>
<p>It seems I missed a lot of details:</p> <ul> <li>not only the kubelet needs the cloud config, but also the api-server and the controller-manager (see the sketch after this list)</li> <li>the wwn disk by-id entries were missing; I had to enable them in the vSphere environment for the VM by setting disk.EnableUUID to TRUE</li> <li>remove the working-dir entry; it seems to crash the kubelet...</li> <li>some other details I forgot</li> </ul> <p>See <a href="https://vanderzee.org/linux/article-170620-144221" rel="nofollow noreferrer">https://vanderzee.org/linux/article-170620-144221</a> for details.</p>
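<p>A rough illustration of the first bullet: the same cloud-provider flags have to appear on all three components, not just the kubelet. The config path is the one from the question, the <code>...</code> stands for each component's other flags, and exact flag spellings may differ slightly between Kubernetes versions:</p> <pre><code># kubelet
kubelet ... --cloud-provider=vsphere --cloud-config=/etc/kubernetes/environment/vsphere.conf

# kube-apiserver
kube-apiserver ... --cloud-provider=vsphere --cloud-config=/etc/kubernetes/environment/vsphere.conf

# kube-controller-manager
kube-controller-manager ... --cloud-provider=vsphere --cloud-config=/etc/kubernetes/environment/vsphere.conf
</code></pre>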
<p>I have the following setup:</p> <p>A docker image <code>omg/telperion</code> on docker hub A kubernetes cluster (with 4 nodes, each with ~50GB RAM) and plenty resources</p> <p>I followed tutorials to pull images from dockerhub to kubernetes</p> <pre><code>SERVICE_NAME=telperion DOCKER_SERVER="https://index.docker.io/v1/" DOCKER_USERNAME=username DOCKER_PASSWORD=password DOCKER_EMAIL="[email protected]" # Create secret kubectl create secret docker-registry dockerhub --docker-server=$DOCKER_SERVER --docker-username=$DOCKER_USERNAME --docker-password=$DOCKER_PASSWORD --docker-email=$DOCKER_EMAIL # Create service yaml echo "apiVersion: v1 \n\ kind: Pod \n\ metadata: \n\ name: ${SERVICE_NAME} \n\ spec: \n\ containers: \n\ - name: ${SERVICE_NAME} \n\ image: omg/${SERVICE_NAME} \n\ imagePullPolicy: Always \n\ command: [ \"echo\",\"done deploying $SERVICE_NAME\" ] \n\ imagePullSecrets: \n\ - name: dockerhub" &gt; $SERVICE_NAME.yaml # Deploy to kubernetes kubectl create -f $SERVICE_NAME.yaml </code></pre> <p>Which results in the pod going into a <code>CrashLoopBackoff</code></p> <p><code>docker run -it -p8080:9546 omg/telperion</code> works fine.</p> <p>So my question is <strong>Is this debug-able?, if so, how do i debug this?</strong></p> <p>Some logs:</p> <pre><code>kubectl get nodes NAME STATUS AGE VERSION k8s-agent-adb12ed9-0 Ready 22h v1.6.6 k8s-agent-adb12ed9-1 Ready 22h v1.6.6 k8s-agent-adb12ed9-2 Ready 22h v1.6.6 k8s-master-adb12ed9-0 Ready,SchedulingDisabled 22h v1.6.6 </code></pre> <p>.</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE telperion 0/1 CrashLoopBackOff 10 28m </code></pre> <p>.</p> <pre><code>kubectl describe pod telperion Name: telperion Namespace: default Node: k8s-agent-adb12ed9-2/10.240.0.4 Start Time: Wed, 21 Jun 2017 10:18:23 +0000 Labels: &lt;none&gt; Annotations: &lt;none&gt; Status: Running IP: 10.244.1.4 Controllers: &lt;none&gt; Containers: telperion: Container ID: docker://c2dd021b3d619d1d4e2afafd7a71070e1e43132563fdc370e75008c0b876d567 Image: omg/telperion Image ID: docker-pullable://omg/telperion@sha256:c7e3beb0457b33cd2043c62ea7b11ae44a5629a5279a88c086ff4853828a6d96 Port: Command: echo done deploying telperion State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Wed, 21 Jun 2017 10:19:25 +0000 Finished: Wed, 21 Jun 2017 10:19:25 +0000 Ready: False Restart Count: 3 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-n7ll0 (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-n7ll0: Type: Secret (a volume populated by a Secret) SecretName: default-token-n7ll0 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: &lt;none&gt; Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 default-scheduler Normal Scheduled Successfully assigned telperion to k8s-agent-adb12ed9-2 1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Created Created container with id d9aa21fd16b682698235e49adf80366f90d02628e7ed5d40a6e046aaaf7bf774 1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Started Started container with id d9aa21fd16b682698235e49adf80366f90d02628e7ed5d40a6e046aaaf7bf774 1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Started Started container with id c6c8f61016b06d0488e16bbac0c9285fed744b933112fd5d116e3e41c86db919 1m 1m 1 kubelet, 
k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Created Created container with id c6c8f61016b06d0488e16bbac0c9285fed744b933112fd5d116e3e41c86db919 1m 1m 2 kubelet, k8s-agent-adb12ed9-2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "telperion" with CrashLoopBackOff: "Back-off 10s restarting failed container=telperion pod=telperion_default(f4e36a12-566a-11e7-99a6-000d3aa32f49)" 1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Started Started container with id 3b911f1273518b380bfcbc71c9b7b770826c0ce884ac876fdb208e7c952a4631 1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Created Created container with id 3b911f1273518b380bfcbc71c9b7b770826c0ce884ac876fdb208e7c952a4631 1m 1m 2 kubelet, k8s-agent-adb12ed9-2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "telperion" with CrashLoopBackOff: "Back-off 20s restarting failed container=telperion pod=telperion_default(f4e36a12-566a-11e7-99a6-000d3aa32f49)" 1m 50s 4 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Pulling pulling image "omg/telperion" 47s 47s 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Started Started container with id c2dd021b3d619d1d4e2afafd7a71070e1e43132563fdc370e75008c0b876d567 1m 47s 4 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Pulled Successfully pulled image "omg/telperion" 47s 47s 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Created Created container with id c2dd021b3d619d1d4e2afafd7a71070e1e43132563fdc370e75008c0b876d567 1m 9s 8 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Warning BackOff Back-off restarting failed container 46s 9s 4 kubelet, k8s-agent-adb12ed9-2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "telperion" with CrashLoopBackOff: "Back-off 40s restarting failed container=telperion pod=telperion_default(f4e36a12-566a-11e7-99a6-000d3aa32f49)" </code></pre> <p>Edit 1: Errors reported by kubelet on master:</p> <pre><code>journalctl -u kubelet </code></pre> <p>.</p> <pre><code>Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: E0621 10:28:49.798140 1809 fsHandler.go:121] failed to collect filesystem stats - rootDiskErr: du command failed on /var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce with output Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: , stderr: du: cannot access '/var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce/merged/proc/13122/task/13122/fd/4': No such file or directory Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: du: cannot access '/var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce/merged/proc/13122/task/13122/fdinfo/4': No such file or directory Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: du: cannot access '/var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce/merged/proc/13122/fd/3': No such file or directory Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: du: cannot access '/var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce/merged/proc/13122/fdinfo/3': No such file or directory Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: - exit status 1, rootInodeErr: &lt;nil&gt;, extraDiskErr: &lt;nil&gt; </code></pre> <p>Edit 2: more logs</p> <pre><code>kubectl logs $SERVICE_NAME -p done deploying telperion </code></pre>
<p>You can access the logs of your pods with</p> <pre><code>kubectl logs [podname] -p </code></pre> <p>the -p option will read the logs of the previous (crashed) instance</p> <p>If the crash comes from the application, you should have useful logs in there.</p>
<p>I'm trying to deploy a grafana instance inside Kubernetes (server 1.6.4) in GCE. I'm using the following manifests:</p> <p><strong>Deployment</strong> (<a href="https://pastebin.com/HL5KqXtT" rel="nofollow noreferrer">full version</a>):</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: grafana spec: replicas: 1 template: metadata: labels: name: grafana spec: initContainers: … containers: - name: grafana image: grafana/grafana readinessProbe: httpGet: path: /login port: 3000 … </code></pre> <p><strong>Service</strong>:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: grafana spec: selector: name: grafana ports: - protocol: TCP port: 3000 type: NodePort </code></pre> <p><strong>Ingress</strong>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: grafana spec: tls: - secretName: grafana.example.com backend: serviceName: grafana servicePort: 3000 </code></pre> <hr> <p>It turns out that grafana serves a 302 on <code>/</code> but the default GCE ingress healthcheck expects a 200 on <code>/</code> (<a href="https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#why-does-the-ingress-need-a-different-instance-group-then-the-gke-cluster" rel="nofollow noreferrer">source</a>). As you can see, there is a custom <em>readinessProbe</em> in the <a href="https://pastebin.com/HL5KqXtT" rel="nofollow noreferrer">Deployment</a> (line 22).</p> <p>Once I post these resources to the kube-apiserver, everything is created without error. Concretely, the <em>Ingress</em> gets a public ip4 address but <strong>the healtcheck is set up with the default <code>/</code> path instead of the custom one configured in the <code>readinessProbe</code></strong>. Because of this, I get a 502 if I <code>curl</code> the public ip4 address of the Ingress.</p> <p>The problem is fixable by manually changing the probe path to <code>/login</code> in the GCE console.</p>
<p>Quoting from <a href="https://groups.google.com/g/kubernetes-users/c/lhVfyfCwEyo/m/b187gC84CwAJ" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>The GLBC requires that you define the port (in your case 3000) within the Pod specification.</p> </blockquote> <p>The solution is to declare the port used for the healthcheck in <code>ports</code> besides adding a custom <code>readinessProbe</code>:</p> <pre><code>containers: - name: grafana readinessProbe: httpGet: path: /login port: 3000 ports: - name: grafana containerPort: 3000 … </code></pre>
<p>I have a container (Service C) which listens for certain user events and, based on the input, needs to spawn one or more instances of another container (Service X).</p>
<p>From your use case description, it looks like a Deployment is what you are looking for: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a>. By using Deployments you can dynamically scale the number of instances of the pod.</p>
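<p>For instance, Service C could create a Deployment for Service X once and then adjust its replica count on demand, either through a Kubernetes API client or simply by shelling out to <code>kubectl</code>. A sketch with hypothetical names (<code>service-x</code>, <code>myrepo/service-x:latest</code>):</p> <pre><code># Create the Deployment for service X (one-off)
kubectl run service-x --image=myrepo/service-x:latest --replicas=1

# Later, scale it up or down in response to user events
kubectl scale deployment service-x --replicas=5
</code></pre>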
<p>I have a simple Kubernetes deployment. It consists of a single, unreplicated container. There is no service exposing the container. The container has a health check which checks that it is correctly configured and can communicate with its external dependencies. I update the deployment using <code>kubectl apply</code>.</p> <p>After updating the deployment, I would like to check that the new version has been rolled out completely and is passing its health check. I can't work out how to configure my deployment to achieve that.</p> <p>I have tried various combinations of liveness and readiness probes, deployment strategies and ready/progress deployment properties. I've tried inspecting the status of the deployment, its pods and the rollout command. All to no avail.</p> <p>I get the impression that I should be looking at deployment conditions to understand the status, but I can't find clear documentation of what those conditions are or how to bring them into being.</p>
<p>You have not mentioned your deployment strategy. But one generic problem I have seen with k8s deployments is that if the application fails to boot up, it will be restarted infinitely. So you might have to <code>kubectl delete deploy/******</code> explicitly after detecting the failed deployment status. (There is also <code>failureThreshold</code> for probes, but I haven't tried it yet.)</p> <p>Case <strong>Recreate</strong>:</p> <p>You can use the combination of <code>progressDeadlineSeconds</code> and <code>readinessProbe</code>. Let's say your application needs 60 seconds to boot up/spin up. You should configure <code>progressDeadlineSeconds</code> to a bit more than 60 seconds, just to be on the safe side. Now, after running your <code>kubectl apply -f my-deploy.yaml</code>, run the <code>kubectl rollout status deploy/my-deployment</code> command. For me it looks like this:</p> <pre><code>12:03:37 kubectl apply -f deploy.yaml
12:03:38 deployment "my-deployment" configured
12:04:18 kubectl rollout status deploy/my-deployment
12:04:18 Waiting for rollout to finish: 0 of 1 updated replicas are available (minimum required: 1)...
12:04:44 deployment "my-deployment" successfully rolled out
</code></pre> <p>Once you execute the <code>rollout</code> command, kubectl will keep waiting till it has an answer. It also returns a proper exit code (<code>echo $?</code>), so you can check this programmatically and delete the deployment, as sketched below.</p> <p>Case <strong>rollingUpdate</strong>:</p> <p>If you have multiple replicas, then the above mentioned trick should work. If you have just one replica, then use <code>maxUnavailable: 0</code> and <code>maxSurge: 1</code> along with the above config.</p>
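<p>A minimal sketch of that programmatic check (file and deployment names are the ones used above):</p> <pre><code>#!/bin/sh
kubectl apply -f my-deploy.yaml

# rollout status blocks until success or until progressDeadlineSeconds is exceeded
if ! kubectl rollout status deploy/my-deployment; then
  echo "rollout failed, cleaning up"
  kubectl delete deploy/my-deployment
  exit 1
fi
echo "rollout succeeded"
</code></pre>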
<p>Is there a way to pass a boolean value for spec.container.env.value? I want to override, with helm, a boolean env variable in a docker parent image (<a href="https://github.com/APSL/docker-thumbor" rel="noreferrer">https://github.com/APSL/docker-thumbor</a>): UPLOAD_ENABLED</p> <p>I made a simpler test. If you try the following yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: true
</code></pre> <p>and try to create it with kubernetes, you get the following error:</p> <pre><code>kubectl create -f envars.yaml </code></pre> <p>the error:</p> <pre><code>error: error validating "envars.yaml": error validating data: expected type string, for field spec.containers[0].env[0].value, got bool; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>with validate=false:</p> <pre><code>Error from server (BadRequest): error when creating "envars.yaml": Pod in version "v1" cannot be handled as a Pod: [pos 192]: json: expect char '"' but got char 't' </code></pre> <p>It doesn't work with integer values either.</p>
<p><code>spec.container.env.value</code> is defined as a <code>string</code>; see here: <a href="https://kubernetes.io/docs/api-reference/v1.6/#envvar-v1-core" rel="noreferrer">https://kubernetes.io/docs/api-reference/v1.6/#envvar-v1-core</a></p> <p>You'd have to cast/convert/coerce the value to a boolean in your container when using it.</p>
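<p>Concretely, quoting the value makes the manifest validate, and in a Helm chart you can keep a boolean in <code>values.yaml</code> and stringify it at render time. A sketch (the env name is taken from the question, <code>uploadEnabled</code> is a hypothetical values key):</p> <pre><code># Plain manifest: quote it so the API server receives a string
env:
- name: UPLOAD_ENABLED
  value: "true"
</code></pre> <pre><code># Helm template variant: coerce whatever is in values.yaml to a quoted string
env:
- name: UPLOAD_ENABLED
  value: {{ .Values.uploadEnabled | quote }}
</code></pre>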
<p>Dockerfile:</p> <pre><code>FROM openjdk:8-alpine RUN apk update &amp;&amp; \ apk add curl bash procps ENV SPARK_VER 2.1.1 ENV HADOOP_VER 2.7 ENV SPARK_HOME /opt/spark # Get Spark from US Apache mirror. RUN mkdir -p /opt &amp;&amp; \ cd /opt &amp;&amp; \ curl http://www.us.apache.org/dist/spark/spark-${SPARK_VER}/spark-${SPARK_VER}-bin-hadoop${HADOOP_VER}.tgz | \ tar -zx &amp;&amp; \ ln -s spark-${SPARK_VER}-bin-hadoop${HADOOP_VER} spark &amp;&amp; \ echo Spark ${SPARK_VER} installed in /opt ADD start-common.sh start-worker.sh start-master.sh / RUN chmod +x /start-common.sh /start-master.sh /start-worker.sh ENV PATH $PATH:/opt/spark/bin WORKDIR $SPARK_HOME EXPOSE 4040 6066 7077 8080 CMD ["spark-shell", "--master", "local[2]"] </code></pre> <p>spark-master-service.yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: spark-master labels: name: spark-master spec: type: NodePort ports: # the port that this service should serve on - name: webui port: 8080 targetPort: 8080 - name: spark port: 7077 targetPort: 7077 - name: rest port: 6066 targetPort: 6066 selector: name: spark-master </code></pre> <p>spark-master.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: name: spark-master name: spark-master spec: replicas: 1 template: metadata: labels: name: spark-master spec: containers: - name : spark-master imagePullPolicy: "IfNotPresent" image: spark-2.1.1-bin-hadoop2.7 name: spark-master ports: - containerPort: 8080 - containerPort: 7077 - containerPort: 6066 command: ["/start-master.sh"] </code></pre> <p>Error: Back-off restarting failed docker container Error syncing pod, skipping: failed to "StartContainer" for "spark-master" with CrashLoopBackOff: "Back-off 10s restarting failed container=spark-master pod=spark-master-286530801-7qv4l_default(34fecb5e-55eb-11e7-994e-525400f3f8c2)" </p> <p>Any idea? Thanks</p> <p><strong>UPDATE</strong></p> <pre><code> 2017-06-20T19:43:56.300935235Z starting org.apache.spark.deploy.master.Master, logging to /opt/spark/logs/spark--org.apache.spark.deploy.master.Master-1-spark-master-1682838347-9927h.out 2017-06-20T19:44:03.414011228Z failed to launch: nice -n 0 /opt/spark/bin/spark-class org.apache.spark.deploy.master.Master --host spark-master-1682838347-9927h --port 7077 --webui-port 8080 --ip spark-master --port 7077 2017-06-20T19:44:03.418640516Z nohup: can't execute '--': No such file or directory 2017-06-20T19:44:03.419814788Z full log in /opt/spark/logs/spark--org.apache.spark.deploy.maste 2017-06-20T19:43:50.343251857Z starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark--org.apache.spark.deploy.worker.Worker-1-spark-worker-243125562-0lh9k.out 2017-06-20T19:43:57.450929613Z failed to launch: nice -n 0 /opt/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://spark-master:7077 2017-06-20T19:43:57.465409083Z nohup: can't execute '--': No such file or directory 2017-06-20T19:43:57.466372593Z full log in /opt/spark/logs/spark--org.apache.spark.deploy.worker.Worker-1-spark-worker-243125562-0lh9k.out r.Master-1-spark-master-1682838347-9927h.out </code></pre>
<p>The version of nohup that ships with Alpine does not support '--'. You need to install a GNU version of nohup through the coreutils Alpine package in your Dockerfile, like this:</p> <p><code>RUN apk --update add coreutils </code></p> <p>Alternatively, create your own start script that runs the class directly and run that instead:</p> <p><code>/usr/spark/bin/spark-submit --class org.apache.spark.deploy.master.Master $SPARK_MASTER_INSTANCE --port $SPARK_MASTER_PORT --webui-port $SPARK_WEBUI_PORT </code></p>
<p>I am using kubectl 1.6.4:</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I am trying to follow along with <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/#creating-the-frontend" rel="nofollow noreferrer">Connect a Front End to a Back End Using a Service</a>, and am attempting to create this deployment (deployment.yml):</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 1 template: metadata: labels: app: hello tier: frontend track: stable spec: containers: - name: nginx image: "gcr.io/google-samples/hello-frontend:1.0" lifecycle: preStop: exec: command: ["/usr/sbin/nginx","-s","quit"] </code></pre> <p>Upon <code>kubectl create -f deployment.yml</code>, I get the following error:</p> <blockquote> <p>error: error validating "/path/to/deployment.yml": error validating data: unexpected end of JSON input; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p>However, this file is valid.</p> <p>I noticed in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>Deployment</code>s documentation</a> that <code>Deployment</code>s before 1.6.0 used <code>apiVersion: extensions/v1beta1</code> instead of <code>apiVersion: app/v1beta1</code>. So just for kicks I replaced <code>apiVersion: app/v1beta1</code> with <code>apiVersion: extensions/v1beta1</code>, even though I am running 1.6.4. To my surprise, it worked.</p> <p>What's wrong? Why do I need to use the old, pre-1.6.0 <code>apiVersion</code> line even though I am on 1.6.4?</p>
<p>Try deleting <code>~/.kube/schema</code> (I deleted <code>~/.kube/cache</code> as well, but I am pretty sure that had no effect). In my case, <code>~/.kube/schema</code> had several schemas:</p> <pre><code>$ l schema/ total 0 drwxr-xr-x 6 dmitry staff 204B Jan 9 11:23 v1.4.7 drwxr-xr-x 8 dmitry staff 272B Jan 11 00:13 v1.5.1 drwxr-xr-x 5 dmitry staff 170B Jun 17 15:05 . drwxr-xr-x 7 dmitry staff 238B Jun 22 19:32 v1.6.4 drwxr-xr-x 5 dmitry staff 170B Jun 22 22:47 .. </code></pre> <p>and kubectl was apparently using an old schema. <a href="https://github.com/kubernetes/kubernetes/issues/47937#issuecomment-310540682" rel="nofollow noreferrer">This might be a bug</a>. </p> <p>When you delete <code>~/.kube/schema</code>, the next time you attempt to create the yml file, kubectl will repopulate that directory, but only with the latest valid schema. And it will work.</p>
<p>I've installed kubernetes 1.2.0 with the following configuration</p> <pre><code>export nodes="[email protected] [email protected]" export role="ai i" export NUM_NODES=2 export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24 export FLANNEL_NET=172.16.0.0/16 export KUBE_PROXY_EXTRA_OPTS="--proxy-mode=iptables" </code></pre> <p>I've created a nginx pod and expose with load balancer and external IP address</p> <pre><code>kubectl expose pod my-nginx-3800858182-6qhap --external-ip=10.0.0.50 --port=80 --target-port=80 </code></pre> <p>I'm using kubernetes on bare metal so i've assigned 10.0.0.50 ip to master node.</p> <p>If i try curl 10.0.0.50 (from outside kubernetes) and use tcpdump on nginx pod i see traffic, the source ip is always from the kubernetes master node</p> <pre><code>17:30:55.470230 IP 172.16.60.1.43030 &gt; 172.16.60.2.80: ... 17:30:55.470343 IP 172.16.60.2.80 &gt; 172.16.60.1.43030: ... </code></pre> <p>i'm using mode-proxy=iptables. and need to get the actual source ip. what am i doing wrong ?</p>
<p>This was added as an annotation in Kubernetes 1.5 (docs <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer#annotation-to-modify-the-loadbalancer-behavior-for-preservation-of-source-ip" rel="nofollow noreferrer">here</a>).</p> <p>In 1.7, it has graduated to GA, so you can specify the load balancing policy on a Service with <code>spec.externalTrafficPolicy</code> field (docs <a href="https://github.com/MrHohn/kubernetes.github.io/blob/9cbed0cc6d1725c6eca4513fb2c8ed52b3bce04e/docs/tasks/access-application-cluster/create-external-load-balancer.md" rel="nofollow noreferrer">here</a>):</p> <pre><code>{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "example-service", }, "spec": { "ports": [{ "port": 8765, "targetPort": 9376 }], "selector": { "app": "example" }, "type": "LoadBalancer", "externalTrafficPolicy": "Local" } } </code></pre>
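<p>Since the question is on a pre-1.7 cluster, here is a hedged sketch of the annotation form documented for 1.5/1.6; the service name and ports mirror the example above:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: example-service
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal   # preserve the client source IP
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 8765
    targetPort: 9376
</code></pre>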
<p>Can a pod have two containers of different privilege level? For example Container 'A' is a regular container and Container 'B' is a privileged container that can alter the network stack. Both containers will be packaged in a single pod. Now if the pod's privileged parameter is set true, would it mean that both of the containers are now privileged? or otherwise?</p>
<p>The pod-level security context applies to all containers in the pod, but the per-container security context can override the settings for the individual container<a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/pod-security-context.md" rel="nofollow noreferrer"> [1]</a>. </p> <p>I'd suggest setting the pod-level default as secure (with minimal privilege) as possible, and only override the privilege setting for the containers that truly need it.</p>
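<p>A minimal sketch of what that looks like (pod, container and image choices are hypothetical): the pod-level <code>securityContext</code> sets the shared defaults, while <code>privileged</code> is a per-container setting, so only container B gets it.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mixed-privilege-pod
spec:
  securityContext:          # pod-level defaults, applied to every container
    runAsUser: 1000
  containers:
  - name: regular-app       # container A: inherits the pod-level settings, stays unprivileged
    image: busybox
    command: ["sleep", "3600"]
  - name: network-tool      # container B: overrides the defaults for itself only
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 0
      privileged: true      # only this container gets elevated privileges
</code></pre>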
<p>What's the best way to get the IP addresses of the other kubernetes pods on a local network?</p> <p>Currently, I'm using the following command and parsing the output: <code>kubectl describe pods</code>.</p> <p>Unfortunately, the command above often takes many seconds to complete (at least 3, and often 30+ seconds) and if a number of requests happen nearly simultaneously, I get 503 style errors. I've built a caching system around this command to cache the IP addresses on the local pod, but when a 10 or so pods wake up and need to create this cache, there is a large delay and often many errors. I feel like I'm doing something wrong. Getting the IP addresses of other pods on a network seems like it should be a straightforward process. So what's the best way to get them?</p> <p>For added details, I'm using Google's kubernetes system on their container engine. Running a standard Ubuntu image. </p> <hr> <p><strong>Context</strong>: To add context, I'm trying to put together a shared memcached between the pods on the cluster. To do that, they all need to know eachother's IP address. If there's an easier way to link pods/instances for the purposes of memcached, that would also be helpful.</p>
<p>Have you tried</p> <pre><code>kubectl get pods -o wide </code></pre> <p>This also returns IP addresses of the pods. Since this does not return ALL information <code>describe</code> returns, this might be faster.</p>
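<p>If you only need the IPs (e.g. to build the memcached server list), asking for just that field is much cheaper than parsing <code>describe</code>. The label selector below is an assumption; use whatever labels your pods actually carry:</p> <pre><code># Pod IPs for all pods with a given label, space separated
kubectl get pods -l app=memcached -o jsonpath='{.items[*].status.podIP}'

# Or one name/IP pair per line for the whole namespace
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
</code></pre>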
<p>This is my cluster info</p> <pre><code>kubectl cluster-info Kubernetes master is running at https://129.146.10.66:6443 Heapster is running at https://129.146.10.66:6443/api/v1/proxy/namespaces/kube-system/services/heapster KubeDNS is running at https://129.146.10.66:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns </code></pre> <p>So, I have a service(mysqlbrokerservice) running as NodePort and the configuration looks like this</p> <pre><code>kubectl describe svc mysqlbrokerservice Name: mysqlbrokerservice Namespace: mysqlbroker Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=mysqlbroker Type: NodePort IP: 10.99.194.191 Port: mysqlbroker 8080/TCP NodePort: mysqlbroker 30000/TCP Endpoints: 10.244.1.198:8080 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>I can access the service through the public IP of the node where the pod is running like this <a href="http://129.146.34.181:30000/v2/catalog" rel="noreferrer">http://129.146.34.181:30000/v2/catalog</a>.</p> <p>Then what I wanted to see if I can access the service through https. I followed the direction in <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls</a></p> <p>I followed the example and used curl to get the resource. Here is the command. 129.146.10.66:6443 is my master ip.</p> <pre><code>curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET https://129.146.10.66:6443/api/v1/namespaces/mysqlbroker/services/mysqlbrokerservice:8080/proxy/v2/catalog --header "Authorization: Bearer $TOKEN" --insecure HTTP/1.0 200 Connection established </code></pre> <p>curl just sits there with no response. I then looked at my pod logs and it does not show that any request received.</p> <p>Can somebody explain what I am doing wrong here? What's the ideal solution if I want a service to be exposed through https?</p>
<p>If you click any of the URLs provided by <code>kubectl cluster-info</code> you will see that your browser prompts you to accept an insecure TLS connection.</p> <p>For HTTPS to work for this particular address you would need a TLS certificate issued for the hostname (in this case an IP address, and you can't buy certs for IP addresses). The other option is to add the Kubernetes cluster's root certificate to your computer's trusted roots, but that wouldn't make it work on other computers.</p> <p>So I assume you're just trying to make an application running on Kubernetes accessible to the outside world via HTTPS.</p> <p>For that, I recommend actually buying a domain name (or reusing a subdomain), buying an SSL/TLS certificate for that host name, and using an Ingress to configure a load balancer with HTTPS termination: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#tls</a> (If you're on GKE this configures a Google Load Balancer; otherwise it would configure a local nginx instance to do this task.)</p>
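<p>A rough sketch of that setup for the service in the question; the host name, secret name and ingress name are placeholders, and the TLS secret would hold the certificate you bought for that domain:</p> <pre><code># kubectl create secret tls broker-tls --cert=tls.crt --key=tls.key
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysqlbroker-ingress
spec:
  tls:
  - hosts:
    - broker.example.com       # the domain your DNS points at the load balancer
    secretName: broker-tls     # certificate + key for that domain
  rules:
  - host: broker.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mysqlbrokerservice
          servicePort: 8080
</code></pre>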
<p>I'm using Kubernetes in a Google Container Engine cluster and I can see in the Stackdriver logs that one of the Pods is falling over and automatically restarting because of an OutOfMemory exception.</p> <p>Is there any standard way of monitoring these events?</p> <p>I'm using <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer">Heapster</a> and it doesn't seem to provide any metrics about Pods shutting down and restarting.</p>
<p>According to our IT sysadmin, most of the current solutions for real-time monitoring and alerting on pod failures are currently unstable or very lackluster.</p> <p>We ended up developing a small script which uses the Slack.com webservice for mail notifications and such.</p> <p>I am sorry if my answer is not a ready-to-use, one-click solution :-)</p> <p>This is real feedback from our current experience and searches.</p> <p>I expect things will move fast in the near future on that topic!</p> <p>Our code (simple solution for effective results):</p> <p><a href="https://github.com/OpenSensee/K8SWatch" rel="nofollow noreferrer">https://github.com/OpenSensee/K8SWatch</a></p>
<p>1.7 is around the corner according to the <a href="https://github.com/kubernetes/features/blob/master/release-1.7/release-1.7.md" rel="nofollow noreferrer">release plan</a>. I'm wondering which Docker versions will be supported. Up until now I got this information from the Changelogs External Dependency Version information paragraph --> <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#external-dependency-version-information" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#external-dependency-version-information</a></p>
<p>I asked the same question in the Kubernetes Google Groups and got an official answer.</p> <p><a href="https://groups.google.com/forum/#!msg/kubernetes-dev/0_tLFqh0yao/EoU9094dBAAJ" rel="noreferrer">According to the sig-node team</a> Kubernetes will continue to support only Docker 1.12.x at the launch of Kubernetes 1.7. They will however add 1.13 support early in the lifecycle of K8s 1.7.</p> <p>Just FYI: Q2 2017 marks the EOL of Docker 1.12 according to their <a href="https://success.docker.com/Policies/Maintenance_Lifecycle" rel="noreferrer">Maintenance Lifecycle</a></p>
<p>I have a Kubernetes cluster executing Jobs of different types (I'm not using <code>parallelism</code>, these are independent and unrelated jobs).</p> <p>Each <code>Job</code> has a label <code>userid=xyz</code>, that's the id of the user who requested the job (users submit these requests trough my app, not to the cluster directly). </p> <p><strong>I want to limit the number of Jobs running at any given time by tag, for example, limit the jobs with <code>userid=foo</code> to 10, and <code>userid=bar</code> to 50. Is it possible to achieve this on Kubernetes?</strong></p>
<p>You could use a <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#object-count-quota" rel="nofollow noreferrer">ResourceQuota</a> to accomplish something similar.</p> <p>There is no direct way to limit the job objects themselves using Resource Quota, but there may be other ways you can accomplish the same effect, by limiting the number of CPUs, or the number of pods.</p> <p>For custom policies and logic around labels as you mentioned, admission control is a good place to enforce it. <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/admission_control_extension.md" rel="nofollow noreferrer">Extensible admission control</a> is going into alpha in Kubernetes 1.7.</p>
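<p>For example, if each user's jobs run in their own namespace, a per-namespace quota on pods gives you roughly the limit you describe. This is only a sketch (namespace name is hypothetical); in this Kubernetes version job objects themselves can't be counted directly, so the pod count acts as the proxy:</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: job-limit
  namespace: user-foo        # one namespace per user
spec:
  hard:
    pods: "10"               # at most 10 concurrently running (job) pods for this user
</code></pre>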
<p>I would like to run a one-off container from the command line in my Kubernetes cluster. The equivalent of:</p> <pre><code>docker run --rm -it centos /bin/bash </code></pre> <p>Is there a <code>kubectl</code> equivalent?</p>
<p>The <code>kubectl</code> equivalent of</p> <pre><code>docker run --rm -it centos /bin/bash </code></pre> <p>is</p> <pre><code>kubectl run tmp-shell --restart=Never --rm -i --tty --image centos -- /bin/bash </code></pre> <p>Notes:</p> <ul> <li><p>This will create a Pod named <code>tmp-shell</code>. If you don't specify <code>--restart=Never</code>, a Deployment will be created instead (credit: Urosh T's answer).</p> </li> <li><p><code>--rm</code> ensures the Pod is deleted when the shell exits.</p> </li> <li><p>If you want to detach from the shell and leave it running with the ability to re-attach, omit the <code>--rm</code>. You will then be able to reattach with: <code>kubectl attach $pod-name -c $pod-container -i -t</code> after you exit the shell.</p> </li> <li><p>If your shell does not start, check whether your cluster is out of resources (<code>kubectl describe nodes</code>). You can specify resource requests with <code>--requests</code>:</p> <pre><code>--requests='': The resource requirement requests for this container. For example, 'cpu=100m,memory=256Mi'. Note that server side components may assign requests depending on the server configuration, such as limit ranges. </code></pre> </li> </ul> <p>(Credit: <a href="https://gc-taylor.com/blog/2016/10/31/fire-up-an-interactive-bash-pod-within-a-kubernetes-cluster" rel="noreferrer">https://gc-taylor.com/blog/2016/10/31/fire-up-an-interactive-bash-pod-within-a-kubernetes-cluster</a>)</p>
<p>I have setup the mongodb replicaset on kubernetes with Statefulsets using this helm <a href="https://github.com/kubernetes/charts/tree/master/stable/mongodb-replicaset" rel="nofollow noreferrer">chart</a></p> <p>I can access the mongo instance inside the cluster. But I'd like to open it for access to the external world. I tried two approaches to accomplish it</p> <ol> <li><p>Create an additional service of type 'NodePort' mapping to the mongo instances with the selector label.</p></li> <li><p>Expose all 3 mongodb pods externally.</p> <p><code>kubectl expose pods mongo-release-mongodb-replicaset-2 --type=NodePort</code></p></li> </ol> <p>Here is my test script. </p> <pre><code> from pymongo import MongoClient client = MongoClient('192.168.99.100',30738) #approach 1 #client = MongoClient('mongodb://192.168.99.100:31455,192.168.99.100:31424,192.168.99.100:31569/?replicaSet=rs0') #approach 2 db=client.test db.test.insert({"key1":1}) values=db.test.find({'key1': 1}) for value in values: print value </code></pre> <p>With the first approach, I get the following error which makes sense, since the external service is not consistently hitting the primary node in replicaset. Multiple attempts would eventually connect to the master and the write works.</p> <pre><code>File "/Library/Python/2.7/site-packages/pymongo/pool.py", line 552, in _raise_connection_failure raise error pymongo.errors.NotMasterError: not master </code></pre> <p>With the second approach, since we are accessing each pod directly by their IP:port, I expected it to work but it throws the following exception</p> <pre><code>pymongo.errors.ServerSelectionTimeoutError: mongo-release-mongodb-replicaset-0.mongo-release-mongodb-replicaset.default.svc.cluster.local:27017: [Errno 8] nodename nor servname provided, or not known,mongo-release-mongodb-replicaset-2.mongo-release-mongodb-replicaset.default.svc.cluster.local:27017: [Errno 8] nodename nor servname provided, or not known,mongo-release-mongodb-replicaset-1.mongo-release-mongodb-replicaset.default.svc.cluster.local:27017: [Errno 8] nodename nor servname provided, or not known </code></pre> <p>From the error it appears that the DNS translation is causing issues? I looked at this <a href="https://stackoverflow.com/questions/41969626/expose-mongodb-on-kubernetes-with-statefulsets-outside-cluster">question</a> but didn't get much help out of it</p> <p>I'm running out of ideas. Can anyone help with this issue? Any alternative solutions are appreciated as well.</p> <p>Thanks</p>
<p>After spending some more time with the issue, I figured out that the mongo endpoint in the above script was returning the replica set members' DNS names in the following format - 'mongo-release-mongodb-replicaset-0.mongo-release-mongodb-replicaset.default.svc.cluster.local:27017'. These addresses can only be resolved within the cluster namespace. I verified this by running the following script in another pod:</p> <pre><code>from pymongo import MongoClient client = MongoClient('mongodb://mongo-release-mongodb-replicaset-0.mongo-release-mongodb-replicaset:27017,mongo-release-mongodb-replicaset-1.mongo-release-mongodb-replicaset:27017,mongo-release-mongodb-replicaset-2.mongo-release-mongodb-replicaset:27017/?replicaSet=rs0') db=client.test db.test.insert({"key1":1}) values=db.test.find({'key1': 1}) for value in values: print value </code></pre>
<p>I am trying to deploy Postgres on Bluemix Container service (Kubernetes)</p> <p>I have created the Image and deployed it through the following yaml file:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: tripbru-postgres labels: app: tripbruREST spec: ports: - port: 5432 targetPort: 5432 nodePort: 31432 selector: app: tripbruREST tier: frontend type: NodePort --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: tripbru-postgres labels: app: tripbruREST spec: strategy: type: Recreate template: metadata: labels: app: tripbruREST tier: postgres spec: containers: - image: registry.ng.bluemix.net/eliza/postgres:9.5 name: postgres env: - name: POSTGRES_PASSWORD value: MYPASSWORD ports: - containerPort: 5432 name: postgres volumeMounts: - name: pg-data mountPath: /var/lib/postgresql/data - name: tz-config mountPath: /etc/localtime volumes: - name: pg-data emptyDir: {} - name: tz-config hostPath: path: /usr/share/zoneinfo/Europe/Madrid </code></pre> <p>This effectively deploys it:</p> <pre><code>icordoba$ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.10.10.1 &lt;none&gt; 443/TCP 1d tripbru-postgres 10.10.10.232 &lt;nodes&gt; 5432:31432/TCP 1d </code></pre> <p>But I can't connect to the node IP address on port 31432. I have tested Postgres is running using:</p> <pre><code>kubectl exec -it tripbru-postgres-3667814974-pzmsk bash </code></pre> <p>I get in the docker instance and check Postgres is running ok.</p> <p>I am sure I am missing something. Do I need any other yaml file? Thanks.</p>
<p>I solved it using "Pod" and not Deployment. I also changed hostPath and note the ephemeral "emptyDir" volume format (this is a test in free Kubernetes service by Bluemix so I can't use real volumes). This is the working yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: postgres labels: name: postgres spec: containers: - name: postgres image: registry.ng.bluemix.net/eliza/postgres:9.5 env: - name: POSTGRES_PASSWORD value: MYPASSWORD ports: - containerPort: 5432 volumeMounts: - name: pg-data mountPath: /var/lib/postgresql/data - name: tz-config mountPath: /etc/localtime volumes: - name: pg-data #emptyDir: {} hostPath: path: "/opt/tripbruPostgres" - name: tz-config hostPath: path: /usr/share/zoneinfo/Europe/Madrid </code></pre> <p>(Note I still don't know what was wrong with my "Deployment" approach, but using Pod works as I don't need replication at this stage)</p>
<p>I have create a pipeline like below and please note that I have the script files namely- "backup_grafana.sh" and "gitPush.sh" in source code repository where the Jenkinsfile is present. But I am unable to execute the script because of the following error:-</p> <pre><code>/home/jenkins/workspace/grafana-backup@tmp/durable-52495dad/script.sh: line 1: backup_grafana.sh: not found </code></pre> <p>Please note that I am running jenkins master on kubernetes in a pod. So copying scripts files as suggested by the error is not possible because the pod may be destroyed and recreated dynamically(in this case with a new pod, my scripts will no longer be available in the jenkins master)</p> <pre><code>pipeline { agent { node { label 'jenkins-slave-python2.7' } } stages { stage('Take the grafana backup') { steps { sh 'backup_grafana.sh' } } stage('Push to the grafana-backup submodule repository') { steps { sh 'gitPush.sh' } } } } </code></pre> <p>Can you please suggest how can I run these scripts in Jenkinsfile? I would like to also mention that I want to run these scripts on a python slave that I have already created finely.</p>
<p>If the command <code>sh 'backup_grafana.sh'</code> fails to execute even though the script itself is fine, here are two possible solutions.</p> <p>1) Maybe you need a dot-slash in front of those executable commands to tell your shell where they are. If they are not in your <code>$PATH</code>, you need to tell your shell that they can be found in the current directory. Here's the fixed Jenkinsfile with four non-whitespace characters added:</p> <pre><code>pipeline { agent { node { label 'jenkins-slave-python2.7' } } stages { stage('Take the grafana backup') { steps { sh './backup_grafana.sh' } } stage('Push to the grafana-backup submodule repository') { steps { sh './gitPush.sh' } } } } </code></pre> <p>2) Check whether you have declared your file as a bash or sh script by declaring one of the following as the first line in your script:</p> <pre><code>#!/bin/bash </code></pre> <p>or</p> <pre><code>#!/bin/sh </code></pre>
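<p>One more case worth covering: if the scripts are checked into Git without the executable bit set, the <code>./</code> form can still fail with "Permission denied". A small variant of the first stage that handles this as well (a sketch, not from the original answer):</p> <pre><code>steps {
    sh 'chmod +x backup_grafana.sh gitPush.sh'
    sh './backup_grafana.sh'
}
</code></pre>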
<p>I am attempting to mount my Azure File Storage to a container using the method found here: <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/azure_file" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/azure_file</a></p> <p>Upon pod creation I am getting the error: "Output: mount error: could not resolve address for [encoded name of my file storage].file.core.windows.net: Unknown error" </p> <p>I have confirmed that my File Storage resource and the VM hosting the pod are in the same Azure location (East US). I am able to mount this share manually on the VM hosting the pod using the same address in the error above. Is it possible I am missing some sort of configuration in my container that is not explained in the Git Hub tutorial? </p> <p>I have tried creating my container without specifying the volume and was able to ping the address for the file storage from within the container so I am not sure where the cannot resolve address error is coming from.</p>
<p>I can't reproduce your error, but we can follow those steps to mount Azure file share to k8s container.<br> 1.create k8s via Azure new portal.<br> 2.SSH k8s master, create <code>secret</code>, create secret by k8s file: </p> <p>In this yaml file, we should write storage account and key in it, and we should <strong>base64</strong> encoded Azure storage account and key, like this:</p> <pre><code>root@k8s-master-3CC6E803-0:~# echo -n jasonshare321 | base64 amFzb25zaGFyZTMyMQ== root@k8s-master-3CC6E803-0:~# echo -n Bnbh0fjykD+b/EveNoR/elOp118+0vmLsbQqVGC3H0W23mSfbH9WfV1A60Qw3CAZ70Tm4Wgpse1LEtgSJF27cQ== | base64 Qm5iaDBmanlrRCtiL0V2ZU5vUi9lbE9wMTE4KzB2bUxzYlFxVkdDM0gwVzIzbVNmYkg5V2ZWMUE2MFF3M0NBWjcwVG00V2dwc2UxTEV0Z1NKRjI3Y1E9PQ== </code></pre> <p>Then create <code>azure-secret</code>:</p> <pre><code>root@k8s-master-3CC6E803-0:~# mkdir /azure_file root@k8s-master-3CC6E803-0:~# cd /azure_file/ root@k8s-master-3CC6E803-0:/azure_file# touch azure-secret.yaml root@k8s-master-3CC6E803-0:/azure_file# vi azure-secret.yaml </code></pre> <p>Here is the azure-secret:</p> <p>root@k8s-master-3CC6E803-0:/azure_file# cat azure-secret.yaml </p> <pre><code>apiVersion: v1 kind: Secret metadata: name: azure-secret type: Opaque data: azurestorageaccountname: amFzb25zaGFyZTMyMQ== azurestorageaccountkey: Qm5iaDBmanlrRCtiL0V2ZU5vUi9lbE9wMTE4KzB2bUxzYlFxVkdDM0gwVzIzbVNmYkg5V2ZWMUE2MFF3M0NBWjcwVG00V2dwc2UxTEV0Z1NKRjI3Y1E9PQ== </code></pre> <p>Then use kubectl to create secret,like this:</p> <pre><code>root@k8s-master-3CC6E803-0:/azure_file# kubectl create -f /azure_file/azure-secret.yaml secret "azure-secret" created root@k8s-master-3CC6E803-0:/azure_file# kubectl get secret NAME TYPE DATA AGE azure-secret Opaque 2 11s default-token-07cd5 kubernetes.io/service-account-token 3 35m </code></pre> <p>3.Create pod: create azure.yaml:</p> <pre><code>root@k8s-master-3CC6E803-0:/azure_file# touch azure.yaml root@k8s-master-3CC6E803-0:/azure_file# vi azure.yaml apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - name: azure mountPath: /mnt/azure volumes: - name: azure azureFile: secretName: azure-secret shareName: testfileshare readOnly: false </code></pre> <p>Use this file to create pod: </p> <pre><code>root@k8s-master-3CC6E803-0:/azure_file# kubectl create -f /azure_file/azure.yaml pod "nginx" created root@k8s-master-3CC6E803-0:/azure_file# kubectl get pods NAME READY STATUS RESTARTS AGE nginx 1/1 Running 0 17s </code></pre> <p>Now, pod create is completed, we can use this script to check file,like this:</p> <pre><code>root@k8s-master-3CC6E803-0:/azure_file# kubectl exec -it nginx bash root@nginx:/# ls bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var root@nginx:/# cd /mnt root@nginx:/mnt# ls azure root@nginx:/mnt# cd azure root@nginx:/mnt/azure# ls root@nginx:/mnt/azure# df -Th Filesystem Type Size Used Avail Use% Mounted on overlay overlay 30G 3.3G 26G 12% / tmpfs tmpfs 1.7G 0 1.7G 0% /dev tmpfs tmpfs 1.7G 0 1.7G 0% /sys/fs/cgroup /dev/sda1 ext4 30G 3.3G 26G 12% /etc/hosts //jasonshare321.file.core.windows.net/testfileshare cifs 50G 0 50G 0% /mnt/azure shm tmpfs 64M 0 64M 0% /dev/shm tmpfs tmpfs 1.7G 12K 1.7G 1% /run/secrets/kubernetes.io/serviceaccount </code></pre> <p><strong>Note</strong>:<br> 1.We should <strong>base64</strong> encoded Azure storage account and key.<br> 2.write the right <strong>file share name</strong> to azure.yaml:<a href="https://i.stack.imgur.com/SczM8.png" rel="nofollow noreferrer"><img 
src="https://i.stack.imgur.com/SczM8.png" alt="enter image description here"></a></p>
<p>I 'm Following <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">this guide</a> in order to set up a pod using minikube and pull an image from a private repository hosted at: hub.docker.com</p> <p>When trying to set up a pod to pull the image i see <code>CrashLoopBackoff</code></p> <p><strong>pod config:</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: ha/prod:latest imagePullSecrets: - name: regsecret </code></pre> <p><strong>output of "get pod"</strong></p> <pre><code>kubectl get pod private-reg NAME READY STATUS RESTARTS AGE private-reg 0/1 CrashLoopBackOff 5 4m </code></pre> <p>As far as i can see there is no issue with the images and if i pull them manually and run them, they works.</p> <p>(you can see <code>Successfully pulled image "ha/prod:latest"</code>) </p> <p>this issue also happens if i push a generic image to the repository such as centos and try to pull and run it using pod.</p> <p>Also, the secret seems to work fine and i can see the "pulls" counted in the private repository.</p> <p>Here is the output of the command:</p> <p><code>kubectl describe pods private-reg</code>:</p> <pre><code>[~]$ kubectl describe pods private-reg Name: private-reg Namespace: default Node: minikube/192.168.99.100 Start Time: Thu, 22 Jun 2017 17:13:24 +0300 Labels: &lt;none&gt; Annotations: &lt;none&gt; Status: Running IP: 172.17.0.5 Controllers: &lt;none&gt; Containers: private-reg-container: Container ID: docker://1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132 Image: ha/prod:latest Image ID: docker://sha256:7335859e2071af518bcd0e2f373f57c1da643bb37c7e6bbc125d171ff98f71c0 Port: State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 01 Jan 0001 00:00:00 +0000 Finished: Thu, 22 Jun 2017 17:20:04 +0300 Ready: False Restart Count: 6 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-bhvgz (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-bhvgz: Type: Secret (a volume populated by a Secret) SecretName: default-token-bhvgz Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: &lt;none&gt; Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 9m 9m 1 default-scheduler Normal Scheduled Successfully assigned private-reg to minikube 8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 431fecfd1d2ca03d29fd88fd6c663e66afb59dc5e86487409002dd8e9987945c 8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 431fecfd1d2ca03d29fd88fd6c663e66afb59dc5e86487409002dd8e9987945c 8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 223e6af99bb950570a27056d7401137ff9f3dc895f4f313a36e73ef6489eb61a 8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 223e6af99bb950570a27056d7401137ff9f3dc895f4f313a36e73ef6489eb61a 8m 8m 2 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 10s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)" 8m 8m 1 
kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id a98377f9aedc5947fe1dd006caddb11fb48fa2fd0bb06c20667e0c8b83a3ab6a 8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id a98377f9aedc5947fe1dd006caddb11fb48fa2fd0bb06c20667e0c8b83a3ab6a 8m 8m 2 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 20s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)" 8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 261f430a80ff5a312bdbdee78558091a9ae7bc9fc6a9e0676207922f1a576841 8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 261f430a80ff5a312bdbdee78558091a9ae7bc9fc6a9e0676207922f1a576841 8m 7m 3 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 40s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)" 7m 7m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 7251ab76853d4178eff59c10bb41e52b2b1939fbee26e546cd564e2f6b4a1478 7m 7m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 7251ab76853d4178eff59c10bb41e52b2b1939fbee26e546cd564e2f6b4a1478 7m 5m 7 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)" 5m 5m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 347868d03fc9730417cf234e4c96195bb9b45a6cc9d9d97973855801d52e2a02 5m 5m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 347868d03fc9730417cf234e4c96195bb9b45a6cc9d9d97973855801d52e2a02 5m 3m 12 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)" 9m 2m 7 kubelet, minikube spec.containers{private-reg-container} Normal Pulling pulling image "ha/prod:latest" 2m 2m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132 8m 2m 7 kubelet, minikube spec.containers{private-reg-container} Normal Pulled Successfully pulled image "ha/prod:latest" 2m 2m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132 8m &lt;invalid&gt; 40 kubelet, minikube spec.containers{private-reg-container} Warning BackOff Back-off restarting failed container 2m &lt;invalid&gt; 14 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)" </code></pre> <p>Here is the output of the command:</p> <p><code>kubectl 
--v=8 logs private-reg</code>:</p> <pre><code>I0622 17:35:01.043739 15981 cached_discovery.go:71] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/apps/v1beta1/serverresources.json I0622 17:35:01.043951 15981 cached_discovery.go:71] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/v1/serverresources.json I0622 17:35:01.045061 15981 cached_discovery.go:118] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/servergroups.json I0622 17:35:01.045175 15981 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/private-reg I0622 17:35:01.045182 15981 round_trippers.go:402] Request Headers: I0622 17:35:01.045187 15981 round_trippers.go:405] Accept: application/json, */* I0622 17:35:01.045191 15981 round_trippers.go:405] User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17 I0622 17:35:01.072863 15981 round_trippers.go:420] Response Status: 200 OK in 27 milliseconds I0622 17:35:01.072900 15981 round_trippers.go:423] Response Headers: I0622 17:35:01.072921 15981 round_trippers.go:426] Content-Type: application/json I0622 17:35:01.072930 15981 round_trippers.go:426] Content-Length: 2216 I0622 17:35:01.072936 15981 round_trippers.go:426] Date: Thu, 22 Jun 2017 14:35:31 GMT I0622 17:35:01.072994 15981 request.go:991] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"private-reg","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/private-reg","uid":"f4340638-5754-11e7-978a-08002773375c","resourceVersion":"3070","creationTimestamp":"2017-06-22T14:13:24Z"},"spec":{"volumes":[{"name":"default-token-bhvgz","secret":{"secretName":"default-token-bhvgz","defaultMode":420}}],"containers":[{"name":"private-reg-container","image":"ha/prod:latest","resources":{},"volumeMounts":[{"name":"default-token-bhvgz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"imagePullSecrets":[{"name":"regsecret"}],"schedulerName":"default-scheduler"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z","reason":"ContainersNotReady","message":"containers with unready status: [private-reg-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z"}],"hostIP":"192.168.99.100","podIP":"172.17.0.5","startTime":"2017-06-22T14:13:24Z","containerStatuses":[{"name":"private-reg-container","state":{"waiting":{"reason":"CrashLoopBackOff","message":"Back-off 5m0s restarting failed container=private-reg-container 
pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"}},"lastState":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":null,"finishedAt":"2017-06-22T14:30:36Z","containerID":"docker://a4cb436a79b0b21bb385e544d424b2444a80ca66160ef21af30ab69ed2e23b32"}},"ready":false,"restartCount":8,"image":"ha/prod:latest","imageID":"docker://sha256:7335859e2071af518bcd0e2f373f57c1da643bb37c7e6bbc125d171ff98f71c0","containerID":"docker://a4cb436a79b0b21bb385e544d424b2444a80ca66160ef21af30ab69ed2e23b32"}],"qosClass":"BestEffort"}} I0622 17:35:01.074108 15981 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/private-reg/log I0622 17:35:01.074126 15981 round_trippers.go:402] Request Headers: I0622 17:35:01.074132 15981 round_trippers.go:405] Accept: application/json, */* I0622 17:35:01.074137 15981 round_trippers.go:405] User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17 I0622 17:35:01.079257 15981 round_trippers.go:420] Response Status: 200 OK in 5 milliseconds I0622 17:35:01.079289 15981 round_trippers.go:423] Response Headers: I0622 17:35:01.079299 15981 round_trippers.go:426] Content-Type: text/plain I0622 17:35:01.079307 15981 round_trippers.go:426] Content-Length: 0 I0622 17:35:01.079315 15981 round_trippers.go:426] Date: Thu, 22 Jun 2017 14:35:31 GMT </code></pre> <p>How can i debug this issue ?</p> <p><strong>Update</strong></p> <p>The output of:</p> <p><code>kubectl --v=8 logs ps-agent-2028336249-3pk43 --namespace=default -p</code></p> <pre><code>I0625 11:30:01.569903 13420 round_trippers.go:395] GET I0625 11:30:01.569920 13420 round_trippers.go:402] Request Headers: I0625 11:30:01.569927 13420 round_trippers.go:405] User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17 I0625 11:30:01.569934 13420 round_trippers.go:405] Accept: application/json, */* I0625 11:30:01.599026 13420 round_trippers.go:420] Response Status: 200 OK in 29 milliseconds I0625 11:30:01.599048 13420 round_trippers.go:423] Response Headers: I0625 11:30:01.599056 13420 round_trippers.go:426] Date: Sun, 25 Jun 2017 08:30:01 GMT I0625 11:30:01.599062 13420 round_trippers.go:426] Content-Type: application/json I0625 11:30:01.599069 13420 round_trippers.go:426] Content-Length: 2794 I0625 11:30:01.599264 13420 request.go:991] Response Body: 
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"ps-agent-2028336249-3pk43","generateName":"ps-agent-2028336249-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43","uid":"87c69072-597e-11e7-83cd-08002773375c","resourceVersion":"14354","creationTimestamp":"2017-06-25T08:16:03Z","labels":{"pod-template-hash":"2028336249","run":"ps-agent"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"ps-agent-2028336249\",\"uid\":\"87c577b5-597e-11e7-83cd-08002773375c\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"13446\"}}\n"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"ps-agent-2028336249","uid":"87c577b5-597e-11e7-83cd-08002773375c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-bhvgz","secret":{"secretName":"default-token-bhvgz","defaultMode":420}}],"containers":[{"name":"ps-agent","image":"ha/prod:ps-agent-latest","resources":{},"volumeMounts":[{"name":"default-token-bhvgz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"schedulerName":"default-scheduler"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z","reason":"ContainersNotReady","message":"containers with unready status: [ps-agent]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z"}],"hostIP":"192.168.99.100","podIP":"172.17.0.5","startTime":"2017-06-25T08:16:03Z","containerStatuses":[{"name":"ps-agent","state":{"waiting":{"reason":"CrashLoopBackOff","message":"Back-off 5m0s restarting failed container=ps-agent pod=ps-agent-2028336249-3pk43_default(87c69072-597e-11e7-83cd-08002773375c)"}},"lastState":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":null,"finishedAt":"2017-06-25T08:27:17Z","containerID":"docker://1aa9dfbfeb80042c6f4c8d04cabb3306ac1cd52963568e621019e2f1f0ee081b"}},"ready":false,"restartCount":7,"image":"ha/prod:ps-agent-latest","imageID":"docker://sha256:eb5307c4366fc129d022703625a5f30ff175b5e1a24dbe39fd4c32e726a0ee7b","containerID":"docker://1aa9dfbfeb80042c6f4c8d04cabb3306ac1cd52963568e621019e2f1f0ee081b"}],"qosClass":"BestEffort"}} I0625 11:30:01.600727 13420 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43/log?previous=true I0625 11:30:01.600747 13420 round_trippers.go:402] Request Headers: I0625 11:30:01.600757 13420 round_trippers.go:405] Accept: application/json, */* I0625 11:30:01.600766 13420 round_trippers.go:405] User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17 I0625 11:30:01.632473 13420 round_trippers.go:420] Response Status: 200 OK in 31 milliseconds I0625 11:30:01.632545 13420 round_trippers.go:423] Response Headers: I0625 11:30:01.632569 13420 round_trippers.go:426] Date: Sun, 25 Jun 2017 08:30:01 GMT I0625 11:30:01.632592 13420 round_trippers.go:426] Content-Type: text/plain I0625 11:30:01.632615 
13420 round_trippers.go:426] Content-Length: 0 </code></pre>
<p>The issue was caused by the Docker container exiting as soon as the "start" process finished. I added a command that runs forever and it worked. This issue is mentioned <a href="https://stackoverflow.com/questions/31870222/how-can-i-keep-container-running-on-kubernetes">here</a>.</p>
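<p>For reference, one way to keep such a container alive (a sketch; in a real deployment you would point <code>command</code> at whatever foreground process the image is supposed to run) is to give it a long-running command in the pod spec:</p> <pre><code>spec:
  containers:
  - name: private-reg-container
    image: ha/prod:latest
    command: ["tail", "-f", "/dev/null"]
</code></pre> <p>The same effect can be achieved with an <code>ENTRYPOINT</code>/<code>CMD</code> in the Dockerfile that starts a foreground process rather than one that daemonizes and exits.</p>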
<p>My service is not connecting/directing traffic to pod. I have 'sshed' into the pod and the server is working properly but the service times out.</p> <p>Deployment File:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: venues spec: replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 minReadySeconds: 5 template: metadata: labels: app: venues version: v0.3 spec: containers: - name: venues image: some-image imagePullPolicy: Always ports: - containerPort: 3000 name: http-server </code></pre> <p>Service File:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: venues labels: name: venues spec: type: LoadBalancer ports: - port: 3000 targetPort: 3000 protocol: TCP selector: name: venues </code></pre>
<p>Your selector in the service is wrong: you need to select a label of the deployment, not the container name. So</p> <pre><code>selector: app: venues </code></pre> <p>should work. Optionally you could add also <code>version: v0.3</code> if needed.</p>
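<p>For completeness, here is the Service from the question with the corrected selector (a sketch; everything except the selector is unchanged):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: venues
  labels:
    name: venues
spec:
  type: LoadBalancer
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
  selector:
    app: venues
</code></pre>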
<p>I can't find documentation on how to create user group on Kubernetes with <code>yaml</code> file. I'd like gather some authenticated users in group using their e-mail accounts.</p> <p>I'd like to write in <code>yaml</code> something like :</p> <pre><code> kind: GoupBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: "frontend-developers" namespace: development subjects: - kind: User name: [email protected],[email protected] apiGroup: "" </code></pre>
<p>Groups are determined by the configured authentication method. See <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a> for details about how each authenticator determines the group membership of the authenticated user.</p>
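<p>In other words, a group cannot be created with a YAML manifest; it is just a name that the authenticator attaches to a user (for example via an OIDC groups claim, or the groups column of a static token file). What you <em>can</em> declare in YAML is a binding that grants permissions to such a group name. A hedged sketch (the role name <code>developer</code> is an assumption):</p> <pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: frontend-developers
  namespace: development
subjects:
- kind: Group
  name: frontend-developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>Each user's membership in <code>frontend-developers</code> then has to come from the authenticator, not from the manifest.</p>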
<p>I have a custom Kubernetes Cluster (deployed using <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a>) running on Virtual Machines from an IAAS Provider. The Kubernetes Nodes have <em>no Internet facing IP Addresses</em> (except for the Master Node, which I also use for Ingress).</p> <p>I'm now trying to join a Machine to this Cluster that is not hosted by my main IAAS provider. I want to do this because I need specialized computing resources for my application that are not offered by the IAAS.</p> <p><strong>What is the best way to do this?</strong></p> <hr> <p>Here's what I've tried already:</p> <ul> <li><em>Run the Cluster on Internet facing IP Addresses</em></li> </ul> <p>I have no trouble joining the Node when I tell <code>kube-apiserver</code> on the Master Node to listen on <code>0.0.0.0</code> and use public IP Addresses for every Node. However, this approach is non-ideal from a security perspective and also leads to higher cost because public IP Addresses have to be leased for Nodes that normally don't need them.</p> <ul> <li><em>Create a Tunnel to the Master Node using <a href="https://github.com/apenwarr/sshuttle" rel="nofollow noreferrer">sshuttle</a></em></li> </ul> <p>I've had moderate success by creating a tunnel from the external Machine to the Kubernetes Master Node using sshuttle, which is configured on my external Machine to route <code>10.0.0.0/8</code> through the tunnel. This works in principle, but it seems way too hacky and is also a bit unstable (sometimes the external machine can't get a route to the other nodes; I have yet to investigate this problem further).</p> <hr> <p>Here are some ideas that could work, but I haven't tried yet because I don't favor these approaches:</p> <ul> <li><em>Use a proper VPN</em></li> </ul> <p>I could try to use a proper VPN tunnel to connect the Machine. I don't favor this solution because it would add an (admittedly quite small) overhead to the Cluster.</p> <ul> <li><em>Use a cluster federation</em></li> </ul> <p>It looks like <a href="https://kubernetes.io/docs/tasks/federation/set-up-cluster-federation-kubefed/" rel="nofollow noreferrer">kubefed</a> was made specifically for this purpose. However, I think this is overkill in my case: I'm only trying to join a single external Machine to the Cluster. Using Kubefed would add a <em>ton</em> of overhead (Federation Control Plane on my Main Cluster + Single Host Kubernetes Deployment on the external machine).</p> <hr>
<p>I can't think of a better solution than a VPN here. Especially since you have only one isolated node, it should be relatively easy to make the handshake happen between this node and your master.</p> <p>Routing the traffic from "internal" nodes to this isolated node is also trivial. Because all nodes already use the master as their default gateway, modifying the route table on the master is enough to forward the traffic from internal nodes to the isolated node through the tunnel.</p> <p>You have to be careful with the configuration of your container network though. Depending on the solution you use to deploy it, you may have to assign a different subnet to the Docker bridge on the other side of the VPN.</p>
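<p>To make the routing part concrete (a sketch; the pod CIDR and the tunnel interface name are assumptions, substitute your own), forwarding traffic for the isolated node's pod subnet through the VPN would look something like this on the master:</p> <pre><code># assuming the external node's pod CIDR is 10.244.9.0/24
# and the VPN tunnel interface on the master is tun0
ip route add 10.244.9.0/24 dev tun0
</code></pre>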
<p>I have a basic prometheus.yml file in my environment i.e ..</p> <pre><code>### apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: null name: prometheus-core data: prometheus.yml: | global: scrape_interval: 10s scrape_timeout: 10s evaluation_interval: 10s rule_files: - '/etc/prometheus-rules/*.rules' scrape_configs: # The job name is added as a label `job=&lt;job_name&gt;` to any timeseries scraped from this config. - job_name: 'prometheus' # Override the global default and scrape targets from this job every 5 seconds. scrape_interval: 5s # metrics_path defaults to '/metrics' # scheme defaults to 'http'. static_configs: - targets: ['localhost:9090'] </code></pre> <p>Now if I add new nodes to my environment, my prometheus.yml file should automatically get updated and add the nodes to the targets below i.e.</p> <pre><code>### apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: null name: prometheus-core data: prometheus.yml: | global: scrape_interval: 10s scrape_timeout: 10s evaluation_interval: 10s rule_files: - '/etc/prometheus-rules/*.rules' scrape_configs: # The job name is added as a label `job=&lt;job_name&gt;` to any timeseries scraped from this config. - job_name: 'prometheus' # Override the global default and scrape targets from this job every 5 seconds. scrape_interval: 5s # metrics_path defaults to '/metrics' # scheme defaults to 'http'. static_configs: - targets: ['localhost:9090','12.10.17.6:9100','12.10.17.19:9100'] </code></pre> <p>Can any one suggest how I can achieve this ?</p>
<p>Prometheus supports Kubernetes service discovery mechanisms, see the <a href="https://prometheus.io/docs/operating/configuration/#%3Ckubernetes_sd_config%3E" rel="nofollow noreferrer">documentation</a> for details. <br><br> So instead of the <code>static_configs</code> section, you should add a section similar to this:</p> <pre><code>scrape_configs: - job_name: 'kubernetes-service-endpoints' kubernetes_sd_configs: - role: endpoints ... </code></pre> <p>See this <a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml#L37" rel="nofollow noreferrer">example configuration file</a> for how it's done.</p>
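<p>Since the targets in the question look like node-exporter endpoints on port 9100, a node-level discovery job may fit even better. A sketch (assuming node-exporter runs on every node and listens on port 9100):</p> <pre><code>scrape_configs:
  - job_name: 'kubernetes-nodes-exporter'
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # the node role discovers the kubelet address (port 10250);
      # rewrite it to the node-exporter port instead
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
</code></pre> <p>With this in place, nodes added to the cluster are picked up automatically, without editing the ConfigMap.</p>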
<p>I'm running a SSE server on a VPS and it works perfectly, no problems at all, but due to scalability reasons I needed to move it to another server.</p> <p>I moved the server to <strong>Google Cloud Platform/Google Container Engine</strong> and <strong>Kubernetes/Ingress</strong>. But now I've encountered that I can't keep a SSE connection effectively, it's completely unstable and it closes connections by itself.</p> <p>Is there anything special I have to do to run a <strong>SSE</strong> server over <strong>Kubernetes/Ingress</strong>?</p> <p>I assume my code/software runs perfect and that is not the issue, due it works perfectly in Kubernetes, VPS, on my machine, everywhere, just not when I add the Ingress configuration, and I'm doing this because I want <strong>HTTPS</strong> over the <strong>Kubernetes</strong> <strong>load-balancer</strong>.</p>
<p>Got it working by setting a long timeout: 86,400 seconds. This is needed because an SSE connection is a socket that has to stay open, not a normal request that completes in less than 30 seconds.</p>
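<p>For reference, the timeout in question is the one on the GCE backend service that the Ingress/load balancer creates. A sketch of how to raise it (the backend service name is a placeholder; list yours first):</p> <pre><code>gcloud compute backend-services list
gcloud compute backend-services update &lt;backend-service-name&gt; --global --timeout=86400
</code></pre>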
<p>I wanted to play with <a href="https://kubernetes.io/docs/tasks/inject-data-application/podpreset/" rel="nofollow noreferrer">PodPreset</a> Kubernetes feature on my test cluster running version 1.6.6. Kube-apiserver is started with (included just the relevant part):</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>--storage-backend=etcd2 --admission-control ...,PodPreset --runtime-config=settings.k8s.io/v1alpha1/podpreset</code></pre> </div> </div> </p> <p>However I still get:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.701156 22088 reflector.go:236] Listing and watching *settings.PodPreset from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70 Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.701487 22088 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: kube-apiserver/v1.6.6 (linux/amd64) kubernetes/7fa1c17" -H "Authorization: Bearer XXX” https://A.B.C.D:443/apis/settings.k8s.io/v1alpha1/podpresets?resourceVersion=0 Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.702702 22088 wrap.go:75] GET /apis/settings.k8s.io/v1alpha1/podpresets?resourceVersion=0: (266.839µs) 404 [[kube-apiserver/v1.6.6 (linux/amd64) kubernetes/7fa1c17] A.B.C.D:37122] Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.703446 22088 round_trippers.go:417] GET https://A.B.C.D:443/apis/settings.k8s.io/v1alpha1/podpresets?resourceVersion=0 404 Not Found in 1 milliseconds Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.703483 22088 round_trippers.go:423] Response Headers: Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.703502 22088 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.703515 22088 round_trippers.go:426] Content-Length: 112 Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.703527 22088 round_trippers.go:426] Date: Tue, 27 Jun 2017 14:34:25 GMT Jun 27 16:34:25 host kube-apiserver[22088]: I0627 16:34:25.703606 22088 request.go:989] Response Body: Jun 27 16:34:25 host kube-apiserver[22088]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat| Jun 27 16:34:25 host kube-apiserver[22088]: 00000010 75 73 12 58 0a 04 0a 00 12 00 12 07 46 61 69 6c |us.X........Fail| Jun 27 16:34:25 host kube-apiserver[22088]: 00000020 75 72 65 1a 30 74 68 65 20 73 65 72 76 65 72 20 |ure.0the server | Jun 27 16:34:25 host kube-apiserver[22088]: 00000030 63 6f 75 6c 64 20 6e 6f 74 20 66 69 6e 64 20 74 |could not find t| Jun 27 16:34:25 host kube-apiserver[22088]: 00000040 68 65 20 72 65 71 75 65 73 74 65 64 20 72 65 73 |he requested res| Jun 27 16:34:25 host kube-apiserver[22088]: 00000050 6f 75 72 63 65 22 08 4e 6f 74 46 6f 75 6e 64 2a |ource".NotFound*| Jun 27 16:34:25 host kube-apiserver[22088]: 00000060 08 0a 00 12 00 1a 00 28 00 30 94 03 1a 00 22 00 |.......(.0....".| Jun 27 16:34:25 host kube-apiserver[22088]: E0627 16:34:25.703783 22088 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *settings.PodPreset: the server could not find the requested 
resource</code></pre> </div> </div> </p> <p>And obviously when I try to submit my resource through kubectl it fails.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>error: error validating "pod-preset.yaml": error validating data: found invalid field resources for v1alpha1.PodPresetSpec; if you choose to ignore these errors, turn validation off with --validate=false</code></pre> </div> </div> </p> <p>Cluster itself is running normally (I have multiple services and pods). Any idea?</p>
<p>"settings.k8s.io/v1alpha1" API endpoint is enabled by default in 1.6.x. So you don't need to specify in runtime-config parameter.</p> <p>Can you try without specifying anything in "runtime-config" parameter. You just need PodPreset in admission-control, which you have already.</p>
<p>I'm looking to fully understand the jobs in kubernetes.</p> <p>I have successfully created and executed a job, but I do not see the use case.</p> <p>Not being able to rerun a job or to actively listen for its completion makes me think it is a bit difficult to manage.</p> <p>Anyone using them? What is the use case?</p> <p>Thank you.</p>
<p>A job retries pods until they complete, so that you can tolerate errors that cause pods to be deleted.</p> <p>If you want to run a job repeatedly and periodically, you can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob alpha</a> or <a href="http://blog.wercker.com/introducing-cronetes" rel="nofollow noreferrer">cronetes</a>.</p> <p>Some Helm Charts use Jobs to run install, setup, or test commands on clusters, as part of installing services. (<a href="https://github.com/kubernetes/charts/blob/7b1b8e7ac3f49330d0a4c2df9a6c0e4d9d296c17/stable/spinnaker/templates/hooks/delete-jobs.yaml" rel="nofollow noreferrer">Example</a>).</p> <p>If you save the YAML for the job then you can re-run it by deleting the old job and creating it again, or by editing the YAML to change the name (or use e.g. <code>sed</code> in a script). </p> <p>You can watch a job's status with this command:</p> <p><code>kubectl get jobs myjob -w</code></p> <p>The <code>-w</code> option watches for changes. You are looking for the <code>SUCCESSFUL</code> column to show <code>1</code>.</p> <p>Here is a shell command loop to wait for job completion (e.g. in a script):</p> <pre><code>until kubectl get jobs myjob -o jsonpath='{.status.conditions[?(@.type=="Complete")].status}' | grep True ; do sleep 1 ; done </code></pre>
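<p>For reference, a minimal Job manifest (the standard pi-calculation example from the Kubernetes docs) looks like this; the Job runs the pod to completion and keeps the log around for inspection:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
</code></pre> <p>Typical use cases are exactly this kind of one-off, run-to-completion task: database migrations, batch processing, or the install/test hooks mentioned above.</p>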
<p>I have a Scala application that is using Spark 2.1 in standalone mode. The application will run for 2 hours and finish. It should be run once a month.</p> <p>I found several approaches to combine Spark and Kubernetes: </p> <ol> <li>Use Apache Spark Helm Chart: <a href="https://github.com/kubernetes/charts/tree/master/stable/spark" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/spark</a></li> <li>There is a special branch of Spark for Kubernetes: <a href="https://github.com/apache-spark-on-k8s/spark" rel="nofollow noreferrer">https://github.com/apache-spark-on-k8s/spark</a> </li> <li>Build my own Docker image of my application including the Spark binary: <a href="http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/" rel="nofollow noreferrer">http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/</a> Code example: <a href="https://github.com/phatak-dev/kubernetes-spark" rel="nofollow noreferrer">https://github.com/phatak-dev/kubernetes-spark</a></li> </ol> <p>Most of the documentation describe how to run a Spark cluster on Kubernetes. What is the approach for running Spark standalone on Kubernetes?</p>
<p>For standalone spark on Kubernetes, the two canonical samples that exist are:</p> <ol> <li><a href="https://github.com/kubernetes/charts/tree/master/stable/spark" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/spark</a></li> <li><a href="https://github.com/kubernetes/examples/tree/master/staging/spark" rel="nofollow noreferrer">https://github.com/kubernetes/examples/tree/master/staging/spark</a></li> </ol> <p>These are currently running outdated versions of Spark, and require updating to 2.1 and soon 2.2. (PRs are welcome :)). </p> <p>The <a href="https://github.com/apache-spark-on-k8s/spark" rel="nofollow noreferrer">https://github.com/apache-spark-on-k8s/spark</a> branch is not for standalone mode, but aims to enable Spark to directly launch on Kubernetes clusters. It will eventually be merged into upstream spark. Documentation, if you wish to make use of it, is <a href="https://apache-spark-on-k8s.github.io/userdocs/running-on-kubernetes.html" rel="nofollow noreferrer">here</a>. </p> <p>As of now, if you want to use Spark 2.1, the options are either to compile your own image or to package your application with the spark distribution in <a href="https://github.com/apache-spark-on-k8s/spark" rel="nofollow noreferrer">apache-spark-on-k8s</a>. </p>
<p>I have a GCP project with two subnets (VPC₁ and VPC₂). In VPC₁ I have a few GCE instances and in VPC₂ I have a GKE cluster.</p> <p>I have established VPC Network Peering between both VPCs, and POD₁'s host node can reach VM₁ and vice-versa. Now I would like to be able to reach VM₁ from within POD₁, but unfortunately I can't seem to be able to reach it.</p> <p>Is this a matter of creating the appropriate firewall rules / routes on POD₁, perhaps using its host as router, or is there something else I need to do? How can I achieve connectivity between this pod and the GCE instance?</p> <p><a href="https://i.stack.imgur.com/Cf63d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cf63d.png" alt="GCP Project"></a></p>
<p>A network route is only effective within its own VPC. Say a request from pod1 reaches VM1: VPC1 does not know how to route the packet back to pod1, because the pod CIDR is not part of the routes exchanged over the peering. To solve this, you just need to SNAT traffic that comes from the Pod CIDR range in VPC2 and is heading to VPC1, so it appears to originate from the node's IP. </p> <p>Here is a simple daemonset that can help to inject the iptables rules into your GKE cluster. It SNATs traffic based on custom destinations. <a href="https://github.com/bowei/k8s-custom-iptables" rel="nofollow noreferrer">https://github.com/bowei/k8s-custom-iptables</a></p> <p>Of course, the firewall rules need to be set up properly.</p>
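<p>The rule that gets injected boils down to something like the following (a sketch; the destination CIDR is an assumption, use VPC₁'s subnet range):</p> <pre><code># on each GKE node: masquerade pod traffic destined for VPC1
iptables -t nat -A POSTROUTING -d 10.128.0.0/20 -j MASQUERADE
</code></pre> <p>With the source rewritten to the node's own address, VPC₁ can answer over the existing peering route.</p>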
<ul> <li>I have an Azure Machine (Kubernetes) which has an Agent with 2 cores and 1 GB. My two services are running on this Machine, each with its own Postgres (Deployment, Service, PV, PVC).</li> <li>I want to host my third service on the same machine too.</li> <li>When I tried to create its Postgres Deployment (this one also has its own Service, PV, PVC), the Pod was stuck in <code>status=ContainerCreating</code>.</li> <li>After some digging I got to know that my <code>VM</code> only supports <code>data-disks</code>. </li> </ul> <p>So I thought: why not use the <code>PVC</code> of the earlier deployment in the current service, like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: third-postgres labels: name: third-postgres spec: replicas: 1 template: metadata: labels: name: third-postgres spec: containers: - name: third-postgres image: postgres env: - name: PGDATA value: /var/lib/postgresql/data/pgdata - name: POSTGRES_USER value: third-user - name: POSTGRES_PASSWORD value: &lt;password&gt; - name: POSTGRES_DB value: third_service_db ports: - containerPort: 5432 volumeMounts: - name: third-postgresdata mountPath: /var/lib/postgresql/data volumes: - name: third-postgresdata persistentVolumeClaim: claimName: &lt;second-postgres-data&gt; </code></pre> <ul> <li>Now this Deployment was successfully running but it doesn't create the new database <code>third_service_db</code> </li> <li>Maybe because the second <code>PVC</code> already exists, so it skips the DB creation part? </li> <li>So is there any way I can use the same <code>PVC</code> for all my services, and can the same <code>PVC</code> hold multiple databases? So that when I run <code>kubectl create -f &lt;path-to-third-postgres.yaml&gt;</code> it takes the database configuration from the env variables and creates the DB in the same <code>PVC</code> </li> </ul>
<p>You have to create one PVC per Deployment. Once a PVC has been <em>claimed</em>, it must be released before it can be used again.</p> <p>In the case of AzureDisk, the auto-created volumes can only be mounted by a single node (<em>ReadWriteOnce</em> access mode) so there's one more constraint: each of your Deployments can have at most <strong>1</strong> replica.</p>
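<p>A minimal additional claim for the third Postgres would look like this (a sketch; whether you also need a storage-class annotation depends on how the cluster's default class is configured):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: third-postgres-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre> <p>Reference it from the new Deployment with <code>claimName: third-postgres-data</code>; since the volume starts out empty, the Postgres image will run its init step and create <code>third_service_db</code> from the env variables.</p>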
<p>We are running Java-based microservices and we have the following scenario: </p> <ul> <li>The application writes a debug.log file to /opt/tomcat/logs/debuglog/debug.log and the size of the log file is 1GB/hour </li> <li>Tomcat writes catalina.out, localhost_access_log and localhost.log, and the size is 1GB/hour for each of them</li> </ul> <p>The question is how to solve this problem when we have a lot of logs to centralise and analyse. We are running 20 instances of this application. We have got 150GB of logs in flat files. The following are the requirements: </p> <ul> <li>Store the logs for 3 years in GCS as per our SLA</li> <li>Parse these logs and store them in BQ for bigdata for 1 year</li> <li>Parse these logs and store them in ELK for 7 days for developers to analyse any running issue</li> </ul> <p>We are trying to evaluate the following: </p> <ul> <li>As Kubernetes recommends running sidecars for application logs, we may end up running 3 sidecars, considering catalina.out will go to stdout. We can use Stackdriver to process the logs and put them into GCS. The problem we see is container explosion, specifically with auto scaling. The other problem is parsing the logs from Stackdriver into BigQuery or ELK.</li> <li>Mount GCS in the containers and write there directly. The problem is that the GCS mount is community driven and not production ready. We would still have to write a solution to parse these logs again.</li> <li>Use an external drive mounted to the Minion and volume-mount it into the container. Run 1 container per VM to process the logs for the different pipelines and scenarios. This solves a few problems for us: logs will not be lost when downscaled, no container explosion, a single-responsibility container to process the different pipelines, and logs are moved to GCS as they become available. The problem we see is managing the SSD storage attached to each VM upon up-scale and down-scale. </li> </ul> <p>Any suggestions are welcome. </p> <h2>EDIT</h2> <p>We ended up using a custom pipeline on GCP where the applications push logs to Pub/Sub and Dataflow is responsible for aggregating and transforming the information. </p>
<p>You can use a single sidecar that runs something like <a href="http://www.fluentd.org/" rel="nofollow noreferrer">fluentd</a> or <a href="https://www.elastic.co/products/logstash" rel="nofollow noreferrer">logstash</a>. Both are log ingestion tools that can be customized with several plugins, which allow you to route to all destinations at once. In the case of logstash you might even want to use filebeat.</p> <p>Also, fluentd seems to have an <a href="http://www.fluentd.org/plugins#google-cloud-platform" rel="nofollow noreferrer">official plugin from Google that does most of what you want</a>.</p> <h1>Using DaemonSets to collect logs on hosts</h1> <p>This is the procedure described <a href="http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html" rel="nofollow noreferrer">in this k8s blog post about cluster-level logging</a> and <a href="http://docs.fluentd.org/v0.12/articles/kubernetes-fluentd" rel="nofollow noreferrer">this blog post in the fluentd blog</a>.</p> <p>The idea is to run a DaemonSet (a set of pods that runs on every node in the cluster) that mounts the host path where container log files are located.</p> <p>However, this will only collect the logs that your application produces to stdout. To collect the other ones, you can use the technique described <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#streaming-sidecar-container" rel="nofollow noreferrer">here</a>: run an <em>extremely lightweight</em> sidecar that just tails the log files.</p>
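<p>To make the DaemonSet approach concrete, the essential part is mounting the host's log directories into the collector pod. A sketch (the image and its built-in configuration are assumptions; the official fluentd Kubernetes images ship reasonable defaults):</p> <pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
</code></pre> <p>This only picks up what the containers write to stdout/stderr; for debug.log and the other Tomcat files, the streaming-sidecar technique from the linked docs (a tiny container tailing the file from a shared <code>emptyDir</code>) gets them onto stdout so the same collector can ship them.</p>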
<p>The Kubernetes Dashboard allows users to see all secrets, including their raw values with just a couple clicks. These secrets will likely contain very sensitive data, such as production database passwords and private keys.</p> <p>How do you limit users of the Dashboard, so that they can't see the sensitive data?</p>
<p>This is a <a href="https://github.com/kubernetes/dashboard/issues/964" rel="nofollow noreferrer">known issue</a> and it is simply not officially supported at the moment - the Dashboard is a super-user level administration tool. This <a href="https://github.com/kubernetes/dashboard/issues/964#issuecomment-292486020" rel="nofollow noreferrer">should not be the case</a> forever, but more help is needed to get it there.</p> <p>There are some workarounds discussed in that issue thread that work currently. Here are some notable quirks around them to be aware of beforehand:</p> <ul> <li>Should the dashboard be under a dashboard user, and limited by that? If so, like Anirudh suggested you can neuter parts of the Dashboard and it will work fine and get 403s if they access the Secrets panel.</li> <li>Should the dashboard be under a logged in user, and be limited to what that user can see? This means that <code>kubectl proxy</code> will be necessary without some browser plugin or MITM proxy to attach the needed auth to dashboard server calls but it <a href="https://github.com/kubernetes/dashboard/issues/964#issuecomment-240496867" rel="nofollow noreferrer">is possible</a>.</li> </ul>
<p>I have a Kubernetes cluster and have a running container (X). From this container i want to create a new namespace, deploy a pod in this name space and spawn container(Y). I know kubernetes provides REST APIs. however, i am exploring goClient to do the same and not sure how to use namespace creation api.</p>
<pre><code>import ( "github.com/golang/glog" "k8s.io/client-go/kubernetes" "k8s.io/kubernetes/pkg/api/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) clientConfig, err := config.createClientConfigFromFile() if err != nil { glog.Fatalf("Failed to create a ClientConfig: %v. Exiting.", err) } clientset, err := clientset.NewForConfig(clientConfig) if err != nil { glog.Fatalf("Failed to create a ClientSet: %v. Exiting.", err) } nsSpec := &amp;v1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}} _, err := clientset.Core().Namespaces().Create(nsSpec) } </code></pre>
<p>Is it possible to use the python API to create a "custom object" in kubernetes?</p> <p>Edit:</p> <p>By custom objects I'm refering to <a href="https://kubernetes.io/docs/user-guide/thirdpartyresources/#creating-custom-objects" rel="nofollow noreferrer">this</a>.</p> <p>Thank you.</p>
<p><a href="https://github.com/kubernetes-incubator/client-python" rel="nofollow noreferrer">client-python</a> now supports TPRs. This is <a href="https://github.com/kubernetes-incubator/client-python/blob/master/examples/create_thirdparty_resource.md" rel="nofollow noreferrer">the example from the repo</a>:</p> <pre><code>from __future__ import print_function from pprint import pprint import kubernetes from kubernetes import config from kubernetes.rest import ApiException config.load_kube_config() api_instance = kubernetes.ThirdPartyResources() namespace = 'default' resource = 'repos' fqdn = 'git.k8s.com' body = {} body['apiVersion'] = "git.k8s.com/v1" body['kind'] = "RePo" body['metadata'] = {} body['metadata']['name'] = "blog-repo" body['repo'] = "github.com/user/my-blog" body['username'] = "username" body['password'] = "password" body['branch'] = "branch" try: # Create a Resource api_response = api_instance.apis_fqdn_v1_namespaces_namespace_resource_post( namespace, fqdn, resource, body) pprint(api_response) except ApiException as e: print( "Exception when calling DefaultApi-&gt;apis_fqdn_v1_namespaces_namespace_resource_post: %s\n" % e) </code></pre> <p>ref: <a href="https://github.com/kubernetes-incubator/client-python/blob/master/examples/create_thirdparty_resource.md" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/client-python/blob/master/examples/create_thirdparty_resource.md</a></p>
<p>As an experiment I'm trying to run a docker container on Azure using the Azure Container Service and Kubernetes as the orchestrator. I'm running the official nginx image. Here are the steps I am taking:</p> <p><code> az group create --name test-group --location westus az acs create --orchestrator-type=kubernetes --resource-group=test-group --name=k8s-cluster --generate-ssh-keys </code></p> <p>I created Kubernetes deployment and service files from a docker compose file using Kompose.</p> <p><strong>deployment file</strong> <code> apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: kompose.service.type: LoadBalancer creationTimestamp: null labels: io.kompose.service: test name: test spec: replicas: 1 strategy: {} template: metadata: creationTimestamp: null labels: io.kompose.service: test spec: containers: - image: nginx:latest name: test ports: - containerPort: 80 resources: {} restartPolicy: Always status: {} </code></p> <p><strong>service file</strong> <code> apiVersion: v1 kind: Service metadata: annotations: kompose.service.type: LoadBalancer creationTimestamp: null labels: io.kompose.service: test name: test spec: ports: - name: "80" port: 80 targetPort: 80 selector: io.kompose.service: test type: LoadBalancer status: loadBalancer: {} </code></p> <p>I can then start everything up:</p> <p><code> kubectl create -f test-service.yaml,test-deployment.yaml </code></p> <p>Once an IP has been exposed I assign a dns prefix to it so I can access my running container like so: <em>http</em>://nginx-test.westus.cloudapp.azure.com/.</p> <p><strong>My question is, how can I access the service using https? At http<em>s</em>://nginx-test.westus.cloudapp.azure.com/</strong></p> <p>I don't think I'm supposed to configure nginx for https, since the certificate is not mine. I've tried changing the load balancer to send 443 traffic to port 80, but I receive a timeout error.</p> <p>I tried mapping port 443 to port 80 in my Kubernetes service config.</p> <p><code> ports: - name: "443" port: 443 targetPort: 80 </code></p> <p>But that results in:</p> <p><code> SSL peer was not expecting a handshake message it received. Error code: SSL_ERROR_HANDSHAKE_UNEXPECTED_ALERT </code></p> <p>How can I view my running container at <a href="https://nginx-test.westus.cloudapp.azure.com/" rel="nofollow noreferrer">https://nginx-test.westus.cloudapp.azure.com/</a>?</p>
<p>If I understand it correctly, I think you are looking for the <code>Nginx Ingress controller</code>.<br> If we need TLS termination on Kubernetes, we can use an ingress controller; on Azure we can use the <code>Nginx Ingress controller</code>.<br> To achieve this, we can follow these steps:<br> 1 Deploy the Nginx Ingress controller<br> 2 Create TLS certificates<br> 3 Deploy a test http service<br> 4 Configure TLS termination<br> For more information about configuring the Nginx Ingress Controller for TLS termination on Kubernetes on Azure, please refer to this <a href="https://blogs.technet.microsoft.com/livedevopsinjapan/2017/02/28/configure-nginx-ingress-controller-for-tls-termination-on-kubernetes-on-azure-2/" rel="nofollow noreferrer">blog</a>.</p>

<pre><code>root@k8s-master-6F403744-0:~/ingress/examples/deployment/nginx# kubectl get services --namespace kube-system -w
NAME                   CLUSTER-IP     EXTERNAL-IP    PORT(S)         AGE
default-http-backend   10.0.113.185   &lt;none&gt;         80/TCP          42m
heapster               10.0.4.232     &lt;none&gt;         80/TCP          1h
kube-dns               10.0.0.10      &lt;none&gt;         53/UDP,53/TCP   1h
kubernetes-dashboard   10.0.237.125   &lt;nodes&gt;        80:32229/TCP    1h
nginx-ingress-ssl      10.0.92.57     40.71.37.243   443:30215/TCP   13m
</code></pre>

<p><a href="https://i.stack.imgur.com/jSxZ8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jSxZ8.png" alt="enter image description here"></a></p>

<p><a href="https://i.stack.imgur.com/P1Gl9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P1Gl9.png" alt="enter image description here"></a></p>
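<p>As a rough sketch of step 4 (the host and service name below are taken from your question, while the secret name and certificate file names are just placeholders), you would store the certificate in a TLS secret and reference it from an Ingress rule:</p>

<pre><code># create a TLS secret from your certificate and key (file names are placeholders)
kubectl create secret tls test-tls --cert=tls.crt --key=tls.key
</code></pre>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  tls:
  - hosts:
    - nginx-test.westus.cloudapp.azure.com
    secretName: test-tls
  rules:
  - host: nginx-test.westus.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: 80
</code></pre>

<p>With this in place, the ingress controller terminates TLS on 443 and forwards plain HTTP to the <code>test</code> service.</p>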
<p>I have one master and two worker node kubernetes cluster on <code>AWS</code>. And I have two environments (qc and prod) in the cluster and I created two namespaces. I have the same service running on <code>qc</code>and <code>prod</code> namespaces.</p> <p>I have created ingress for both namespaces.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress namespace: prod spec: rules: - host: "*.qc-k8s.example.com" http: paths: - path: /app backend: serviceName: client-svc servicePort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress namespace: qc spec: rules: - host: "*.qc-k8s.example.com" http: paths: - path: /app-qc backend: serviceName: client-svc servicePort: 80 </code></pre> <p>I have <code>client-svc</code> in both <code>qc</code> and <code>prod</code> namespaces and open the nodeport 80. Then I created <code>ELB</code> service and <code>daemonset</code> as below.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ingress-svc namespace: deafult annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:ca-central-1:492276880714:certificate/xxxxxxxxxxxxxx service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443" service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http spec: type: LoadBalancer selector: app: my-app ports: - name: http port: 80 targetPort: http - name: https port: 443 targetPort: http --- apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: ingress-nginx namespace: deafult spec: template: metadata: labels: app: my-app spec: terminationGracePeriodSeconds: 60 containers: - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.6 name: ingress-nginx imagePullPolicy: Always ports: - name: http containerPort: 80 hostPort: 80 livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 readinessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace resources: limits: cpu: 100m memory: 100Mi requests: cpu: 100m memory: 100Mi args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend </code></pre> <p>When I tried to curl <code>curl -iv https://gayan.qc-k8s.example.com/app/</code>. Then Im getting an error.</p> <p><strong>2017/06/27 15:43:31 [error] 158#158: *981 connect() failed (111: Connection refused) while connecting to upstream, client: 209.128.50.138, server: gayan.qc-k8s.example.com, request: "GET /app/ HTTP/1.1", upstream: "<a href="http://100.82.2.47:80/app/" rel="nofollow noreferrer">http://100.82.2.47:80/app/</a>", host: "gayan.qc-k8s.example.com" 209.128.50.138 - [209.128.50.138, 209.128.50.138] - - [27/Jun/2017:15:43:31 +0000] "GET /app/ HTTP/1.1" 500 193 "-" "curl/7.51.0" 198 0.014 100.82.2.47:80, 100.96.2.48:80 0, 193 0.001, 0.013 502, 500</strong></p> <p>If I curl <code>curl -iv https://gayan.qc-k8s.example.com/app-qc</code>, I'm getting the same issue. Anyone has experienced this error previously ? any clue to resolve this issue?</p> <p>Thank you</p>
<p>I solved this by <a href="https://github.com/kubernetes/kubernetes/issues/17088" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/17088</a></p>

<p>An example, from a real document we use:</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-1
spec:
  rules:
  - host: api-gateway-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
        path: /
  - host: api-shop-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-shop
          servicePort: 80
        path: /
  - host: api-search-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-search
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - api-gateway-dev-1.faceit.com
    - api-search-dev-1.faceit.com
    - api-shop-dev-1.faceit.com
    secretName: faceitssl
</code></pre>

<p>We make one of these for each of our namespaces for each track.</p>

<p>Then, we have a single namespace with an Ingress Controller which runs automatically configured NGINX pods. Another AWS Load balancer points to these pods, which run on a NodePort using a DaemonSet to run exactly one on every node in our cluster.</p>

<p>As such, the traffic is then routed:</p>

<p>Internet -> AWS ELB -> NGINX (on node) -> Pod</p>

<p>We keep the isolation between namespaces while using Ingresses as they were intended. It's not correct or even sensible to use one ingress to hit multiple namespaces. It just doesn't make sense, given how they are designed. The solution is to use one Ingress per namespace with a cluster-scope ingress controller which actually does the routing.</p>
<p>I have a requirement that involves putting together uptime metrics for some of the pods in my Kubernetes cluster.</p> <p>I am thinking of using the Kubernetes readiness checks and was curious if anyone has done anything similar?</p> <p>Basically I am trying to generate reports that say this pod has had 95% uptime over the last week/month.</p>
<p>I would recommend checking out Prometheus. It is the most powerful way of monitoring the internal services of a Kubernetes cluster, resource usage (at the host level as well as the Kubernetes API level), and, of course, your applications.</p>

<p>NB: for pod uptime, there is a generic <code>up</code> metric you can use.</p>
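<p>As a hedged sketch of what the report query could look like (the <code>job</code> label and namespace are placeholders, and the second query assumes kube-state-metrics is also deployed), the percentage of time over the last week can be computed with <code>avg_over_time</code>:</p>

<pre><code># share of the last 7 days a scrape target was up, as a percentage
avg_over_time(up{job="my-app"}[7d]) * 100

# per-pod readiness over the last 7 days, via kube-state-metrics
avg_over_time(kube_pod_status_ready{condition="true", namespace="default"}[7d]) * 100
</code></pre>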
<p>How do I ssh into the nodes of a Google Container Engine cluster? When I try to ssh to a node using its IP address, it says public key denied.</p>

<p>Can someone help me figure out how to get the key to connect to the nodes of a Google Container Engine cluster?</p>
<p>You should use the gcloud tool, e.g.:</p>

<pre><code>gcloud compute ssh &lt;NODE_NAME&gt; --zone &lt;ZONE&gt;
</code></pre>
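<p>For example, you can look up the node name and zone first and then connect; the node name and zone below are placeholders:</p>

<pre><code># node names as seen by Kubernetes
kubectl get nodes

# the corresponding Compute Engine instances and their zones
gcloud compute instances list

# ssh into one of them
gcloud compute ssh gke-mycluster-default-pool-12345678-abcd --zone us-central1-a
</code></pre>

<p><code>gcloud compute ssh</code> generates and propagates an SSH key for your user, which is why a plain <code>ssh</code> with an unknown key is rejected with "public key denied".</p>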
<p>I am trying to host a simple Node.js app connected to MongoDB, I look on the web, I've found 2 different methods and some guides to do so:</p> <ol> <li><p>Different pods:</p> <ul> <li><a href="https://github.com/kubernetes/kubernetes/tree/master/examples/nodesjs-mongodb" rel="nofollow noreferrer">Example</a> from the official Kubernetes documentation.</li> <li><a href="https://medium.com/google-cloud/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d" rel="nofollow noreferrer">Example</a> from the official Google Cloud Platform documentation.</li> </ul></li> <li><p>Same pod: </p> <ul> <li><a href="https://developer.ibm.com/recipes/tutorials/kubernetes-how-to-run-a-node-js-application-which-accesses-mongo-database-where-both-are-running-in-containers-in-a-single-pod/" rel="nofollow noreferrer">Example</a> from the official IBM documentation.</li> </ul></li> </ol> <p>This confused me a bit, which is the best practise to achieve that?</p>
<p>You should put them in different pods.</p>

<p>Being in different pods, you can then scale up the number of replicas of the web application part to as many instances as you need, and you will still have the one instance of MongoDB.</p>

<p>If you had them in the same pod, you wouldn't be able to scale up to multiple instances, as each replica would have its own MongoDB instance and thus separate data. If you were doing data updates to MongoDB from the web application, that would cause all sorts of problems, as subsequent requests could hit a different instance and so see different data.</p>
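<p>A minimal sketch of that layout, with one Deployment per tier and a Service so the Node.js pods can reach MongoDB by name (image names, labels and the database URL are placeholders, not a definitive setup):</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # scale the stateless tier freely
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: my-node-app:latest  # placeholder image
        env:
        - name: MONGO_URL
          value: mongodb://mongo:27017/mydb
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1                      # a single MongoDB instance
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:3.4
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo                      # gives the web pods a stable DNS name
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
</code></pre>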
<p>I'm testing the bluemix container service. Since IBM provides a grafana/graphite for my account to collect cpu/mem stats on all of my containers, i naturally want to add my own statistics.</p> <p>Is it possible to report custom stats form the kubernetes cluster or from inside my containers to the ibm graphite?</p>
<p>According to the documentation, you can (and have to) use the CoreOS prometheus-operator to run a Prometheus instance in your cluster, and then add that Prometheus as a datasource to Grafana.</p>

<p>As far as I know, you cannot add a Prometheus datasource to the metric.ng.bluemix.net Grafana.</p>

<p>WARNING: the current version of the linked CoreOS repository is for Kubernetes 1.6 (Bluemix runs 1.5). You have to get an older version of the scripts used by CoreOS.</p>
<p>I'm using Kubernetes on Google Compute Engine and Stackdriver. The Kubernetes metrics show up in Stackdriver as custom metrics. I successfully set up a dashboard with charts that show a few custom metrics such as "node cpu reservation". I can even set up an aggregate mean of all node cpu reservations to see if my total Kubernetes cluster CPU reservation is getting too high. See screenshot.</p> <p><a href="https://i.stack.imgur.com/ilrKd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ilrKd.png" alt="enter image description here"></a></p> <p>My problem is, I can't seem to set up an alert on the mean of a custom metric. I can set up an alert on each node, but that isn't what I want. I can also set up "Group Aggregate Threshold Condition", but custom metrics don't seem to work for that. Notice how "Custom Metric" is not in the dropdown.</p> <p><a href="https://i.stack.imgur.com/sF4fn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sF4fn.png" alt="enter image description here"></a></p> <p>Is there a way to set an alert for an aggregate of a custom metric? If not, is there some way I can alert when my Kubernetes cluster is getting too high on CPU reservation? </p>
<p>alerting on an aggregation of custom metrics is currently not available in Stackdriver. We are considering various solutions to the problem you're facing. Note that sometimes it's possible to alert directly on symptoms of the problem rather than monitoring the underlying resources. For example, if you're concerned about cpu because X happens and users notice, and X is bad - you could consider alerting on symptoms of X instead of alerting on cpu. </p>
<p>Ok here is the story:</p> <p>I am using minikube to host my application:</p> <p>1 pod running redis (redis on ubuntu:14.04)</p> <p>1 pod running my php application (php7-apache)</p> <p>I realised that if I setup my redis pod first then my php pod, my php pod will have these extra env variables:</p> <pre><code>REDIS_SERVICE_PORT=6379 REDIS_PORT_6379_TCP_ADDR=10.0.0.229 REDIS_PORT_6379_TCP_PORT=6379 REDIS_PORT_6379_TCP=tcp://10.0.0.229:6379 REDIS_PORT=tcp://10.0.0.229:6379 REDIS_SERVICE_HOST=10.0.0.229 </code></pre> <p>These variables override the port I setup for my php project.</p> <p>To counter it, I have to explicitly set the REDIS_PORT in my yaml file for my php deployment.</p> <p>Any idea why this happened? And clean way to simply avoid this?</p> <p>Thanks!</p>
<p>Ooooook, got the answer.</p>

<p>Credits to @aschepis.</p>

<p>This turns out to be standard Kubernetes behaviour rather than a mystery: for every Service that exists when a pod is created, Kubernetes injects Docker-links-style environment variables named after it, so anything exposed under the name <code>redis</code> produces the <code>REDIS_SERVICE_*</code> and <code>REDIS_PORT*</code> variables in pods that start later.</p>

<p>Once I renamed the deployment to another name, things went back to normal...</p>

<p>Thanks again to @aschepis.</p>
<p>I am using the concourse helm build provided at <a href="https://github.com/kubernetes/charts/tree/master/stable/concourse" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/concourse</a> to setup concourse inside of our kubernetes cluster. I have been able to get the setup working and I am able to access it within the cluster but I am having trouble accessing it outside the cluster. The notes from the build show that I can just use <code>kubectl port-forward</code> to get to the webpage but I don't want to have all of the developers have to forward the port just to get to the web ui. I have tried creating a service that has a node port like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: concourse namespace: concourse-ci spec: ports: - port: 8080 name: atc nodePort: 31080 - port: 2222 name: tsa nodePort: 31222 selector: app: concourse-web type: NodePort </code></pre> <p>This allows me to get to the webpage and interact with it in most ways but then when I try to look at build status it never loads the events that happened. Instead a network request for <code>/api/v1/builds/1/events</code> is stuck in pending and the steps of the build never load. Any ideas what I can do to be able to completely access concourse external to the cluster?</p> <p>EDIT: It seems like the events network request normally responds with a <code>text/event-stream</code> data type and maybe the Kubernetes service isn't handling an event stream correctly. Or there is something about concourse that handles event-streams different than the norm.</p>
<p>After plenty of investigation I have found that the NodePort service is actually working, and it is just my antivirus (Sophos) that is silently blocking the response to the <code>events</code> request.</p>
<p>I have a Scala application that is using Spark 2.1 in standalone mode. The application will run for 2 hours and finish. It should be run once a month.</p> <p>I found several approaches to combine Spark and Kubernetes: </p> <ol> <li>Use Apache Spark Helm Chart: <a href="https://github.com/kubernetes/charts/tree/master/stable/spark" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/spark</a></li> <li>There is a special branch of Spark for Kubernetes: <a href="https://github.com/apache-spark-on-k8s/spark" rel="nofollow noreferrer">https://github.com/apache-spark-on-k8s/spark</a> </li> <li>Build my own Docker image of my application including the Spark binary: <a href="http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/" rel="nofollow noreferrer">http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/</a> Code example: <a href="https://github.com/phatak-dev/kubernetes-spark" rel="nofollow noreferrer">https://github.com/phatak-dev/kubernetes-spark</a></li> </ol> <p>Most of the documentation describe how to run a Spark cluster on Kubernetes. What is the approach for running Spark standalone on Kubernetes?</p>
<p>I first tried the simplest idea: Approach 3:</p> <p><em>Build my own Docker image of my application including the Spark binary: <a href="http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/" rel="nofollow noreferrer">http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/</a></em></p> <p><em>Code example: <a href="https://github.com/phatak-dev/kubernetes-spark" rel="nofollow noreferrer">https://github.com/phatak-dev/kubernetes-spark</a></em></p> <p>It worked well.</p>
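<p>For reference, here is a rough sketch of how such an image can be scheduled to run once a month with a CronJob (the image name, jar path and main class are placeholders for whatever your Docker image contains; <code>batch/v2alpha1</code> may need to be enabled on the API server in 1.5/1.6, otherwise you can use a plain <code>Job</code> and trigger it yourself):</p>

<pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: spark-monthly
spec:
  schedule: "0 3 1 * *"            # 03:00 on the 1st of every month
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: spark-app
            image: myregistry/my-spark-app:latest   # placeholder image with Spark + app jar
            command:
            - /opt/spark/bin/spark-submit            # placeholder path inside the image
            - --master
            - local[*]
            - --class
            - com.example.MyJob
            - /opt/app/my-app.jar
</code></pre>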
<p>Here <a href="https://quay.io/repository/coreos/hyperkube?tab=tags" rel="nofollow noreferrer">https://quay.io/repository/coreos/hyperkube?tab=tags</a> you can see that there are two tags for 1.6.6:</p> <ul> <li>v1.6.6_coreos.0</li> <li>v1.6.6_coreos.1</li> </ul> <p>What is the meaning of 0 and 1?<br> What is the difference between the images?</p>
<p>According to <a href="https://github.com/coreos/kubernetes/releases" rel="nofollow noreferrer">https://github.com/coreos/kubernetes/releases</a> , these are identical images, but one was re-tagged to allow new hyperkube image build at quay.io . For details, please have a look into provided link.</p>
<p>I am running a kubernetes cluster with heapster and prometheus service. I want to measure each container and pods start and end time but i could not find such statistics in prometheus.</p> <p>I want to get these statistics through some api.</p> <p>Does anyone know how can I get it ?</p>
<p>The kube-state-metrics job exports various Kubernetes API-related stats for Prometheus, including Pod creation and start times (see the pod metrics documentation linked below):</p>

<p><a href="https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/pod-metrics.md" rel="nofollow noreferrer">https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/pod-metrics.md</a></p>
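<p>As a hedged example (metric names can differ between kube-state-metrics versions, so check the linked documentation for the version you run), the raw timestamps can then be queried in Prometheus like this:</p>

<pre><code># unix timestamp at which each pod was created
kube_pod_created{namespace="default"}

# unix timestamp at which each pod was started (and, for finished pods, completed)
kube_pod_start_time{namespace="default"}
kube_pod_completion_time{namespace="default"}

# current age of a specific pod, in seconds
time() - kube_pod_created{pod="my-pod"}
</code></pre>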
<h3>Problem</h3> <p>I am trying to enable authentication on my kubelet servers using Bearer Tokens (<strong>not</strong> X.509 client certificate authentication), and fail to understand the workflow.</p> <h3>What I tried</h3> <p>According to the documentation page <a href="https://kubernetes.io/docs/admin/kubelet-authentication-authorization/" rel="nofollow noreferrer">Kubelet authentication/authorization</a>, starting the kubelet with the <code>--authentication-token-webhook</code> flag enables the Bearer Token authentication. I could confirm that by sending a request to the kubelet REST API using one of the <code>default</code> secrets created by the Controller Manager:</p> <pre class="lang-sh prettyprint-override"><code>$ MY_TOKEN="$(kubectl get secret default-token-kw7mk \ -o jsonpath='{$.data.token}' | base64 -d)" $ curl -sS -o /dev/null -D - \ --cacert /var/run/kubernetes/kubelet.crt \ -H "Authorization : Bearer $MY_TOKEN" \ https://host-192-168-0-10:10250/pods/ HTTP/1.1 200 OK Content-Type: application/json Date: Fri, 30 Jun 2017 22:12:29 GMT Transfer-Encoding: chunked </code></pre> <p>However any communication with the kubelet <strong>via the API server</strong> (typically using the kubectl <code>logs</code> or <code>exec</code> commands) using the same Bearer Token as above fails with:</p> <pre><code>$ kubectl --token="$MY_TOKEN" -n kube-system logs \ kube-dns-2272871451-sc02r -c kubedns error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log kube-dns-2272871451-sc02r)) </code></pre> <h3>Where I need clarification</h3> <p>My initial assumption was that the API server just passed the Bearer Token it received from the client directly to the kubelet, but my little experiment above proved me otherwise.</p> <p>I see that the <a href="https://kubernetes.io/docs/admin/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a> documentation mentions a flag called <code>--authentication-token-webhook-config-file</code> but I'm unsure how to use it, or if it's even relevant for authenticating the API server against a kubelet.</p> <h3>Current configuration</h3> <p>My kubelet(s) run with:</p> <pre><code> --anonymous-auth=false --authorization-mode=Webhook --authentication-token-webhook --cadvisor-port=0 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --read-only-port=0 --kubeconfig=/etc/kubernetes/kubeconfig-kubelet --pod-manifest-path=/etc/kubernetes/manifests --require-kubeconfig </code></pre> <p>My API server runs with: </p> <pre><code> --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds --anonymous-auth=false --authorization-mode=AlwaysAllow (+ tls flags) </code></pre>
<p>When making calls to the API server that require communication from the API server to the kubelet, that communication is done using the API server's client credentials, which only support x509 authentication to the kubelet.</p> <p>The flags used to give the API server the credentials to use to contact the kubelet are listed in the "X509 client certificate authentication" section of <a href="https://kubernetes.io/docs/admin/kubelet-authentication-authorization/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/kubelet-authentication-authorization/</a></p> <p>API server webhook authentication options are unrelated to kubelet auth. </p>
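<p>Concretely, that means giving the API server a client certificate/key for kubelet connections and making the kubelet trust the CA that issued it. A sketch with example flags (the file paths are placeholders):</p>

<pre><code># on the API server
kube-apiserver \
  ... \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
  --kubelet-certificate-authority=/etc/kubernetes/pki/kubelet-ca.crt

# on the kubelet, enable x509 client auth for that CA
kubelet \
  ... \
  --client-ca-file=/etc/kubernetes/pki/client-ca.crt
</code></pre>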
<p>I've got a kubernetes cluster which was set up with kops with 1.5, and then upgraded to 1.6.2. I'm trying to use PodPresets. The docs state the following requirements:</p> <blockquote> <ol> <li>You have enabled the api type settings.k8s.io/v1alpha1/podpreset</li> <li>You have enabled the admission controller PodPreset</li> <li>You have defined your pod presets</li> </ol> </blockquote> <p>I'm seeing that for 1.6.x, the first is taken care of (how can I verify?). How can I apply the second? I can see that there are three kube-apiserver-* pods running in the cluster (I imagine it's for the 3 azs). I guess I can edit their yaml config from kubernetes dashboard and add PodPreset to the admission-control string. But is there a better way to achieve this?</p>
<p>You can list the API groups which are currently enabled in your cluster either with the <code>api-versions</code> kubectl command, or by sending a GET request to the <code>/apis</code> endpoint of your <code>kube-apiserver</code>:</p>

<pre class="lang-none prettyprint-override"><code>$ curl localhost:8080/apis
{
  "paths": [
    "/api",
    "/api/v1",
    "...",
    "/apis/settings.k8s.io",
    "/apis/settings.k8s.io/v1alpha1",
    "..."
  ]
}
</code></pre>

<blockquote>
  <p><strong>Note</strong>: The <code>settings.k8s.io/v1alpha1</code> API is enabled by default on Kubernetes v1.6 and v1.7 but will be <a href="https://github.com/kubernetes/kubernetes/pull/47690" rel="nofollow noreferrer">disabled by default in v1.8</a>.</p>
</blockquote>

<p>You can use a kops <a href="https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md" rel="nofollow noreferrer">ClusterSpec</a> to customize the configuration of your Kubernetes components during the cluster provisioning, including the API servers.</p>

<p>This is described on the documentation page <a href="https://github.com/kubernetes/kops/blob/master/docs/manifests_and_customizing_via_api.md#exporting-a-cluster" rel="nofollow noreferrer">Using A Manifest to Manage kops Clusters</a>, and the full spec for the KubeAPIServerConfig type is available <a href="https://godoc.org/k8s.io/kops/pkg/apis/kops#KubeAPIServerConfig" rel="nofollow noreferrer">in the kops GoDoc</a>.</p>

<p>Example:</p>

<pre class="lang-none prettyprint-override"><code>apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: k8s.example.com
spec:
  kubeAPIServer:
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - PodPreset
</code></pre>

<p>To update an existing cluster, perform the following steps:</p>

<ol>
<li><p>Get the full cluster configuration with</p>

<pre class="lang-none prettyprint-override"><code>kops get cluster name --full
</code></pre></li>
<li><p>Copy the kubeAPIServer spec block from it.</p></li>
<li><p><strong>Do not push back the full configuration</strong>. Instead, edit the cluster configuration with</p>

<pre class="lang-none prettyprint-override"><code>kops edit cluster name
</code></pre></li>
<li><p>Paste the kubeAPIServer spec block, add the missing bits, and save.</p></li>
<li><p>Update the cluster resources with</p>

<pre class="lang-none prettyprint-override"><code>kops update cluster name
</code></pre></li>
<li><p>Perform a rolling update to apply the changes:</p>

<pre class="lang-none prettyprint-override"><code>kops rolling-update cluster name
</code></pre></li>
</ol>
<p>I've created a Kubernetes cluster on AWS with the <a href="https://github.com/kubernetes/kops" rel="noreferrer">kops</a> tool. I need to get hold of its certificate authority certificate and key though, how do I export these files through kops?</p> <p>I've tried the following, but it fails saying that yaml output format isn't supported for secrets:</p> <pre><code>kops --state s3://example.com get secret ca -o yaml </code></pre> <p>I'm also wondering how to specify which cluster to address, as kops is just using the default kubectl context. I'm using kops version 1.6.2.</p>
<p>I found out that kops stores the CA key and certificate in its S3 bucket, so you can download said files like so:</p> <pre><code>aws s3 cp s3://$BUCKET/$CLUSTER/pki/private/ca/$KEY.key ca.key aws s3 cp s3://$BUCKET/$CLUSTER/pki/issued/ca/$CERT.crt ca.crt </code></pre> <p>You need to determine the S3 bucket used by kops (i.e. <code>$BUCKET</code>), the name of your cluster (<code>$CLUSTER</code>) and the filenames of the .key and .crt files will be random.</p>
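<p>Since the filenames are random, you can list the keystore first to find them (using the same bucket and cluster name you pass to kops via <code>--state</code>):</p>

<pre><code>aws s3 ls s3://$BUCKET/$CLUSTER/pki/private/ca/
aws s3 ls s3://$BUCKET/$CLUSTER/pki/issued/ca/
</code></pre>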
<p>Is it possible to share a single persistent volume claim (PVC) between two apps (each using a pod)?</p> <p>I read: <a href="https://stackoverflow.com/questions/35364367/share-persistent-volume-claims-amongst-containers-in-kubernetes-openshift">Share persistent volume claims amongst containers in Kubernetes/OpenShift</a> but didn't quite get the answer.</p> <p>I tried to added a PHP app, and MySQL app (with persistent storage) within the same project. Deleted the original persistent volume (PV) and created a new one with read,write,many mode. I set the root password of the MySQL database, and the database works.</p> <p>Then, I add storage to the PHP app using the same persistent volume claim with a different subpath. I found that I can't turn on both apps. After I turn one on, when I try to turn on the next one, it get stuck at creating container.</p> <p>MySQL .yaml of the deployment step at openshift:</p> <pre><code> ... template: metadata: creationTimestamp: null labels: name: mysql spec: volumes: - name: mysql-data persistentVolumeClaim: claimName: mysql containers: - name: mysql ... volumeMounts: - name: mysql-data mountPath: /var/lib/mysql/data subPath: mysql/data ... terminationMessagePath: /dev/termination-log imagePullPolicy: IfNotPresent restartPolicy: Always terminationGracePeriodSeconds: 30 dnsPolicy: ClusterFirst </code></pre> <p>PHP .yaml from deployment step:</p> <pre><code> template: metadata: creationTimestamp: null labels: app: wiki2 deploymentconfig: wiki2 spec: volumes: - name: volume-959bo &lt;&lt;---- persistentVolumeClaim: claimName: mysql containers: - name: wiki2 ... volumeMounts: - name: volume-959bo mountPath: /opt/app-root/src/w/images subPath: wiki/images terminationMessagePath: /dev/termination-log imagePullPolicy: Always restartPolicy: Always terminationGracePeriodSeconds: 30 dnsPolicy: ClusterFirst securityContext: {} </code></pre> <p>The volume mount names are different. But that shouldn't make the two pods can't share the PVC. Or, the problem is that they can't both mount the same volume at the same time?? 
I can't get the termination log at /dev because if it can't mount the volume, the pod doesn't start, and I can't get the log.</p> <p>The PVC's .yaml (<code>oc get pvc -o yaml</code>)</p> <pre><code>apiVersion: v1 items: - apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: pv.kubernetes.io/bind-completed: "yes" pv.kubernetes.io/bound-by-controller: "yes" volume.beta.kubernetes.io/storage-class: ebs volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs creationTimestamp: YYYY-MM-DDTHH:MM:SSZ name: mysql namespace: abcdefghi resourceVersion: "123456789" selfLink: /api/v1/namespaces/abcdefghi/persistentvolumeclaims/mysql uid: ________-____-____-____-____________ spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi volumeName: pvc-________-____-____-____-____________ status: accessModes: - ReadWriteMany capacity: storage: 1Gi phase: Bound kind: List metadata: {} resourceVersion: "" selfLink: "" </code></pre> <p>Suspicious Entries from <code>oc get events</code> </p> <pre><code>Warning FailedMount {controller-manager } Failed to attach volume "pvc-________-____-____-____-____________" on node "ip-172-__-__-___.xx-xxxx-x.compute.internal" with: Error attaching EBS volume "vol-000a00a00000000a0" to instance "i-1111b1b11b1111111": VolumeInUse: vol-000a00a00000000a0 is already attached to an instance Warning FailedMount {kubelet ip-172-__-__-___.xx-xxxx-x.compute.internal} Unable to mount volumes for pod "the pod for php app": timeout expired waiting for volumes to attach/mount for pod "the pod". list of unattached/unmounted volumes= [volume-959bo default-token-xxxxx] </code></pre> <p>I tried to:</p> <ol> <li>turn on the MySQL app first, and then try to turn on the PHP app </li> <li>found php app can't start</li> <li>turn off both apps</li> <li>turn on the PHP app first, and then try to turn on the MySQL app.</li> <li>found mysql app can't start</li> </ol> <p>The strange thing is that the event log never says it can't mount volume for the MySQL app.</p> <p>The remaining volumen to mount is either default-token-xxxxx, or volume-959bo (the volume name in PHP app), but never mysql-data (the volume name in MySQL app).</p>
<p>So the error seems to be caused by the underlying storage you are using, in this case <code>EBS</code>. The OpenShift docs actually specifically state that this is the case for block storage, see <a href="https://docs.openshift.com/container-platform/3.5/security/storage.html#security-storage-block" rel="nofollow noreferrer">here</a>.</p>

<p>I know this will work for both NFS and Glusterfs storage, and I have done this in numerous projects using these storage types, but unfortunately, in your case it's not supported.</p>
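<p>If you do need one volume mounted read-write by both the MySQL and PHP pods, an NFS-backed PV is one option that supports <code>ReadWriteMany</code>. A rough sketch, where the server address and export path are placeholders:</p>

<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.internal   # placeholder
    path: /exports/shared          # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
</code></pre>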
<p>Specific to Docker based deployment, what are the differences between those two? Since Google App Engine Flexible now also supports Dockerfile based deployment and it is also fully-managed service, seems it's more preferred option rather than configuring Kubernetes deployment on Container Engine, isn't it?</p> <p>What are the use cases where it's more preferred to use Google Container Engine over App Engine Flexible?</p>
<p>They are different things. App Engine Flexible is focused on application development, i.e. you have an application and you want it to be deployed and managed by Google. On the other hand, Kubernetes is more about having your own infrastructure. Obviously, you can also deploy applications in Kubernetes but, as it's your "own" infrastructure, you are the one to directly manage how both the infrastructure and the application will behave (create services, create scalability policies, RBAC, security policies...). </p> <p>In this sense, Kubernetes is more flexible in what you can achieve. However, as a developer, you may not be interested in the infrastructure at all, only that your application works and scales. For this kind of profile, App Engine Flexible is more suitable.</p> <p>If, on the other hand, you want to manage a complete container infrastructure (more of an SRE profile), then Kubernetes is for you.</p>
<p>I have a bunch of pods running in the same cluster. Sometimes there are not enough resources and some pods need to terminate. </p> <p>That's OK, but how do I set the priority of which pods are killed first?</p> <p>It usually kills my most important service first :\</p> <p>Thanks!</p>
<p>I suggest you take a look at <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md" rel="nofollow noreferrer">resource QoS</a>.</p>

<p>Have your important stuff (including monitoring) specify limit=request, which in turn will land it in the Guaranteed QoS class.</p>

<p>Specifically,</p>

<blockquote>
  <p>The system computes pod level requests and limits by summing up per-resource requests and limits across all containers. When request == limit, the resources are guaranteed (...)</p>
</blockquote>

<p>Also, overstepping CPU limits only results in throttling, so it's more important to get memory limits (per container) right.</p>
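<p>For example, a container spec fragment where requests equal limits (and which therefore lands in the Guaranteed class) looks like this; the name, image and numbers are placeholders:</p>

<pre><code>containers:
- name: important-service
  image: my-image:latest           # placeholder
  resources:
    requests:
      cpu: 500m
      memory: 256Mi
    limits:
      cpu: 500m                    # equal to the request
      memory: 256Mi                # equal to the request
</code></pre>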
<p>After migrating the image type from container-vm to cos for the nodes of a GKE cluster, it seems no longer possible to mount a NFS volume for a pod.</p> <p>The problem seems to be missing NFS client libraries, as a mount command from command line fails on all COS versions I tried (cos-stable-58-9334-62-0, cos-beta-59-9460-20-0, cos-dev-60-9540-0-0).</p> <pre><code>sudo mount -t nfs mynfsserver:/myshare /mnt </code></pre> <p>fails with</p> <pre><code>mount: wrong fs type, bad option, bad superblock on mynfsserver:/myshare, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.&lt;type&gt; helper program) </code></pre> <p>But this contradicts the supported volume types listed here: <a href="https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support" rel="nofollow noreferrer">https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support</a></p> <p>Mounting a NFS volume in a pod works in a pool with image-type <code>container-vm</code> but not with <code>cos</code>.</p> <p>With cos I get following messages with <code>kubectl describe pod</code>:</p> <pre><code>MountVolume.SetUp failed for volume "kubernetes.io/nfs/b6e6cf44-41e7-11e7-8b00-42010a840079-nfs-mandant1" (spec.Name: "nfs-mandant1") pod "b6e6cf44-41e7-11e7-8b00-42010a840079" (UID: "b6e6cf44-41e7-11e7-8b00-42010a840079") with: mount failed: exit status 1 Mounting command: /home/kubernetes/containerized_mounter/mounter Mounting arguments: singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1 nfs [] Output: Mount failed: Mount failed: exit status 32 Mounting command: chroot Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1] Output: mount.nfs: Failed to resolve server singlefs-1-vm: Temporary failure in name resolution </code></pre>
<p>I've nicked the solution @saad-ali mentioned above, from the kubernetes project, to make this work.</p>

<p>To be concrete, I've added the following to my cloud-config:</p>

<pre><code># This script creates a chroot environment containing the tools needed to mount an nfs drive
- path: /tmp/mount_config.sh
  permissions: 0755
  owner: root
  content: |
    #!/bin/sh
    set +x # For debugging

    export USER=root
    export HOME=/home/dockerrunner

    mkdir -p /tmp/mount_chroot
    chmod a+x /tmp/mount_chroot

    cd /tmp/
    echo "Sleeping for 30 seconds because toolbox pull fails otherwise"
    sleep 30
    toolbox --bind /tmp /google-cloud-sdk/bin/gsutil cp gs://&lt;uploaded-file-bucket&gt;/mounter.tar /tmp/mounter.tar

    tar xf /tmp/mounter.tar -C /tmp/mount_chroot/

    mount --bind /tmp/mount_chroot /tmp/mount_chroot
    mount -o remount,exec /tmp/mount_chroot
    mount --rbind /proc /tmp/mount_chroot/proc
    mount --rbind /dev /tmp/mount_chroot/dev
    mount --rbind /tmp /tmp/mount_chroot/tmp
    mount --rbind /mnt /tmp/mount_chroot/mnt
</code></pre>

<p>The uploaded-file-bucket contains the chroot image the kube team has created, downloaded from: <a href="https://storage.googleapis.com/kubernetes-release/gci-mounter/mounter.tar" rel="nofollow noreferrer">https://storage.googleapis.com/kubernetes-release/gci-mounter/mounter.tar</a></p>

<p>Then, the runcmd for the cloud-config looks something like:</p>

<pre><code>runcmd:
- /tmp/mount_config.sh
- mkdir -p /mnt/disks/nfs_mount
- chroot /tmp/mount_chroot /bin/mount -t nfs -o rw nfsserver:/sftp /mnt/disks/nfs_mount
</code></pre>

<p>This works. Ugly as hell, but it'll have to do for now.</p>
<p>I have a google compute VM instance that will not stop or be killed. </p> <p>I don't know where it came from and I can't delete it or pause it. I don't have anything running on it nor is has anything scheduled with it.</p> <p>'gke-cluster-1-default-pool-....` <a href="https://i.stack.imgur.com/HQxwf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HQxwf.png" alt="enter image description here"></a></p>
<p>That is a VM from Google Container Engine. In the left menu, navigate to Container Engine and check if you have any clusters created. If a cluster was created and then removed, it is possible that the VM did not get cleaned up properly.</p>

<p>In your dashboard, there should be an Activity tab. You can use this to filter the activity on the account to see if someone created a Google Container Engine cluster.</p>
<p>I have a traefik.toml file defined as part of my traefik configmap. The snippet below is the kubernetes endpoint configuration with a labelselector defined:</p> <pre><code>[kubernetes] labelselector = "expose=internal" </code></pre> <p>When I check the traefik status page in this configuration, I see all ingresses listed, not just those with the label "expose: internal" defined.</p> <p>If I set the kubernetes.labelselector as a container argument to my deployment, however, only the ingresses with the matching label are displayed on the traefik status page as expected:</p> <pre><code>- --kubernetes.labelselector=expose=internal </code></pre> <p>According to the <a href="https://docs.traefik.io/toml/#kubernetes-ingress-backend" rel="nofollow noreferrer">Kubernetes Ingress Backend</a> documentation, any label selector format valid in the label selector section of the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="nofollow noreferrer">Labels and Selectors</a> should be valid in the traefik.toml file. I have experimented with both the equality baed (as shown above) and the set-based (to determine if the "expose" label is present, only), neither of which have worked in the toml. The set-based does not seem to work in the container args, but the equality statements do.</p> <p>I'm assuming there is some issue related to how I've formatted the kubernetes endpoint within the traefik.toml file. Before reporting this issue to github I was hoping someone could clarify the documentation and/or correct any mistakes I've made in the toml file format.</p>
<p>As you have already found out, not passing <code>--kubernetes</code> makes things work for you. The reason is that this parameter not only enables the Kubernetes provider but also sets all defaults. <a href="http://docs.traefik.io/basics/#static-trfik-configuration" rel="nofollow noreferrer">As documented</a>, the command-line arguments take precedence over the configuration file; thus, any non-default Kubernetes parameters specified in the TOML file will be overridden by the default values implied through <code>--kubernetes</code>. This is intended (albeit not ideally documented) behavior.</p>

<p>You can still mix and match both command-line and TOML configuration parameters for Kubernetes (or any other provider, for that matter) by omitting <code>--kubernetes</code>. For instance, you could have your example TOML file</p>

<pre><code>[kubernetes]
labelselector = "expose=internal"
</code></pre>

<p>and then invoke Traefik like</p>

<pre><code>./traefik --configFile=config.toml --kubernetes.namespaces=other
</code></pre>

<p>which will cause Traefik to use both the custom label selector <code>expose=internal</code> and watch the namespace <code>other</code>.</p>

<p>I have <a href="https://github.com/containous/traefik/pull/1830" rel="nofollow noreferrer">submitted a PR</a> to clarify the behavior of the command-line provider-enabling parameters with regards to the provider default values.</p>
<p>I am trying to update the existing deployments in Openshift using Kubernetes Client for Go. I am using the following JSON to update the replicas to 3:</p> <p>JSON:</p> <pre><code>{ "kind": "Deployment", "spec": { "template": { "spec": { "containers": { "image": "docker.fmr.com\/fmr-pr000105\/testcontainer:1.0.0", "name": "testcontainer", "resources": { "requests": { "cpu": "50m" }, "limits": { "cpu": "50m", "memory": "50M" } }, "ports": { "protocol": "TCP", "name": "test-con-http", "containerPort": 22 } } }, "metadata": { "labels": { "app": "testcontainer" } } }, "replicas": 3 }, "apiVersion": "extensions\/v1beta1", "metadata": { "name": "testcontainer" } } </code></pre> <p>But keep on getting the error:</p> <pre><code>only encoded map or array can be decoded into a struct </code></pre> <p>I am using the following code :</p> <pre><code>import ( "fmt" "flag" "k8s.io/client-go/tools/clientcmd" "k8s.io/client-go/kubernetes" apiv1 "k8s.io/client-go/pkg/api/v1" "k8s.io/client-go/pkg/api" "k8s.io/client-go/pkg/apis/extensions" "k8s.io/client-go/pkg/apis/extensions/v1beta1" "bufio" "os" ) func main() { var jsonBody []byte jsonBody = rteMockedUp.GetJsonBody() d := api.Codecs.UniversalDecoder() obj, _, err := d.Decode(jsonBody, nil, nil) if err != nil { log.Fatalf("could not decode json: %s\n%s", jsonBody, err) } src := obj.(*extensions.Deployment) dst := &amp;v1beta1.Deployment{} err = api.Scheme.Convert(src,dst,0) if err != nil { log.Fatalf("failed to convert: %s", err) } updateStatus, err := deploymentsClient.Update(dst) if err != nil { log.Fatalf("Update failed %s", err) } } </code></pre> <p>An error is thrown over here:</p> <pre><code>obj, _, err := d.Decode(jsonBody, nil, nil) </code></pre> <p>Is there a problem with my JSON?</p>
<p>Normally, when this happens, it means there is something wrong with the definition. In your case, <code>containers</code> should be an array (a list of container objects) and not an object; the same applies to <code>ports</code> inside the container.</p>
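<p>In other words, the relevant part of your JSON should look roughly like this (note the square brackets around <code>containers</code> and <code>ports</code>):</p>

<pre><code>"spec": {
  "template": {
    "spec": {
      "containers": [
        {
          "name": "testcontainer",
          "image": "docker.fmr.com/fmr-pr000105/testcontainer:1.0.0",
          "ports": [
            {
              "protocol": "TCP",
              "name": "test-con-http",
              "containerPort": 22
            }
          ]
        }
      ]
    }
  }
}
</code></pre>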
<p>How can I use <code>kubectl</code> or the API to retrieve the current image for containers in a pod or deployment?</p> <p>For example, in a deployment created with the below configuration, I want to retrieve the value <code>eu.gcr.io/test0/brain:latest</code>.</p> <pre><code>apiVersion: v1 kind: Deployment metadata: name: flags spec: replicas: 6 template: metadata: labels: app: flags spec: containers: - name: flags image: eu.gcr.io/test0/brain:latest </code></pre>
<p>From <code>kubectl</code> 1.6 the <code>-o wide</code> option does this, so</p> <pre><code>kubectl get deployments -o wide </code></pre> <p>will show the current image in the output.</p>
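<p>If you only need the image string itself (for scripting), a jsonpath query also works; using the deployment name from the question:</p>

<pre><code>kubectl get deployment flags \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
</code></pre>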
<p>I've followed this <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noreferrer">guide</a> to install kubectl for testing in a development environment on Ubuntu 16.04.</p>

<p>Now I would like to remove kubectl from Ubuntu.</p>
<p>Depending on the method you chose in <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a>, it can be any of these:</p> <ul> <li>Install kubectl binary via curl: <code>sudo rm /usr/local/bin/kubectl</code></li> <li>Download as part of the Google Cloud SDK: <code>gcloud components remove kubectl</code></li> <li>Install with snap on Ubuntu (just as Gparmar said): <code>snap remove kubectl</code></li> </ul> <p>In addition, you may need to remove the configuration files in <code>~/.kube</code>.</p>
<p>I'm running traefik on AWS via kubernetes. It all works, except for my http=>https redirect configuration.</p> <p>I have the following .toml file</p> <pre><code>defaultEntryPoints = ["http", "https"] [entryPoints] [entryPoints.http] address = ":80" [entryPoints.http.redirect] entryPoint = "https" </code></pre> <p>With the Kubernetes Deployment + Service configurations below.</p> <p>https requests work file, but http requests return:</p> <pre><code>&gt; curl http://www.myserver.com curl: (52) Empty reply from server </code></pre> <p>Kubernetes config files:</p> <pre><code>kind: Deployment apiVersion: extensions/v1beta1 metadata: name: traefik-proxy labels: app: traefik-proxy tier: proxy spec: replicas: 1 selector: matchLabels: app: traefik-proxy tier: proxy template: metadata: labels: app: traefik-proxy tier: proxy spec: terminationGracePeriodSeconds: 60 # kubectl create configmap traefik-conf --from-file=./conf/k8s/traefik.toml volumes: - name: config configMap: name: traefik-conf containers: - image: traefik:v1.2.0-rc1-alpine name: traefik-proxy resources: limits: cpu: "200m" memory: "30Mi" requests: cpu: "100m" memory: "20Mi" volumeMounts: - mountPath: "/conf" name: config ports: - containerPort: 80 hostPort: 80 name: traefik-proxy - containerPort: 8080 name: traefik-ui args: - --configFile=/conf/traefik.toml - --kubernetes - --web --- apiVersion: v1 kind: Service metadata: name: traefik-proxy annotations: # SSL certificate. # https://console.aws.amazon.com/acm (alienlabs.io) service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:861694698401:certificate/28204a9f-69ec-424d-8f56-ac085b7bdad8" service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" spec: type: LoadBalancer selector: app: traefik-proxy tier: proxy ports: - port: 80 targetPort: 80 name: http - port: 443 targetPort: 80 name: https </code></pre>
<p>My <code>traefik.toml</code> config is similar to Corey's but is a little simpler. It works well for a catch-all redirect of <code>http</code> to <code>https</code>.</p>

<pre><code>defaultEntryPoints = ["http","https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
</code></pre>
<p>I created a PersistentVolume sourced from a Google Compute Engine persistent disk that I already formatted and provision with data. Kubernetes says the PersistentVolume is available.</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: models-1-0-0 labels: name: models-1-0-0 spec: capacity: storage: 200Gi accessModes: - ReadOnlyMany gcePersistentDisk: pdName: models-1-0-0 fsType: ext4 readOnly: true </code></pre> <p>I then created a PersistentVolumeClaim so that I could attach this volume to multiple pods across multiple nodes. However, kubernetes indefinitely says it is in a pending state.</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: models-1-0-0-claim spec: accessModes: - ReadOnlyMany resources: requests: storage: 200Gi selector: matchLabels: name: models-1-0-0 </code></pre> <p>Any insights? I feel there may be something wrong with the selector...</p> <p>Is it even possible to preconfigure a persistent disk with data and have pods across multiple nodes all be able to read from it?</p>
<p>I quickly realized that PersistentVolumeClaim defaults the <code>storageClassName</code> field to <code>standard</code> when not specified. However, when creating a PersistentVolume, <code>storageClassName</code> does not have a default, so the selector doesn't find a match.</p>

<p>The following worked for me:</p>

<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  storageClassName: standard
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
</code></pre>
<p>is there any way to avoid execution of an application deployed as DaemonSet on master?<br> I have seen that this is the expected behavior, but I would like to avoid execution in some way. </p> <p>Regular pods will not schedule on master but DaemonSet pods do. </p> <p>If yes, is it possible to set this information in the yml file (parameter..etc??)?</p> <pre><code> kubectl create -f mydaemon.yml logspri-4zwl4 1/1 Running 0 &lt;invalid&gt; X.X.X.X k8s-master-e7c355e2-0 logspri-kld2w 1/1 Running 0 &lt;invalid&gt; X.X.X.X k8s-agent-e7c355e2-0 logspri-lksrh 1/1 Running 0 &lt;invalid&gt; X.X.X.X k8s-agent-e7c355e2-1 </code></pre> <p>I would like to avoid my pod is running on <code>k8s-master-e7c355e2-0</code></p> <p>I have tried : </p> <pre><code>annotations: scheduler.alpha.kubernetes.io/affinity: &gt; { "nodeAffinity": { "requiredDuringSchedulingRequiredDuringExecution": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "kubernetes.io/role", "operator": "NotIn", "values": ["master"] } ] } ] } } } </code></pre> <p>Trying also to apply the following role (as suggested) but it doesn't work : </p> <pre><code>kubectl get nodes NAME STATUS AGE VERSION k8s-agent-e7c355e2-0 Ready 49d v1.5.3 k8s-agent-e7c355e2-1 Ready 49d v1.5.3 k8s-master-e7c355e2-0 Ready,SchedulingDisabled 49d v1.5.3 </code></pre> <p>Shall I perform: </p> <pre><code>VirtualBox:~/elk/logspout$ kubectl taint node k8s-master-e7c355e2-0 k8s-master-e7c355e2-0/ismaster=:NoSchedule node "k8s-master-e7c355e2-0" tainted </code></pre> <p>Even if it seems that the master is tainted I see that the application is always on master. </p> <pre><code>Role: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=Standard_D2 beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=northeurope failure-domain.beta.kubernetes.io/zone=0 kubernetes.io/hostname=k8s-master-e7c355e2-0 Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true Taints: &lt;none&gt; CreationTimestamp: Wed, 17 May 2017 14:38:06 +0200 Phase: Conditions: </code></pre> <p>What is wrong? Could you give me the right command to perform?</p> <p>same problem reported <a href="https://stackoverflow.com/questions/42134518/cluster-created-with-kops-deploying-one-pod-by-node-with-daemonset-avoiding-ma">here</a> without an apparent solution by : </p> <pre><code>kubectl taint nodes nameofmaster dedicated=master:NoSchedule </code></pre> <p>Thanks Prisco</p>
<p>From <a href="https://github.com/kubernetes/kubernetes/issues/29108" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/29108</a>, you can add a taint flag to your Master node kubelet, so the even the pods in the DaemonSet are not scheduled there</p> <pre><code> --register-with-taints=node.alpha.kubernetes.io/ismaster=:NoSchedule </code></pre> <p>You will need to restart kubelet in your node</p>
<p>I'm having trouble to create a persistent volume that I can use from different pods (1 write, another read).</p> <p>Tried to use <code>gcePersistentDisk</code> directly in the pod spec like in the example on the k8s page (plus <code>readOnly</code>):</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pd spec: containers: - image: gcr.io/google_containers/test-webserver name: test-container volumeMounts: - mountPath: /test-pd name: test-volume readOnly: true volumes: - name: test-volume gcePersistentDisk: pdName: my-data-disk fsType: ext4 readOnly: true </code></pre> <p>Then in the second pod spec exactly the same except the <code>readOnly</code> ... but got an <code>NoDiskConflict</code> error.</p> <p>Second approach is to use <code>PersistentVolume</code> and <code>PersistentVolumeClaim</code> like this:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: data-standard spec: capacity: storage: 1Gi accessModes: - ReadWriteMany gcePersistentDisk: fsType: ext4 pdName: data --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-standard-claim spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi </code></pre> <p>But now I get an error telling me:</p> <pre><code>MountVolume.MountDevice failed for volume "kubernetes.io/gce-pd/xxx" (spec.Name: "yyy") pod "6ae34476-6197-11e7-9da5-42010a840186" (UID: "6ae34476-6197-11e7-9da5-42010a840186") with: mount failed: exit status 32 Mounting command: mount Mounting arguments: /dev/disk/by-id/google-gke-cluster-xxx /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-cluster-xxx [ro] Output: mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so. Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"my-deployment". list of unattached/unmounted volumes=[data] </code></pre> <p>So what is the correct way of using a GCE disk with multiple pods.</p> <p>PS: Kubernetes 1.6.6</p>
<p>According to <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes</a>, GCE Disks do not support ReadWriteMany. I am not sure if this explains the issue but I would advise you to try another compatible Volume type.</p>
<p>I'd like to launch a Kubernetes job and give it a fixed deadline to finish. If the pod is still running when the deadline comes, I'd like the job to automatically be killed.</p> <p>Does something like this exist? (At first I thought that the Job spec's <code>activeDeadlineSeconds</code> covered this use case, but now I see that <code>activeDeadlineSeconds</code> only places a limit on when a job is re-tried; it doesn't actively kill a slow/runaway job.)</p>
<p>You can self-impose timeouts on the container's entrypoint command by using the GNU <code>timeout</code> utility.</p>

<p>For example, the following Job, which computes the first 4000 digits of pi, will time out after 10 seconds:</p>

<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["/usr/bin/timeout", "10", "perl", "-Mbignum=bpi", "-wle", "print bpi(4000)"]
      restartPolicy: Never
</code></pre>

<p>(Manifest adapted from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job</a>)</p>

<p>You can play with the numbers and see it time out or not. Typically, computing 4000 digits of pi takes ~23 seconds on my workstation, so if you set it to 5 seconds it'll probably always fail, and if you set it to 120 seconds it will always work.</p>
<p>After following the link <a href="https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/</a> to create a service and expose the replica set. </p> <p>The output shows the internal IP address as Hex code. </p> <pre><code>$ kubectl get services example-service NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-service 100.66.5.159 a75af4afc61af... 8080:32114/TCP 1h </code></pre> <p>Here is my Kubernetes environment.</p> <p>The Kubernetes cluster is running on AWS was deployed with kops version 1.6.0 and kubelet version 1.6.4. Everything is working fine with the internal cluster including DNS.</p>
<p>That is the address of the ELB that was created for the service. If you go into the AWS console you should see an ELB that begins with that string. This will make your service externally accessible from outside your cluster.</p>
<p>is there any way to avoid execution of an application deployed as DaemonSet on master?<br> I have seen that this is the expected behavior, but I would like to avoid execution in some way. </p> <p>Regular pods will not schedule on master but DaemonSet pods do. </p> <p>If yes, is it possible to set this information in the yml file (parameter..etc??)?</p> <pre><code> kubectl create -f mydaemon.yml logspri-4zwl4 1/1 Running 0 &lt;invalid&gt; X.X.X.X k8s-master-e7c355e2-0 logspri-kld2w 1/1 Running 0 &lt;invalid&gt; X.X.X.X k8s-agent-e7c355e2-0 logspri-lksrh 1/1 Running 0 &lt;invalid&gt; X.X.X.X k8s-agent-e7c355e2-1 </code></pre> <p>I would like to avoid my pod is running on <code>k8s-master-e7c355e2-0</code></p> <p>I have tried : </p> <pre><code>annotations: scheduler.alpha.kubernetes.io/affinity: &gt; { "nodeAffinity": { "requiredDuringSchedulingRequiredDuringExecution": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "kubernetes.io/role", "operator": "NotIn", "values": ["master"] } ] } ] } } } </code></pre> <p>Trying also to apply the following role (as suggested) but it doesn't work : </p> <pre><code>kubectl get nodes NAME STATUS AGE VERSION k8s-agent-e7c355e2-0 Ready 49d v1.5.3 k8s-agent-e7c355e2-1 Ready 49d v1.5.3 k8s-master-e7c355e2-0 Ready,SchedulingDisabled 49d v1.5.3 </code></pre> <p>Shall I perform: </p> <pre><code>VirtualBox:~/elk/logspout$ kubectl taint node k8s-master-e7c355e2-0 k8s-master-e7c355e2-0/ismaster=:NoSchedule node "k8s-master-e7c355e2-0" tainted </code></pre> <p>Even if it seems that the master is tainted I see that the application is always on master. </p> <pre><code>Role: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=Standard_D2 beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=northeurope failure-domain.beta.kubernetes.io/zone=0 kubernetes.io/hostname=k8s-master-e7c355e2-0 Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true Taints: &lt;none&gt; CreationTimestamp: Wed, 17 May 2017 14:38:06 +0200 Phase: Conditions: </code></pre> <p>What is wrong? Could you give me the right command to perform?</p> <p>same problem reported <a href="https://stackoverflow.com/questions/42134518/cluster-created-with-kops-deploying-one-pod-by-node-with-daemonset-avoiding-ma">here</a> without an apparent solution by : </p> <pre><code>kubectl taint nodes nameofmaster dedicated=master:NoSchedule </code></pre> <p>Thanks Prisco</p>
<p>By <code>Even if it seems that the master is tainted I see that the application is always on master.</code>, I'm not certain if the DaemonSet was created before or after the taint.</p> <p>If you tainted first and then created the DaemonSet, the pod should be repelled by the tainted node without further config. Otherwise, the pod from the DaemonSet will not automatically terminate. To evict existing pods immediately, the <code>NoExecute</code> taint is needed.</p> <p>From <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Normally, if a taint with effect NoExecute is added to a node, then any pods that do not tolerate the taint will be evicted immediately, and any pods that do tolerate the taint will never be evicted. However, a toleration with NoExecute effect can specify an optional tolerationSeconds field that dictates how long the pod will stay bound to the node after the taint is added.</p> </blockquote>
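<p>A minimal sketch of what that looks like with the node name from the question (behaviour can differ on older releases, where the DaemonSet controller handled scheduling and taints itself, so treat this as a starting point rather than a guarantee):</p> <pre><code># NoExecute evicts already-running pods that do not tolerate the taint,
# and also keeps new ones off the node
kubectl taint nodes k8s-master-e7c355e2-0 dedicated=master:NoExecute

# confirm the taint is actually recorded on the node
kubectl describe node k8s-master-e7c355e2-0 | grep -A1 Taints
</code></pre>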
<p>I have a Go program that uses aws-sdk-go to talk to dynamodb. Dependencies are vendored. Go version 1.7.1. aws-sdk-go version 1.6.24. The program works as expected in all the following environments:</p> <ul> <li>dev box from shell (Arch Linux) </li> <li>docker container running on my dev box (Docker 1.13.1) </li> <li>Ec2 instance from shell (Ubuntu 16.04)</li> </ul> <p>When I run the docker container on kubernetes (same one I tested on my dev box), I get the following error:</p> <pre> 2017/03/02 22:30:13 DEBUG ERROR: Request dynamodb/GetItem: ---[ REQUEST DUMP ERROR ]----------------------------- net/http: invalid header field value "AWS4-HMAC-SHA256 Credential=hidden\n/20170302/us-east-1/dynamodb/aws4_request, SignedHeaders=accept-encoding;content-length;content-type;host;x-amz-date;x-amz-target, Signature=483f56dd0b17d8945d3c2f2044b7f97e531190602f132a4d5f828264b3a2cff2" for key Authorization ----------------------------------------------------- 2017/03/02 22:30:13 DEBUG: Response dynamodb/GetItem Details: ---[ RESPONSE ]-------------------------------------- HTTP/0.0 000 status code 0 Content-Length: 0 </pre> <p>Based on:<br> <a href="https://golang.org/src/net/http/transport.go" rel="noreferrer">https://golang.org/src/net/http/transport.go</a><br> <a href="https://godoc.org/golang.org/x/net/lex/httplex#ValidHeaderFieldValue" rel="noreferrer">https://godoc.org/golang.org/x/net/lex/httplex#ValidHeaderFieldValue</a> </p> <p>It looks like the problem is with the header value validation, yet I am at a loss to understand why it works everywhere except on my k8s cluster. The cluster is composed of Ec2 instances running the latest CoreOS stable ami (CoreOS stable 1235.8.0)</p> <p>The docker image that works on my dev machine is scratch based. To troubleshoot I created an image based on Ubuntu latest with a separate go program that just does a simple get item from dynamodb. When this image is run on my k8s cluster and the program run from an interactive shell, I get the same errors. I have confirmed I can ping the dynamodb endpoints from this env.</p> <p>I am having a hard time troubleshooting this issue: am I missing something stupid here? Can someone point me in the right direction or have an idea of what is going on?</p>
<p>remember the "-n" when you do this: echo -n key | base64</p>
<p>I want to benchmark my SSD using Fio (Flexible I/O) I/O benchmarking tool inside Docker containers. </p> <p>I am running my Docker containers like </p> <p><code>docker run -it -v /dev/nvme0n1:/mount saurabhd04/docker_fio</code> </p> <p>where I am mounting my SSD as a Docker volume. </p> <p>But, whenever I run fio inside Docker container, I get following error: </p> <p><code>fio: blocksize too large for data set</code>.</p> <p>Am I missing anything? Any help regarding this would be of great help!</p>
<p>Mapping directories and files does not mean "mounting" them.</p> <p>You need to follow 2 steps:</p> <ol> <li><p>Share the <code>/dev/nvme0n1</code> to the container</p> <pre><code>docker run --cap-add SYS_ADMIN --device /dev/nvme0n1 -it saurabhd04/docker_fio </code></pre></li> <li><p>With the container running mount the <code>nvme0n1</code>:</p> <pre><code>docker exec &lt;container-id&gt; mount /dev/nvme0n1 /mnt </code></pre></li> </ol>
<p>I'm following a course on PluralSight where the course author puts a docker image onto kubernetes and then access it via his browser. I'm trying to replicate what he does but I cannot manage to reach the website. I believe I might be connecting to the wrong IP.</p> <p>I have a <code>ReplicationController</code> that's running 10 <code>pods</code>:</p> <p><strong>rc.yml</strong></p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: hello-rc spec: replicas: 10 selector: app: hello-world template: metadata: labels: app: hello-world spec: containers: - name: hello-pod image: nigelpoulton/pluralsight-docker-ci:latest ports: - containerPort: 8080 </code></pre> <p>I then tried to expose the rc:</p> <pre><code>kubectl expose rc hello-rc --name=hello-svc --target-port=8080 --type=NodePort $ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-svc 10.27.254.160 &lt;nodes&gt; 8080:30488/TCP 30s kubernetes 10.27.240.1 &lt;none&gt; 443/TCP 1h </code></pre> <p>My google container endpoint is : <code>35.xxx.xx.xxx</code> and when running <code>kubectl describe svc hello-svc</code> the NodePort is <code>30488</code></p> <p>Thus I try to access the app at <code>35.xxx.xx.xxx:30488</code> but the site can’t be reached.</p>
<p>If you want to access your service via the NodePort port, you need to open your firewall for that port (and that instance).</p> <p>A better way is to create a service of type LoadBalancer (<code>--type=LoadBalancer</code>) and access it on the IP Google will give you.</p> <p>Do not forget to delete the load balancer when you are done.</p>
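<p>For example (the firewall rule below opens the NodePort on every instance in the network, so you may want to restrict it with <code>--target-tags</code>; the port number is the one from your output):</p> <pre><code># option 1: open the NodePort in the GCP firewall
gcloud compute firewall-rules create allow-hello-svc --allow tcp:30488

# option 2: recreate the service as a LoadBalancer and use the IP Google assigns
kubectl delete svc hello-svc
kubectl expose rc hello-rc --name=hello-svc --port=8080 --target-port=8080 --type=LoadBalancer
kubectl get svc hello-svc    # wait for EXTERNAL-IP, then browse to http://EXTERNAL-IP:8080
</code></pre>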
<p>I'd like a webserver to be notified if Kubernetes kills a pod and for what reason e.g. DEADLINE_EXCEEDED or OOM. Does Kubernetes have webhook functionality for this or some other mechanism where I can be told when it does something. </p>
<p>There isn't a webhook per-se, however there are kubernetes events that you can listen to.</p> <p>Quick google turned up <a href="https://solinea.com/blog/tapping-kubernetes-events" rel="nofollow noreferrer">this article</a></p>
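<p>If you want to poke at this from the command line before wiring anything up, the same events are visible with <code>kubectl</code>; a rough sketch (the pod name is a placeholder):</p> <pre><code># stream all events in the cluster and filter for the pods you care about
kubectl get events --all-namespaces --watch | grep my-pod-name

# the last termination reason (e.g. OOMKilled) is also kept on the pod itself
kubectl get pod my-pod-name \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
</code></pre>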
<p>I have configured the Node Exporter in Kubernetes and start monitoring using Prometheus, But in Prometheus all servers are showing as down with the error below:</p> <blockquote> <p>Get <a href="http://10.7.17.11:9100/metrics" rel="nofollow noreferrer">http://10.7.17.11:9100/metrics</a>: dial tcp 10.7.17.11:9100: getsockopt: connection timed out</p> </blockquote> <p>Can anyone help why it is showing down ?</p>
<p>Make sure a firewall is not blocking port 9100. Try to curl that URL from the other nodes and from inside the Prometheus pod; a <code>connection timed out</code> error usually means the packets are being dropped on the way, not that node-exporter itself is down.</p>
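<p>For example (the pod name is a placeholder; if you run the official busybox-based Prometheus image it ships <code>wget</code> rather than <code>curl</code>):</p> <pre><code># from another node
curl -s http://10.7.17.11:9100/metrics | head

# from inside the Prometheus pod
kubectl exec -it prometheus-pod-name -- wget -qO- http://10.7.17.11:9100/metrics | head
</code></pre>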
<p>I'm now using Kubernetes to run a Docker container. I just create the container and use SSH to connect to my pods. </p> <p>I need to make some system config changes, so I need to reboot the container, but when I reboot the container it loses all the data in the pod. Kubernetes just runs a new pod that looks like the original Docker image.</p> <p>So how can I reboot the pod and keep the data in it?</p> <p>The Kubernetes cluster is offered by Bluemix.</p>
<p>You need to learn more about containers as your question suggests that you are not fully grasping the concepts.</p> <ol> <li>Running SSH in a container is an anti-pattern, a container is not a virtual machine. So remove the SSH server from it.</li> <li>the fact that you run SSH indicates that you may be running more than one process per container. This is usually bad practice. So remove that supervisor and call your main process directly in your entrypoint.</li> <li>Setup your container image main process to use environment variables or configuration files for configuration at runtime.</li> </ol> <p>The last item means that you can define environment variables in your Pod manifest or use Kubernetes configmaps to store configuration file. Your Pod will read those and your process in your container will get configured properly. If not your Pod will die or your process will not run properly and you can just edit the environment variable or config map.</p> <p>My main suggestion here is to not use Kubernetes until you have your docker image properly written and your configuration thought through, you should not have to exec in the container to get your process running.</p> <p>Finally, more generally, you should not keep state inside a container.</p>
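<p>A minimal sketch of what that looks like (the names, keys and image here are made up for illustration):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: your-registry/your-app:latest   # placeholder image
    env:
    - name: APP_MODE
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: APP_MODE
</code></pre> <p>Changing the ConfigMap and recreating the pod is then the equivalent of your "reboot with new config", without keeping any state inside the container.</p>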
<p>I have created one <a href="https://cloud.google.com/solutions/jenkins-on-container-engine-tutorial" rel="nofollow noreferrer">cluster</a> in google container engine &amp; in that i have deployed one pod having jenkins running in it. then configiured one job which will build,run,push &amp; deploy <a href="https://cloud.google.com/solutions/continuous-delivery-jenkins-container-engine" rel="nofollow noreferrer">sample</a> app. so all these job steps are executing except "deploy-sampleapp-step" due to below error </p> <p><code>[sampleapp_master-HAWDXNK5BCRQ7EWPPOHGW7RUWBBM25WIAIKOP6UBHIDYZGTMQIJA] Running shell script<br> + kubectl --namespace=production apply -f k8s/services/ error: group map[:0xc820374b60 apps:0xc820374bd0 authorization.k8s.io:0xc820374c40 componentconfig:0xc820374d90 extensions:0xc820374e00 policy:0xc820374e70 storage.k8s.io:0xc8202cc770 federation:0xc820374af0 autoscaling:0xc820374cb0 batch:0xc820374d20 rbac.authorization.k8s.io:0xc820374ee0 authentication.k8s.io:0xc820374fc0] is already registered</code> </p> <p>So I am using cluster version 1.6.4 </p> <p>So does anyone has any idea how to escalate this problem<br> Thanks In advance Adding some more information that may be useful for the above question- </p> <blockquote> <p>user@yproject-173008:~$ kubectl cluster-info<br> Kubernetes master is running at <a href="https://IP" rel="nofollow noreferrer">https://IP</a> GLBCDefaultBackend is running at <a href="https://IP/api/v1/proxy/namespaces/kube-system/services/default-http-backend" rel="nofollow noreferrer">https://IP/api/v1/proxy/namespaces/kube-system/services/default-http-backend</a><br> Heapster is running at <a href="https://IP/api/v1/proxy/namespaces/kube-system/services/heapster" rel="nofollow noreferrer">https://IP/api/v1/proxy/namespaces/kube-system/services/heapster</a><br> KubeDNS is running at <a href="https://IP/api/v1/proxy/namespaces/kube-system/services/kube-dns" rel="nofollow noreferrer">https://IP/api/v1/proxy/namespaces/kube-system/services/kube-dns</a><br> kubernetes-dashboard is running at<br> <a href="https://IP/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard" rel="nofollow noreferrer">https://IP/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard</a><br> To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.</p> <p>user@yproject-173008:~$ kubectl version<br> Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:34:20Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}<br> Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} </p> </blockquote>
<p>You are getting this error because the kube-apiserver version and the kubectl version are different. To check, edit the Jenkinsfile in your build directory so it prints the version of the kubectl client used inside the Jenkins slave environment while the job runs. In my case it was at <code>/continuous-deployment-on-kubernetes/sample-app/Jenkinsfile</code>. Add the following line:</p> <pre><code>sh("kubectl version")
</code></pre> <p>This prints the version of kubectl used by the Jenkins slave. </p> <p>I found it to be <code>GitVersion:"v1.3.4"</code>. If that is the case for you too, perform the following steps.</p> <p><strong>1. Build a Jenkins slave image with a matching kubectl.</strong><br> Create a Dockerfile with the following contents:</p> <pre><code>FROM jenkinsci/jnlp-slave
ENV CLOUDSDK_CORE_DISABLE_PROMPTS 1
ENV PATH /opt/google-cloud-sdk/bin:$PATH
USER root
RUN apt-get update -y
RUN apt-get install -y jq
RUN curl https://sdk.cloud.google.com | bash &amp;&amp; mv google-cloud-sdk /opt
COPY kubectl /opt/google-cloud-sdk/bin/
RUN chmod +x /opt/google-cloud-sdk/bin/kubectl
</code></pre> <p>Download a kubectl binary compatible with your cluster (or copy the binary from one of your cluster nodes) and place it next to the Dockerfile. Then build the image and push it to your registry:</p> <pre><code>docker build -t IMAGE_NAME .
gcloud docker -- push IMAGE_NAME
</code></pre> <p><strong>2. Edit the Jenkins configuration to use this image for the slave.</strong><br> Go to Jenkins -> Manage Jenkins -> Configure System.<br> Scroll down to Cloud and select Kubernetes.<br> Go to Images -> Containers -> Docker image and enter the image name you pushed in step 1.<br> Click Save.</p> <p><strong>3. Start the job.</strong></p>
<p>I have an agent (datadog agent but could be something else) running on all the nodes of my cluster, deployed through a DaemonSet. This agent is collecting diverse metrics about the host: cpu and memory usage, IO, which containers are running.</p> <p>It can also collect custom metrics, by listening on a specific port 1234.</p> <p>How can I send data from a pod to the instance of the agent running on the same node than the pod? If I use a Kubernetes service the calls to send the metric will be load balanced across all my agents and I'll lose the correlation between the pod emitting the metric and the host it's running on.</p>
<p>I use the exact same setup, <code>dd-agent</code> running as a DaemonSet in my kubernetes cluster. Using the same port mapping you commented <a href="https://stackoverflow.com/questions/44855220/sending-data-from-one-pod-to-another-pod-running-specifically-on-the-same-host#comment76761846_44857071">here</a>, you can just send metrics to the hostname of the node an application is running on.</p> <p>You can add the node name to the pods environment using the downward api in your pod spec:</p> <pre><code>env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName </code></pre> <p>Then, you can just open an UDP connection to <code>${NODE_NAME}:8125</code> to connect to the datadog agent.</p>
<p>Using Kubernetes 1.7.0, the intention here is to be able to deploy MySQL / MongoDB / etc, and use local disk as storage backing; while webheads, and processing pods can autoscale by Kubernetes. To these aims, I've</p> <ul> <li>Set up &amp; deployed the <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume" rel="nofollow noreferrer">Local Persistent Storage provisioner</a> to automatically provision locally attached disk to pods' Persitent Volume Claims. </li> <li>Manually created a Persistent Volume Claim, which succeeds, and the local volume is attached</li> <li><p>Attempted to deploy MariaDB via helm by</p> <blockquote> <p>helm install --name mysql --set persistence.storageClass=default stable/mariadb</p> </blockquote></li> <li><p>This appears to succeed; but by going to the dashboard, I get</p></li> </ul> <blockquote> <p>Storage node affinity check failed for volume "local-pv-8ef6e2af" : NodeSelectorTerm [{Key:kubernetes.io/hostname Operator:In Values:[kubemaster]}] does not match node labels</p> </blockquote> <p>I suspect this might be due to helm's charts not including nodeaffinity. Other than updating each chart manually, is there a way to tell helm to deploy to the same pod where the provisioner has the volume?</p>
<p>Unfortunately, no. You will need to specify node affinity so that the Pod lands on the node where the local storage is located. See the <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature" rel="nofollow noreferrer">docs on Node Affinity</a> to know what do add to the helm chart.</p> <p>I suspect it would look something like the following in your case.</p> <pre><code>affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - kubemaster </code></pre> <p>As an aside, this is something that will happen, not just at the node level, but at the zone level for cloud environments like AWS and GCP as well. In those environments, persistent disks are zonal and will require you to set <code>NodeAffinity</code> so that the Pods land in the zone with the persistent disk when deploying to a multi-zone cluster.</p> <p>Also as an aside, It looks like your are deploying to the Kubernetes master? If so that may not be advisable since MySQL could potentially affect the master's operation.</p>
<p>Links suggested I use minikube to install Kubernetes. However, I am confused whether I should install this on my Ubuntu host machine or on the VM provisioned with VirtualBox running on this host machine. I want to know the prerequisites for the install and how to go about it. I am a newbie to Kubernetes and pretty confused about how to go about it.</p>
<p>You can try Kubernetes right away with Minikube. It will take just a few minutes to see Kubernetes running on your laptop or desktop. I tried it on my laptop with Ubuntu 16.04. I put it here <a href="https://gitlab.com/abushoeb/kubernetes/" rel="nofollow noreferrer">https://gitlab.com/abushoeb/kubernetes/</a>, but for your convenience you can just follow these steps:</p> <h2>How to install Kubernetes</h2> <h3>You need to install 3 components</h3> <ul> <li>Virtualbox <a href="https://tecadmin.net/install-oracle-virtualbox-on-ubuntu/" rel="nofollow noreferrer">https://tecadmin.net/install-oracle-virtualbox-on-ubuntu/</a></li> <li>Kubectl <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a></li> <li>Minikube <a href="https://github.com/kubernetes/minikube/releases" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/releases</a> <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">https://github.com/kubernetes/minikube</a></li> </ul> <h3>Install Virtual Box</h3> <pre><code>$ sudo nano /etc/apt/sources.list
# add the following line to your sources.list if you haven't already
deb http://download.virtualbox.org/virtualbox/debian xenial contrib

$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install virtualbox-5.1
</code></pre> <h3>Install Kubectl</h3> <pre><code>$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
</code></pre> <h3>Install Minikube</h3> <pre><code>$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.20.0/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/
</code></pre> <h3>Minikube Commands</h3> <pre><code># start a k8s cluster
$ minikube start

# or start a specific version of k8s
$ minikube start --kubernetes-version="v1.5.2"

# or start with a flag enabled
$ minikube start --kubernetes-version="v1.5.3" --extra-config kubelet.EnableCustomMetrics=true

# enable any addon, e.g. heapster
$ minikube addons enable heapster

# see all available k8s versions
$ minikube get-k8s-versions

# see minikube status
$ minikube status

# access the k8s dashboard (this will open your browser)
$ minikube dashboard

# stop minikube
$ minikube stop
</code></pre> <h3>Reference</h3> <ul> <li><a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/minikube/</a></li> <li><a href="https://github.com/kubernetes/minikube/releases" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/releases</a></li> <li><a href="https://rominirani.com/tutorial-getting-started-with-kubernetes-on-your-windows-laptop-with-minikube-3269b54a226" rel="nofollow noreferrer">https://rominirani.com/tutorial-getting-started-with-kubernetes-on-your-windows-laptop-with-minikube-3269b54a226</a> </li> </ul>
<p>Folks, is there an easier way to grab the external IP address of a service in Kubernetes than parsing the full JSON output of kubectl?</p> <p><code>kubectl get services/foo --namespace=foo -o json</code></p> <p>Thanks!</p>
<p>Using kubectl is the easiest way to get the ingress IP addresses of your services. If you are looking to get just the IP addresses then you can do most of the parsing as part of the kubectl command itself.</p> <pre><code>kubectl get svc foo -n foo \ -o jsonpath="{.status.loadBalancer.ingress[*].ip}" </code></pre> <p>This may not apply to you but some cloud load balancers (like AWS ELB) give you a hostname rather than IP address so you will need to look for that instead.</p> <pre><code>kubectl get svc foo -n foo \ -o jsonpath="{.status.loadBalancer.ingress[*].hostname}" </code></pre> <p>You can get both by using the jsonpath union operator if you like.</p> <pre><code>kubectl get svc foo -n foo \ -o jsonpath="{.status.loadBalancer.ingress[*]['ip', 'hostname']}" </code></pre> <p>If you want a human readable output you can use the <code>custom-columns</code> output format.</p> <pre><code>kubectl get svc foo -n foo \ -o custom-columns="NAME:.metadata.name,IP ADDRESS:.status.loadBalancer.ingress[*].ip" </code></pre>
<p>We are trying to deploy Mule Application on Kubernetes using Minikube. Could you please explain the steps to deploy on Kubernetes in windows environment.</p>
<p>First of all you would need to install Minikube for Windows: <a href="https://github.com/kubernetes/minikube/releases" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/releases</a></p> <p>Then, install the API client <code>kubectl</code>: <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a></p> <p>Then, according to the needs to your application, you will have to create different API Objects, most likely:</p> <ul> <li>Deployment (also, you may have to create a container): <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a></li> <li>Service: <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></li> <li>Persistent Volumes (in case you need persistence): <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a></li> </ul> <p>This will need some knowledge of how k8s works so I advise you to check the Kubernetes documentation (<a href="https://kubernetes.io/docs" rel="nofollow noreferrer">https://kubernetes.io/docs</a>) and some get started guides.</p>
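<p>As a very rough sketch of the shape of those objects (the image name and port are placeholders, not a real Mule image, and persistence is left out):</p> <pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mule-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mule-app
    spec:
      containers:
      - name: mule-app
        image: your-registry/your-mule-app:latest   # placeholder image
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: mule-app
spec:
  type: NodePort
  selector:
    app: mule-app
  ports:
  - port: 8081
    targetPort: 8081
</code></pre> <p>After applying it with <code>kubectl create -f mule-app.yaml</code>, <code>minikube service mule-app --url</code> prints the URL where the NodePort is reachable from your Windows host.</p>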
<p>I am learning kubernetes and using minikube to create single node cluster in my ubuntu machine. In my ubuntu machine Oracle Virtualbox is also installed. As I run</p> <pre><code>$ minikube start Starting local Kubernetes v1.6.4 cluster... ... $ cat ~/.kube/config apiVersion: v1 clusters: - cluster: certificate-authority: /root/.minikube/ca.crt server: https://192.168.99.100:8443 name: minikube ... $ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8000 error: failed to discover supported resources: Get https://192.168.99.100:8443/api: Service Unavailable </code></pre> <p>I am not getting that what is causing this error. Is there some place we can check for logs. I cannot use kubectl logs as it requires container to mention which is not creating at all. Please provide any possible solution to problem.</p>
<p>You can debug using these steps:</p> <ol> <li><p><code>kubectl</code> talks to <code>kube-apiserver</code> at port 8443 to do its thing. Try <code>curl -k https://192.168.99.100:8443</code> and see if there's a positive response. If this fails, it means <code>kube-apiserver</code> isn't running at all. You can try restarting the VM or rebuilding minikube to see if it comes up properly the 2nd time round.</p></li> <li><p>You can also debug the VM directly if you feel brave. In this case, get a shell on the VM spun up by minikube. Run <code>docker ps | grep apiserver</code> to check if the <code>kube-apiserver</code> pod is running. Also try <code>ps aux | grep apiserver</code> to check if it's run natively. If both don't turn up results, check the logs using <code>journalctl -xef</code>.</p></li> </ol>
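<p>A few minikube helpers that make step 2 less manual (these subcommands exist in current minikube releases, though the output details vary by version):</p> <pre><code>minikube status     # is the VM and the local cluster actually running?
minikube ssh        # shell into the VM that minikube created
minikube logs       # dump the cluster/kubelet logs without ssh-ing in
</code></pre>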
<p>I have created an ACS Kubernetes cluster following the instructions here: <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough</a> .</p> <p>I see that master node has a public IP and I can ssh into the master node using <code>azureuser</code>. But regular nodes has no public IP and I don't see how I can ssh into regular nodes from master node.</p> <p>How do I SSH into the regular nodes?</p>
<p>You can use one of the k8s masters as a "bastion host" and avoid copying the keys over. Eg:</p> <pre><code># In ~/.ssh/config Host agent1_private_ip agent2_private_ip .... IdentityFile ~/.ssh/&lt;your_k8s_cluster_key&gt; ProxyCommand ssh user@master_public_ip -W %h:%p </code></pre> <p>Now just <code>ssh user@agent1_private_ip</code></p> <p>See more here: <a href="http://blog.scottlowe.org/2015/11/21/using-ssh-bastion-host/" rel="noreferrer">http://blog.scottlowe.org/2015/11/21/using-ssh-bastion-host/</a></p> <hr> <p>PS: Here's a quickie to retrieve your agent private ips, in <code>/etc/hosts</code> format:</p> <pre><code>kubectl get nodes -o json | jq -r '.items[].status.addresses[].address' | paste - - </code></pre>
<p>Can I run the kubelet as a Docker container, based on Kubernetes v1.6.6? If so, how do I create the image (or where can I get one), and how do I run it?</p> <p>The following is my current setup, but it has some problems.</p> <p>Dockerfile:</p> <pre><code>FROM i71:5000/ubuntu:14.04
ADD iptables /usr/local/bin/iptables
ADD bin/kubelet /usr/local/bin/kubelet
</code></pre> <p>The docker run script:</p> <pre><code>#!/bin/bash
docker rm -f $(docker ps -aq --filter "name=kubelet")
docker run \
  -d \
  --restart="always" \
  --net="host" \
  -v /data/kubernetes-cluster/ssl:/data/kubernetes-cluster/ssl \
  -v /data/kubernetes-cluster/kube-conf:/data/kubernetes-cluster/kube-conf \
  -v /data/kubernetes-cluster/log:/data/kubernetes-cluster/log \
  -v /data/kubelet:/var/lib/kubelet \
  -v /etc/localtime:/etc/localtime \
  -v /var/run/:/var/run/ \
  -v /var/log/:/var/log/ \
  --privileged=true \
  --name kubelet \
  i71:5000/kubelet:v1.6.6 \
  /usr/local/bin/kubelet \
  --logtostderr=true \
  --v=0 \
  --cgroup-driver=cgroupfs \
  --api-servers=http://192.168.0.97:8080 \
  --docker-endpoint=http://127.0.0.1:4243 \
  --address=192.168.0.97 \
  --hostname-override=192.168.0.97 \
  --allow-privileged=true \
  --pod-infra-container-image=i71:5000/pod-infrastructure:rhel7 \
  --cluster-dns=10.3.0.2 \
  --experimental-bootstrap-kubeconfig=/data/kubernetes-cluster/kube-conf/bootstrap.kubeconfig \
  --kubeconfig=/data/kubernetes-cluster/kube-conf/kubelet.kubeconfig \
  --require-kubeconfig \
  --cert-dir=/data/kubernetes-cluster/ssl \
  --cluster-domain=cluster.local. \
  --hairpin-mode=promiscuous-bridge \
  --serialize-image-pulls=false
</code></pre> <p>The kubelet prints these errors (my Docker daemon pid is 5140):</p> <pre><code>E0706 12:11:46.061949 1 container_manager_linux.go:394] open /proc/5140/cgroup: no such file or directory
E0706 12:11:46.137217 1 container_manager_linux.go:97] Unable to ensure the docker processes run in the desired containers
E0706 12:16:46.062290 1 container_manager_linux.go:394] open /proc/5140/cgroup: no such file or directory
E0706 12:16:46.138189 1 container_manager_linux.go:97] Unable to ensure the docker processes run in the desired containers
</code></pre> <p>If I bind-mount the host's '/proc' directory into the container, the Docker daemon reports an error:</p> <pre><code>docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting\\\"/proc\\\" to rootfs \\\"/data/docker/local-storage/docker/aufs/mnt/6ad53fff3b30e8d709b1be326f5de1314371e174e34806d4d6c6436275b6fbd3\\\" at \\\"/proc\\\" caused \\\"\\\\\\\"/data/docker/local-storage/docker/aufs/mnt/6ad53fff3b30e8d709b1be326f5de1314371e174e34806d4d6c6436275b6fbd3/proc\\\\\\\" cannot be mounted because it is located inside \\\\\\\"/proc\\\\\\\"\\\"\"".
</code></pre> <p>What should I do? Can anyone shed some light on this?</p>
<p>There is a official docker image called <code>hyperkube</code> which contains all Kubernetes binaries, see <a href="https://kubernetes.io/docs/getting-started-guides/scratch/#selecting-images" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/scratch/#selecting-images</a></p> <p>You can find an example systemd service on how to use it for <code>kubelet</code> here: <a href="https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase2/ignition/vanilla/kubelet.service" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase2/ignition/vanilla/kubelet.service</a></p>
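<p>A rough sketch of what running the kubelet from the hyperkube image could look like, adapted from your script (only a subset of your flags is shown; double-check the mounts and flags against the kubernetes-anywhere unit linked above). Note <code>--pid=host</code>: sharing the host PID namespace is what should let the kubelet read <code>/proc/&lt;docker-daemon-pid&gt;/cgroup</code> and avoid the errors you pasted, without bind-mounting <code>/proc</code> yourself:</p> <pre><code>docker run -d \
  --net=host \
  --pid=host \
  --privileged \
  --restart=always \
  --name kubelet \
  -v /data/kubernetes-cluster/ssl:/data/kubernetes-cluster/ssl \
  -v /data/kubernetes-cluster/kube-conf:/data/kubernetes-cluster/kube-conf \
  -v /var/lib/kubelet:/var/lib/kubelet \
  -v /var/lib/docker:/var/lib/docker \
  -v /var/run:/var/run \
  -v /sys:/sys:ro \
  gcr.io/google_containers/hyperkube-amd64:v1.6.6 \
  /hyperkube kubelet \
    --api-servers=http://192.168.0.97:8080 \
    --hostname-override=192.168.0.97 \
    --allow-privileged=true \
    --cluster-dns=10.3.0.2 \
    --cluster-domain=cluster.local
</code></pre>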
<p>Is there a way to view the history of pod Termination statuses? Eg. If I look at <code>pod describe</code> command I see output similar to this:</p> <pre><code>State: Running Started: Mon, 10 Jul 2017 13:09:20 +0300 Last State: Terminated Reason: OOMKilled Exit Code: 137 Started: Thu, 06 Jul 2017 11:01:21 +0300 Finished: Mon, 10 Jul 2017 13:09:18 +0300 </code></pre> <p>The same <code>pod describe</code> does not show anything in the pod events:</p> <pre><code> Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 10m 10m 1 kubelet, gke-dev-default-d8f2dbc5-mbkb spec.containers{demo} Normal Pulled Container image "eu.gcr.io/project/image:v1" already present on machine 10m 10m 1 kubelet, gke-dev-default-d8f2dbc5-mbkb spec.containers{demo} Normal Created Created container with id 1d857caae77bdc43f0bc90fe045ed5050f85436479073b0e6b46454500f4eb5a 10m 10m 1 kubelet, gke-dev-default-d8f2dbc5-mbkb spec.containers{demo} Normal Started Started container with id 1d857caae77bdc43f0bc90fe045ed5050f85436479073b0e6b46454500f4eb5a </code></pre> <p>If I look at the <code>kubectl get events --all-namespaces</code> I see this event, but there is no way how to relate this to the particular pod:</p> <pre><code> default 12m 12m 1 gke-dev-default-d8f2dbc5-mbkb Node Warning OOMKilling kernel-monitor, gke-dev-default-d8f2dbc5-mbkb Memory cgroup out of memory: Kill process 1639 (java) score 2014 or sacrifice child Killed process 1639 (java) total-vm:10828960kB, anon-rss:1013756kB, file-rss:22308kB </code></pre> <p>And even the event Details repoted through the api have missleading info (like the <code>default</code>namespace though pod is actualy in <code>demo</code>namespace):</p> <pre><code> "metadata": { "name": "gke-dev-default-d8f2dbc5-mbkb.14cff03fe771b053", "namespace": "default", "selfLink": "/api/v1/namespaces/default/events/gke-dev-default-d8f2dbc5-mbkb.14cff03fe771b053", "uid": "d5d3230e-6557-11e7-a486-42010a8401d3", "resourceVersion": "5278875", "creationTimestamp": "2017-07-10T10:09:18Z" }, "involvedObject": { "kind": "Node", "name": "gke-dev-default-d8f2dbc5-mbkb", "uid": "gke-dev-default-d8f2dbc5-mbkb" }, "reason": "OOMKilling", "message": "Memory cgroup out of memory: Kill process 1639 (java) score 2014 or sacrifice child\nKilled process 1639 (java) total-vm:10828960kB, anon-rss:1013756kB, file-rss:22308kB", "source": { "component": "kernel-monitor", "host": "gke-dev-default-d8f2dbc5-mbkb" }, "firstTimestamp": "2017-07-10T10:09:18Z", "lastTimestamp": "2017-07-10T10:09:18Z", "count": 1, "type": "Warning" </code></pre> <p>So while I can see last Termination status via <code>pod describe</code>, what about the previous ones?</p>
<p>Eviction events are Node events. That is why you don't see them in the Pod events. If you run <code>kubectl describe node &lt;node_name&gt;</code> with the node the pod was running on, you can see the eviction events.</p> <p>Test this: run a deployment that will constantly get OOMKilled:</p> <pre><code>kubectl run memory-hog --image=gisleburt/my-memory-hog --replicas=2 --limits=memory=128m </code></pre> <p>Once pods start running and dying, you can run <code>kubectl get events</code> or use <code>kubectl describe node &lt;node_name&gt;</code> , then you'll see events like:</p> <pre><code>Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 7345 (exe) score 50000 or sacrifice child Killed process 7345 (exe) total-vm:6092kB, anon-rss:64kB, file-rss:112kB 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 7409 (exe) score 51000 or sacrifice child Killed process 7409 (exe) total-vm:6092kB, anon-rss:68kB, file-rss:112kB 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 7495 (exe) score 50000 or sacrifice child Killed process 7495 (exe) total-vm:6092kB, anon-rss:64kB, file-rss:112kB 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 7561 (exe) score 49000 or sacrifice child Killed process 7561 (exe) total-vm:6092kB, anon-rss:60kB, file-rss:112kB 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 7638 (exe) score 494000 or sacrifice child Killed process 7638 (exe) total-vm:7536kB, anon-rss:148kB, file-rss:1832kB 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 7728 (exe) score 49000 or sacrifice child Killed process 7728 (exe) total-vm:6092kB, anon-rss:60kB, file-rss:112kB 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 7876 (exe) score 48000 or sacrifice child Killed process 7876 (exe) total-vm:6092kB, anon-rss:60kB, file-rss:112kB 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 8013 (exe) score 480000 or sacrifice child Killed process 8013 (exe) total-vm:15732kB, anon-rss:152kB, file-rss:1768kB 2m 2m 1 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling Memory cgroup out of memory: Kill process 8140 (exe) score 1023000 or sacrifice child Killed process 8140 (exe) total-vm:24184kB, anon-rss:448kB, file-rss:3704kB 2m 25s 50 kernel-monitor, gke-test-default-pool-649c88dd-818j Warning OOMKilling (events with common reason combined) </code></pre>
<p>I'd like to monitor my Kubernetes Service objects to ensure that they have > 0 Pods behind them in "Running" state.</p> <p>However, to do this I would have to first group the Pods by service and then group them further by status. </p> <p>I would also like to do this programatically (e.g. for each service in namespace ...)</p> <p>There's already some code that does this in the Sensu kubernetes plugin: <a href="https://github.com/sensu-plugins/sensu-plugins-kubernetes/blob/master/bin/check-kube-service-available.rb" rel="nofollow noreferrer">https://github.com/sensu-plugins/sensu-plugins-kubernetes/blob/master/bin/check-kube-service-available.rb</a> but I haven't seen anything that shows how to do it with Prometheus.</p> <p>Has anyone setup kubernetes service level health checks with Prometheus? If so, how did you group by service and then group by Pod status? </p>
<p>The examples I have seen for Prometheus service checks relied on the blackbox exporter:</p> <p>The blackbox exporter will try a given URL on the service. If that succeeds, at least one pod is up and running.</p> <p>See here for an example: <a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml" rel="nofollow noreferrer">https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml</a> in job kubernetes-service-endpoints</p> <p>The URL to probe might be your liveness probe or something else. If your services don't talk HTTP, you can make the blackbox exporter test other protocols as well. </p>
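<p>For reference, the relevant part of that example boils down to a relabelling job along these lines; the exporter address is an assumption here, so point it at wherever you run the blackbox exporter. Alerting on <code>probe_success == 0</code> for a service then tells you nothing healthy is answering behind it.</p> <pre><code>- job_name: 'kubernetes-services'
  metrics_path: /probe
  params:
    module: [http_2xx]
  kubernetes_sd_configs:
  - role: service
  relabel_configs:
  # only probe services annotated with prometheus.io/probe: "true"
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
    action: keep
    regex: true
  # pass the service address as the ?target= parameter to the exporter
  - source_labels: [__address__]
    target_label: __param_target
  # scrape the exporter itself instead of the service
  - target_label: __address__
    replacement: blackbox-exporter.example.com:9115
  - source_labels: [__param_target]
    target_label: instance
  - source_labels: [__meta_kubernetes_service_name]
    target_label: kubernetes_name
</code></pre>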
<p>I have an app running on Google Container Engine.</p> <p>I would like to monitor the number of requests per second my API is receiving. How can I do this?</p> <p>Is there a way to monitor this from historical metrics on <a href="https://cloudplatform.googleblog.com/2015/12/monitoring-Container-Engine-with-Google-Cloud-Monitoring.html" rel="nofollow noreferrer">Stackdriver</a>, as I have opted for <a href="https://cloud.google.com/stackdriver/pricing" rel="nofollow noreferrer">Stackdriver Premium</a>?</p>
<p>Looking at <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/</a> I see that Stackdriver is deployed in the cluster by default. It looks like that, according to <a href="https://cloud.google.com/monitoring/api/metrics" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/metrics</a>, this includes the metrics you are looking for.</p>
<p>Does anyone have experience using Chaos Monkey with Kubernetes? Curious as to how Chaos Monkey is setup, the outputs, reports, etc. Thanks in advance!</p>
<p>I would suggest using kube-monkey (<a href="https://github.com/asobti/kube-monkey" rel="noreferrer">https://github.com/asobti/kube-monkey</a>)</p>
<p>I'm pushing out my Phoenix app to a Kubernetes cluster for testing via GitLab. I'd like to be able to run <code>mix ecto.migrate</code> in my <code>gitlab-ci.yml</code> script once my app and the postgres service are ready. Here's a snippet from the <code>gitlab-ci.yml</code> file:</p> <pre><code>review: stage: review image: dtzar/helm-kubectl environment: name: review/$CI_COMMIT_REF_NAME url: https://$CI_PROJECT_NAME-${CI_ENVIRONMENT_SLUG}.$KUBE_DOMAIN on_stop: stop_review before_script: - command deploy/kinit.sh script: - helm upgrade --install db --wait --set postgresDatabase=app_db stable/postgresql - helm upgrade --install app ./deploy/app_chart --wait --set env.DATABASE_URL="${DATABASE_URL}" - export POD_NAME=`kubectl get pod -l "app=${CI_ENVIRONMENT_SLUG}" -o jsonpath='{.items[0].metadata.name}'` - kubectl exec $POD_NAME -- mix ecto.migrate </code></pre> <p>From what I understand, the <code>--wait</code> parameter means that each deployment will complete (in its entirety) before moving on. What I'm finding is that although the postgres deployment is complete, that does not mean that the postgres server is ready.</p> <p>More often than not, when the <code>kubectl exec</code> command runs, I get the following error:</p> <pre><code>** (exit) exited in: :gen_server.call(#PID&lt;0.183.0&gt;, {:checkout, #Reference&lt;0.0.1.2678&gt;, true, :infinity}, 5000) ** (EXIT) time out (db_connection) lib/db_connection/poolboy.ex:112: DBConnection.Poolboy.checkout/3 (db_connection) lib/db_connection.ex:919: DBConnection.checkout/2 (db_connection) lib/db_connection.ex:741: DBConnection.run/3 (db_connection) lib/db_connection.ex:1132: DBConnection.run_meter/3 (db_connection) lib/db_connection.ex:584: DBConnection.prepare_execute/4 (ecto) lib/ecto/adapters/postgres/connection.ex:93: Ecto.Adapters.Postgres.Connection.execute/4 (ecto) lib/ecto/adapters/sql.ex:243: Ecto.Adapters.SQL.sql_call/6 (ecto) lib/ecto/adapters/sql.ex:193: Ecto.Adapters.SQL.query!/5 </code></pre> <p>When I look at the Kubernetes ui, I can see the following error against my postgres pod:</p> <pre><code>SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "db-postgresql", which is unexpected. </code></pre> <p>After seeing this message, I monitor the pods and everything comes good. But not before my deployment script fails.</p> <p>My initial thought is that I could create an <code>initContainer</code> for my app that uses psql to successfully connect to the server and check for the existence of the "app_db" database. That way I don't have to worry about writing my own code for timeouts and retries - I can just take advantage of the built-in mechanism provided by Kubernetes.</p> <p>However, I don't want to do this in my production environment (I want to run <code>mix ecto.migrate</code> on the production system manually). In this case, the <code>initContainer</code> simply serves as a waste of system resources.</p> <p>Is there a robust way I can achieve this through the <code>gitlab-ci.yml</code> script?</p>
<p>From a conceptual point of view I would:</p> <ol> <li><p>Configure a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">readiness probe</a> on my Postgres container, so that the Pod is not considered "Running" until the engine is up.</p> <pre><code># in the Pod template: # spec.containers['postgres'] readinessProbe: exec: command: - psql - -U - postgres - -c - 'SELECT 1' initialDelaySeconds: 5 periodSeconds: 5 </code></pre></li> <li><p>Wait for the Pod to transition to a "Running" state before executing my Mix task.</p> <pre><code># in gitlab-ci.yml, before "mix ecto.migrate" - | while [ "$(kubectl get pod $POD_NAME -o jsonpath='{$.status.phase}')" != "Running" ]; do sleep 1; done </code></pre></li> </ol>
<p>I recently started working on microservices. I am building my Docker image and want to deploy it on Kubernetes. While creating a pod.yaml file I started getting the error below.</p> <p>Command: </p> <pre><code>kubectl create -f podservice.yaml </code></pre> <p>Error:</p> <pre><code>error: the path "podcpeservice.yml" does not exist </code></pre> <p>I tried using the help for <code>kubectl create -f</code>. An example in the help document is: </p> <pre><code>kubectl create -f ./pod.json </code></pre> <p>Even the above command gives the same error. I am not able to figure out what the problem is. I also tried removing the <code>./</code>.</p> <p>I am using CentOS 7 on VirtualBox with Windows 7 as the host.</p>
<p>Tushar,</p> <p>First you need to create the deployment yml file with an editor, and only then pass that file as the argument to the kubectl command; the error simply means kubectl cannot find a file at the path you gave it.</p> <p>For example, the Kubernetes team already host this deployment file, and you can deploy it directly with kubectl:</p> <pre><code>kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml </code></pre>
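<p>For a local file, the same idea looks like this (assuming the manifest really is named <code>podservice.yaml</code> and you run the command from the directory that contains it):</p> <pre><code>ls podservice.yaml                 # confirm the file exists at this path
kubectl create -f podservice.yaml
# or with an explicit relative path
kubectl create -f ./podservice.yaml
</code></pre>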
<p>The <a href="https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens" rel="nofollow noreferrer">Kubernetes documentation related to OpenID Connect</a> mentions that as part of setting things up you need to supply some parameters to the API server:</p> <pre><code>--oidc-client-id: A client id that all tokens must be issued for. </code></pre> <p>There is no other explanation about how this would map to, say, <a href="https://developers.google.com/identity/sign-in/web/" rel="nofollow noreferrer">something returned by the OpenID Connect-conformant Google identity provider</a>.</p> <p>I don't know what this parameter value will be used for. Will it match against something in the decoded JWT token?</p> <p>It looks like the <code>id_token</code> returned by the Google identity provider might contain something, once decoded, in its <code>aud</code> field (<code>aud</code> is apparently short for "audience"). Is this what the <code>--oidc-client-id</code> should match? Am I way off?</p>
<p>This can be explained from the Kubernetes documentation on <a href="https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens" rel="nofollow noreferrer">id tokens</a>.</p> <p>As you can see, the identity provider is a separate system. For example, it can be MS Azure AD or Google, as you have shown. </p> <p>When you register with an identity provider, you get some important things in return, and the <strong><code>client id</code></strong> is one of them. If you are familiar with the OpenID Connect flow, you need to provide this <strong><code>client id</code></strong> when you follow the flow. When the flow completes, you receive an <strong><code>id token</code></strong>. An <strong><code>id token</code></strong> has one mandatory claim, <strong><code>aud</code></strong>, which is the audience that the token was issued for.</p> <p>When you validate an <strong><code>id token</code></strong> you <strong>MUST</strong> verify that you are in the audience list. More can be found in the <a href="https://openid.net/specs/openid-connect-core-1_0.html#IDTokenValidation" rel="nofollow noreferrer">spec</a>.</p> <p>Quoting from the specification,</p> <blockquote> <p>The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience</p> </blockquote> <p>Now, Kubernetes uses bearer tokens, and here the tokens used are <strong><code>id tokens</code></strong>. To validate a token, the API server needs to know the expected audience, and that is what <code>--oidc-client-id</code> supplies. This lets the API server verify that the token was issued for the particular client that made the call, thus allowing the call to succeed. So yes, it is matched against the <code>aud</code> claim of the decoded token.</p>
<p>I am following <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="noreferrer">this tutorial</a>.</p> <p>My objective was to create a Deployment made with a custom container instead of the hello container.</p> <p>I made this <code>my-custom-server-deployment.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Deployment metadata: name: my-custom-server-deployment spec: replicas: 2 template: metadata: labels: app: my-custom tier: backend spec: containers: - name: my-custom-server image: my-custom-server ports: containerPort: 3000 </code></pre> <p>I tried to build it with <code>kubectl create -f my-custom-server-deployment.yaml</code> and i got:</p> <pre><code>error: error validating "my-custom-server-deployment.yaml": error validating data: couldn't find type: v1.Deployment; if you choose to ignore these errors, turn validation off with --vali date=false </code></pre> <p>When i add <code>--validate=false</code> i get:</p> <pre><code> no matches for /, Kind=Deployment </code></pre> <p>I thought this was pretty straight forward. What am i doing wrong? </p>
<p>Try using <code>apiVersion: apps/v1beta1</code> instead. I believe Deployments are still a beta feature, so they are not served under the core <code>v1</code> API group.</p>
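<p>For reference, the manifest would then look like the sketch below. Note that, separately from the apiVersion, the <code>ports</code> field in the manifest from the question also needs to be a list (<code>- containerPort: 3000</code>):</p> <pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-custom-server-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-custom
        tier: backend
    spec:
      containers:
      - name: my-custom-server
        image: my-custom-server
        ports:
        - containerPort: 3000
</code></pre>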
<p>I have a running pod and I want to change one of its container's environment variables and have the change take effect immediately. Can I achieve that? If I can, how do I do it?</p>
<p>Simply put, and in kube terms, you can not.</p> <p>The environment of a Linux process is established at process startup, and there are no kube tools that can change it afterwards. If you change the environment in your Deployment (I assume you use one to create the pods), Kubernetes will roll the underlying pods, i.e. replace them with new ones.</p> <p>That said, there is a really hacky workaround reported under <a href="https://stackoverflow.com/questions/205064/is-there-a-way-to-change-another-processs-environment-variables">Is there a way to change the environment variables of another process in Unix?</a> that involves using GDB.</p> <p>Also, remember that even if you could do that, most applications only evaluate configuration from environment variables at startup; your application logic would have to actively watch for such changes for them to have any effect.</p>
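<p>If a rolling restart is acceptable, updating the variable on the Deployment is the usual route; a sketch with made-up names:</p> <pre><code># patch the env var on the container; Kubernetes rolls out new pods with the new value
kubectl patch deployment my-app -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","env":[{"name":"MY_VAR","value":"new-value"}]}]}}}}'

# or simply edit the Deployment interactively
kubectl edit deployment my-app
</code></pre>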