<p>I have an app with several containers running just fine using Kubernetes on AWS; however, now I need to port this to an AWS Dedicated Host VPC where the cluster has previously been created NOT using Kubernetes, so I am not able to execute kube-up.sh or its kops equivalent.</p> <p>Is it possible to orchestrate my containers using Kubernetes on a pre-existing cluster? (i.e. have Kubernetes probe the parent AWS cluster and treat it as if it created it)</p> <p>Of course, until this linkage is made between my calls to kubectl and the parent AWS Dedicated Host VPC, it has no Kubernetes context and just times out:</p> <blockquote> <p>kubectl create -f /my/app/goodie.yaml</p> <p>Unable to connect to the server: dial tcp 34.199.89.247:443: i/o timeout</p> </blockquote> <p>A possible alternative would be to call kube-up.sh or kops and demand the new cluster live inside a specified AWS Dedicated Host ... alas it's not apparent Kubernetes has this flexibility ... yet!</p>
<p>Yes, definitely. kubectl is just a client application and it can connect to any kubernetes cluster and orchestrate it. </p> <p>If you get i/o timeout, you most likely have connectivity issues and some firewall/proxy in place. Did you try to just access the kubernetes API through curl or telnet? </p>
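<p>A quick way to verify basic reachability (using the server address from the timeout above; adjust if your kubeconfig points elsewhere) is a sketch like:</p>
<pre><code># does anything answer on the API port at all?
nc -vz -w 5 34.199.89.247 443

# if TCP works, ask the API server for its version (-k skips cert verification for a first test)
curl -k https://34.199.89.247:443/version
</code></pre>
<p>If the TCP connection itself times out, the problem is routing/security groups in the VPC rather than anything Kubernetes-specific.</p>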
<p>I'm relatively new to K8S and hit a road block. I've created a ConfigMap to create a central location so that all Deployments can pull in and mount the same vars - something like so:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: site namespace: staging spec: revisionHistoryLimit: 1 replicas: 1 template: metadata: labels: app: site spec: containers: - name: nginx image: gcr.io/XXX/builds/nginx:develop-latest imagePullPolicy: Always command: ["nginx", "-g", "daemon off;"] ports: - containerPort: 80 livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 10 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 10 - name: php image: gcr.io/XXX/builds/php:develop-latest imagePullPolicy: Always command: ["php-fpm7.0", "--nodaemonize", "-R"] livenessProbe: tcpSocket: port: 9000 initialDelaySeconds: 10 readinessProbe: tcpSocket: port: 9000 initialDelaySeconds: 10 env: - name: ENV_PATH value: /etc/config/.env volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: staging-config items: - key: site path: .env </code></pre> <p>It mounts the file correctly -- I can see it when I <code>ls</code> inside the pod, but I'm struggling to expose the contents of the file to act as environment variables that my Laravel app can use. When defining them normally everything works as expected (see below), but taking the above approach they simply do not get set. The docs doesn't lend any further help and I'm not seeing anything online... Any pointers?</p> <pre><code>env: - name: variable value: val - name: var2 value: var2val </code></pre>
<p>If you want to set values from your configMap directly as env vars inside your pod, then you do not need to mount them as files. Instead use:</p> <pre><code>env: - name: MY_ENV_VAR valueFrom: configMapKeyRef: name: &lt;your-configmap-name&gt; key: &lt;key-of-your-value&gt; </code></pre> <p>If you want to stick with the mounting option, another way might be to use your <code>docker-entrypoint.sh</code> script of your laravel container to somehow source the mounted env file when you start the container.</p>
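<p>For the mounting option, a minimal <code>docker-entrypoint.sh</code> sketch (assuming the <code>/etc/config/.env</code> path from the Deployment above and a plain <code>KEY=value</code> file) could look like this:</p>
<pre><code>#!/bin/sh
set -a                 # export every variable assigned from here on
. /etc/config/.env     # the file mounted from the ConfigMap
set +a
exec "$@"              # hand off to the real command, e.g. php-fpm7.0 --nodaemonize -R
</code></pre>
<p>The exported variables are then visible to the exec'd process and anything it spawns.</p>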
<p>I am trying to attach to a running container in Kubernetes, however I get the error message below. </p> <pre><code>&gt;kubectl attach -it es-client-2756725635-4rk43 -c es-node Unable to use a TTY - container es-node did not allocate one If you don't see a command prompt, try pressing enter. </code></pre> <p>How do I enable a TTY in my container yaml?</p>
<p>In order to have proper TTY and stdin when doing attach:</p> <pre><code>kubectl attach -it POD -c CONTAINER </code></pre> <p>The container must be configured with <code>tty: true</code> and <code>stdin: true</code>. By default both of those values are <code>false</code>: <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#debugging" rel="noreferrer">https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#debugging</a></p> <p>Example Pod:</p> <pre><code>spec: containers: - name: web image: web:latest tty: true stdin: true </code></pre>
<p>I would like to run a Cassandra cluster under Kubernetes on Google Container Engine using the examples given here: <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/storage/cassandra" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/storage/cassandra</a></p> <p>The file describes 3 ways to setup the cluster - PetSet(StatefulSet), Replication Controller and DaemonSet. Each one of them has its pros and cons.</p> <p>While trying to choose the best setup for me, I noticed that I cannot figure out what to do with the storage and backups. </p> <ol> <li>How can I set or scale the storage size (increase/decrease node/cluster <strong>data storage</strong> size without data loss) ?</li> <li>How do I manage backups and restores?</li> </ol>
<p>The short answer is that there is no way to do this in kubernetes. Kubernetes does very little in terms of storage management.</p> <p>If you have the flexibility of choosing other solutions, <a href="https://robinsystems.com/product-and-solutions/robin-for-databases/" rel="nofollow noreferrer">check this out</a>.</p> <p>They provide a container-based solution that combines compute, network, storage, so you have full control over all resources required by cassandra, and perform snapshot/restore, scale out, scale up/down, etc.</p>
<p>I am using kubernetes on google cloud container, and I still don't understand how the load-balancers are "magically" getting configured when I create / update any of my ingresses.</p> <p>My understanding was that I needed to deploy a <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce" rel="nofollow noreferrer">glbc / gce L7 container</a>, and that container would watch the ingresses and do the job. I've never deployed such container. So maybe it is part of <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc" rel="nofollow noreferrer">this cluster addon glbc</a>, so it works even before I do anything?</p> <p>Yet, on my cluster, I can see a "l7-default-backend-v1.0" Replication Controller in kube-system, with its pod and NodePort service, and it corresponds to what I see in the LB configs/routes. But I can't find anything like a "l7-lb-controller" that should do the provisionning, such container does not exist on the cluster.</p> <p>So where is the magic ? What is the glue between the ingresses and the LB provisionning ?</p>
<p>Google Container Engine runs the glbc "glue" on your behalf unless you explicitly request it to be disabled as a cluster add-on (see <a href="https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters#HttpLoadBalancing" rel="nofollow noreferrer">https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters#HttpLoadBalancing</a>). </p> <p>Just like you don't see a pod in the system namespace for the scheduler or controller manager (like you do if you deploy Kubernetes yourself), you don't see the glbc controller pod either. </p>
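<p>If you want to confirm the add-on is actually enabled on a given cluster, something like this should show it (field names per current gcloud output, so treat it as a sketch):</p>
<pre><code>gcloud container clusters describe CLUSTER_NAME --zone ZONE \
  --format='value(addonsConfig.httpLoadBalancing)'
</code></pre>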
<p>How can I expose service of type <code>NodePort</code> to internet <strong>without</strong> using type <code>LoadBalancer</code>? Every resource I have found was doing it by using load balancer. But I don't want load balancing its expensive and unnecessary for my use case because I am running one instance of <code>postgres</code> image which is mounting to persistent disk and I would like to be able to connect to my database from my PC using pgAdmin. If it is possible could you please provide bit more detailed answer as I am new to Kubernetes, GCE and networking.</p> <p>Just for the record and bit more context I have deployment running 3 replicas of my API server to which I am connecting through load balancer with set loadBalancerIP and another deployment which is running one instance of postgres with NodePort service through which my API servers are communicating with my db. And my problem is that maintaining the db without public access is hard.</p>
<p>Using <code>NodePort</code> as the Service type works straight away, e.g. like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx spec: type: NodePort ports: - port: 80 nodePort: 30080 name: http - port: 443 nodePort: 30443 name: https selector: name: nginx </code></pre> <p>More details can be found in the <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="noreferrer">documentation</a>. The drawback of using <code>NodePort</code> is that you have to take care of integrating with your provider's firewall yourself. A starting point for that can also be found in the <a href="https://kubernetes.io/docs/user-guide/services-firewalls/#google-compute-engine" rel="noreferrer">Configuring Your Cloud Provider's Firewalls</a> section of the official documentation.</p> <p>For GCE, opening up the above ports publicly on all nodes could look like:</p> <pre><code>gcloud compute firewall-rules create myservice --allow tcp:30080,tcp:30443 </code></pre> <p>Once this is in place your services should be accessible through any of the public IPs of your nodes. You'll find them with:</p> <pre><code>gcloud compute instances list </code></pre>
<p>Hoping someone can help. I have a 3x node CoreOS cluster running Kubernetes. The nodes are as follows: 192.168.1.201 - Controller 192.168.1.202 - Worker Node 192.168.1.203 - Worker Node</p> <p>The cluster is up and running, and I can run the following commands:</p> <pre><code>&gt; kubectl get nodes NAME STATUS AGE 192.168.1.201 Ready,SchedulingDisabled 1d 192.168.1.202 Ready 21h 192.168.1.203 Ready 21h &gt; kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE kube-apiserver-192.168.1.201 1/1 Running 2 1d kube-controller-manager-192.168.1.201 1/1 Running 4 1d kube-dns-v20-h4w7m 2/3 CrashLoopBackOff 15 23m kube-proxy-192.168.1.201 1/1 Running 2 1d kube-proxy-192.168.1.202 1/1 Running 1 21h kube-proxy-192.168.1.203 1/1 Running 1 21h kube-scheduler-192.168.1.201 1/1 Running 4 1d </code></pre> <p>As you can see, the kube-dns service is not running correctly. It keeps restarting and I am struggling to understand why. Any help in debugging this would be greatly appreciated (or pointers at where to read about debugging this. Running kubectl logs does not bring anything back...not sure if the addons function differently to standard pods.</p> <p>Running a kubectl describe pods, I can see the containers are killed due to being unhealthy:</p> <pre><code>16m 16m 1 {kubelet 192.168.1.203} spec.containers{kubedns} Normal Created Created container with docker id 189afaa1eb0d; Security:[seccomp=unconfined] 16m 16m 1 {kubelet 192.168.1.203} spec.containers{kubedns} Normal Started Started container with docker id 189afaa1eb0d 14m 14m 1 {kubelet 192.168.1.203} spec.containers{kubedns} Normal Killing Killing container with docker id 189afaa1eb0d: pod "kube-dns-v20-h4w7m_kube-system(3a545c95-ea19-11e6-aa7c-52540021bfab)" container "kubedns" is unhealthy, it will be killed and re-created </code></pre> <p>Please find a full output of this command as a github gist here: <a href="https://gist.github.com/mehstg/0b8016f5398a8781c3ade8cf49c02680" rel="nofollow noreferrer">https://gist.github.com/mehstg/0b8016f5398a8781c3ade8cf49c02680</a></p> <p>Thanks in advance!</p>
<p>If you installed your cluster with kubeadm you should add a pod network after installing.</p> <p>If you choose flannel as your pod network, you should have this argument in your init command <code>kubeadm init --pod-network-cidr 10.244.0.0/16</code>.</p> <p>The flannel YAML file can be found in the <a href="https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">coreOS flannel repo</a>. </p> <p>All you need to do if your cluster was initialized properly (read above), is to run <code>kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code></p> <p>Once this is up and running (it will create pods on every node), your kube-dns pod should come up.</p> <p>If you need to reset your installation (for example to add the argument to <code>kubeadm init</code>), you can use <code>kubeadm reset</code> on all nodes.</p> <p>Normally, you would run the init command on the master, then add a pod network, and then add your other nodes.</p> <p>This is all described in more detail in the <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">Getting started guide</a>, step 3/4 regarding the pod network.</p>
<p>I'm new to Kubernetes and I have some doubts about installing it on CentOS 7. I have read the documentation at these links:</p> <p><a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/kubeadm/</a></p> <p><a href="https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/</a></p> <p>But I don't understand which procedure to follow: the first link shows how to install it using kubeadm, but at the end of the article the "Limitations" section says that this tool "is a work in progress and these limitations will be addressed in due course"; the second link requires at least 2 machines. So my question is which is better to use if I want to install it for production.</p> <p>Thanks in advance</p>
<p><code>kubeadm</code>.</p> <p><code>kubeadm</code> now supports multiple masters, which matters for production.</p> <p><code>kubeadm</code> also gives you a more secure deployment. It automatically configures <a href="https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/" rel="nofollow noreferrer">TLS settings</a> and RBAC for the cluster, which is not covered by the manual installation page.</p> <p>My advice: play with <code>kubeadm</code> in your development environment first, so that you see how <code>kubeadm</code> deploys a Kubernetes cluster and how many of the components can be deployed by Kubernetes itself. Then decide whether to use it in production.</p>
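<p>A rough outline of the kubeadm flow (exact flags and paths vary a bit between kubeadm versions, so treat this as a sketch and follow the output of <code>kubeadm init</code> itself):</p>
<pre><code># on the master
kubeadm init --pod-network-cidr 10.244.0.0/16   # this CIDR assumes flannel as the pod network

# point kubectl at the generated admin credentials
export KUBECONFIG=/etc/kubernetes/admin.conf

# install a pod network, e.g. flannel
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# on each additional node, using the token printed by kubeadm init
kubeadm join --token &lt;token&gt; &lt;master-ip&gt;
</code></pre>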
<p>A new Kubernetes Deployment with 2+ replicas for high availability.</p> <p>I want to be able to execute a command on the first pod only, let's say create a DB, and let the other replicas wait for the first one to complete.<br> To implement this, I just want to know in the pod if this is replica #1 or not.</p> <p>So in the pod's entry point I can test:</p> <pre><code>if [ $REPLICA_ID -eq 1 ]; then CreateDB else WaitForDB fi </code></pre> <p>Can this be done in Kubernetes?</p>
<p>In Kubernetes a <code>Deployment</code> is considered stateless and therefore doesn't provide the feature you're looking for. You should rather look into a <code>StatefulSet</code> and its features.</p> <p>A <code>StatefulSet</code>, for example, supports <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#ordered-pod-creation" rel="noreferrer">ordered creation</a>, and when combined with the generally available <code>readinessProbe</code> for your pods you can create the desired behaviour. Also the pod name is stable within a <code>StatefulSet</code>, so your test can then be done with the <code>hostname</code> of the <code>Pod</code> (see the sketch below).</p>
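<p>A minimal sketch of that test in the entry point, relying on the fact that StatefulSet pods are named <code>&lt;statefulset-name&gt;-0</code>, <code>&lt;statefulset-name&gt;-1</code>, and so on (CreateDB/WaitForDB as in the question):</p>
<pre><code>#!/bin/sh
# the first replica always has the ordinal suffix -0
case "$(hostname)" in
  *-0) CreateDB ;;   # first replica initialises the database
  *)   WaitForDB ;;  # all other replicas wait for it
esac
</code></pre>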
<p>I want to set up gcsFUSE on my cluster. It's easier to do this in Debian jessie according to the <a href="https://github.com/GoogleCloudPlatform/gcsfuse" rel="nofollow noreferrer">gcsFUSE page</a>.</p> <p>The <code>config-default.sh</code> that <code>kube-up.sh</code> uses contains the following:</p> <pre><code>NODE_OS_DISTRIBUTION=${KUBE_NODE_OS_DISTRIBUTION:-${KUBE_OS_DISTRIBUTION:-debian}} </code></pre> <p>which sets up <code>wheezy</code>. What do I change this to to get <code>jessie</code>? I've tried replacing <code>debian</code> with the values <code>debian-8</code> and <code>jessie</code>, without any luck:</p> <pre><code>$ cluster/kube-up.sh Cannot operate on cluster using node os distro: jessie </code></pre>
<p>From reading <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/util.sh" rel="nofollow noreferrer">cluster/gce/util.sh</a>, you can use <code>KUBE_GCE_MASTER_IMAGE</code> / <code>KUBE_GCE_MASTER_PROJECT</code> and <code>KUBE_GCE_NODE_IMAGE</code> / <code>KUBE_GCE_NODE_PROJECT</code> for that purpose.</p> <p>E.g. with:</p> <pre><code> KUBE_GCE_MASTER_IMAGE=debian-8-jessie-v20170124 KUBE_GCE_MASTER_PROJECT=debian-8 KUBE_GCE_NODE_IMAGE=debian-8-jessie-v20170124 KUBE_GCE_NODE_PROJECT=debian-8 </code></pre> <p>You can find the relevant images with:</p> <pre><code>gcloud compute images list --filter=debian </code></pre> <p>These environment variables are then used to create the instances with</p> <pre><code> gcloud compute instance-templates create ... </code></pre> <p>The <a href="https://cloud.google.com/sdk/gcloud/reference/compute/instance-templates/create#--image" rel="nofollow noreferrer">related documentation</a> has some further details.</p>
<p>I am using <code>Azure CLI 2.0</code> and I am trying to create <code>Azure Container Service type Kurbenetes</code>, with this command (I already created resource group)</p> <pre><code>az acs create --orchestrator-type=kubernetes --resource-group=mi-shared-docker-test --dns-prefix=kube --name=mishareddocker </code></pre> <p>I am getting this error </p> <blockquote> <p>waiting for AAD role to propagate..........Could not create a service principal with the right permissions. Are you an Owner on this project?</p> </blockquote> <p>I can create any vms, webapps etc, but why am I receiving this issue ?</p>
<p>Well, the error states it pretty clearly: you don't have the right permissions, and you should read the ACS/Kubernetes guide.</p> <p>I understand you are using <code>az</code>, not the older <code>azure</code> CLI, but the idea is the same: you need <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-deployment#kubernetes-specific-entires-required-in-the-portal" rel="nofollow noreferrer">enough permissions and a service principal</a> to deploy Kubernetes on Azure.</p> <p>You need to look specifically at <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-service-principal" rel="nofollow noreferrer">this link</a>.</p>
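<p>If someone with sufficient rights creates a service principal for you, you can pass it to <code>az acs create</code> explicitly instead of letting the CLI try to create one itself (flag names as of az 2.0, so double-check with <code>az acs create --help</code>):</p>
<pre><code># run by an Owner/admin; note the appId and password in the output
az ad sp create-for-rbac --role=Contributor --scopes=/subscriptions/&lt;subscription-id&gt;

az acs create --orchestrator-type=kubernetes \
  --resource-group=mi-shared-docker-test \
  --dns-prefix=kube --name=mishareddocker \
  --service-principal &lt;appId&gt; --client-secret &lt;password&gt;
</code></pre>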
<p>How can I pass the <code>nginx.conf</code> configuration file to an nginx instance running inside a Kubernetes cluster?</p>
<p>You can create a ConfigMap object and then mount the values as files where you need them:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nginx-config data: nginx.conf: | your config comes here like this other.conf: | second file contents </code></pre> <p>And in you pod spec:</p> <pre><code>spec: containers: - name: nginx image: nginx volumeMounts: - name: nginx-config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf - name: other.conf mountPath: /etc/nginx/other.conf subPath: other.conf volumes: - name: nginx-config configMap: name: nginx-config </code></pre> <p>(Take note of the duplication of the filename in mountPath and using the exact same subPath; same as bind mounting files.)</p> <p>For more information about ConfigMap see: <a href="https://kubernetes.io/docs/user-guide/configmap/" rel="noreferrer">https://kubernetes.io/docs/user-guide/configmap/</a></p> <blockquote> <p>Note: A container using a ConfigMap as a subPath volume will not receive ConfigMap updates.</p> </blockquote>
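<p>Instead of writing the ConfigMap manifest by hand, you can also generate it from the real files (the file names here are placeholders for your local copies):</p>
<pre><code>kubectl create configmap nginx-config --from-file=nginx.conf --from-file=other.conf
</code></pre>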
<p>I have a question about kubernetes and network firewall rules. I want to secure my kubernetes cluster with firewall rules, and was wondering if workers/masters need internet access? I'm planning on using a private registry located on my network, but I'm having problems getting it to work when the workers don't have internet access. Here's an example</p> <pre><code>Name: foo Namespace: default Node: worker003/192.168.30.1 Start Time: Mon, 23 Jan 2017 10:33:07 -0500 Labels: &lt;none&gt; Status: Pending IP: Controllers: &lt;none&gt; Containers: foo: Container ID: Image: registry.company.org/wop_java/app:nginx Image ID: Port: State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Volume Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-3cg0w (ro) Environment Variables: &lt;none&gt; Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-3cg0w: Type: Secret (a volume populated by a Secret) SecretName: default-token-3cg0w QoS Class: BestEffort Tolerations: &lt;none&gt; Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 5m 5m 1 {default-scheduler } Normal Scheduled Successfully assigned foo to worker003 4m 1m 4 {kubelet worker003} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request. details: (Error response from daemon: {\"message\":\"Get https://gcr.io/v1/_ping: dial tcp 74.125.192.82:443: i/o timeout\"})" 3m 3s 9 {kubelet worker003} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\"" </code></pre> <p>My question is, does kubernetes require internet access to work? If yes, where is it documented officially?</p>
<p>You need to pass the argument <code>--pod-infra-container-image</code> to the <strong>kubelet</strong>, as documented here: <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/kubelet/</a>. It defaults to <code>gcr.io/google_containers/pause-amd64:3.0</code>, which cannot be pulled successfully on your machine since gcr.io is unreachable.</p> <p>You can easily transfer the pause image to your private registry:</p> <pre><code>docker pull gcr.io/google_containers/pause-amd64:3.0 docker tag gcr.io/google_containers/pause-amd64:3.0 REGISTRY.PRIVATE/google_containers/pause-amd64:3.0 docker push REGISTRY.PRIVATE/google_containers/pause-amd64:3.0 # and pass kubelet --pod-infra-container-image=REGISTRY.PRIVATE/google_containers/pause-amd64:3.0 ... </code></pre> <p>The pause container is created prior to your container in order to allocate and keep the network and IPC namespaces across restarts.</p>
<p>Right now I'm accessing my pods (postgres port 5432) through a service that is exposed, but since <strong>gcloud</strong> charges for every forwarding rule created, the number of pods I need to monitor or execute stuff in is costing me more and more. Is there a way to create a single exposed service for all of my pods? Or can I create some sort of <code>vpn</code>, <code>putty tunnel</code> or something? Any help would be appreciated! I'm also using kubectl exec.</p>
<p>If you are looking for a managed solution then Google offers a VPN for that: <a href="https://console.cloud.google.com/networking/vpn/" rel="nofollow noreferrer">https://console.cloud.google.com/networking/vpn/</a></p> <p>If you are happy to roll your own then you can create a new Compute instance on the same network where your nodes are and set up openvpn there. This will give you a fixed IP as a freebie.</p> <p>A more advanced solution is to run openvpn as a pod (or pods) and use a Service with NodePort to expose it. (Optionally, manually create a single load balancer on Google Cloud to get a static IP for that.)</p> <p>At the end of the day the ideal solution depends very much on your environment and goal.</p>
<p>I have Kubernetes running on a VM on my dev box. I want to view the Kubernetes dashboard from the VM host. When I run the following command:</p> <pre><code>kubectl proxy --address 0.0.0.0 --accept-hosts ^/.* </code></pre> <p>When I try to access the dashboard I get an unauthorized error. </p> <p>What am I missing?</p>
<p>The --accept-hosts access control is for checking of the <em>hostname</em>, so it won't start with a / (slash). You need to do:</p> <pre><code>kubectl proxy --address 0.0.0.0 --accept-hosts '.*' </code></pre> <p>(Make sure you shell escape the .* as it may match files in the current directory!)</p> <p>More information at: <a href="https://kubernetes.io/docs/user-guide/kubectl/kubectl_proxy/" rel="noreferrer">https://kubernetes.io/docs/user-guide/kubectl/kubectl_proxy/</a></p>
<p>I'm trying to use <code>Consul</code> with <code>Registrator</code> in GCE &amp; K8s. Everything launches fine except `Registrator'. </p> <p>Here is my deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: creationTimestamp: null name: consul spec: replicas: 1 strategy: {} template: metadata: creationTimestamp: null labels: service: consul spec: restartPolicy: Always containers: - name: consul image: eu.gcr.io/xxx/consul ports: - containerPort: 8300 protocol: TCP - containerPort: 8400 protocol: TCP - containerPort: 8500 protocol: TCP - containerPort: 53 protocol: UDP env: - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP args: - -server - -bootstrap - -advertise=$(MY_POD_IP) - name: registrator args: - -internal - -ip=$(MY_POD_IP) - consul://localhost:8500 env: - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP image: eu.gcr.io/xxx/registrator volumeMounts: - mountPath: /tmp/docker.sock name: registrator-claim0 volumes: - name: registrator-claim0 persistentVolumeClaim: claimName: registrator-claim0 status: {} </code></pre> <p>Here are the log outputs: Consul: <a href="https://i.stack.imgur.com/0xjY7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0xjY7.png" alt="enter image description here"></a> Registrator: <a href="https://i.stack.imgur.com/wzq5w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wzq5w.png" alt="enter image description here"></a></p> <p>In docker-compose everything works fine, but I haven't got my head completeley around K8s and GCE. Thanks for the help!</p>
<p>I have switched to Linkerd which works very well together with k8s. </p>
<p>I followed this guide <a href="https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-launch.html" rel="nofollow noreferrer">https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws-launch.html</a> to create a kubernetes cluster on AWS with <a href="https://github.com/coreos/kube-aws" rel="nofollow noreferrer"><code>kube-aws</code></a>.</p> <p>I am using <code>kube-aws</code> to <code>v0.9.4-rc2</code></p> <p>After successfully do <code>kube-aws up --s3-uri s3://..</code>, I tried to get the nodes with <code>kubectl get nodes</code>, and that's when I get this error:</p> <pre><code>:; kubectl get nodes Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca") </code></pre> <p>In the <code>kubeconfig</code> file, there is a line describing the certificate authority:</p> <pre><code>apiVersion: v1 kind: Config clusters: - cluster: certificate-authority: credentials/ca.pem </code></pre> <p>Does anyone know what might have gone wrong for me? How could I debug it a bit further?</p>
<p>It seems like the problem was that my credentials were not all generated correctly. So perhaps the apiserver cert was signed with the wrong CA cert? Not sure how that might've happened.</p> <p>Anyway, deleting the <code>credentials</code> directory, then destroying the cluster and bringing it up again, solved the problem for me. Luckily it's still an experimental cluster, so I could do that. Not sure if I could've fixed it without destroying the cluster.</p>
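<p>For the "debug it a bit further" part: one way to check whether the apiserver certificate really chains back to the CA referenced in your kubeconfig is with openssl (file names assume the layout kube-aws generates under <code>credentials/</code>):</p>
<pre><code># does the local apiserver cert verify against ca.pem?
openssl verify -CAfile credentials/ca.pem credentials/apiserver.pem

# compare against what the live endpoint actually presents
openssl s_client -connect &lt;controller-endpoint&gt;:443 -showcerts &lt; /dev/null | head -40
</code></pre>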
<p>i followed this guide <a href="https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="noreferrer">link</a> to install a kubernetes cluster and i have no error, but i can't access kubernetes-Dashboard</p> <p>I did <code>kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml</code> and when i go to <a href="https://192.168.11.20/ui" rel="noreferrer">https://192.168.11.20/ui</a> is nothing there</p> <p>how can i access the dashboard?</p> <p>some additional information</p> <pre><code>[root@kubeMaster ~]# kubectl get nodes NAME STATUS AGE kubenode1 Ready 6h kubenode2 Ready 6h [root@kubeMaster ~]# kubectl get pods No resources found. [root@kubeMaster ~]# kubectl describe svc kubernetes-dashboard --namespace=kube-system Name: kubernetes-dashboard Namespace: kube-system Labels: app=kubernetes-dashboard Selector: app=kubernetes-dashboard Type: NodePort IP: 10.254.81.213 Port: &lt;unset&gt; 80/TCP NodePort: &lt;unset&gt; 31785/TCP Endpoints: &lt;none&gt; Session Affinity: None No events. [root@kubeMaster ~]# kubectl get deployment kubernetes-dashboard --namespace=kube-system NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE kubernetes-dashboard 1 0 0 0 6h [root@kubeMaster ~]# kubectl --namespace=kube-system get ep kubernetes-dashboard NAME ENDPOINTS AGE kubernetes-dashboard &lt;none&gt; 6h [root@kubeMaster ~]# kubectl cluster-info Kubernetes master is running at http://kubeMaster:8080 [root@kubeMaster ~]# kubectl get ns NAME STATUS AGE default Active 6h kube-system Active 6h [root@kubeMaster ~]# kubectl get ep NAME ENDPOINTS AGE kubernetes 192.168.11.20:6443 6h </code></pre>
<p>192.168.0.0/16 is a private IP range, meaning you need to be within the cluster's network to access it.</p> <p>The easiest way to access your service outside the cluster is to run <code>kubectl proxy</code>, which will proxy requests to your localhost port 8001 to the Kubernetes API server. From there, the apiserver can proxy to your service:</p> <p><a href="http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard" rel="noreferrer">http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard</a></p>
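<p>Note also that in the output above the <code>kubernetes-dashboard</code> service has no endpoints and the deployment shows 0 of 1 available, so the dashboard pod itself never came up; until that is fixed, neither the NodePort nor the proxy URL will respond. A quick way to see why (the selector is taken from the service description above):</p>
<pre><code>kubectl get pods --namespace=kube-system -l app=kubernetes-dashboard
kubectl describe deployment kubernetes-dashboard --namespace=kube-system
</code></pre>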
<p>I've setup a Kubernetes 1.5 cluster with the three master nodes tainted <em>dedicated=master:NoSchedule</em>. Now I want to deploy the Nginx Ingress Controller on the Master nodes only so I've added tolerations:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-ingress-controller namespace: kube-system labels: kubernetes.io/cluster-service: "true" spec: replicas: 3 template: metadata: labels: k8s-app: nginx-ingress-lb name: nginx-ingress-lb annotations: scheduler.alpha.kubernetes.io/tolerations: | [ { "key": "dedicated", "operator": "Equal", "value": "master", "effect": "NoSchedule" } ] spec: […] </code></pre> <p>Unfortunately this does not have the desired effect: Kubernetes schedules all Pods on the workers. When scaling the number of replicas to a larger number the Pods are deployed on the workers, too. </p> <p>How can I achieve scheduling to the Master nodes only?</p> <p>Thanks for your help.</p>
<p>A toleration does <em>not</em> mean that the pod must be scheduled on a node with such taints. It means that the pod <em>tolerates</em> such a taint. If you want your pod to be <em>"attracted"</em> to specific nodes you will need to attach a <strong>label</strong> to your dedicated=master tainted nodes and set a nodeSelector in the pod to look for that label.</p> <p>Attach the label to each of your special use nodes:</p> <pre><code>kubectl label nodes name_of_your_node dedicated=master </code></pre> <h2>Kubernetes 1.6 and above syntax</h2> <p>Add the nodeSelector to your pod:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: nginx-ingress-controller namespace: kube-system labels: kubernetes.io/cluster-service: "true" spec: replicas: 3 template: metadata: labels: k8s-app: nginx-ingress-lb name: nginx-ingress-lb annotations: spec: nodeSelector: dedicated: master tolerations: - key: dedicated operator: Equal value: master effect: NoSchedule […] </code></pre> <p>If you don't fancy <code>nodeSelector</code> you can add <code>affinity:</code> under <code>spec:</code> instead (note that node affinity match expressions use the <code>In</code> operator, not <code>Equal</code>):</p> <pre><code>affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: ["master"] </code></pre> <h2>Pre 1.6 syntax</h2> <p>Add the nodeSelector to your pod:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-ingress-controller namespace: kube-system labels: kubernetes.io/cluster-service: "true" spec: replicas: 3 template: metadata: labels: k8s-app: nginx-ingress-lb name: nginx-ingress-lb annotations: scheduler.alpha.kubernetes.io/tolerations: | [ { "key": "dedicated", "operator": "Equal", "value": "master", "effect": "NoSchedule" } ] spec: nodeSelector: dedicated: master […] </code></pre> <p>If you don't fancy <code>nodeSelector</code> you can also add an annotation like this:</p> <pre><code>scheduler.alpha.kubernetes.io/affinity: &gt; { "nodeAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "dedicated", "operator": "In", "values": ["master"] } ] } ] } } } </code></pre> <p>Keep in mind that NoSchedule will not evict pods that are already scheduled.</p> <p>The information above is from <a href="https://kubernetes.io/docs/user-guide/node-selection/" rel="noreferrer">https://kubernetes.io/docs/user-guide/node-selection/</a> and there are more details there.</p>
<p>I'm trying to use the kubernetes ingress resource on bare metal with no cloud provider. </p> <p>I created an ingress resource:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx spec: rules: - host: foobar.com http: paths: - path: /foo backend: serviceName: echoheaders-x servicePort: 80 - path: / backend: serviceName: frontend servicePort: 80 </code></pre> <p>however, when I view the ingress, I get this IP:</p> <pre><code>[root@kubemaster]# kubectl get ing NAME HOSTS ADDRESS PORTS AGE nginx foobar.com 172.17.0.1 80 12m </code></pre> <p>That IP address seems to correspond with the docker0 IP address on all my kubelet nodes.</p> <p>Is there a way to set this IP? All the tutorial's I've read seem to have this IP be routable.</p> <p>Here's my nginx-controller yaml:</p> <pre><code>--- apiVersion: v1 kind: ReplicationController metadata: name: nginx-ingress-controller labels: k8s-app: nginx-ingress-lb spec: replicas: 1 selector: k8s-app: nginx-ingress-lb template: metadata: labels: k8s-app: nginx-ingress-lb name: nginx-ingress-lb spec: terminationGracePeriodSeconds: 60 containers: - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3 name: nginx-ingress-lb imagePullPolicy: Always readinessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 timeoutSeconds: 1 # use downward API env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ports: - containerPort: 80 hostPort: 80 - containerPort: 443 hostPort: 443 # we expose 18080 to access nginx stats in url /nginx-status # this is optional - containerPort: 18080 hostPort: 18080 args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - --nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller </code></pre>
<p>The issue here was the kubelet configuration. By default, the kubelet will listen on <code>0.0.0.0</code> and because <code>docker0</code> is the first available address, it grabbed <code>docker0</code>'s IP.</p> <p>I added the following to the kubelet config:</p> <pre><code>--address=&lt;actualip&gt; --node-ip=&lt;actualip&gt; </code></pre> <p>And it registered correctly.</p>
<p>Does anyone know if it is possible to specify the Kubernetes version when deploying ACS Kubernetes flavour?</p> <p>If so how?</p>
<p>Using the supported resource provider in ARM you cannot specify the version. However, if you use <a href="http://github.com/Azure/acs-engine" rel="nofollow noreferrer">http://github.com/Azure/acs-engine</a> you can do so. ACS Engine is the open source code we (I work for MS) use to drive Azure Container Service. Using this code you have much more flexibility than you do through the published resource provider, but it's a harder onramp. For instructions see <a href="https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md" rel="nofollow noreferrer">https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.md</a></p> <p>See examples at <a href="https://github.com/Azure/acs-engine/tree/master/examples/kubernetes-releases" rel="nofollow noreferrer">https://github.com/Azure/acs-engine/tree/master/examples/kubernetes-releases</a></p>
<p>I get the go get k8s.io/client-go/1.5/... An error occurred while trying to go run:</p> <pre><code>&gt; # k8s.io/client-go/pkg/api/v1 &gt; ../k8s.io/client-go/pkg/api/v1/helpers.go:86: undefined: v1.FinalizerOrphan </code></pre> <p>Want to how to deal with, please?</p> <p>../k8s.io/client-go/pkg/api/v1/helpers.go:86:</p> <pre class="lang-golang prettyprint-override"><code>var standardFinalizers = sets.NewString( string(FinalizerKubernetes), metav1.FinalizerOrphan, ) </code></pre>
<p>I encountered a similar issue when I tried to go get kubernetes v1.5.2.<br/> Just solved it with:<br/> <i>cd ../k8s.io/kubernetes</i><br/> <i>git checkout v1.5.2</i><br/></p>
<p>I have a simple meteor app deployed on kubernetes. I associated an external IP address with the server, so that it's accessible from within the cluster. Now, I am up to exposing it to the internet and securing it (using HTTPS protocol). Can anyone give simple instructions for this section?</p>
<p>In my opinion <a href="https://github.com/jetstack/kube-lego" rel="noreferrer">kube-lego</a> is the best solution for GKE. See why:</p> <ul> <li>Uses <a href="https://letsencrypt.org/" rel="noreferrer">Let's Encrypt</a> as a CA</li> <li>Fully automated enrollment and renewals</li> <li>Minimal configuration in a single ConfigMap object</li> <li>Works with <a href="https://github.com/nginxinc/kubernetes-ingress" rel="noreferrer">nginx-ingress-controller</a> (see <a href="https://github.com/jetstack/kube-lego/tree/master/examples/nginx" rel="noreferrer">example</a>)</li> <li>Works with <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="noreferrer">GKE's HTTP Load Balancer</a> (see <a href="https://github.com/jetstack/kube-lego/tree/master/examples/gce" rel="noreferrer">example</a>)</li> <li>Multiple domains fully supported, including virtual hosting multiple https sites on one IP (with nginx-ingress-controller's SNI support)</li> </ul> <p>Example configuration (that's it!): </p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: kube-lego namespace: kube-lego data: lego.email: "your@email" lego.url: "https://acme-v01.api.letsencrypt.org/directory" </code></pre> <p>Example Ingress (you can create more of these): </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: site1 annotations: # remove next line if not using nginx-ingress-controller kubernetes.io/ingress.class: "nginx" # next line enable kube-lego for this Ingress kubernetes.io/tls-acme: "true" spec: tls: - hosts: - site1.com - www.site1.com - site2.com - www.site2.com secretName: site12-tls rules: ... </code></pre>
<p>I'm trying to secure my k8s cluster, and I'm looking at client-authentication authorization support for k8s. My requirement is that I want to be able to uniquely identify myself (e.g. client) to the k8s apiserver, but everything I read so far about client authentication is not the solution. </p> <p>My understanding is that the server will just ensure that the client certificate provided is in fact signed by the certificate authority. What if a hacker gets another certificate signed by the same certificate authority (which isn't hard to do in my org) and uses that to talk to my server? It appears that popular orchestrations like Swarm and k8s support this option and touted it as most secure so there must be a reason for doing this. Can someone shed some light?</p>
<p>The apiserver does not only verify that the certificate is signed by the CA. The client certificate also contains the <a href="https://kubernetes.io/docs/admin/authentication/#x509-client-certs" rel="nofollow noreferrer">Common Name (CN)</a>, which can be used with a simple <a href="https://kubernetes.io/docs/admin/authorization/" rel="nofollow noreferrer">ABAC Authorization</a> to limit access to specific users or groups.</p> <p>Also, it shouldn't be easy to get a signed certificate. IMO access to the root CA should be very limited and it should be traceable who is allowed to sign certs and when each cert was signed. Ideally the root CA should live on an offline host.</p> <p>Besides that, it sounds like the CA is also used for other purposes. If so, you could consider creating a separate root cert for client authentication. You can use a different CA for the server certificate by setting different CA files for <code>--client-ca-file</code> and <code>--tls-ca-file</code> on the <a href="https://kubernetes.io/docs/admin/kube-apiserver/" rel="nofollow noreferrer">apiserver</a>. That way you can restrict who is able to create client certificates and still verify the server identity with the CA of your organization (which might already be distributed on all org computers).</p> <h3>Other Authentication Methods</h3> <p>As mentioned, Kubernetes also has some other authentication methods. The <a href="https://kubernetes.io/docs/admin/authentication/#static-token-file" rel="nofollow noreferrer">static token file</a> and the <a href="https://kubernetes.io/docs/admin/authentication/#static-password-file" rel="nofollow noreferrer">static password file</a> have the disadvantage that the secrets have to be stored in plain text on disk. Also, the apiserver has to be restarted on every change.</p> <p><a href="https://kubernetes.io/docs/admin/authentication/#service-account-tokens" rel="nofollow noreferrer">Service account tokens</a> are designed to be used by applications which run in the cluster.</p> <p><a href="https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens" rel="nofollow noreferrer">OpenID</a> might be a secure alternative to client certificates, but AFAIK it is way harder to set up, especially if there is no OpenID server yet.</p> <p>I don't know much about the other authentication methods, but they look like they are intended for integrating with existing single-sign-on services.</p>
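<p>As a sketch of the CN-based restriction with file-based ABAC: the CN of the client certificate becomes the user name, the apiserver is started with <code>--authorization-mode=ABAC --authorization-policy-file=/path/to/policy.jsonl</code>, and the policy file contains one JSON policy per line, e.g. (user name hypothetical):</p>
<pre><code>{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
</code></pre>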
<p>If I run through the <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="noreferrer">http load balancer example</a> it works fine in my google container engine project. When I run "kubectl describe ing" the backend is "HEALTHY". If I then change the svc out to one that points to my app as shown here:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: app labels: name: app spec: ports: - port: 8000 name: http targetPort: 8000 selector: name: app type: NodePort </code></pre> <p>The app I'm running is django behind gunicorn and works just find if I make that a load balancer instead of a NodePort.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: main-ingress spec: backend: serviceName: app servicePort: 8000 </code></pre> <p>Now when I run "kubectl describe ing" the backend is listed as "UNHEALTHY" and all requests to the ingress IP give a 502. </p> <ol> <li>Is the 502 a symptom of the bad health check?</li> <li>What do I have to do to make the health check pass? I'm pretty sure the container running my app is actually healthy. I never set up a health check so I'm assuming I have to configure something that is not configured, but my googling hasn't gotten me anywhere.</li> </ol>
<p>After a lot of digging I found the answer: According to the requirements here: <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites" rel="noreferrer">https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc#prerequisites</a> the application must return a 200 status code at '/'. Because my application was returning a 302 (redirect to login), the health check was failing. When the health check fails, the ingress resource returns 502.</p>
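<p>If changing the app's <code>/</code> route is not an option, newer versions of the GLBC controller can derive the health check from a <code>readinessProbe</code> defined on the serving port, so an alternative worth trying (sketch; <code>/healthz</code> is a hypothetical unauthenticated endpoint you would add to the Django app) is:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthz
    port: 8000
</code></pre>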
<p>I've created an YAML file with three images in one pod (they need to communicate with eachother over 127.0.0.1) It seems that it's all working. I've defined a nodeport in the yaml file.</p> <p>There is one deployment defined <code>applications</code> it contains three images:</p> <ul> <li>contacts-db (A MySQL database)</li> <li>front-end (An Angular website)</li> <li>net-core (An API)</li> </ul> <p>I've defined three services, one for every container. In there I've defined the type <code>NodePort</code> to access it.</p> <p>So I retrieved the services to get the port numbers:</p> <pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE contacts-db 10.103.67.74 &lt;nodes&gt; 3306:30241/TCP 1d front-end 10.107.226.176 &lt;nodes&gt; 80:32195/TCP 1d net-core 10.108.146.87 &lt;nodes&gt; 5000:30245/TCP 1d </code></pre> <p>And I navigate in my browser to http://:32195 and it just keeps loading. It's not connecting. This is the complete Yaml file:</p> <pre><code>--- apiVersion: v1 kind: Namespace metadata: name: three-tier --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: applications labels: name: applications namespace: three-tier spec: replicas: 1 template: metadata: labels: name: applications spec: containers: - name: contacts-db image: mysql/mysql-server #TBD env: - name: MYSQL_ROOT_PASSWORD value: quintor - name: MYSQL_DATABASE value: quintor #TBD ports: - name: mysql containerPort: 3306 - name: front-end image: xanvier/angularfrontend #TBD resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 - name: net-core image: xanvier/contactsapi #TBD resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 5000 --- apiVersion: v1 kind: Service metadata: name: contacts-db labels: name: contacts-db namespace: three-tier spec: type: NodePort ports: # the port that this service should serve on - port: 3306 targetPort: 3306 selector: name: contacts-db --- apiVersion: v1 kind: Service metadata: name: front-end labels: name: front-end namespace: three-tier spec: type: NodePort ports: - port: 80 targetPort: 80 #nodePort: 30001 selector: name: front-end --- apiVersion: v1 kind: Service metadata: name: net-core labels: name: net-core namespace: three-tier spec: type: NodePort ports: - port: 5000 targetPort: 5000 #nodePort: 30001 selector: name: net-core --- </code></pre>
<p>The selector of a Service matches against the labels of your pods. In your case the defined selectors point to the container names, which matches no pods at all.</p> <p>You'd have to redefine your services to use one selector, or split up your containers into different Deployments / Pods.</p> <p>To see whether a selector defined for a service would work, you can check it with:</p> <pre><code>kubectl get pods -l key=value </code></pre> <p>If the result is empty, your services won't select any pods either.</p>
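<p>Concretely, since the pod template above carries the label <code>name: applications</code>, the services only get endpoints if their selectors use that label, e.g. for the front end:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: three-tier
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: applications   # must match the pod labels, not the container name
</code></pre>
<p>Splitting the three containers into separate Deployments with distinct labels is the cleaner long-term fix, since with a shared label all three services select the same pod.</p>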
<p>Hi, I am following this doc <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#strategic-merge-patch" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/docs/devel/api-conventions.md#strategic-merge-patch</a> on strategic-merge-patch to partially update JSON objects using the PATCH REST API. The document says that it can add or delete objects, but whenever I have tried to add a new object to existing JSON it just replaces it instead of adding a new one. I am trying this to modify a pod definition in OpenShift 3.2. Can anyone please explain how it works, preferably with an example? I also need to use the delete operation, where I can delete a value by name.</p>
<p>As documented it depends on annotations of the types. AFAIS the strategic merge only works if <code>patchStrategy</code> and <code>patchMergeKey</code> are given. For example, this is the case in <a href="https://github.com/kubernetes/client-go/blob/7ac1236/pkg/api/v1/types.go#L2119" rel="nofollow noreferrer"><code>pod.spec.containers</code></a> and <a href="https://github.com/kubernetes/client-go/blob/7ac1236/pkg/api/v1/types.go#L2098" rel="nofollow noreferrer"><code>pod.spec.volumes</code></a>.</p> <p>For an example you need to provide more information about the type you want to merge.</p>
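<p>For the delete-by-name case on a list that has a merge key (such as <code>containers</code>, which merges on <code>name</code>), the strategic merge patch uses the <code>$patch: delete</code> directive. A sketch with <code>kubectl patch</code> (deployment and container names are placeholders; over the raw REST API the same body is sent with <code>Content-Type: application/strategic-merge-patch+json</code>):</p>
<pre><code>kubectl patch deployment mydep -p \
  '{"spec":{"template":{"spec":{"containers":[{"$patch":"delete","name":"sidecar"}]}}}}'
</code></pre>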
<p><strong>Go env:</strong></p> <blockquote> <p>GOARCH=&quot;amd64&quot;</p> <p>GOBIN=&quot;/root/&quot;</p> <p>GOEXE=&quot;&quot;</p> <p>GOHOSTARCH=&quot;amd64&quot;</p> <p>GOHOSTOS=&quot;linux&quot;</p> <p>GOOS=&quot;linux&quot;</p> <p>GOPATH=&quot;/data/workspace/kubernetes&quot;</p> <p>GORACE=&quot;&quot;</p> <p>GOROOT=&quot;/usr/local/go&quot;</p> <p>GOTOOLDIR=&quot;/usr/local/go/pkg/tool/linux_amd64&quot;</p> <p>GO15VENDOREXPERIMENT=&quot;1&quot;</p> <p>CC=&quot;gcc&quot;</p> <p>GOGCCFLAGS=&quot;-fPIC -m64 -pthread -fmessage-length=0&quot;</p> <p>CXX=&quot;g++&quot;</p> <p>CGO_ENABLED=&quot;1&quot;</p> </blockquote> <p><strong>Go version:</strong></p> <blockquote> <p>go version go1.6.3 linux/amd64</p> </blockquote> <p>This issues is happend on a “performance test env” kube-apiserver with high load. kube-apiserver panic and exit:</p> <pre><code>fatal error: concurrent map read and map write goroutine 77930636 [running]: runtime.throw(0x2f4c4c0, 0x21) /root/.gvm/gos/go1.6.3/src/runtime/panic.go:547 +0x90 fp=0xca67b477f0 sp=0xca67b477d8 runtime.mapaccess1_faststr(0x2a8e520, 0xc9e29000f0, 0x2c11220, 0xa, 0x433e360) /root/.gvm/gos/go1.6.3/src/runtime/hashmap_fast.go:202 +0x5b fp=0xca67b47850 sp=0xca67b477f0 k8s.io/kubernetes/pkg/httplog.(*respLogger).Log(0xcbddf2ae70) /data/gerrit/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/httplog/log.go:180 +0x43d fp=0xca67b47af8 sp=0xca67b47850 k8s.io/kubernetes/pkg/apiserver.RecoverPanics.func1(0x7f099f157090, 0xcbddf2ae70, 0xcd7569e380) /data/gerrit/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/apiserver/handlers.go:174 +0x15d fp=0xca67b47b50 sp=0xca67b47af8 net/http.HandlerFunc.ServeHTTP(0xc821a4eac0, 0x7f099f157058, 0xca0f4eb450, 0xcd7569e380) /root/.gvm/gos/go1.6.3/src/net/http/server.go:1618 +0x3a fp=0xca67b47b70 sp=0xca67b47b50 net/http.serverHandler.ServeHTTP(0xc8215a7b80, 0x7f099f157058, 0xca0f4eb450, 0xcd7569e380) /root/.gvm/gos/go1.6.3/src/net/http/server.go:2081 +0x19e fp=0xca67b47bd0 sp=0xca67b47b70 net/http.(*conn).serve(0xc8b5d6b980) /root/.gvm/gos/go1.6.3/src/net/http/server.go:1472 +0xf2e fp=0xca67b47f98 sp=0xca67b47bd0 runtime.goexit() /root/.gvm/gos/go1.6.3/src/runtime/asm_amd64.s:1998 +0x1 fp=0xca67b47fa0 sp=0xca67b47f98 created by net/http.(*Server).Serve /root/.gvm/gos/go1.6.3/src/net/http/server.go:2137 +0x44e </code></pre> <p>corresponding source code:</p> <p>pkg/apiserver/handlers.go</p> <pre><code>145 func RecoverPanics(handler http.Handler) http.Handler { 146 return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { 147 defer func() { 148 if x := recover(); x != nil { 149 http.Error(w, &quot;apis panic. 
Look in log for details.&quot;, http.StatusInternalServerError) 150 glog.Errorf(&quot;APIServer panic'd on %v %v: %v\n%s\n&quot;, req.Method, req.RequestURI, x, debug.Stack()) 151 } 152 }() 153 defer httplog.NewLogged(req, &amp;w).StacktraceWhen( httplog.StatusIsNot( http.StatusOK, http.StatusCreated, http.StatusAccepted, http.StatusBadRequest, http.StatusMovedPermanently, http.StatusTemporaryRedirect, http.StatusConflict, http.StatusNotFound, http.StatusUnauthorized, http.StatusForbidden, errors.StatusUnprocessableEntity, http.StatusSwitchingProtocols, http.StatusRequestTimeout, errors.StatusTooManyRequests, ), 170 ).Log() // Dispatch to the internal handler handler.ServeHTTP(w, req) 174 }) } </code></pre> <p>pkg/httplog/log.go:</p> <pre><code>159 func (rl *respLogger) Log() { 160 latency := time.Since(rl.startTime) 161 if glog.V(2) { 162 extraInfo := &quot;&quot; 163 if latency &gt;= time.Millisecond*200 &amp;&amp; latency &lt; time.Second { extraInfo = fmt.Sprintf(&quot;%d00.Millisecond&quot;, latency/(time.Millisecond*100)) } else if latency &gt;= time.Second &amp;&amp; latency &lt; time.Minute { // Warning extraInfo = fmt.Sprintf(&quot;%d.Second&quot;, latency/(time.Second)) } else if latency &gt;= time.Minute { // nce will timeout extraInfo = fmt.Sprintf(&quot;%d.Minutes&quot;, latency/(time.Minute)) } method := rl.req.Method if len(rl.req.Header[&quot;Detailed-Method&quot;]) &gt; 0 { method = rl.req.Header[&quot;Detailed-Method&quot;][0] } remoteIP := rl.getXForwardIPAdress(rl.req) if !rl.hijacked { //glog.InfoDepth(1, fmt.Sprintf(&quot;%s %s: (%v) %v%v%v [%s %s]&quot;, rl.req.Method, rl.req.RequestURI, latency, rl.status, rl.statusStack, rl.addedInfo, rl.req.Header[&quot;User-Agent&quot;], rl.req.RemoteAddr)) 180 glog.InfoDepth(1, fmt.Sprintf(&quot;%v %s %s: (%sms) %v%v [%s %s]-%s %v&quot;, rl.req.Header[&quot;X-Requestid&quot;], method, rl.req.RequestURI, GetMilliLatency(latency), rl.status, rl.addedInfo, rl.req.Header[&quot;User-Agent&quot;], remoteIP, extraInfo, rl.statusStack)) } else { //glog.InfoDepth(1, fmt.Sprintf(&quot;%s %s: (%v) hijacked [%s %s]&quot;, rl.req.Method, rl.req.RequestURI, latency, rl.req.Header[&quot;User-Agent&quot;], rl.req.RemoteAddr)) glog.InfoDepth(1, fmt.Sprintf(&quot;%v %s %s: (%sms) hijacked [%s %s]-%s&quot;, rl.req.Header[&quot;X-Requestid&quot;], method, rl.req.RequestURI, GetMilliLatency(latency), rl.req.Header[&quot;User-Agent&quot;], remoteIP, extraInfo)) } } </code></pre> <p>}</p> <p>In handler.ServeHTTP, I changed request.Header. I cann't find the reason:</p> <p>(1) &quot;serveHTTP&quot; and &quot;defer log&quot; works in a serial manner. So not exist &quot;concurrent read and write&quot;</p> <p>(2) Even if exist &quot;concurrent read and write&quot;, there exist &quot;recover&quot; func to deal with panics, so the kube-apiserver shoud not exit.</p> <p>The question has cost me lots of time, who can help me? Thanks</p>
<p>My guess is that you've stepped on the fact that the global timeout handler can't actually stop the goroutines working on the request. So when the connection times out (1m unless you've changed it), it returns an error. If the goroutine actually completes at the same time, it could also attempt to write to the connection. I thought we had added a lock to prevent this from causing a problem, but perhaps the headers are not always protected by the lock. If you can reproduce this in a clean source tree, then please file an issue at the Kubernetes github repo.</p>
<p>Performed update to Kubernetes 1.5.2 on Google Container Engine. Then started getting the following errors:</p> <pre><code>Failed to count the # of live instances on Kubernetes </code></pre> <p>To resolve this I then upgraded Jenkins (to 2.32.2) and the Kubernetes plugin (to 0.10) to the latest versions.</p> <p>Afterwards, then I started getting the following errors:</p> <pre><code>Feb 08, 2017 9:51:52 PM hudson.TcpSlaveAgentListener$ConnectionHandler run WARNING: Connection #5 failed java.io.EOFException at java.io.DataInputStream.readFully(DataInputStream.java:197) at java.io.DataInputStream.readFully(DataInputStream.java:169) at hudson.TcpSlaveAgentListener$ConnectionHandler.run(TcpSlaveAgentListener.java:213) Feb 08, 2017 9:51:57 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call SEVERE: Error in provisioning; slave=KubernetesSlave name: default-6126d6e4fb5, template=org.csanchez.jenkins.plugins.kubernetes.PodTemplate@47404ab7 java.lang.IllegalStateException: Containers are terminated with exit codes: {jnlp=255} at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:600) at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:532) at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) </code></pre>
<p>The last error was resolved by changing the slave container name to jnlp instead of default (see image). The google documentation shows the name is supposed to be default but it seems with these updates this is not the right approach to get this system working. </p> <p>It looks like the updated kubernetes-plugin creates two containers (a container with your specified image and another with the default jnlp image). If your image's name isn't jnlp then the plugin will run both containers in the slave pod... this seems to be causing the connection issue.</p> <p><a href="https://i.stack.imgur.com/RiCrb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RiCrb.png" alt="change kubernetes-plugin slave pod name to jnlp"></a></p>
<p>I have reached the point where I need to split my Prometheus into smaller instances. I have been reading about it <a href="https://www.robustperception.io/scaling-and-federating-prometheus/" rel="nofollow noreferrer">here</a> but it does not say anything about scaling in Kubernetes. Below is my setup:</p> <ul> <li>one node of Prometheus</li> <li>one node of <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube state metrics</a></li> <li><a href="https://github.com/prometheus/node_exporter" rel="nofollow noreferrer">node exporter</a> on each cluster node</li> </ul> <p>There are about 50 namespaces which produce thousands of metrics, and the current setup with one Prometheus is not enough. So I decided to split it into three instances:</p> <ul> <li>one for <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube state metrics</a> metrics</li> <li>one for <a href="https://github.com/prometheus/node_exporter" rel="nofollow noreferrer">node exporter</a> metrics</li> <li>one for kubernetes metrics</li> </ul> <p>But after a while I realised that those metrics are scraped via <a href="https://prometheus.io/docs/operating/configuration/#&lt;kubernetes_sd_config" rel="nofollow noreferrer">kubernetes_sd_config</a> and there seems to be no way to tell which metrics should be scraped by which Prometheus instance, unless I am wrong. One solution would be to split the Kubernetes cluster into smaller ones, but that is too much work for now.</p> <p>So my question is whether there is any possibility to tell Prometheus that I want to scrape only <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube state metrics</a>, <a href="https://github.com/prometheus/node_exporter" rel="nofollow noreferrer">node exporter</a> or native Kubernetes metrics?</p>
<p>Another option would be going for a horizontally scalable, distributed Prometheus implementation: <a href="https://github.com/weaveworks/cortex" rel="nofollow noreferrer">https://github.com/weaveworks/cortex</a> (NB I wrote this.)</p> <p>It's not ready for prime time yet, but we're using it internally and getting pretty good results. It will be more effort to set up and operate than upstream Prometheus, but it should scale virtually indefinitely - and what's more, we run it on Kubernetes, so it's really at home there.</p> <p>Let me know if you're interested and I can walk you through setting it up.</p>
<p>At present, there are lots of containers running in kubernetes cluster, is there any good solution to monitor each container and could sent alert when container is in bad status ?</p>
<p>This is a broad subject and a lot depends on what you actually want to monitor, the size of your deployment and the technical expertise available.</p> <p>The most comprehensive solution is <a href="https://prometheus.io" rel="nofollow noreferrer">Prometheus</a>. It comes with extensive documentation, a getting started guide, etc. The setup is pretty involved though.</p> <p>Depending on how you have set up your cluster, <a href="http://grafana.org" rel="nofollow noreferrer">Grafana</a> might already be available. If not, you will need to deploy these addons: <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-monitoring/influxdb" rel="nofollow noreferrer">Heapster, InfluxDB and Grafana</a>. There are a number of tutorials, but again it depends a lot on your environment.</p>
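<p>As a rough sketch of that addon route (the directory is the one linked above; exact paths and manifest names depend on your Kubernetes release and may have moved):</p> <pre><code># Deploy the Heapster / InfluxDB / Grafana monitoring addon manifests
git clone https://github.com/kubernetes/kubernetes
kubectl create -f kubernetes/cluster/addons/cluster-monitoring/influxdb/
</code></pre> <p>Once the pods in the kube-system namespace are running, the Grafana service from those manifests serves the dashboards.</p>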
<p>I'm using Kubernetes 1.5.2 in CoreOS 1235.6.0 on bare metal, with calico v1.0.2 for the overlay network. Containers are getting correct IP addresses, but their routes don't match:</p> <pre><code>/ # ip addr show 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: tunl0@NONE: &lt;NOARP&gt; mtu 1480 qdisc noop qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0 4: eth0@if9: &lt;BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN&gt; mtu 1500 qdisc noqueue link/ether 82:df:73:ee:d1:15 brd ff:ff:ff:ff:ff:ff inet 10.2.154.97/32 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::80df:73ff:feee:d115/64 scope link valid_lft forever preferred_lft forever / # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 169.254.1.1 0.0.0.0 UG 0 0 0 eth0 169.254.1.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 </code></pre> <p>As a result, pod networking is broken. Outgoing traffic times out, whether it's ICMP or TCP, and whether it's to the host, another pod on the same host, the apiserver, or the public Internet. The only traffic that works is this pod talking to itself.</p> <p>Here's how I'm running kubelet:</p> <pre><code>[Unit] After=network-online.target Wants=network-online.target [Service] Environment=KUBELET_VERSION=v1.5.2_coreos.0 Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \ --volume var-log,kind=host,source=/var/log \ --mount volume=var-log,target=/var/log \ --dns=host \ --volume cni-conf,kind=host,source=/etc/cni \ --mount volume=cni-conf,target=/etc/cni \ --volume cni-bin,kind=host,source=/opt/cni/bin \ --mount volume=cni-bin,target=/opt/cni/bin" ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests ExecStartPre=/usr/bin/mkdir -p /var/log/containers ExecStartPre=/usr/bin/mkdir -p /etc/cni/net.d ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid ExecStart=/usr/lib/coreos/kubelet-wrapper \ --allow-privileged=true \ --api-servers=https://master.example.com \ --cluster_dns=10.3.0.10 \ --cluster_domain=cluster.local \ --container-runtime=docker \ --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \ --network-plugin=cni \ --pod-manifest-path=/etc/kubernetes/manifests \ --tls-cert-file=/etc/kubernetes/ssl/worker.pem \ --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid Restart=always RestartSec=10 [Install] WantedBy=multi-user.target </code></pre> <p>The calico config is <a href="http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/calico.yaml" rel="nofollow noreferrer">the standard one</a>.</p> <p>What have I misconfigured?</p>
<p>The addressing and routes inside the container look fine. The routes outside the container on the host would be more interesting. Given what you've seen (veth created, which implies the CNI plugin is working), I'd check that the policy controller and calico-node are running properly (e.g. no error logs / restart loops).</p> <p>You might also want to try and get live support from the community: register at <a href="https://slack.projectcalico.org" rel="nofollow noreferrer">https://slack.projectcalico.org</a></p>
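<p>A few hedged diagnostics that usually narrow this down (the pod IP is taken from the question; pod and container names assume the standard hosted Calico manifest):</p> <pre><code># Are the Calico components healthy?
kubectl get pods -n kube-system -o wide | grep calico
kubectl logs -n kube-system &lt;calico-node-pod&gt; -c calico-node

# On the host: a working pod should have a route pointing at its cali* veth
ip route | grep 10.2.154.97

# If calicoctl is available on the node, check BGP peering between hosts
calicoctl node status
</code></pre>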
<p>I am trying to use Docker + Kubernetes for my application management.</p> <p>I have installed kubectl, kubeadm, kubelet (got the steps from google docs) for Kubernetes cluster.</p> <p>Now cluster is having 2 node(1 Master, 1 Child)</p> <p>I have a customize Dockerfile , how can it use it as a Kubernetes pods ?</p> <p>If this is not possible, How to transmit the docker build to the Kubernetes child from master.</p>
<p>You could use a private Docker registry outside or inside the cluster or work with local (pre-pulled) images.</p> <ol> <li><p>Outside the cluster you might want to look at these:</p> <ul> <li><a href="https://docs.docker.com/registry/" rel="noreferrer">Docker registry image</a></li> <li><a href="https://www.jfrog.com/confluence/display/RTF/Docker+Repositories" rel="noreferrer">Jfrog Artifactory registry</a></li> <li><a href="https://www.sonatype.com/nexus-repository-oss" rel="noreferrer">Sonatype Nexus</a></li> <li><a href="https://hub.docker.com/" rel="noreferrer">Dockerhub private registry</a></li> <li><a href="https://cloud.google.com/container-registry/" rel="noreferrer">Google private registry</a></li> <li><a href="https://aws.amazon.com/ecr/" rel="noreferrer">Amazon ECR</a></li> <li><a href="https://quay.io/" rel="noreferrer">Quay.io registry</a></li> <li><a href="https://azure.microsoft.com/en-us/services/container-registry/" rel="noreferrer">Azure registry</a></li> </ul></li> <li><p>Inside the cluster you might want to look at the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry" rel="noreferrer">Private Docker Registry in Kubernetes</a></p></li> <li><p>If you're not interested in using a registry, you could also build the image on every Kubernetes node so that Docker doesn't have to pull it. To avoid Kubernetes trying to pull it anyway, you would then have to set the <code>imagePullPolicy</code> of your containers to <code>Never</code> (see the sketch below). That's described within the <a href="https://kubernetes.io/docs/user-guide/images/#pre-pulling-images" rel="noreferrer">official documentation</a>.</p></li> </ol>
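<p>A minimal sketch of the pre-pulled variant from option 3, assuming the image has already been built or loaded on every node (the image name is just an example):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:local
    imagePullPolicy: Never
</code></pre> <p>With <code>imagePullPolicy: Never</code> the kubelet only uses the local image cache and fails the pod if the image is missing on that node.</p>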
<p>After running <code>go get k8s.io/client-go/1.5/...</code>, an error occurs when I try to <code>go run</code>:</p> <pre><code>&gt; # k8s.io/client-go/pkg/api/v1 &gt; ../k8s.io/client-go/pkg/api/v1/helpers.go:86: undefined: v1.FinalizerOrphan </code></pre> <p>How should I deal with this, please?</p> <p>../k8s.io/client-go/pkg/api/v1/helpers.go:86:</p> <pre class="lang-golang prettyprint-override"><code>var standardFinalizers = sets.NewString( string(FinalizerKubernetes), metav1.FinalizerOrphan, ) </code></pre>
<p>Sorry, this should be temporary--we are working on a fix (and on a system so our publishing bot stops breaking the client). In the meantime, go back a commit or two, as the other answer suggests.</p>
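<p>A minimal sketch of pinning the checkout back until the publishing bot is fixed (how many commits to step back is just a guess; pick any commit that builds for you):</p> <pre><code>cd $GOPATH/src/k8s.io/client-go
git checkout HEAD~2   # or a specific known-good commit hash
</code></pre>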
<p>Create a deployment as below:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: owt: hello pdl: com app: world idc: xg add: parameters-48 name: parameters-48 spec: replicas: 2 template: metadata: labels: name: parameters-48 spec: containers: - name: mofang-web image: registry.cc.com/online/mofang:stable nodeSelector: node:cc </code></pre> <p>Log in to the node, find the container's pid, then check its OOM score:</p> <pre><code>cat /proc/21606/oom_adj -16 cat /proc/21606/oom_score 0 cat /proc/21606/oom_score_adj -999 </code></pre> <p>According to the guide at <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-qos.md</a>, this pod should be a "Best-effort" pod, but its OOM_SCORE_ADJ is not 1000 but -999. By the way, -999 means it won't be OOM killed.</p>
<p>Following the <a href="https://github.com/kubernetes/kubernetes/issues/40990" rel="nofollow noreferrer">issue</a>, it's maybe worth to summarize how to find the correct container pid / proc for a pod:</p> <p>Run some application:</p> <pre><code>$ kubectl run bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 </code></pre> <p>Find the containerID:</p> <pre><code>$ kubectl get pods --output=jsonpath='{.items[*].status.containerStatuses[*].containerID}' docker://59f127d641cef9475309cbf4b5fc2e4a65f3e52a0e08112dccbc2c144a0e366f </code></pre> <p>Find the related host / node:</p> <pre><code>$ kubectl get pods --output=jsonpath='{.items[*].status.hostIP}' 192.168.99.100 </code></pre> <p>Both could also be found with:</p> <pre><code>$ kubectl describe pod &lt;podID&gt; </code></pre> <p>Connect to the node via SSH, then run:</p> <pre><code>$ docker inspect 59f127d641cef9475309cbf4b5fc2e4a65f3e52a0e08112dccbc2c144a0e366f | grep Pid\": "Pid": 18052, $ cat /proc/18052/oom_* 15 1000 1000 </code></pre> <p>Hope this helps someone else at some point</p>
<p>How to configure kubernetes cluster with corporate ldap, for authentication?</p> <p>I'd not found anything official.</p>
<p><a href="https://github.com/apprenda-kismatic/kubernetes-ldap" rel="nofollow noreferrer">Kismatic</a> is one of the projects that provides a Lightweight Directory Access Protocol (LDAP) authentication webhook for Kubernetes. For Kismatic Enterprise Toolkit (KET) source code check out <a href="https://github.com/apprenda/kismatic" rel="nofollow noreferrer">this link</a>.</p>
<p>I want to allow only my team to access(https) our staging environments(Web application) through ingress in GKE cluster.</p> <p>I found the article below. but GKE doesn't support basic authentication and setting nginx is only way . </p> <p><a href="https://stackoverflow.com/questions/39862340/gke-ingress-basic-authentication-ingress-kubernetes-io-auth-type">GKE Ingress Basic Authentication (ingress.kubernetes.io/auth-type)</a></p> <p>I want to avoid setting nginx if possible. Because I want to make staging and production as close as possible.</p> <p>Thanks.</p>
<p>We're having very similar problems; however, our services themselves require authentication, so public accessibility is not too much of a concern for us.</p> <p>It might not apply to your scenario, but you can firewall the ingress' external IP so that it is accessible only from certain IPs, e.g. the one of your office. It's quite a naive but at least very quick solution. Google's <a href="https://cloud.google.com/compute/docs/cloudrouter" rel="nofollow noreferrer">Cloud Router</a> might also be worth a shot otherwise.</p>
<p>I'm currently trying to figure out the best way, as a Kubernetes admin, to make users and give them access to <code>kubectl</code>. Originally, I was going to use serviceAccounts, but it seems that that should not be used as authentication for users. I have been reading over this: <a href="https://kubernetes.io/docs/admin/authentication/#users-in-kubernetes" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/authentication/#users-in-kubernetes</a> but there are so many different ways to authenticate I was not sure what the best practices are for my use case. Thanks for any help!</p>
<p>I would go the cert route. I believe that's what most people use for production and it's what we currently use. I included a link to "Kubernetes the hard way", which has a better description of making certs:</p> <p><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/02-certificate-authority.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/02-certificate-authority.md</a></p> <p>Good luck</p>
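<p>For reference, a minimal sketch of what the cert route typically looks like, assuming you have access to the cluster CA key pair (<code>ca.crt</code> / <code>ca.key</code>) and with the user name and cluster name made up for the example:</p> <pre><code># Create a key and a CSR for the user (CN becomes the username, O the group)
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=dev-team"

# Sign it with the cluster CA
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out alice.crt -days 365

# On the user's machine, wire the cert into kubectl
kubectl config set-cluster my-cluster --server=https://&lt;apiserver&gt; --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials alice --client-certificate=alice.crt --client-key=alice.key
kubectl config set-context alice@my-cluster --cluster=my-cluster --user=alice
kubectl config use-context alice@my-cluster
</code></pre> <p>What the user is then allowed to do still depends on the authorization mode (e.g. RBAC/ABAC) configured on the API server.</p>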
<p>I was looking at the kubernetes API endpoints listed <a href="http://kubernetes.io/docs/api-reference/v1/operations/" rel="noreferrer">here</a>. Im trying to create a deployment which can be run from the terminal using <code>kubectl ru CLUSTER-NAME IMAGE-NAME PORT</code>. However I cant seem to find any endpoint for this command in the link I posted above. I can create a node using <code>curl POST /api/v1/namespaces/{namespace}/pods</code> and then delete using the <code>curl -X DELETE http://localhost:8080/api/v1/namespaces/default/pods/node-name</code> where node name HAS to be a single node (if there are 100 nodes, each should be done individually). <strong>Is there an api endpoint for creating and deleting deployments??</strong></p>
<p>To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path, such as <code>/api/v1</code> or <code>/apis/extensions/v1beta1</code> and to extend the Kubernetes API, API groups is implemented.</p> <p>Currently there are several API groups in use:</p> <ul> <li>the <code>core</code> (oftentimes called <code>legacy</code>, due to not having explicit group name) group, which is at REST path <code>/api/v1</code> and is not specified as part of the apiVersion field, e.g. <code>apiVersion: v1</code>.</li> <li>the <code>named groups</code> are at REST path <code>/apis/$GROUP_NAME/$VERSION</code>, and use <code>apiVersion: $GROUP_NAME/$VERSION</code> (e.g. <code>apiVersion: batch/v1</code>). Full list of supported API groups can be seen in <a href="https://kubernetes.io/docs/reference/" rel="noreferrer">Kubernetes API reference</a>.</li> </ul> <p>To manage extensions resources such as <code>Ingress</code>, <code>Deployments</code>, and <code>ReplicaSets</code> refer to <code>Extensions API</code> <a href="https://kubernetes.io/docs/api-reference/extensions/v1beta1/operations/" rel="noreferrer">reference</a>.</p> <p>As described in the reference, to create a Deployment:</p> <p><code>POST /apis/extensions/v1beta1/namespaces/{namespace}/deployments</code></p>
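<p>For completeness, a hedged curl sketch against that endpoint, mirroring the insecure local port used in the question (the manifest file name is made up; any valid Deployment JSON works):</p> <pre><code># Create a deployment from a JSON manifest
curl -X POST -H "Content-Type: application/json" \
  -d @deployment.json \
  http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments

# Delete it again by name
curl -X DELETE \
  http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/&lt;deployment-name&gt;
</code></pre>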
<p>I tried this <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="noreferrer">doc</a> to install and set up Kubernetes in an Ubuntu VM. I have finished up to step 3/4 and now the kube-dns pod is stuck in Pending status. How can I figure this out? Here are the results of <code>kubectl get pods --namespace=kube-system</code> and <code>kubectl describe pod &lt;pod name&gt;</code>:</p> <pre><code># kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE dummy-2088944543-jk2t2 1/1 Running 0 3h etcd-ubuntu 1/1 Running 0 3h kube-apiserver-ubuntu 1/1 Running 0 3h kube-controller-manager-ubuntu 1/1 Running 0 3h kube-discovery-1769846148-h88v4 1/1 Running 0 3h kube-dns-2924299975-dfp17 0/4 Pending 0 3h kube-proxy-zdcxw 1/1 Running 0 3h kube-scheduler-ubuntu 1/1 Running 0 3h weave-net-xwfhj 2/2 Running 0 2h # kubectl describe pod kube-dns-2924299975-dfp17 Error from server (NotFound): pods "kube-dns-2924299975-dfp17" not found </code></pre>
<h3>Cause</h3> <p>Most likely a lack of available computing resources in your cluster.</p> <p>If you're using the example in <a href="https://github.com/kubernetes/kubernetes/blob/release-1.10/cluster/addons/dns/kube-dns.yaml.base#L104-L108" rel="noreferrer">cluster/addons/dns</a> you're certainly using a <code>Deployment</code> with resource requests, highlighted if you click the link. It could be that your other pods are already requesting all the available resources in the cluster, therefore your pod doesn't get scheduled.</p> <p>You can confirm that theory with <code>kubectl --namespace=kube-system describe pod kube-dns-2924299975-dfp17</code> and look for the following event:</p> <pre><code>Reason Message ------ ------- FailedScheduling pod (kube-dns-2924299975-dfp17) failed to fit in any node fit failure summary on nodes : Insufficient cpu (3) </code></pre> <p>You can also describe your nodes with <code>kubectl describe node &lt;node-name&gt;</code> and look at the last information:</p> <pre><code>Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted. CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 320m (8%) 300m (7%) 150Mi (1%) 150Mi (1%) </code></pre> <p>In your case either the CPU or memory allocation should be close to 100%.</p> <h3>Solution</h3> <ul> <li>Add more computing resources / nodes to your cluster (preferred)</li> <li>Remove the resource requests from your pod(s), at the risk of overcommitting your resources</li> </ul>
<p>I followed the <a href="https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes" rel="nofollow noreferrer">https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes</a> and tried to run a single mongodb instance with a LoadBalancer Service and Replication Controller on a Google Cloud account.</p> <p>Following is the yaml file:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: mongo-svc-a labels: name: mongo-svc-a spec: type: LoadBalancer ports: - port: 27017 targetPort: 27017 protocol: TCP name: mongo-svc-a selector: name: mongo-node1 instance: rod --- apiVersion: v1 kind: ReplicationController metadata: name: mongo-rc1 labels: name: mongo-rc spec: replicas: 1 selector: name: mongo-node1 template: metadata: labels: name: mongo-node1 instance: rod spec: containers: - name: mongo-node1 image: mongo ports: - containerPort: 27017 volumeMounts: - name: mongo-persistent-storage1 mountPath: /data/db volumes: - name: mongo-persistent-storage1 gcePersistentDisk: pdName: mongodb-disk1-in-cluster1 fsType: ext4 </code></pre> <p>Following are the details of the Service, Replication Controller and Pods that were created. All seems to be fine.</p> <pre><code>$ kubectl describe service mongo-svc-a Name: mongo-svc-a Namespace: default Labels: name=mongo-svc-a Selector: instance=rod,name=mongo-node1 Type: LoadBalancer IP: 10.3.241.11 LoadBalancer Ingress: 104.198.236.2 Port: mongo-svc-a 27017/TCP NodePort: mongo-svc-a 31808/TCP Endpoints: 10.0.0.3:27017 Session Affinity: None No events. $ kubectl describe rc mongo-rc1 Name: mongo-rc1 Namespace: default Image(s): mongo Selector: name=mongo-node1 Labels: name=mongo-rc Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Volumes: mongo-persistent-storage1: Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine) PDName: mongodb-disk1-in-cluster1 FSType: ext4 Partition: 0 ReadOnly: false No events. $ kubectl describe pod mongo-rc1-h0j8r Name: mongo-rc1-h0j8r Namespace: default Node: gke-cluster1-default-pool-d58b6c05-74fq/10.128.0.6 Start Time: Sat, 11 Feb 2017 18:43:00 +0530 Labels: instance=rod name=mongo-node1 Status: Running IP: 10.0.0.3 Controllers: ReplicationController/mongo-rc1 Containers: mongo-node1: Container ID: docker://9f28e482d3806b74f7f595c47e6c7940c2313e95860db13d137ad6eaa88bb341 Image: mongo Image ID: docker://sha256:ad974e767ec4f06945b1e7ffdfc57bd10e06baf66cdaf5a003e0e6a36924e30b Port: 27017/TCP Requests: cpu: 100m State: Running Started: Sat, 11 Feb 2017 18:45:37 +0530 Ready: True Restart Count: 0 Volume Mounts: /data/db from mongo-persistent-storage1 (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-3wfv3 (ro) Environment Variables: &lt;none&gt; Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: mongo-persistent-storage1: Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine) PDName: mongodb-disk1-in-cluster1 FSType: ext4 Partition: 0 ReadOnly: false default-token-3wfv3: Type: Secret (a volume populated by a Secret) SecretName: default-token-3wfv3 QoS Class: Burstable Tolerations: &lt;none&gt; No events. </code></pre> <p>But when i try to remote connect, <code>mongo --hostname &lt;Extenal_IP_of_the_service&gt;</code> to the newly created monogdb instance, i am unable to connect. I believe the configuration seems alright. Any help would be appreciated. Thanks,</p> <ul> <li>Pavan</li> </ul>
<p>Tried a different approach and nailed it. </p> <p>Followed the following tutorial: <a href="https://kubernetes.io/docs/tutorials/stateful-application/run-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/run-stateful-application/</a></p> <p>Exposed the headless service: </p> <pre><code>kubectl expose service your_service_name --port=27017 --target-port=27017 --name=mongo-lb-service --type=LoadBalancer </code></pre>
<p>EDIT: I'm just going to blame this on platform inconsistencies. I have given up on pushing to the Google Cloud Container Registry for now, and have created an Ubuntu VM where I'm doing it instead. I have voted to close this question as well, for the reasons stated previously, and also as this should probably have been asked on Server Fault in the first place. Thanks for everyone's help!</p> <p>running <code>$ gcloud docker push gcr.io/kubernetes-test-1367/myapp</code> results in:</p> <pre><code>The push refers to a repository [gcr.io/kubernetes-test-1367/myapp] 595e622f9b8f: Preparing 219bf89d98c1: Preparing 53cad0e0f952: Preparing 765e7b2efe23: Preparing 5f2f91b41de9: Preparing ec0200a19d76: Preparing 338cb8e0e9ed: Preparing d1c800db26c7: Preparing 42755cf4ee95: Preparing ec0200a19d76: Waiting 338cb8e0e9ed: Waiting d1c800db26c7: Waiting 42755cf4ee95: Waiting denied: Unable to create the repository, please check that you have access to do so. </code></pre> <p><code>$ gcloud init</code> results in:</p> <pre><code>Welcome! This command will take you through the configuration of gcloud. Settings from your current configuration [default] are: [core] account = &lt;my_email&gt;@gmail.com disable_usage_reporting = True project = kubernetes-test-1367 Your active configuration is: [default] </code></pre> <p>Note: this is a duplicate of <a href="https://stackoverflow.com/questions/37403634/kubernetes-unable-to-create-repository">Kubernetes: Unable to create repository</a>, but I tried his solution and it did not help me. I've tried appending <code>:v1</code>, <code>/v1</code>, and using <code>us.gcr.io</code></p> <p>Edit: Additional Info</p> <pre><code>$ gcloud --version Google Cloud SDK 116.0.0 bq 2.0.24 bq-win 2.0.18 core 2016.06.24 core-win 2016.02.05 gcloud gsutil 4.19 gsutil-win 4.16 kubectl kubectl-windows-x86_64 1.2.4 windows-ssh-tools 2016.05.13 </code></pre> <p>+</p> <pre><code>$ gcloud components update All components are up to date. </code></pre> <p>+</p> <pre><code>$ docker -v Docker version 1.12.0-rc3, build 91e29e8, experimental </code></pre>
<p>The <a href="https://cloud.google.com/container-registry/docs/access-control#permissions_and_roles" rel="nofollow noreferrer">first image push requires admin rights</a> for the project. I had the same problem trying to push a new container to GCR for a team project, which I could resolve by updating my permissions.</p> <p>You might also want to have a look at <a href="https://github.com/GoogleCloudPlatform/docker-credential-gcr" rel="nofollow noreferrer">docker-credential-gcr</a>. Hope that helps.</p>
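<p>A hedged sketch of the permissions fix: GCR images live in a Cloud Storage bucket of the project, so granting the pushing account a storage role on the project is usually enough (the member email is made up; depending on your gcloud version the command may live under <code>gcloud beta</code>):</p> <pre><code>gcloud projects add-iam-policy-binding kubernetes-test-1367 \
  --member=user:teammate@example.com \
  --role=roles/storage.admin
</code></pre>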
<p>I've created a Kubernetes cluster on AWS with <a href="https://github.com/kubernetes/kops" rel="noreferrer">kops</a> and can successfully administer it via <code>kubectl</code> from my local machine.</p> <p>I can view the current config with <code>kubectl config view</code> as well as directly access the stored state at <code>~/.kube/config</code>, such as:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://api.{CLUSTER_NAME} name: {CLUSTER_NAME} contexts: - context: cluster: {CLUSTER_NAME} user: {CLUSTER_NAME} name: {CLUSTER_NAME} current-context: {CLUSTER_NAME} kind: Config preferences: {} users: - name: {CLUSTER_NAME} user: client-certificate-data: REDACTED client-key-data: REDACTED password: REDACTED username: admin - name: {CLUSTER_NAME}-basic-auth user: password: REDACTED username: admin </code></pre> <p>I need to enable other users to also administer. This <a href="https://kubernetes.io/docs/user-guide/sharing-clusters/" rel="noreferrer">user guide</a> describes how to define these on another users machine, but doesn't describe how to actually create the user's credentials within the cluster itself. How do you do this?</p> <p>Also, is it safe to just share the <code>cluster.certificate-authority-data</code>?</p>
<p>For a full overview on Authentication, refer to the official Kubernetes docs on <a href="https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens" rel="noreferrer">Authentication</a> and <a href="http://kubernetes.io/docs/admin/authorization/" rel="noreferrer">Authorization</a></p> <p>For users, ideally you use an Identity provider for Kubernetes (OpenID Connect).</p> <p>If you are on GKE / ACS you integrate with respective Identity and Access Management frameworks</p> <p>If you self-host kubernetes (which is the case when you use kops), you may use <a href="https://github.com/coreos/dex#kubernetes--dex" rel="noreferrer">coreos/dex</a> to integrate with LDAP / OAuth2 identity providers - a good reference is this detailed 2 part <a href="https://thenewstack.io/single-sign-kubernetes-command-line-experience/" rel="noreferrer">SSO for Kubernetes</a> article.</p> <p>kops (1.10+) now has built-in <a href="https://github.com/kubernetes/kops/blob/release-1.11/docs/authentication.md" rel="noreferrer">authentication support</a> which eases the integration with AWS IAM as identity provider if you're on AWS.</p> <p>for Dex there are a few open source cli clients as follows:</p> <ul> <li><a href="https://github.com/Nordstrom/kubelogin" rel="noreferrer">Nordstrom/kubelogin</a></li> <li><a href="https://github.com/pusher/k8s-auth-example" rel="noreferrer">pusher/k8s-auth-example</a></li> </ul> <p>If you are looking for a quick and easy (not most secure and easy to manage in the long run) way to get started, you may abuse <code>serviceaccounts</code> - with 2 options for specialised Policies to control access. (see below)</p> <p><strong>NOTE since 1.6 Role Based Access Control is strongly recommended! this answer does not cover RBAC setup</strong></p> <p><strong>EDIT</strong>: Great, but outdated (2017-2018), guide by Bitnami on <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="noreferrer">User setup with RBAC</a> is also available.</p> <p>Steps to enable service account access are (depending on if your cluster configuration includes RBAC or ABAC policies, these accounts may have full Admin rights!):</p> <p><strong>EDIT</strong>: <a href="https://gist.github.com/so0k/8fad3b1639b3d70cd841703fda67f16b" rel="noreferrer">Here is a bash script to automate Service Account creation - see below steps</a></p> <ol> <li><p>Create service account for user <code>Alice</code></p> <pre><code>kubectl create sa alice </code></pre> </li> <li><p>Get related secret</p> <pre><code>secret=$(kubectl get sa alice -o json | jq -r .secrets[].name) </code></pre> </li> <li><p>Get <code>ca.crt</code> from secret (using OSX <code>base64</code> with <code>-D</code> flag for decode)</p> <pre><code>kubectl get secret $secret -o json | jq -r '.data[&quot;ca.crt&quot;]' | base64 -D &gt; ca.crt </code></pre> </li> <li><p>Get service account token from secret</p> <pre><code>user_token=$(kubectl get secret $secret -o json | jq -r '.data[&quot;token&quot;]' | base64 -D) </code></pre> </li> <li><p>Get information from your kubectl config (current-context, server..)</p> <pre><code># get current context c=$(kubectl config current-context) # get cluster name of context name=$(kubectl config get-contexts $c | awk '{print $3}' | tail -n 1) # get endpoint of current context endpoint=$(kubectl config view -o jsonpath=&quot;{.clusters[?(@.name == \&quot;$name\&quot;)].cluster.server}&quot;) </code></pre> </li> <li><p>On a fresh machine, follow these steps (given the 
<code>ca.cert</code> and <code>$endpoint</code> information retrieved above:</p> <ol> <li><p>Install <code>kubectl</code></p> <pre><code> brew install kubectl </code></pre> </li> <li><p>Set cluster (run in directory where <code>ca.crt</code> is stored)</p> <pre><code> kubectl config set-cluster cluster-staging \ --embed-certs=true \ --server=$endpoint \ --certificate-authority=./ca.crt </code></pre> </li> <li><p>Set user credentials</p> <pre><code> kubectl config set-credentials alice-staging --token=$user_token </code></pre> </li> <li><p>Define the combination of alice user with the staging cluster</p> <pre><code> kubectl config set-context alice-staging \ --cluster=cluster-staging \ --user=alice-staging \ --namespace=alice </code></pre> </li> <li><p>Switch current-context to <code>alice-staging</code> for the user</p> <pre><code> kubectl config use-context alice-staging </code></pre> </li> </ol> </li> </ol> <p>To control user access with policies (using <a href="https://kubernetes.io/docs/admin/authorization/abac/" rel="noreferrer">ABAC</a>), you need to create a <a href="https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens" rel="noreferrer"><code>policy</code></a> file (for example):</p> <pre><code>{ &quot;apiVersion&quot;: &quot;abac.authorization.kubernetes.io/v1beta1&quot;, &quot;kind&quot;: &quot;Policy&quot;, &quot;spec&quot;: { &quot;user&quot;: &quot;system:serviceaccount:default:alice&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;resource&quot;: &quot;*&quot;, &quot;readonly&quot;: true } } </code></pre> <p>Provision this <code>policy.json</code> on every master node and add <code>--authorization-mode=ABAC --authorization-policy-file=/path/to/policy.json</code> flags to API servers</p> <p>This would allow Alice (through her service account) read only rights to all resources in default namespace only.</p>
<p>I am creating a Replication controller with one init-container. however the init container fails to start and the status of the pod is:</p> <pre><code>NAME READY STATUS RESTARTS AGE testcontainer 0/1 CrashLoopBackOff 12 37m </code></pre> <p>I am not sure what part is failing exactly, and the logs do not help. My kubectl server version is 1.4 (different from client version) so I am using:</p> <pre><code>annotations: pod.beta.kubernetes.io/init-containers: </code></pre> <p>Here is the replication controller yaml file I am using. I am using the "hello-world" image (instead of the nginx to make it faster)</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: testcontainer spec: replicas: 1 selector: app: nginx template: metadata: labels: app: nginx annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "install", "image": "hello-world" } ]' spec: containers: - name: nginx image: hello-world dnsPolicy: Default nodeName: x.x.x.x </code></pre> <p>logs from kubectl describe pod:</p> <pre><code>Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=nginx pod=testcontainer()" 32m 16s 145 {kubelet x.x.x.x} spec.containers{nginx} Warning BackOff Back-off restarting failed docker container </code></pre> <p>when I check the logs of both containers (nginx and testcontainer) it shows the output of running the hello-world image, so I guess the image is downloaded and started successfully. Im not sure what fails after that ( I even tried creating a single pod, using the example provided on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container</a> , but still fails)</p>
<p>I think the problem here isn't the init container. The <code>hello-world</code> image print a text and exits immediately. Since <a href="https://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podspec" rel="nofollow noreferrer"><code>.spec.restartPolicy</code></a> of the pod defaults to <code>Always</code>, it just restarts the pod every time.</p> <p>The error message might be a bit confusing, but since the pod is intended to run forever it quite makes sense to display an error, even if the exit code is <code>0</code>.</p> <p>If you want to run a pod only a single time, you should use the <a href="https://kubernetes.io/docs/user-guide/jobs/" rel="nofollow noreferrer">job API</a>.</p> <hr> <p>Since you are interested in an example for the init-container, I fixed your example:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: testcontainer spec: replicas: 1 selector: app: nginx template: metadata: labels: app: nginx annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "install", "image": "hello-world" } ]' spec: containers: - name: nginx image: nginx # &lt;--- this image shouldn't be a single shot application dnsPolicy: Default nodeName: x.x.x.x </code></pre>
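<p>Since the answer points at the Job API for run-once workloads, here is a minimal hedged sketch of that alternative (resource names are made up; on older clusters the Job API group/version may differ):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: hello-once
spec:
  template:
    spec:
      containers:
      - name: hello
        image: hello-world
      restartPolicy: Never
</code></pre> <p>Here the pod runs to completion once and is not restarted, which matches what the <code>hello-world</code> image actually does.</p>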
<p>I'm running this command:</p> <pre><code>kubectl set image deployment/www-deployment VERSION_www=newImage </code></pre> <p>Works fine. But there's a 10 second window where the website is 503, and I'm a perfectionist. </p> <p><strong>How can I configure kubernetes to wait for the image to be available before switching the ingress?</strong></p> <p>I'm using the nginx ingress controller from here:</p> <pre><code>gcr.io/google_containers/nginx-ingress-controller:0.8.3 </code></pre> <p>And this yaml for the web server:</p> <pre><code># Service and Deployment apiVersion: v1 kind: Service metadata: name: www-service spec: ports: - name: http-port port: 80 protocol: TCP targetPort: http-port selector: app: www sessionAffinity: None type: ClusterIP --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: www-deployment spec: replicas: 1 template: metadata: labels: app: www spec: containers: - image: myapp/www imagePullPolicy: Always livenessProbe: httpGet: path: /healthz port: http-port name: www ports: - containerPort: 80 name: http-port protocol: TCP resources: requests: cpu: 100m memory: 100Mi volumeMounts: - mountPath: /etc/env-volume name: config readOnly: true imagePullSecrets: - name: cloud.docker.com-pull volumes: - name: config secret: defaultMode: 420 items: - key: www.sh mode: 256 path: env.sh secretName: env-secret </code></pre> <p>The Docker image is based on a <a href="https://github.com/nodejs/docker-node/blob/master/4.5/slim/Dockerfile" rel="nofollow noreferrer">node.js server image</a>.</p> <p><code>/healthz</code> is a file in the webserver which returns <code>ok</code> I thought that liveness probe would make sure the server was up and ready before switching to the new version. </p> <p>Thanks in advance!</p>
<p>Within the <a href="https://kubernetes.io/docs/user-guide/pod-states/" rel="nofollow noreferrer">Pod lifecycle</a> it's defined that:</p> <blockquote> <p>The default state of Liveness before the initial delay is Success.</p> </blockquote> <p>To make sure you don't run into issues, you should also configure a <code>ReadinessProbe</code> for your Pods and consider configuring <code>.spec.minReadySeconds</code> for your Deployment.</p> <p>You'll find details in the <a href="https://kubernetes.io/docs/user-guide/deployments" rel="nofollow noreferrer">Deployment</a> documentation.</p>
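<p>A hedged sketch of what that could look like for the deployment from the question (reusing its <code>/healthz</code> endpoint and <code>http-port</code> name; the delays are guesses you would tune):</p> <pre><code>spec:
  minReadySeconds: 10
  template:
    spec:
      containers:
      - name: www
        readinessProbe:
          httpGet:
            path: /healthz
            port: http-port
          initialDelaySeconds: 5
          periodSeconds: 5
</code></pre> <p>With a readiness probe the rolling update only shifts traffic to a new pod once it actually answers on <code>/healthz</code>, which is what closes the 503 window.</p>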
<p>I am new to the Kubernetes environment. While deploying an application, I could figure out how to do auto scaling but did not quite understand how high availability is ensured. If it's not, how can I configure it?</p> <p>Edit: By HA, I mean how to ensure that pods are scheduled across multiple nodes to ensure HA on the pod/service level.</p> <p>Please guide. Thanks in advance! :)</p>
<blockquote> <p>By HA, I mean how to ensure that pod is scheduled across multiple nodes to ensure HA on pod/service level.</p> </blockquote> <p>I'm guessing your app is cloud compatible and can be scaled. In this situation there are multiple features you can take advantage of:</p> <ul> <li><code>DaemonSets</code>: containers in <code>DaemonSets</code> will be run on every single node, unless you include/exclude certain nodes.</li> <li><code>Deployments</code>: <code>Deployments</code> are the next generation of <code>Replication Controllers</code>. Using <code>deployments</code> you can easily scale your application as well as ensure availability of a certain number of pods. Please note that in order to stay available on node failure, you need to set (anti-)affinity rules on the pods, and you set them in the pod template (see the sketch below). In 1.6 affinity can be specified as a field in <code>PodSpec</code>, rather than using <code>annotations</code>.</li> </ul>
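<p>A minimal hedged sketch of spreading a Deployment's replicas across nodes with pod anti-affinity (field syntax, so it assumes Kubernetes 1.6+; the label and topology key would need to match your pod labels):</p> <pre><code>spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
</code></pre> <p>With this, the scheduler prefers to place replicas on different nodes, so a single node failure does not take down all pods of the service.</p>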
<p>I'm running Kubernetes myself on Google Compute Engine (not Google Container Engine). Google Container Engine has <a href="https://cloudplatform.googleblog.com/2015/12/monitoring-Container-Engine-with-Google-Cloud-Monitoring.html" rel="nofollow noreferrer">built-in integration</a> with Stackdriver Monitoring and I'm wondering if it's possible to set this up for a Kubernetes cluster on Google Compute Engine. </p> <p>Specifically, I'd like to see more than just cpu, disk, etc. I want to see Kubernetes data like pod scheduling failures, pod counts, etc.</p>
<p>It is not possible to configure Stackdriver exactly the same way that it is done in GKE. </p> <p>However, you can set <code>ENABLE_CLUSTER_MONITORING</code> to <code>google</code> in <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/config-default.sh#L107" rel="nofollow noreferrer">config-default.sh</a> to enable Heapster and Google Cloud Monitoring. </p>
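<p>A hedged sketch of doing that without editing the file, assuming you bring the cluster up with the stock <code>kube-up.sh</code> scripts (the variable is read by <code>config-default.sh</code>):</p> <pre><code>export KUBE_ENABLE_CLUSTER_MONITORING=google
./cluster/kube-up.sh
</code></pre>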
<p>I have used GKE to create a Kubernetes cluster with 2 nodes, and I have created a deployment that runs the nginx image with 2 replicas. Now that I have the nginx pods running on the same node (NODE 1), I am able to log in to NODE 2 and curl the pod IP on NODE 1.</p> <p>I would like to understand how the communication between a container on NODE 1 and NODE 2 works, even though I am not using port 80 of the node.</p> <p>I would appreciate some insights on this.</p>
<p>You should start by reading an overview on <a href="https://kubernetes.io/docs/admin/networking/" rel="nofollow noreferrer">Networking in Kubernetes</a>. The question you are asking is about the <em>overlay network</em>. The exact workings of the overlay network is viewed as an implementation detail and there are many different networking solutions (GKE is running on <a href="https://kubernetes.io/docs/admin/networking/#google-compute-engine-gce" rel="nofollow noreferrer">GCE</a>). GKE is using <a href="https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#kubenet" rel="nofollow noreferrer"><code>kubenet</code></a>.</p> <p>You might find the talk <a href="https://speakerdeck.com/thockin/illustrated-guide-to-kubernetes-networking" rel="nofollow noreferrer"><em>Illustrated Guide To Kubernetes Networking</em></a> useful.</p>
<p>I am running the kubeadm alpha version to set up my Kubernetes cluster. From Kubernetes, I am trying to pull docker images which are hosted in a Nexus repository. Whenever I try to create a pod, it gives "ImagePullBackOff" every time. Can anybody help me with this?</p> <p>Details are present in <a href="https://github.com/kubernetes/kubernetes/issues/41536" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/41536</a></p> <p>Pod definition:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pod labels: name: test spec: containers: - image: 123.456.789.0:9595/test name: test ports: - containerPort: 8443 imagePullSecrets: - name: my-secret </code></pre>
<p>You need to <a href="https://kubernetes.io/docs/user-guide/images/#referring-to-an-imagepullsecrets-on-a-pod" rel="noreferrer">refer to the secret</a> you have just created from the Pod definition.</p> <p>When you create the secret with <code>kubectl create secret docker-registry my-secret --docker-server=123.456.789.0 ...</code> the server must exactly match what's in your Pod definition - <strong>including the port number</strong> (and if it's a secure one then it also must match up with the docker command line in systemd).</p> <p>Also, the secret must be in the <strong>same namespace</strong> where you are creating your Pod, but that seems to be in order.</p>
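<p>Spelled out for the image used in the question (credentials are placeholders), the secret creation would look roughly like this; note the port in <code>--docker-server</code> matching <code>123.456.789.0:9595/test</code>:</p> <pre><code>kubectl create secret docker-registry my-secret \
  --docker-server=123.456.789.0:9595 \
  --docker-username=&lt;nexus-user&gt; \
  --docker-password=&lt;nexus-password&gt; \
  --docker-email=&lt;email&gt;
</code></pre>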
<p>I am trying to create a pod with 2 containers, each having a different image! I am not sure how to expose the two different containers to the client. Following is my yaml file for the deployment.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: checkdifimage spec: replicas: 1 template: metadata: labels: app: checkdifimagelab spec: containers: - name: checkallcont1 image: &lt;dockerimage&gt; ports: - containerPort: 32030 - name: checkall1cont2 image: &lt;dockerimage2&gt; ports: - containerPort: 32031 </code></pre> <p>What I am currently doing is, after I have the deployment up, I run the following command to expose the service:</p> <pre><code>kubectl expose pod checkdifimage --port=8080 --type=NodePort --name=diffimage </code></pre> <p>This works for a single container and I am able to hit the service from a REST client. But when I use 2 containers, I am only able to hit one container. How should I proceed to hit both containers? Also, could someone please explain the advantages and disadvantages of using one pod with a single container versus one pod with multiple containers?</p>
<p>You have multiple Options:</p> <ol> <li><p>Create multiple services exposing one port each, on the same deployment.</p></li> <li><p>Create single service exposing multiple ports:</p> <pre><code>--- kind: Service apiVersion: v1 metadata: name: my-service spec: selector: app: MyApp ports: - name: http protocol: TCP port: 80 targetPort: 9376 - name: https protocol: TCP port: 443 targetPort: 9377 </code></pre></li> <li><p>Using kubectl expose:</p> <pre><code>kubectl expose service nginx --port=443 --target-port=8443 --name=nginx-https </code></pre></li> </ol> <blockquote> <p>Note that if no port is specified via –port and the exposed resource has multiple ports, all will be re-used by the new service. Also if no labels are specified, the new service will re-use the labels from the resource it exposes.</p> </blockquote> <p><a href="https://kubernetes.io/docs/user-guide/kubectl/kubectl_expose/" rel="noreferrer">Source</a></p> <p>When to use multi container pods: A pod is a group of one or more containers, the shared storage for those containers, and options about how to run the containers. Pods are always <strong>co-located</strong> and <strong>co-scheduled</strong>, and run in a shared context. A pod models an application-specific “logical host” - it contains one or more application containers which are <strong>relatively tightly coupled</strong> — in a pre-container world, they would have executed on the same physical or virtual machine.</p> <p><a href="https://i.stack.imgur.com/cpipG.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/cpipG.jpg" alt="enter image description here"></a></p>
<p>How can I find out, from within the pod, the name of the Kubernetes deployment/job that spawned the current pod?</p>
<p>In many cases the hostname of the Pod equals to the name of the Pod (you can access that by the HOSTNAME environment variable). However that's not a reliable method of determining the Pod's identity.</p> <p>You will want to you use the <a href="https://kubernetes.io/docs/user-guide/downward-api/" rel="noreferrer">Downward API</a> which allows you to expose metadata as environment variables and/or files on a volume.</p> <p>The name and namespace of a Pod can be exposed as environment variables (fields: <code>metadata.name</code> and <code>metadata.namespace</code>) but the information about the creator of a Pod (which is the <em>annotation</em> <code>kubernetes.io/created-by</code>) can only be exposed as a file.</p> <p>Example:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: busybox labels: {app: busybox} spec: selector: {matchLabels: {app: busybox}} template: metadata: {labels: {app: busybox}} spec: containers: - name: busybox image: busybox command: - "sh" - "-c" - | echo "I am $MY_POD_NAME in the namespace $MY_POD_NAMESPACE" echo grep ".*" /etc/podinfo/* while :; do sleep 3600; done env: - name: MY_POD_NAME valueFrom: {fieldRef: {fieldPath: metadata.name}} - name: MY_POD_NAMESPACE valueFrom: {fieldRef: {fieldPath: metadata.namespace}} volumeMounts: - name: podinfo mountPath: /etc/podinfo/ volumes: - name: podinfo downwardAPI: items: - path: "labels" fieldRef: {fieldPath: metadata.labels} - path: "annotations" fieldRef: {fieldPath: metadata.annotations} </code></pre> <p>Too see the output:</p> <pre><code>$ kubectl logs `kubectl get pod -l app=busybox -o name | cut -d / -f2` </code></pre> <p>Output:</p> <pre><code>I am busybox-1704453464-m1b9h in the namespace default /etc/podinfo/annotations:kubernetes.io/config.seen="2017-02-16T16:46:57.831347234Z" /etc/podinfo/annotations:kubernetes.io/config.source="api" /etc/podinfo/annotations:kubernetes.io/created-by="{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"busybox-1704453464\",\"uid\":\"87b86370-f467-11e6-8d47-525400247352\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"191157\"}}\n" /etc/podinfo/annotations:kubernetes.io/limit-ranger="LimitRanger plugin set: cpu request for container busybox" /etc/podinfo/labels:app="busybox" /etc/podinfo/labels:pod-template-hash="1704453464" </code></pre>
<p>When I try to create an alerting policy in Stackdriver Monitoring, my custom metrics do not show up in the dropdown list. When I try to add a chart in the Stackdriver Monitoring dashboard, they show up. Is there something more I need to do to make these custom metrics alertable?</p> <p>These custom metrics were created using heapster on kubernetes. I'm still on the Stackdriver Premium trial.</p> <p>Here is a screenshot of the resource type list when creating a chart.</p> <p><a href="https://i.stack.imgur.com/8Xttj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Xttj.png" alt="enter image description here"></a></p> <p>Here is a screenshot of the resource type list when creating an alerting policy condition.</p> <p><a href="https://i.stack.imgur.com/OT6kG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OT6kG.png" alt="enter image description here"></a></p>
<p>You should have access to alert on the same custom metrics in the dropdown, and be able to choose "Custom Metrics" as the RESOURCE TYPE. Maybe check your permissions / logged-in account; it could be different between the two cases. If it isn't, hit the "Send Feedback" button at the bottom right of the screen and add my name to the text. Thanks.</p>
<p>I've read the <a href="https://cloud.google.com/container-engine/docs/quickstart" rel="nofollow noreferrer">quick start guide</a> and I've got as far as:</p> <ul> <li>I have a cluster</li> <li>I can see that I need to type a <code>kubectl run</code> command to start my container.</li> </ul> <p>I want to start a publicly available docker container, which I can start on any docker-enabled machine with this command:</p> <pre><code>docker run -d \ -e DRONE_SERVER=wss://ci.fommil.com/ws/broker \ -e DRONE_SECRET=&lt;redacted&gt; \ -e DOCKER_MAX_PROCS=1 \ -e DRONE_TIMEOUT=30m \ -v /var/run/docker.sock:/var/run/docker.sock \ --restart=always \ --name=drone-agent \ drone/drone:0.5 agent </code></pre> <p>What is the equivalent Google Console / kubectl command? I've got as far as:</p> <pre><code>kubectl run agent \ --image=drone/drone:0.5 \ --env="DRONE_SERVER=wss://ci.fommil.com/ws/broker" \ --env="DRONE_SECRET=&lt;redacted&gt;" \ --env="DOCKER_MAX_PROCS=1" \ --env="DRONE_TIMEOUT=30m" \ -v /var/run/docker.sock:/var/run/docker.sock </code></pre> <p>but this <code>-v</code> line isn't quite right. I need to make sure that <code>/var/run/docker.sock</code> is mounted in the container, as the sole purpose of it is to launch sub-processes in docker to run CI jobs.</p>
<p>you're right, creating volumes using the <a href="https://kubernetes.io/docs/user-guide/kubectl/kubectl_create/" rel="nofollow noreferrer">imperative commands</a> to create volumes without config files isn't available in Kubernetes.</p> <p>But it's easy to write some configuration. Based on this <a href="https://gc-taylor.com/blog/2015/10/27/example-drone-ci-kubernetes-manifests" rel="nofollow noreferrer">blog post</a> and your requirements a "modern" config might look like.</p> <p>deployment.yml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: agent spec: replicas: 1 template: metadata: labels: app: agent spec: containers: - env: - name: DRONE_SERVER value: "wss://ci.fommil.com/ws/broker" - name: DRONE_SECRET value: &lt;redacted&gt; - name: DOCKER_MAX_PROCS value: "1" - name: DRONE_TIMEOUT value: 30m image: drone/drone:0.5 name: agent args: ["agent"] securityContext: privileged: true volumeMounts: - mountPath: /var/run/docker.sock name: docker-sock - mountPath: /var/lib/docker name: docker-lib volumes: - name: docker-sock hostPath: path: /var/run/docker.sock - name: docker-lib hostPath: path: /var/lib/docker </code></pre> <p>This could now be used with <code>kubectl create -f deployment.yml</code> and stopped with <code>kubectl delete deployments -l app=agent</code></p>
<p>I have installed Deis Workflow v2.11 in a GKE cluster, and some of our applications share values in common, like a proxy URL and credentials. I can use these values by putting them into environment variables, or even in a .env file. However, for every new application I need to create a .env file with the shared values and then call</p> <pre><code>deis config:push </code></pre> <p>If one of those shared values changes, I need to adjust the configuration of every app and restart them. I would like to modify the value once in a ConfigMap and, after the change, have Deis restart the applications.</p> <p>Does anyone know if it is possible to read values from a Kubernetes ConfigMap and put them into Deis environment variables? And if yes, how do I do it?</p>
<p>I believe what you're looking for is a way to set environment variables globally across all applications. That is currently not implemented. However, please feel free to hack up a PR and we'd likely accept it!</p> <p><a href="https://github.com/deis/controller/issues/383" rel="nofollow noreferrer">https://github.com/deis/controller/issues/383</a></p> <p><a href="https://github.com/deis/controller/issues/1219" rel="nofollow noreferrer">https://github.com/deis/controller/issues/1219</a></p>
<p>I have 5 machines running Ubuntu 16.04.1 LTS. I want to set them up as a Kubernetes Cluster. I'm trying to follow this <a href="http://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">getting started guide</a> where they're using <code>kubeadm</code>.</p> <p>It all worked fine until step <strong>3/4 Installing a pod network</strong>. I've looked at their <a href="http://kubernetes.io/docs/admin/addons/" rel="nofollow noreferrer">addon page</a> to look for a pod network and chose the flannel overlay network. I've copied the yaml file to the machine and executed:</p> <pre><code>root@up01:/home/up# kubectl apply -f flannel.yml </code></pre> <p>Which resulted in:</p> <pre><code>configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created </code></pre> <p>So I thought that it went ok, but when I display all the pod stuff:</p> <pre><code>root@up01:/etc/kubernetes/manifests# kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system dummy-2088944543-d5f50 1/1 Running 0 50m kube-system etcd-up01 1/1 Running 0 48m kube-system kube-apiserver-up01 1/1 Running 0 50m kube-system kube-controller-manager-up01 1/1 Running 0 49m kube-system kube-discovery-1769846148-jvx53 1/1 Running 0 50m kube-system kube-dns-2924299975-prlgf 0/4 ContainerCreating 0 49m kube-system kube-flannel-ds-jb1df 2/2 Running 0 32m kube-system kube-proxy-rtcht 1/1 Running 0 49m kube-system kube-scheduler-up01 1/1 Running 0 49m </code></pre> <p>The problem is that the kube-dns pod stays in the ContainerCreating state. I don't know what to do.</p>
<p>It is very likely that you missed this critical piece of information from the guide:</p> <blockquote> <p>If you want to use flannel as the pod network, specify --pod-network-cidr 10.244.0.0/16 if you’re using the daemonset manifest below.</p> </blockquote> <p>If you omit this <code>kube-dns</code> will never leave the <code>ContainerCreating</code> STATUS.</p> <p>Your <code>kubeadm init</code> command should be:</p> <pre><code># kubeadm init --pod-network-cidr 10.244.0.0/16 </code></pre> <p>and not</p> <pre><code># kubeadm init </code></pre>
<p>I run a k8s cluster in google cloud (GKE) and a MySQL server in aws (RDS). Pods need to connect to RDS which only allows connections from certain IP. How can I configure outgoing traffic to have a static IP?</p>
<p>I had the same problem to connect to a sftp server from a Pod. To solve this, first you need to create an external IP address:</p> <pre><code>gcloud compute addresses create {{ EXT_ADDRESS_NAME }} --region {{ REGION }} </code></pre> <p>Then, I suppose that your pod is assigned to your default-pool node cluster. Extract your default-pool node name:</p> <pre><code>gcloud compute instances list | awk '{ print $1 }' | grep default-pool </code></pre> <p>Erase default external ip of the vm instance:</p> <pre><code>gcloud compute instances delete-access-config {{ VM_DEFAULT-POOL_INSTANCE }} --access-config-name external-nat </code></pre> <p>Add your external static ip created before:</p> <pre><code>gcloud compute instances add-access-config {{ VM_DEFAULT-POOL_INSTANCE }} --access-config-name external-nat --address {{ EXT_ADDRESS_IP }} </code></pre> <p>If your Pod is not attached to the default-pool node, don't forget to select it with a nodeSelector:</p> <pre><code>nodeSelector: cloud.google.com/gke-nodepool: {{ NODE_NAME }} </code></pre>
<p>I have set up my Kubernetes cluster from scratch following this doc: <a href="https://kubernetes.io/docs/getting-started-guides/scratch/" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/scratch/</a></p> <p>My Kubernetes master and worker are working correctly, but I didn't find instructions on how to deploy the DNS addon.</p>
<p>Addons can be deployed through yaml files as well as by using the <code>addon manager</code>. I have already installed <code>dashboard</code>, <code>monitoring</code> and <code>DNS</code> manually using the <code>yaml</code> files provided (with small modifications) in this <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons" rel="noreferrer">repo</a>.</p> <p>Please note <code>addon-manager</code> is pretty special. You should copy all files into a directory and then run:</p> <pre><code>./kube-addons.sh </code></pre> <p>Btw I prefer installing addons manually instead of using the addon manager.</p> <p><strong>DNS addon manual example:</strong></p> <p>Take the <code>kubedns-controller.yaml.sed</code> and replace <code>$DNS_DOMAIN</code> with <code>cluster.local</code> (you should use the domain specified in your setup here). You can also set it as a variable. Please note there are multiple occurrences in this file.</p> <p>Then:</p> <pre><code>mv kubedns-controller.yaml.sed kubedns-deployment.yaml kubectl create -f kubedns-deployment.yaml </code></pre>
<p>Occasionally, I see an issue where a pod will start up without network connectivity. Because of this, the pod goes into a CrashLoopBackOff and is unable to recover. The only way I am able to get the pod running again is by running a <code>kubectl delete pod</code> and waiting for it to reschedule. Here's an example of a liveness probe failing due to this issue:</p> <p><code>Liveness probe failed: Get http://172.20.78.9:9411/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)</code></p> <p>I've also noticed that there are no iptables entries for the pod IP when this happens. When the pod is deleted and rescheduled (and is in a working state) I have the iptables entries.</p> <p>If I turn off the livenessprobe in the container and exec into it, I confirmed it has no network connectivity to the cluster or the local network or internet.</p> <p>Would like to hear any suggestions as to what it could be or what else I can look into to further troubleshoot this scenario.</p> <p>Currently running:</p> <p>Kubernetes version: </p> <pre><code>Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7", GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0", GitTreeState:"clean", BuildDate:"2016-12-10T04:49:33Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7", GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0", GitTreeState:"clean", BuildDate:"2016-12-10T04:43:42Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>OS: </p> <pre><code>NAME=CoreOS ID=coreos VERSION=1235.0.0 VERSION_ID=1235.0.0 BUILD_ID=2016-11-17-0416 PRETTY_NAME="CoreOS 1235.0.0 (MoreOS)" ANSI_COLOR="1;32" HOME_URL="https://coreos.com/" BUG_REPORT_URL="https://github.com/coreos/bugs/issues" </code></pre>
<p>It looks like your network driver is not working properly. Without more information about your setup, I can only suggest the following:</p> <ol> <li>Find out which network driver is being used. You can tell by checking the kubelet <code>--network-plugin</code> flag. If no network plugin is specified, it is using the native Docker network. </li> <li>Given the network driver, examine the pod network setup and see what is missing. Use tcpdump to see where the packets go (see the sketch below). </li> </ol>
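<p>A rough troubleshooting sketch (the pod IP is taken from the question; interface names depend on your network plugin, so treat them as assumptions):</p> <pre><code># Check which network plugin the kubelet was started with (the flag may also live in a systemd unit or env file)
ps aux | grep [k]ubelet | tr ' ' '\n' | grep network-plugin

# From the node the pod runs on, watch traffic for the affected pod IP to see where packets stop
tcpdump -i any host 172.20.78.9
</code></pre>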
<p>Coming from numerous years of running node/rails apps on bare metal, I was used to being able to run as many apps as I wanted on a single machine (let's say, a 2GB box at Digital Ocean could easily handle 10 apps without worrying, given correct optimizations or a fairly low amount of traffic).</p> <p>The thing is, using Kubernetes, the game sounds quite different. I've set up a "getting started" cluster with 2 standard VMs (3.75GB).</p> <p>I assigned a limit on a deployment with the following:</p> <pre><code> resources:
   requests:
     cpu: "64m"
     memory: "128Mi"
   limits:
     cpu: "128m"
     memory: "256Mi"
</code></pre> <p>Then I witnessed the following:</p> <pre><code>Namespace    Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
---------    ----    ------------  ----------  ---------------  -------------
default      api     64m (6%)      128m (12%)  128Mi (3%)       256Mi (6%)
</code></pre> <p>What does this 6% refer to?</p> <p>I tried to lower the CPU limit to, like, 20Mi… the app fails to start (obviously, not enough resources). The docs say it is a percentage of CPU. So, 20% of the 3.75GB machine? Then where does this 6% come from? </p> <p>Then I increased the size of the node-pool to n1-standard-2, and the same pod effectively spans 3% of the node. That sounds logical, but what does it actually refer to?</p> <p>I still wonder which metric is taken into account for this part.</p> <p>The app seems to need a large amount of memory on startup, but then it uses only a minimal fraction of this 6%. I then feel like I'm misunderstanding something, or misusing it all.</p> <p>Thanks for any experienced tips/advice for a better understanding. Best</p>
<p>According to the <a href="https://kubernetes.io/docs/user-guide/compute-resources/" rel="noreferrer">docs</a>, CPU requests (and limits) are always fractions of available CPU cores on the node that the pod is scheduled on (with a <code>resources.requests.cpu</code> of <code>"1"</code> meaning reserving one CPU core exclusively for one pod). Fractions are allowed, so a CPU request of <code>"0.5"</code> will reserve half a CPU for one pod.</p> <p>For convenience, Kubernetes allows you to specify CPU resource requests/limits in <em>millicores</em>:</p> <blockquote> <p>The expression <code>0.1</code> is equivalent to the expression <code>100m</code>, which can be read as “one hundred millicpu” (some may say “one hundred millicores”, and this is understood to mean the same thing when talking about Kubernetes). A request with a decimal point, like <code>0.1</code> is converted to <code>100m</code> by the API, and precision finer than <code>1m</code> is not allowed.</p> </blockquote> <p>As already mentioned in the other answer, resource requests are <em>guaranteed</em>. This means that Kubernetes will schedule pods in a way that the sum of all <em>requests</em> will not exceed the amount of resources actually available on a node.</p> <p>So, by requesting <code>64m</code> of CPU time in your deployment, you are requesting actually 64/1000 = 0,064 = 6,4% of one of the node's CPU cores time. So that's where your 6% come from. When upgrading to a VM with more CPU cores, the amount of available CPU resources increases, so on a machine with two available CPU cores, a request for 6,4% of <em>one</em> CPU's time will allocate 3,2% of the CPU time of <em>two</em> CPUs.</p>
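<p>For reference, the table in the question is part of the <code>kubectl describe node</code> output, so you can always re-check how your requests add up against a node's capacity with something like this (the node name is a placeholder):</p> <pre><code>kubectl describe node &lt;node-name&gt; | grep -A 6 "Allocated resources"
</code></pre>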
<p>I'm reading some document about k8s resource model, but I'm puzzled with "Opaque Integer Resources" in <a href="https://kubernetes.io/docs/user-guide/compute-resources/" rel="nofollow noreferrer">here</a>, what's the exactly "opaque" mean?</p> <p>thanks for your help~</p>
<p>Kubernetes <em>understands</em> what CPU and memory are and can discover these resources by various means. So a node might have 4 CPUs and 32GB memory. Pods can request these resources and the scheduler will make sure that a Pod is scheduled on a node that has enough of these resources still available.</p> <p>But there can be other types of resources. Say some of your nodes have special <a href="https://www.hobbymining.com/usb-bitcoin-miners/" rel="noreferrer">USB dongles attached for BitCoin mining</a>, 8 plugged into each of those nodes. This resource is called an <em>opaque</em> resource as Kubernetes doesn't <em>understand</em> what a BitCoin mining dongle is (that is, a waste of money).</p> <p>With the <a href="https://kubernetes.io/docs/user-guide/compute-resources/#opaque-integer-resources-alpha-feature" rel="noreferrer">Opaque Integer Resources</a> feature you can declare that there is such a thing as a BitCoin mining dongle and that a specific node has 8 of them. It's called an <em>integer</em> resource as it's not possible to have 3.5 dongles attached to a node, and likewise a Pod may not request 1.5 dongles to do its job.</p> <p>Another example could be that some of your nodes are equipped with a 1TB SSD. You could specify their sizes in MBs and attach this information to the node. Let's call the resource <code>ssdmb</code>. Now your database application could request 400 of the <code>ssdmb</code> resource. Although Kubernetes has no clue what an <code>ssdmb</code> is, it now understands that your Pod needs to be scheduled on a node that has the <code>ssdmb</code> resource and at least 400 of it still available.</p> <p>To mark the node named node-2 as having 1000 of the <code>ssdmb</code> (the URL assumes you are running <code>kubectl proxy</code> on your machine):</p> <pre><code>curl -X PATCH \
  --header "Accept: application/json" \
  --header "Content-Type: application/json-patch+json" \
  --data-raw '[{"op":"add", "path":"/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-ssdmb", "value":"1000"}]' \
  http://127.0.0.1:8001/api/v1/nodes/node-2/status
</code></pre> <p>The same can be achieved with (note the <code>--type='json'</code>, which tells kubectl that the payload is a JSON patch):</p> <pre><code>kubectl patch node node-2 --type='json' \
  --patch='[{"op":"add", "path":"/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-ssdmb", "value":"1000"}]'
</code></pre> <p>Then, to make your Pod request this resource you could:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myimage
    resources:
      requests:
        cpu: 1
        pod.alpha.kubernetes.io/opaque-int-resource-ssdmb: 400
</code></pre> <p>After you create this Pod and do <code>kubectl describe node node-2</code> you will see something like this:</p> <pre><code>Capacity:
 cpu:                                               2
 pod.alpha.kubernetes.io/opaque-int-resource-ssdmb: 1k
 pods:                                              110
Allocatable:
 cpu:                                               1
 pod.alpha.kubernetes.io/opaque-int-resource-ssdmb: 600
 pods:                                              109
</code></pre>
<p>I found a few people that tried to tackle this, only slightly relevant posts <a href="https://stackoverflow.com/questions/37786244/what-username-does-the-kubernetes-kubelet-use-when-contacting-the-kubernetes-api">here</a> and <a href="https://stackoverflow.com/questions/33790438/using-kubectl-with-kubernetes-authorization-mode-abac">here</a> but doesn't solve it for me.</p> <p>The problem: I want to create a read-only user for my cluster using ABAC policy. My cluster has 3 masters and 3 workers, version 1.4.7 hosted on AWS.</p> <p>I edited my manifests/apiserver.yml like so on all 3 masters (added these 3 lines - at the bottom of the file of course I mounted the relevant paths etc..):</p> <ul> <li>--token-auth-file=/etc/kubernetes/policy/user-tokens.csv</li> <li>--authorization-mode=ABAC</li> <li>--authorization-policy-file=/etc/kubernetes/policy/apiusers.yml</li> </ul> <p>my apiuser.yml looks like this:</p> <pre><code>{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group": "system:authenticated", "readonly": true, "nonResourcePath": "*", "namespace": "*", "resource": "*", "apiGroup": "*"}} </code></pre> <p>My users-token.csv looks like this:</p> <pre><code>tdU0ynyO3wG6UAzwWP0DO7wvF2tH8pbH,bob,bob </code></pre> <p>My kubeconfig file has this in it:</p> <pre><code>users: - name: bob user: token: tdU0ynyO3wG6UAzwWP0DO7wvF2tH8pbH </code></pre> <p>When I try <code>kubectl get nodes</code> it's failing, I can print the output with --v=8 if it's relevant but it basically says <code>Forbidden(403)</code>. It seems to me, I'm missing something fundamental here, the policy is in place and blocking everything and everyone, though it should allow authenticated users read only rights.</p> <p>Any kind of help or suggestions would be most appreciated.</p>
<p>The <code>system:authenticated</code> group was added in Kubernetes 1.5.</p> <p>Prior to 1.5, you can use <code>"user":"*"</code> instead.</p>
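<p>For a pre-1.5 cluster such as the 1.4.7 one in the question, the policy line would then look something like this (same fields as in the question, with the group selector swapped for the wildcard user):</p> <pre><code>{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "*", "readonly": true, "nonResourcePath": "*", "namespace": "*", "resource": "*", "apiGroup": "*"}}
</code></pre>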
<p>I am trying to understand on how we can start a interactive shell on a desired container using Kubernetes <a href="https://github.com/kubernetes-incubator/client-python" rel="nofollow noreferrer">client-python</a> API. </p> <p>I found that we can use <a href="https://github.com/kubernetes-incubator/client-python/blob/master/kubernetes/docs/CoreV1Api.md#connect_get_namespaced_pod_exec" rel="nofollow noreferrer">connect_get_namespaced_pod_exec</a> to run individual commands. </p> <p>Is there any way we can start a bash session on the desired pod and do somestuff specifically on the pod(I am using Docker Container)</p> <p>Any help is most welcome.</p>
<p>From reading the tests, I'd guess that the linked documentation already holds your answer. Use <code>/bin/bash</code> as the command and send any further commands through the stdin stream.</p> <p>Invocation should be done with:</p> <pre><code>api.connect_get_namespaced_pod_exec('pod', 'namespace',
                                     command='/bin/bash',
                                     stderr=True, stdin=True,
                                     stdout=True, tty=True)
</code></pre> <p>The related <code>kubectl exec --tty ...</code> <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/exec.go" rel="nofollow noreferrer">client code</a> is implemented the same way and could be used as a reference too.</p>
<p>I have a sort of ELK stack, with fluentd instead of logstash, running as a DaemonSet on a Kubernetes cluster and sending all logs from all containers, in logstash format, to an Elasticsearch server.</p> <p>Out of the many containers running on the Kubernetes cluster some are nginx containers which output logs of the following format:</p> <pre><code>121.29.251.188 - [16/Feb/2017:09:31:35 +0000] host="subdomain.site.com" req="GET /data/schedule/update?date=2017-03-01&amp;type=monthly&amp;blocked=0 HTTP/1.1" status=200 body_bytes=4433 referer="https://subdomain.site.com/schedule/2589959/edit?location=23092&amp;return=monthly" user_agent="Mozilla/5.0 (Windows NT 6.1; WOW64; rv:51.0) Gecko/20100101 Firefox/51.0" time=0.130 hostname=webapp-3188232752-ly36o </code></pre> <p>The fields visible in Kibana are as per this screenshot:</p> <p><a href="https://i.stack.imgur.com/rqk6K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rqk6K.png" alt="kibana nginx log"></a></p> <p>Is it possible to extract fields from this type of log after it was indexed?</p> <p>The fluentd collector is configured with the following source, which handles all containers, so enforcing a format at this stage is not possible due to the very different outputs from different containers:</p> <pre><code>&lt;source&gt; type tail path /var/log/containers/*.log pos_file /var/log/es-containers.log.pos time_format %Y-%m-%dT%H:%M:%S.%NZ tag kubernetes.* format json read_from_head true &lt;/source&gt; </code></pre> <p>In an ideal situation, I would like to enrich the fields visible in the screenshot above with the meta-fields in the "log" field, like "host", "req", "status" etc.</p>
<p>After a few days of research and getting accustomed to the <a href="https://github.com/so0k/docker-efk" rel="noreferrer">EFK stack</a>, I arrived to an EFK specific solution, as opposed to that in Darth_Vader's answer, which is only possible on the ELK stack.</p> <p>So to summarize, I am using Fluentd instead of Logstash, so any grok solution would work if you also install the <a href="https://github.com/fluent/fluent-plugin-grok-parser" rel="noreferrer">Fluentd Grok Plugin</a>, which I decided not to do, because:</p> <p>As it turns out, Fluentd has its own field extraction functionality through the use of <a href="http://docs.fluentd.org/v0.12/articles/parser-plugin-overview" rel="noreferrer">parser filters</a>. To solve the problem in my question, right before the <code>&lt;match **&gt;</code> line, so after the log line object was already enriched with kubernetes metadata fields and labels, I added the following:</p> <pre><code>&lt;filter kubernetes.var.log.containers.webapp-**.log&gt; type parser key_name log reserve_data yes format /^(?&lt;ip&gt;[^-]*) - \[(?&lt;datetime&gt;[^\]]*)\] host="(?&lt;hostname&gt;[^"]*)" req="(?&lt;method&gt;[^ ]*) (?&lt;uri&gt;[^ ]*) (?&lt;http_version&gt;[^"]*)" status=(?&lt;status_code&gt;[^ ]*) body_bytes=(?&lt;body_bytes&gt;[^ ]*) referer="(?&lt;referer&gt;[^"]*)" user_agent="(?&lt;user_agent&gt;[^"]*)" time=(?&lt;req_time&gt;[^ ]*)/ &lt;/filter&gt; </code></pre> <p><strong>To explain:</strong></p> <p><code>&lt;filter kubernetes.var.log.containers.webapp-**.log&gt;</code> - apply the block on all the lines matching this label; in my case the containers of the web server component are called webapp-{something}</p> <p><code>type parser</code> - tells fluentd to apply a parser filter</p> <p><code>key_name log</code> - apply the pattern only on the <code>log</code> property of the log line, not the whole line, which is a json string</p> <p><code>reserve_data yes</code> - very important, if not specified the whole log line object is replaced by only the properties extracted from <code>format</code>, so if you already have other properties, like the ones added by the <code>kubernetes_metadata</code> filter, these are removed when not adding the <code>reserve_data</code> option</p> <p><code>format</code> - a regex that is applied on the value of the <code>log</code> key to extract named properties</p> <p><strong>Please note</strong> that I am using Fluentd 1.12, so this syntax is not fully compatible with the newer 1.14 syntax, but the principle will work with minor tweaks to the parser declaration.</p>
<p>I am having issues getting the visitors real IP in my PHP app. I have Kubernetes running in Google Container Engine (master: 1.4.8, node: 1.4.7). </p> <p>Service definition:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: app-service spec: type: LoadBalancer # spawning google loadbalancer selector: name: app # running simple php/nginx container ports: - port: 80 targetPort: 80 </code></pre> <p>How can it be that the <code>X-Forwarded-For</code> headers etc. don't get passed through to my php app? I am only getting back the source ip (in php <code>REMOTE_ADDR</code>), which is <code>10.0.1.1</code>. In Google Cloud I can see the service is using a <strong>layer 4</strong> load balancer. Could this be the issue that the real source ip is lost and the <code>X-Forwarded-For</code> header never gets set?</p> <p>If someone could explain me what is going on, that would be super helpful!</p> <p>For what its worth, I am using the following nginx configuration in my app container:</p> <pre><code>location ~ \.php$ { fastcgi_pass php-upstream; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name; fastcgi_param DOCUMENT_ROOT $realpath_root; } </code></pre> <p><strong>EDIT</strong> I have put my whole application behind CloudFlare, so it is now pointing from CloudFlare http proxy -> GCE Load Balancer. And somehow the <code>X-Forwarded-For</code> headers and all are present! For me it seems like the issue is with the GCE Load Balancer, it is somehow unable to set those headers?</p>
<p>A new feature was <a href="https://kubernetes.io/docs/user-guide/load-balancer/#loss-of-client-source-ip-for-external-traffic" rel="nofollow noreferrer">added to Kubernetes 1.5</a>:</p> <blockquote> <p>Due to the implementation of this feature, the source IP for sessions as seen in the target container will not be the original source IP of the client. This is the default behavior as of Kubernetes v1.5. However, starting in v1.5, an optional beta feature has been added that will preserve the client Source IP for GCE/GKE environments. This feature will be phased in for other cloud providers in subsequent releases.</p> </blockquote> <p>More details are <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">available here</a> and boil down to adding an annotation for services of type <code>loadbalancer</code>:</p> <pre><code>$ kubectl annotate service loadbalancer service.beta.kubernetes.io/external-traffic=OnlyLocal </code></pre> <p>This will open a healthcheck port on the node, to verify if service endpoints are available on the node.</p> <p>More details on how this issue is rolled out to other clusters seems to be available on <a href="https://github.com/kubernetes/features/issues/27#issuecomment-269518563" rel="nofollow noreferrer">kubernetes/features</a> issue tracking this feature.</p> <p>Note: Question seems similar to <a href="https://stackoverflow.com/questions/36988369/kubernetes-not-preserve-source-ip">StackOverflow/Kubernetes-not-preserving-source-ip</a></p>
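<p>If you want to try this on the manifest from the question, the same annotation applied to that service would be (the annotation is a beta feature in 1.5, so treat this as a sketch):</p> <pre><code>kubectl annotate service app-service service.beta.kubernetes.io/external-traffic=OnlyLocal
</code></pre>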
<p>I built a Kubernete cluster in local data center. There are 4 nodes and 1 master. Looking for a L4 loadbalance solution for internal service.</p> <pre><code>root@niubi01:/home/mesos# kubectl get nodes NAME STATUS AGE niubi01 Ready,master 7d niubi02 Ready 7d niubi03 Ready 7d niubi04 Ready 7d niubi05 Ready 7d </code></pre> <p>Assume we have three Pods with 'hello world' web service. A service with exposed External IP is created with 'NodePort'. The external IP is 'Nodes' and port is 30145.</p> <pre><code>root@niubi01:/home/mesos# kubectl get service NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-service 10.100.220.21 &lt;nodes&gt; 8080:30145/TCP 6d </code></pre> <p>As the document mentioned, we can access any node IP to access this 'hello world' service. Like:</p> <pre><code>curl http://niubi01:30145 curl http://niubi02:30145 curl http://niubi03:30145 curl http://niubi04:30145 curl http://niubi05:30145 </code></pre> <p>from outside. The problem is we can't guarantee any node is active forever, even master. Which URL should we take to use? How to do a LoadBalance like Haproxy to provide high availability to this service? Should we have another server provide loadbalance sevice between these 5 addresses? Seeking for any better solution for this case.</p>
<p>independent from where your <code>LoadBalancer</code> is located you could just have a virtual IP address load balanced between your nodes and include it in your service definition as shown <a href="https://kubernetes.io/docs/user-guide/services/#external-ips" rel="nofollow noreferrer">in the documentation</a>:</p> <pre><code>--- kind: Service apiVersion: v1 metadata: name: my-service spec: selector: app: MyApp ports: - name: http protocol: TCP port: 80 targetPort: 9376 externalIPs: - 80.11.12.10 </code></pre> <p>Once the traffic for this IP (<code>80.11.12.10</code>) hits any of the nodes, <code>kube-proxy</code> will redirect it to your service.</p> <p>One option to implement this would be using Pacemaker on the nodes as described in many <a href="http://jensd.be/156/linux/building-a-high-available-failover-cluster-with-pacemaker-corosync-pcs" rel="nofollow noreferrer">blog posts</a>. But also having a dedicated load balancer in front of the cluster would work just fine.</p> <p>The benefit of using virtual IPs is that you don't have to mess with the NodePorts in firewalls or related configuration. Another benefit is that this isn't limited to HTTP traffic.</p> <p>The downside is that the configuration of the external load balancer and the IP assignment for the service is not automated and has to be done manually. To mitigate this issue you could either implement your own provider (see other <a href="https://github.com/kubernetes/kubernetes/tree/e7950e6f4961595e6c9df878bbc6dd520c792d03/pkg/cloudprovider/providers" rel="nofollow noreferrer">provider implementations on Github</a>) or you could read the service configuration from etcd and use that as source for the configuration of your external load balancer.</p>
<p>We have an application with 4 pods running with a load balancer! We want to try the rolling update, but we are not sure what happens when a pod goes down! The documentation is unclear! Particularly this quote from <a href="https://kubernetes.io/docs/user-guide/pods/#termination-of-pods" rel="noreferrer">Termination Of Pods</a>:</p> <blockquote> <p>Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly can continue to serve traffic as load balancers (like the service proxy) remove them from their rotations. </p> </blockquote> <p>So, if someone can guide us on the following questions : </p> <p>1.) When a pod is shutting down, can it still serve new requests? Or does the load balancer not consider it?</p> <p>2.) Does it complete the requests it is processing till the grace-period is exhausted? and then kills the container even if any process is still running? </p> <p>3.) Also, this mentions replication controllers, what we have is a Deployment and Deployment has replica sets, so will there be any difference?</p> <p>We went through this question but the answers are conflicting without any source : <a href="https://stackoverflow.com/questions/33611319/does-a-kubernetes-rolling-update-gracefully-remove-pods-from-a-service-load-bala"></a><a href="https://stackoverflow.com/questions/33611319/does-a-kubernetes-rolling-update-gracefully-remove-pods-from-a-service-load-bala">Does a Kubernetes rolling-update gracefully remove pods from a service load balancer</a></p>
<p>1) When a Pod is shutting down, its state is changed to <code>Terminating</code> and it is no longer considered by the LoadBalancer - as described in the <a href="https://kubernetes.io/docs/concepts/abstractions/pod-termination/" rel="noreferrer">Pod termination</a> docs.</p> <p>2) Yes - you might want to look at the <code>pod.Spec.TerminationGracePeriodSeconds</code> configuration to gain some control. You'll find details in the <a href="https://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podspec" rel="noreferrer">API documentation</a>; a short sketch of where the field sits is shown below.</p> <p>3) No - the ReplicaSet and the Deployment take care of scheduling Pods; there's no difference when it comes to the shutdown behaviour of the Pods. </p>
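<p>A minimal sketch of the grace period setting in a pod template (names and image are placeholders; the default is 30 seconds):</p> <pre><code>spec:
  # time between SIGTERM and SIGKILL during shutdown; defaults to 30
  terminationGracePeriodSeconds: 60
  containers:
  - name: app              # placeholder
    image: my-app:latest   # placeholder
</code></pre>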
<p>Stupid question, but right now I'm deploying my Kubernetes cluster inside a VM. Is there a way to deploy it directly onto my machine?</p> <p>I'm sure there has to be a easy fix but many of the docs I've read have been focused on deploying it inside VM.</p>
<p>I am assuming you are using some flavor of Linux; otherwise the information below won't be useful to you.</p> <p>The easiest way of <em>bare metal</em> deployment ("onto your machine") is by using <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">kubeadm</a>. The documentation for that is excellent.</p> <p>(If you need help with that, reply with your exact OS flavor and version and I can edit this answer to reflect that specific situation.)</p>
<p>I followed the guide in the following link: <a href="http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html" rel="noreferrer">http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html</a></p> <p>and set up a mongo DB replica set on Kubernetes with StatefulSets. So far so good, but how do I expose that static hostnames outside the cluster so that I can access them from a Google instance for example? </p> <p>If I use the IPs of the nodes it will work fine but those can change anytime (upon pod failure and restart with a different IP etc.)... </p> <p>Thanks in advance!</p>
<p>It looks like the answer is present in the StatefulSet Basics documentation section <a href="https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/" rel="nofollow noreferrer">Using Stable Network Identities</a>:</p> <blockquote> <p>The Pods’ ordinals, hostnames, SRV records, and A record names have not changed, but the IP addresses associated with the Pods may have changed. In the cluster used for this tutorial, they have. This is why it is important not to configure other applications to connect to Pods in a StatefulSet by IP address.</p> <p>If you need to find and connect to the active members of a StatefulSet, you should query the CNAME of the Headless Service <strong><code>(nginx.default.svc.cluster.local)</code></strong>. The SRV records associated with the CNAME will contain only the Pods in the StatefulSet that are Running and Ready.</p> <p>If your application already implements connection logic that tests for liveness and readiness, you can use the SRV records of the Pods <strong><code>(web-0.nginx.default.svc.cluster.local, web-1.nginx.default.svc.cluster.local)</code></strong>, as they are stable, and your application will be able to discover the Pods’ addresses when they transition to Running and Ready.</p> </blockquote>
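<p>To see what those records look like in practice, you can query them from a pod inside the cluster (a sketch; the names follow the docs example quoted above - substitute your mongo headless service and namespace):</p> <pre><code>nslookup nginx.default.svc.cluster.local
nslookup -type=SRV nginx.default.svc.cluster.local
</code></pre>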
<p>I am trying to create a new service for one of my deployments named <code>node-js-deployment</code> in GCE hostes Kubernetes Cluster</p> <p>I followed the Documentation to <a href="https://github.com/kubernetes-incubator/client-python/blob/master/kubernetes/docs/CoreV1Api.md#create_namespaced_service" rel="nofollow noreferrer">create_namespaced_service</a></p> <p>This is the service data: </p> <pre><code>{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "node-js-service" }, "spec": { "selector": { "app": "node-js" }, "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 8000 } ] } } </code></pre> <p>This is the Python function to create the service </p> <pre><code>api_instance = kubernetes.client.CoreV1Api() namespace = 'default' body = kubernetes.client.V1Service() # V1Serice # Creating Meta Data metadata = kubernetes.client.V1ObjectMeta() metadata.name = "node-js-service" # Creating spec spec = kubernetes.client.V1ServiceSpec() # Creating Port object ports = kubernetes.client.V1ServicePort() ports.protocol = 'TCP' ports.target_port = 8000 ports.port = 80 spec.ports = ports spec.selector = {"app": "node-js"} body.spec = spec try: api_response = api_instance.create_namespaced_service(namespace, body, pretty=pretty) pprint(api_response) except ApiException as e: print("Exception when calling CoreV1Api-&gt;create_namespaced_service: %s\n" % e) </code></pre> <p>Error: </p> <pre><code>Reason: Bad Request HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Tue, 21 Feb 2017 03:54:55 GMT', 'Content-Length': '227'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Service in version \"v1\" cannot be handled as a Service: only encoded map or array can be decoded into a struct","reason":"BadRequest","code":400} </code></pre> <p>But the service is being created if I am passing JSON. Not sure what I am doing wrong. </p> <p>Any help is greatly appreciated, thank you.</p>
<p>From reading your code, it seems that you miss assigning the metadata to <code>body.metadata</code>. And you missed that the <code>ports</code> field of the <code>V1ServiceSpec</code> is supposed to be a list, but you used a single <code>V1ServicePort</code> so without testing I assume this should works:</p> <pre><code>api_instance = kubernetes.client.CoreV1Api() namespace = 'default' body = kubernetes.client.V1Service() # V1Serice # Creating Meta Data metadata = kubernetes.client.V1ObjectMeta() metadata.name = "node-js-service" body.metadata = metadata # Creating spec spec = kubernetes.client.V1ServiceSpec() # Creating Port object port = kubernetes.client.V1ServicePort() port.protocol = 'TCP' port.target_port = 8000 port.port = 80 spec.ports = [ port ] spec.selector = {"app": "node-js"} body.spec = spec </code></pre> <p>The definition could also be loaded from json / yaml directly as shown in two of the examples within the offical repo - see <a href="https://github.com/kubernetes-incubator/client-python/blob/master/examples/exec.py" rel="nofollow noreferrer">exec.py</a> <a href="https://github.com/kubernetes-incubator/client-python/blob/master/examples/create_deployment.py" rel="nofollow noreferrer">create_deployment.py</a>.</p> <p>Your solution could then look like:</p> <pre><code>api_instance = kubernetes.client.CoreV1Api() namespace = 'default' manifest = { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "node-js-service" }, "spec": { "selector": { "app": "node-js" }, "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 8000 } ] } } try: api_response = api_instance.create_namespaced_service(namespace, manifest, pretty='true') pprint(api_response) except ApiException as e: print("Exception when calling CoreV1Api-&gt;create_namespaced_endpoints: %s\n" % e) </code></pre>
<p>I daily find myself doing...</p> <pre><code>$ kubectl --context=foo get pods &lt; copy text manually &gt; $ kubectl --context=foo logs dep1-12345678-10101 </code></pre> <p>I would like to cycle through matching resources with</p> <pre><code>$ kubectl --context=foo logs dep1&lt;TAB&gt; </code></pre> <p>but this doesn't seem to do anything with my stock setup. Any ideas?</p> <p>osx 10.12.3 kubectl v1.4.5 zsh zsh 5.2 (x86_64-apple-darwin16.0)</p>
<p>Both <code>bash</code> and <code>zsh</code> support scripts that complete the printed command when you press <code>&lt;TAB&gt;</code>. The feature is called <em>Programmable completion</em>, and you can find more details about it here: <a href="https://www-s.acm.illinois.edu/workshops/zsh/completion/completion.html" rel="noreferrer">zsh completion</a>.</p> <p>Fortunately, you don't need to write your own script - kubectl provides it for zsh > 5.2. Try running this command: <code>source &lt;(kubectl completion zsh)</code>.</p> <p>Another option is to use this tool: <a href="https://github.com/mkokho/kubemrr" rel="noreferrer">https://github.com/mkokho/kubemrr</a> (disclaimer: I'm the author). The reason it exists is that the standard completion script is too slow - it might take seconds before the Kubernetes cluster replies with all pod names. But <code>kubemrr</code> keeps the names locally, so the response comes back almost immediately. </p>
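<p>To make the completion permanent, you can add it to your shell startup file (assuming the default <code>~/.zshrc</code>):</p> <pre><code>echo 'source &lt;(kubectl completion zsh)' &gt;&gt; ~/.zshrc
</code></pre>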
<p>I have two kubernetes clusters on google container engine but on seperate google accounts (one using my company's email and another using my personal email). I attempted to switch from one cluster to another. I did this by:</p> <ol> <li><p>Logging in with my other email address</p> <p><code>$ gcloud init</code></p></li> <li><p>Getting new kubectl credentials</p> <p><code>gcloud container cluster get-credentials</code></p></li> <li><p>Test to see if connected to new cluster</p> <p><code>$ kubectl get po</code></p></li> </ol> <p>However, I was still not able to get the kubernetes resources in the cluster. The error I received was:</p> <p><code>the server doesn't have a resource type "pods"</code></p>
<p>This occurs because although I logged in with the new credentials... kubectl isn't using the new credentials. In order to change the login/access credentials that kubectl will use to access your cluster you need to run the following command:</p> <pre><code>gcloud auth application-default login </code></pre> <p>You will then get the following response:</p> <pre><code>Your browser has been opened to visit: https://accounts.google.com/o/oauth2/auth redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&amp;prompt=select_account&amp;respons e_type=code&amp;client_id=...&amp; scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email +https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform&amp;access_type=offline Credentials saved to file: [/Users/.../.config/gcloud/application_default_credentials.json] These credentials will be used by any library that requests Application Default Credentials. </code></pre> <p>Then get cluster credentials</p> <p><code>gcloud container clusters get-credentials [cluster name/id]</code></p> <p>You should now be able to access the cluster using kubectl.</p>
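<p>Afterwards you can double-check which cluster kubectl is pointing at with:</p> <pre><code>kubectl config current-context
kubectl config get-contexts
</code></pre>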
<p>I am trying to finally choose between Spring Cloud Netflix, Kubernetes and Swarm for building our microservices environment. They are all very cool, so the choice is very hard. I'll describe briefly what kind of problems I want to solve. I couldn't find a good way to design an API Gateway (not a simple load balancer) with Kubernetes or Swarm, which is why I want to use Zuul. But on the other hand, the API Gateway must use service discovery, which in the case of Kubernetes or Swarm is embedded inside the orchestrator. With Kubernetes I can use its Spring Cloud integration, but that way I would have both server-side discovery and client-side discovery inside Kubernetes, which I think is overkill. I am wondering whether anyone has experience with these and any suggestions about that. Thanks.</p>
<p>Kubernetes and Docker Swarm are container orchestration tools. Spring Cloud is a collection of tools to build microservices/streaming architectures. There is a bit of overlap, like service discovery, gateway or configuration services. But you could use Spring Cloud without containers and deploy the jars yourself, without needing Kubernetes or Swarm.</p> <p>So you'll have to choose between Kubernetes and Swarm for the orchestration of your containers, if you use containers at all.</p> <p>Comparison: <a href="https://dzone.com/articles/deploying-microservices-spring-cloud-vs-kubernetes" rel="noreferrer">https://dzone.com/articles/deploying-microservices-spring-cloud-vs-kubernetes</a></p>
<p>has anyone tried having the image for an init container in a private repo when using the imagePullSecret to access the registry? Below is a sample for a private registry with image pull secrets. </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: privatereg:5000/private-image-name imagePullSecrets: - name: regsecret </code></pre> <p>When you define the init containers, the secrets cannot be included with the init-container since its not part of the container spec.</p> <p>So will it still work using init containers? I see in the example <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/</a> that they have a volume created in the spec and its available in the init. </p>
<p>You can add it via annotations:</p> <pre><code>annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "install", "image": "my-init-container", "imagePullSecrets": "something" } ]' </code></pre>
<p>I would like to access to OpenShift and Kubernetes API from inside a pod to query and modify objects in the application the pod belongs to.</p> <p>In the documentation (<a href="https://docs.openshift.org/latest/dev_guide/service_accounts.html" rel="noreferrer">https://docs.openshift.org/latest/dev_guide/service_accounts.html</a>) I found this description on how to access the api:</p> <pre><code>$ TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \ "https://openshift.default.svc.cluster.local/oapi/v1/users/~" \ -H "Authorization: Bearer $TOKEN" </code></pre> <p>The problem is when I for example want to access a pod, I need to know the namespace I'm in:</p> <p><code>https://openshift.default.svc.cluster.local/oapi/v1/namespaces/${namespace}/pods</code></p> <p>The only way I found so far is to submit the namespace as an environment variable, but I would like to not requiring the user to enter that information.</p>
<p>At least in kubernetes 1.5.3 I can also see the namespace in <code>/var/run/secrets/kubernetes.io/serviceaccount/namespace</code>. </p>
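<p>Building on the curl example from the question, a sketch that reads both the token and the namespace from the service account mount could look like this (note that core Kubernetes resources such as pods live under <code>/api/v1</code>, while <code>/oapi/v1</code> is for OpenShift-specific types):</p> <pre><code>NAMESPACE="$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)"
TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     "https://openshift.default.svc.cluster.local/api/v1/namespaces/${NAMESPACE}/pods"
</code></pre>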
<p>I am running minikube/Kubernetes and am having difficulty accessing a volume from a volumeMount in a deployment. </p> <p>I can confirm that when the microservice starts up, it is not able to access the /config directory (ie. the "mountPath" in the "volumeMounts"). I have verified that the hostPath/path is valid.</p> <p>I have experimented with a number of techniques and have also validated that the deployment files is correct. I have also tried using quotes/double-quotes/no-quotes around the path specifications, but this does not address the issue.</p> <p>Note that I am using a "hostPath" for simple testing purposes, however, this is the scenario that I nevertheless need to address.</p> <p>My minikube configuration is illustrated below:</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T07:30:54Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I am running minikube on MacOS/Sierra version 10.12.3 (16D32).</p> <p>My deployment file (deployment.yaml):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: atmp1000-deployment spec: replicas: 1 template: metadata: labels: app: atmp1000 spec: containers: - name: atmp1000 image: atmp1000 ports: - containerPort: 7010 volumeMounts: - name: atmp1000-volume mountPath: '/config' volumes: - name: atmp1000-volume hostPath: path: '/Users/&lt;username&gt;/&lt;some-path&gt;/config' </code></pre> <p>Any help is appreciated.</p>
<p>In the interest of completeness, below is the solution that I found... I got the hostPath and mounts working on minikube (on Mac). It took a few steps and required several "minikube delete" commands to get the most current version and reset the environment. Below are some additional notes about how to get this functioning:</p> <ul> <li><p>I had to use the xhyve driver to make it all work properly -- it probably works with other drivers too, but I did not try them.</p></li> <li><p>I found that minikube mounts host paths under "/Users", which means the "volumes/hostPath/path" should start with "/Users".</p></li> <li><p>I found a variety of ways that worked, including using claims, but the files in the original question now reflect a correct and simple configuration. </p></li> </ul>
<p>I've been trying to start kubernetes-dashboard (and eventualy other services) on a NodePort outside the default port range with little success, here is my setup: Cloud provider: Azure (Not azure container service) OS: CentOS 7</p> <p>here is what I have tried:</p> <h1>Update the host</h1> <pre><code>$ yum update </code></pre> <h1>Install kubeadm</h1> <pre><code>$ cat &lt;&lt;EOF &gt; /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF $ setenforce 0 $ yum install -y docker kubelet kubeadm kubectl kubernetes-cni $ systemctl enable docker &amp;&amp; systemctl start docker $ systemctl enable kubelet &amp;&amp; systemctl start kubelet </code></pre> <h1>Start the cluster with kubeadm</h1> <pre><code>$ kubeadm init </code></pre> <h1>Allow runing containers on master node, because we have a single node cluster</h1> <pre><code>$ kubectl taint nodes --all dedicated- </code></pre> <h1>Install a pod network</h1> <pre><code>$ kubectl apply -f https://git.io/weave-kube </code></pre> <h1>Our kubernetes-dashboard Deployment (@ ~/kubernetes-dashboard.yaml</h1> <pre><code># Copyright 2015 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Configuration to deploy release version of the Dashboard UI. # # Example usage: kubectl create -f &lt;this_file&gt; kind: Deployment apiVersion: extensions/v1beta1 metadata: labels: app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system spec: replicas: 1 selector: matchLabels: app: kubernetes-dashboard template: metadata: labels: app: kubernetes-dashboard # Comment the following annotation if Dashboard must not be deployed on master annotations: scheduler.alpha.kubernetes.io/tolerations: | [ { "key": "dedicated", "operator": "Equal", "value": "master", "effect": "NoSchedule" } ] spec: containers: - name: kubernetes-dashboard image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1 imagePullPolicy: Always ports: - containerPort: 9090 protocol: TCP args: # Uncomment the following line to manually specify Kubernetes API server Host # If not specified, Dashboard will attempt to auto discover the API server and connect # to it. Uncomment only if the default does not work. 
# - --apiserver-host=http://my-address:port livenessProbe: httpGet: path: / port: 9090 initialDelaySeconds: 30 timeoutSeconds: 30 --- kind: Service apiVersion: v1 metadata: labels: app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system spec: type: NodePort ports: - port: 8880 targetPort: 9090 nodePort: 8880 selector: app: kubernetes-dashboard </code></pre> <h1>Create our Deployment</h1> <pre><code>$ kubectl create -f ~/kubernetes-dashboard.yaml deployment "kubernetes-dashboard" created The Service "kubernetes-dashboard" is invalid: spec.ports[0].nodePort: Invalid value: 8880: provided port is not in the valid range. The range of valid ports is 30000-32767 </code></pre> <p>I found out that to change the range of valid ports I could set service-node-port-range option on kube-apiserver to allow a different port range, so I tried this:</p> <pre><code>$ kubectl get po --namespace=kube-system NAME READY STATUS RESTARTS AGE dummy-2088944543-lr2zb 1/1 Running 0 31m etcd-test2-highr 1/1 Running 0 31m kube-apiserver-test2-highr 1/1 Running 0 31m kube-controller-manager-test2-highr 1/1 Running 2 31m kube-discovery-1769846148-wmbhb 1/1 Running 0 31m kube-dns-2924299975-8vwjm 4/4 Running 0 31m kube-proxy-0ls9c 1/1 Running 0 31m kube-scheduler-test2-highr 1/1 Running 2 31m kubernetes-dashboard-3203831700-qrvdn 1/1 Running 0 22s weave-net-m9rxh 2/2 Running 0 31m </code></pre> <p>Add "--service-node-port-range=8880-8880" to kube-apiserver-test2-highr</p> <pre><code>$ kubectl edit po kube-apiserver-test2-highr --namespace=kube-system { "kind": "Pod", "apiVersion": "v1", "metadata": { "name": "kube-apiserver", "namespace": "kube-system", "creationTimestamp": null, "labels": { "component": "kube-apiserver", "tier": "control-plane" } }, "spec": { "volumes": [ { "name": "k8s", "hostPath": { "path": "/etc/kubernetes" } }, { "name": "certs", "hostPath": { "path": "/etc/ssl/certs" } }, { "name": "pki", "hostPath": { "path": "/etc/pki" } } ], "containers": [ { "name": "kube-apiserver", "image": "gcr.io/google_containers/kube-apiserver-amd64:v1.5.3", "command": [ "kube-apiserver", "--insecure-bind-address=127.0.0.1", "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota", "--service-cluster-ip-range=10.96.0.0/12", "--service-node-port-range=8880-8880", "--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem", "--client-ca-file=/etc/kubernetes/pki/ca.pem", "--tls-cert-file=/etc/kubernetes/pki/apiserver.pem", "--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem", "--token-auth-file=/etc/kubernetes/pki/tokens.csv", "--secure-port=6443", "--allow-privileged", "--advertise-address=100.112.226.5", "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname", "--anonymous-auth=false", "--etcd-servers=http://127.0.0.1:2379" ], "resources": { "requests": { "cpu": "250m" } }, "volumeMounts": [ { "name": "k8s", "readOnly": true, "mountPath": "/etc/kubernetes/" }, { "name": "certs", "mountPath": "/etc/ssl/certs" }, { "name": "pki", "mountPath": "/etc/pki" } ], "livenessProbe": { "httpGet": { "path": "/healthz", "port": 8080, "host": "127.0.0.1" }, "initialDelaySeconds": 15, "timeoutSeconds": 15, "failureThreshold": 8 } } ], "hostNetwork": true }, "status": {} $ :wq </code></pre> <p>The following is the truncated response</p> <pre><code># pods "kube-apiserver-test2-highr" was not valid: # * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds` 
</code></pre> <p>So I tried a different approach, I edited the deployment file for kube-apiserver with the same change described above and ran the following:</p> <pre><code>$ kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.json --namespace=kube-system </code></pre> <p>And got this response:</p> <pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>So now i'm stuck, how can I change the range of valid ports?</p>
<p>You are specifying <code>--service-node-port-range=8880-8880</code> wrong: you set it to one port only. Set it to a range (see the example below).</p> <p>Second problem: you are setting the service to use 9090, and that is not in the range either.</p> <pre><code> ports:
  - port: 80
    targetPort: 9090
    nodePort: 9090
</code></pre> <p>The API server has a manifest too (the static pod definition under <code>/etc/kubernetes/manifests</code>). Try editing the port range in that manifest itself and delete the API server pod so it gets recreated with the new config.</p>
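<p>For illustration, a range flag could look like this (the exact range is an assumption - pick one that contains every NodePort you plan to use):</p> <pre><code>--service-node-port-range=8000-9999
</code></pre>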
<p>This question pertains to the Kubernetes tutorial on Google's CodeLabs found here: <a href="https://codelabs.developers.google.com/codelabs/cloud-compute-kubernetes/index.html?index=..%2F..%2Fgcp-next#15" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/cloud-compute-kubernetes/index.html?index=..%2F..%2Fgcp-next#15</a></p> <p>I'm new to both Docker and Kubernetes and am confused over their use of the term &quot;pods&quot; which seems to contradict itself.</p> <p>From that tutorial:</p> <blockquote> <p>A Kubernetes pod is a group of containers, tied together for the purposes of administration and networking. It can contain one or more containers. All containers within a single pod will share the same networking interface, IP address, disk, etc. All containers within the same pod instance will live and die together. It's especially useful when you have, for example, a container that runs the application, and another container that periodically polls logs/metrics from the application container.</p> </blockquote> <p>That is in-line with my understanding of how Kubernetes pods relate to containers, however they then go on to say:</p> <blockquote> <p>Optional interlude: Look at your pod running in a Docker container on the VM</p> <p>If you ssh to that machine (find the node the pod is running on by using kubectl describe pod | grep Node), you can then ssh into the machine with gcloud compute ssh . Finally, run sudo docker ps to see the actual pod</p> </blockquote> <p>My problems with the above quote:</p> <ol> <li>. &quot;Look at your pod running in a Docker container&quot; appears to be backwards. Shouldn't it say &quot;Look at your Docker container running on the VM&quot;?</li> <li>&quot;...run sudo docker ps to see the actual <strong>pod</strong>&quot; doesn't make sense, since &quot;docker ps&quot; lists docker containers, not pods.</li> </ol> <p>So am I way off base here or is the tutorial incorrect?</p>
<p>As mentioned above, a pod can run more than one container, but to keep it simple: running more than one container in a pod is an exception and definitely not the common use. You may look at a pod as a container++; that's the easy way to look at it. </p> <p>If you are starting with Kubernetes, I have written the blog post below that explains the 3 main entities you need to be familiar with to get started: pods, deployments and services.</p> <p>Here it is: <a href="http://codefresh.io/blog/kubernetes-snowboarding-everything-intro-kubernetes/" rel="noreferrer">http://codefresh.io/blog/kubernetes-snowboarding-everything-intro-kubernetes/</a></p> <p>Feedback welcome!</p>
<p>I set up a single-node Kubernetes cluster, using kubeadm, on Ubuntu 16.04 LTS with flannel.</p> <p>Most of the time everything works well, but every couple of days, the cluster gets into a state where it can't schedule new pods - the pods are stuck in "Pending" state and When I <code>kubectl describe pod</code> of those pods, I error messages like these:</p> <pre><code>Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 2m 2m 1 {default-scheduler } Normal Scheduled Successfully assigned dex-1939802596-zt1r3 to superserver-03 1m 2s 21 {kubelet superserver-03} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "somepod-1939802596-zt1r3_somenamespace" with SetupNetworkError: "Failed to setup network for pod \"somepod-1939802596-zt1r3_somenamespace(167f8345-faeb-11e6-94f3-0cc47a9a5cf2)\" using network plugins \"cni\": no IP addresses available in network: cbr0; Skipping pod" </code></pre> <p>I've found this <a href="https://stackoverflow.com/q/41359224/1563935">stackoverflow question</a> and the workaround he's suggested. It does help to recover (it takes a several minutes though), but the problem comes back after a while...</p> <p>I've also encountered this <a href="https://github.com/kubernetes/kubernetes/issues/39557" rel="nofollow noreferrer">open issue</a>, and also got the issue recovered using the suggested workaround, but again, the problem comes back. Also, it's not exactly my case, and the issue was closed after just finding a workaround... :\</p> <p>Technical details:</p> <pre><code>kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"} Kubernetes Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Started the cluster with these commands:</p> <pre><code>kubeadm init --pod-network-cidr 10.244.0.0/16 --api-advertise-addresses 192.168.1.200 kubectl taint nodes --all dedicated- kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml </code></pre> <p>Some syslog logs that may be relevant (I got many of those):</p> <pre><code>Feb 23 11:07:49 server-03 kernel: [ 155.480669] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Feb 23 11:07:49 server-03 dockerd[1414]: time="2017-02-23T11:07:49.735590817+02:00" level=warning msg="Couldn't run auplink before unmount /var/lib/docker/aufs/mnt/89bb7abdb946d858e175d80d6e1d2fdce0262af8c7afa9c6ad9d776f1f5028c4-init: exec: \"auplink\": executable file not found in $PATH" Feb 23 11:07:49 server-03 kernel: [ 155.496599] aufs au_opts_verify:1597:dockerd[24704]: dirperm1 breaks the protection by the permission bits on the lower branch Feb 23 11:07:49 server-03 systemd-udevd[29313]: Could not generate persistent MAC address for vethd4d85eac: No such file or directory Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.756976 1228 cni.go:255] Error adding network: no IP addresses available in network: cbr0 Feb 23 11:07:49 server-03 kernel: [ 155.514994] IPv6: eth0: IPv6 duplicate address fe80::835:deff:fe4f:c74d detected! 
Feb 23 11:07:49 server-03 kernel: [ 155.515380] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 23 11:07:49 server-03 kernel: [ 155.515588] device vethd4d85eac entered promiscuous mode Feb 23 11:07:49 server-03 kernel: [ 155.515643] cni0: port 34(vethd4d85eac) entered forwarding state Feb 23 11:07:49 server-03 kernel: [ 155.515663] cni0: port 34(vethd4d85eac) entered forwarding state Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.757001 1228 cni.go:209] Error while adding to cni network: no IP addresses available in network: cbr0 Feb 23 11:07:49 server-03 kubelet[1228]: E0223 11:07:49.757056 1228 docker_manager.go:2201] Failed to setup network for pod "somepod-752955044-58g59_somenamespace(5d6c28e1-f8dd-11e6-9843-0cc47a9a5cf2)" using network plugins "cni": no IP addresses available in network: cbr0; Skipping pod </code></pre> <p>Many thanks!</p> <p><strong>Edit:</strong> </p> <p>I am able to reproduce it. It seems like it is an exhaust of the IP addresses in the kubelet CIDR. Findings:</p> <ul> <li><p>First, the podCIDR of the node is (got it through <code>kubectl get node -o yaml</code>): <code>podCIDR: 10.244.0.0/24</code> (BTW, why not /16 as the cluster CIDR I've set in the kubeadm commnad?).</p></li> <li><p>Second:</p> <p><code>$ sudo ls -la /var/lib/cni/networks/cbr0 | wc -l</code></p> <p><code>256</code> (that is, 256 IPs are assigned, right?)</p></li> <li><p>But, that happens although I currently have no more than 256 running Kubernetes pods and services:</p> <p><code>$ kubectl get all --all-namespaces | wc -l</code></p> <p><code>180</code></p> <p><code>### (Yes, this includes not only pods and services, but also jobs, deployments and replicasets)</code></p></li> </ul> <p>So, home comes the IP addresses are exhausted? How to fix that? It can't be that those workarounds are the only ways... </p> <p>Thanks again.</p> <p><strong>Edit (2)</strong></p> <p>Another related issue: <a href="https://github.com/containernetworking/cni/issues/306" rel="nofollow noreferrer">https://github.com/containernetworking/cni/issues/306</a></p>
<p>For now, this is the best workaround I've found:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/34278#issuecomment-254686727" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/34278#issuecomment-254686727</a></p> <p>I've set up a cron job to run this script on @reboot.</p> <p>It seems like that issue has been resolved with a <a href="https://github.com/kubernetes/kubernetes/pull/35816" rel="nofollow noreferrer">temp fix</a> of Garbage Collecting the pods on an event of Docker daemon restart, but that feature was probably not enabled in my cluster.</p> <p>A few days ago, the new better <a href="https://github.com/kubernetes/kubernetes/pull/37036" rel="nofollow noreferrer">long-term fix</a> was just merged, so I hope this issue will be fixed in the next Kubernetes 1.6.0 release.</p>
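<p>For reference, the cron entry is just a standard <code>@reboot</code> job (the script path here is an assumption - point it at wherever you saved the workaround script from the linked issue):</p> <pre><code>@reboot /usr/local/bin/cni-ip-cleanup.sh &gt;&gt; /var/log/cni-ip-cleanup.log 2&gt;&amp;1
</code></pre>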
<p>Minikube is supposed to make it simple to run Kubernetes locally, not only for "getting started" but also for "day-to-day development workflows".</p> <p>source : <a href="https://github.com/kubernetes/minikube/blob/master/ROADMAP.md#goals" rel="noreferrer">https://github.com/kubernetes/minikube/blob/master/ROADMAP.md#goals</a></p> <p>But I can also read that : "PersistentVolumes are mapped to a directory inside the minikube VM. The Minikube VM boots into a tmpfs, so most directories will not be persisted across reboots (minikube stop)"</p> <p>source : <a href="https://kubernetes.io/docs/getting-started-guides/minikube/#persistent-volumes" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/minikube/#persistent-volumes</a></p> <p>So what if my development work needs persistent storage (MySQL database, MongoDB database, ...)? Do I need to throw away my Minikube and install the full Kubernetes directly?</p>
<p>This is covered in <a href="https://kubernetes.io/docs/getting-started-guides/minikube/#persistent-volumes" rel="noreferrer">the documentation</a>. The relevant section starts right after the sentence that you've already quoted:</p> <blockquote> <p>However, Minikube is configured to persist files stored under the following host directories:</p> <ul> <li>/data</li> <li>/var/lib/localkube</li> <li>/var/lib/docker</li> </ul> <p>Here is an example PersistentVolume config to persist data in the ‘/data’ directory:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: accessModes: - ReadWriteOnce capacity: storage: 5Gi hostPath: path: /data/pv0001/ </code></pre> </blockquote> <p>Simply declare <code>hostPath</code> volumes that are mapped to any directory in <code>/data</code> on the host, and these should persist across reboots.</p>
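<p>To build on that, a pod (or a claim bound to such a volume) can then consume it. Below is a minimal sketch using the example PV above, with a MySQL pod as the question mentions; the names, password and sizes are placeholders, not part of the original documentation:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc0001
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "changeme"        # placeholder only
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # data ends up under /data/pv0001 on the Minikube VM
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc0001
</code></pre> <p>Because the PV's <code>hostPath</code> lives under <code>/data</code>, the database files survive a <code>minikube stop</code>/<code>start</code> cycle.</p>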
<p>Lets say I have 3 containers X,Y and Z. I want container Y to start only after container X is running, and container Z to start only after Y is running.</p> <p>How do I do this?</p>
<p>You can use a <a href="https://docs.docker.com/compose/overview" rel="nofollow noreferrer"><code>docker-compose</code></a> file with the <a href="https://docs.docker.com/compose/compose-file/#dependson" rel="nofollow noreferrer"><code>depends_on</code></a> option to control the container start order.</p> <p>Here is what the <code>docker-compose.yml</code> for your example would look like:</p> <pre><code>version: '3' services: X: image: X_image Y: image: Y_image depends_on: - X Z: image: Z_image depends_on: - Y </code></pre> <blockquote> <p><strong>Note</strong>: <code>depends_on</code> will not wait for X to be “ready” before starting Y - only until it's running. If you need to wait for a service to be ready, see <a href="https://docs.docker.com/compose/startup-order" rel="nofollow noreferrer">Controlling startup order</a> for more on this problem and strategies for solving it.</p> </blockquote>
<p>I have a Kubernetes 1.5.3 cluster brought up on two nodes using kubeadm (details below). I've been trying to start up <a href="https://coreos.com/blog/the-prometheus-operator.html" rel="nofollow noreferrer">prometheus-operator</a> tonight. After I deleted a resource of kind=Prometheus, my cluster got in to a very strange state, where when I attempted to recreate the prometheus resource, no pod was ever created, though the TPR exists. Investigation lead me to:</p> <p><code> rrix@hypervisor01:~$ kubectl logs -n kube-system kube-apiserver-hypervisor01 | tail -10 W0226 03:57:34.641244 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:57:34.645073 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:58:04.642150 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:58:04.647953 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:58:34.642118 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:58:34.646427 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:59:04.642189 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:59:04.647978 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:59:34.646129 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist W0226 03:59:34.666355 1 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist </code></p> <p>The workaround in this <a href="https://github.com/rancher/rancher/issues/7626" rel="nofollow noreferrer">rancher github issue</a> is to simply restart the apiserver, which would be fine, except this didn't resolve the issue. Otherwise, it points to high load on the etcd, which as you can see below, is hosted as a pod, on the host which currently has a load of less than 1. etcdctl commands run successfully, leading me to believe that the etcd itself is fine. For good measure I bumped the etcd docker container, and it still works fine. From here, however, I am lost and need some assistance getting my cluster back in working order. 
</p> <p>Cluster overview:</p> <p><code> rrix@hypervisor01:~$ kubectl get no NAME STATUS AGE hypervisor01 Ready,master 2d kubes01.pss9.kickass.systems Ready 2d rrix@hypervisor01:~$ kubectl get -n kube-system po NAME READY STATUS RESTARTS AGE dummy-2088944543-2p3zf 1/1 Running 1 2d etcd-hypervisor01 1/1 Running 2 2d kube-apiserver-hypervisor01 1/1 Running 3 2d kube-controller-manager-hypervisor01 1/1 Running 1 2d kube-discovery-1769846148-v8h50 1/1 Running 1 2d kube-dns-2924299975-3s26d 4/4 Running 4 2d kube-proxy-vpw73 1/1 Running 1 2d kube-proxy-zfh13 1/1 Running 0 2d kube-registry-proxy-6hhk9 1/1 Running 1 2d kube-registry-proxy-nl1s1 1/1 Running 0 2d kube-registry-v0-4d94t 1/1 Running 0 2d kube-scheduler-hypervisor01 1/1 Running 1 2d rrix@hypervisor01:~$ kubectl get po NAME READY STATUS RESTARTS AGE kube-flannel-ds-589fw 2/2 Running 3 2d kube-flannel-ds-7f5sx 2/2 Running 0 2d kube-state-metrics-3229993571-20nbc 1/1 Running 0 1h node-exporter-1cdlj 1/1 Running 0 1h node-exporter-jc54s 1/1 Running 0 1h prometheus-operator-996254120-0wzg6 1/1 Running 0 2h </code></p>
<p><code> E0226 16:20:47.861762 1 pet_set.go:272] Error syncing StatefulSet default/prometheus-prometheus-k8s, requeuing: Failed to create prometheus-prometheus-k8s-db-prometheus-prometheus-k8s-0: PersistentVolumeClaim "prometheus-prometheus-k8s-db-prometheus-prometheus-k8s-0" is invalid: spec.resources[storage]: Required value I0226 16:20:47.862793 1 event.go:217] Event(api.ObjectReference{Kind:"StatefulSet", Namespace:"default", Name:"prometheus-prometheus-k8s", UID:"272d42fd-fbd1-11e6-9ae6-a0481cb808c8", APIVersion:"apps", ResourceVersion:"304212", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' pvc: prometheus-prometheus-k8s-db-prometheus-prometheus-k8s-0, error: PersistentVolumeClaim "prometheus-prometheus-k8s-db-prometheus-prometheus-k8s-0" is invalid: spec.resources[storage]: Required value E0226 16:20:47.865352 1 pet_set.go:272] Error syncing StatefulSet default/prometheus-prometheus-services, requeuing: Failed to create prometheus-prometheus-services-db-prometheus-prometheus-services-0: PersistentVolumeClaim "prometheus-prometheus-services-db-prometheus-prometheus-services-0" is invalid: spec.resources[storage]: Required value I0226 16:20:47.865472 1 event.go:217] Event(api.ObjectReference{Kind:"StatefulSet", Namespace:"default", Name:"prometheus-prometheus-services", UID:"1899b833-fbd2-11e6-9ae6-a0481cb808c8", APIVersion:"apps", ResourceVersion:"304733", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' pvc: prometheus-prometheus-services-db-prometheus-prometheus-services-0, error: PersistentVolumeClaim "prometheus-prometheus-services-db-prometheus-prometheus-services-0" is invalid: spec.resources[storage]: Required value E0226 16:21:17.854692 1 pet_set.go:272] Error syncing StatefulSet default/prometheus-prometheus-k8s, requeuing: Failed to create prometheus-prometheus-k8s-db-prometheus-prometheus-k8s-0: PersistentVolumeClaim "prometheus-prometheus-k8s-db-prometheus-prometheus-k8s-0" is invalid: spec.resources[storage]: Required value I0226 16:21:17.855043 1 event.go:217] Event(api.ObjectReference{Kind:"StatefulSet", Namespace:"default", Name:"prometheus-prometheus-k8s", UID:"272d42fd-fbd1-11e6-9ae6-a0481cb808c8", APIVersion:"apps", ResourceVersion:"304212", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' pvc: prometheus-prometheus-k8s-db-prometheus-prometheus-k8s-0, error: PersistentVolumeClaim "prometheus-prometheus-k8s-db-prometheus-prometheus-k8s-0" is invalid: spec.resources[storage]: Required value E0226 16:21:17.858436 1 pet_set.go:272] Error syncing StatefulSet default/prometheus-prometheus-services, requeuing: Failed to create prometheus-prometheus-services-db-prometheus-prometheus-services-0: PersistentVolumeClaim "prometheus-prometheus-services-db-prometheus-prometheus-services-0" is invalid: spec.resources[storage]: Required value I0226 16:21:17.858561 1 event.go:217] Event(api.ObjectReference{Kind:"StatefulSet", Namespace:"default", Name:"prometheus-prometheus-services", UID:"1899b833-fbd2-11e6-9ae6-a0481cb808c8", APIVersion:"apps", ResourceVersion:"304733", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' pvc: prometheus-prometheus-services-db-prometheus-prometheus-services-0, error: PersistentVolumeClaim "prometheus-prometheus-services-db-prometheus-prometheus-services-0" is invalid: spec.resources[storage]: Required value </code></p> <p>User error <em>facepalm</em>, didn't think to look in the manager logs.</p>
<p>Is there any way to filter on source IP on the Kubernetes ingress in GCE? I have tried the ingress.kubernetes.io/whitelist-source-range: annotation, but it doesn't seem to be working in GCE.</p>
<p>I guess you are trying to use this feature with the GCE native controller instead of NGINX? This works with NGINX only at the moment.</p> <p>NGINX Controller configuration: <a href="https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md</a></p> <p>Example ingress configuration:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: whitelist annotations: ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24" spec: rules: - host: whitelist.test.net http: paths: - path: / backend: serviceName: webserver servicePort: 80 </code></pre>
<p>My Kubernetes cluster setup has n-tier web application running in dev and test environments on AWS. For the production environment, postgres RDS was chosen, to ensure periodic backup. While creating a postgres RDS instance, kubernetes-vpc was selected for db-subnet to keep networking stuff simple during pilot run. Also, security group selected is the same as kubernetes-minions.</p> <p>Following is the service and endpoint yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: name: pgsql-rds name: pgsql-rds spec: ports: - port: 5432 protocol: TCP targetPort: 5432 </code></pre> <p>--</p> <pre><code>apiVersion: v1 kind: Endpoints metadata: name: pgsql-rds subsets: - addresses: - ip: 52.123.44.55 ports: - port: 5432 name: pgsql-rds protocol: TCP </code></pre> <p>When web-app service and deployment is created, it's unable to connect to RDS instance. The log is as follows:</p> <p><em>java.sql.SQLException: Error in allocating a connection. Cause: Connection could not be allocated because: Connection to pgsql-rds:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.</em></p> <p>What am I missing? any pointers to resolve the issue appreciated.</p>
<p>This has to do with DNS resolving. When you use the RDS dns name INSIDE the same VPC it will be resolved to a private ip. When you use the same dns name on the internet or another VPC you will get the public ip of the RDS instance.</p> <p>This is a problem because from another VPC you can not make use of the load balancing feature unless you expose the RDS instance to the public internet.</p>
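<p>One way to illustrate this: rather than pinning the resolved IP in an <code>Endpoints</code> object, you could let Kubernetes hand out the RDS DNS name itself, e.g. with an <code>ExternalName</code> service. This is only a sketch; the RDS endpoint hostname below is a placeholder for your actual instance endpoint:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: pgsql-rds
spec:
  type: ExternalName
  # Placeholder - use the endpoint shown in the RDS console
  externalName: mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com
</code></pre> <p>Pods keep connecting to <code>pgsql-rds:5432</code>; inside the same VPC the CNAME resolves to the instance's private IP, so the load-balancing/failover handling of the RDS endpoint is preserved.</p>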
<p>Lets say I have 3 containers X,Y and Z. I want container Y to start only after container X is running, and container Z to start only after Y is running.</p> <p>How do I do this?</p>
<p>You should consider redesigning your application if its parts have such heavy dependencies on each other: either they should not be split into different containers, or you should introduce some decoupling to avoid a strict startup order.</p> <p>What you could do is:</p> <ol> <li>Have one or more <a href="https://kubernetes.io/docs/concepts/abstractions/init-containers/#init-containers-in-use" rel="nofollow noreferrer">init containers</a> (see the sketch after this list)</li> <li>Have an <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer">entrypoint script</a> within each of the containers which blocks until the dependency is fulfilled</li> <li>Implement something outside the cluster which controls which pods are started.</li> <li>Write a <a href="https://www.youtube.com/watch?v=66NzGKJpyyg" rel="nofollow noreferrer">custom controller</a></li> </ol>
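<p>For illustration, a minimal sketch of option 1: an init container in pod Y that blocks until a hypothetical service in front of container X answers. The service name <code>x-service</code> and port 8080 are assumptions, not something given in the question:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: y-pod
spec:
  initContainers:
  # Blocks pod startup until the (assumed) service for X is reachable
  - name: wait-for-x
    image: busybox
    command: ['sh', '-c', 'until nc -z x-service 8080; do echo waiting for X; sleep 2; done']
  containers:
  - name: y
    image: Y_image
</code></pre> <p>On clusters older than 1.6 the same thing is declared through the <code>pod.beta.kubernetes.io/init-containers</code> annotation instead of <code>spec.initContainers</code>.</p>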
<p>I'm trying to configure kubernetes and in my project I've separeted UI and API. I created one Pod and I exposed both as services.</p> <p>How can I set API_URL inside pod.yaml configuration in order to send requests from user's browser?</p> <p>I can't use localhost because the communication isn't between containers.</p> <p>pod.yaml</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: project labels: name: project spec: containers: - image: 'ui:v1' name: ui ports: - name: ui containerPort: 5003 hostPort: 5003 env: - name: API_URL value: &lt;how can I set the API address here?&gt; - image: 'api:v1' name: api ports: - name: api containerPort: 5000 hostPort: 5000 env: - name: DATABASE_URL valueFrom: secretKeyRef: name: postgres-url key: url </code></pre> <p>services.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: api labels: name: api spec: type: NodePort ports: - name: 'http' protocol: 'TCP' port: 5000 targetPort: 5000 nodePort: 30001 selector: name: project --- apiVersion: v1 kind: Service metadata: name: ui labels: name: ui spec: type: NodePort ports: - name: 'http' protocol: 'TCP' port: 80 targetPort: 5003 nodePort: 30003 selector: name: project </code></pre>
<p>The service IP is already available in an environment variable inside the pod, because Kubernetes initializes a set of environment variables for each service that exists at that moment.</p> <p>To list all the environment variables of a pod:</p> <pre><code>kubectl exec &lt;pod-name&gt; env </code></pre> <p>If the pod was created before the service you must delete it and create it again.</p> <p>Since you named your service <code>api</code>, one of the variables the command above should list is <code>API_SERVICE_HOST</code>.</p> <p>But you don't really need to look up the service IP address in environment variables. You can simply use the service name as the hostname. Any pod can connect to the service <code>api</code> simply by calling <code>api.default.svc.cluster.local</code> (assuming your service is in the <code>default</code> namespace).</p>
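<p>As a concrete sketch of that last point, the <code>API_URL</code> from the question could simply carry the service DNS name (port 5000 is taken from the question's service definition):</p> <pre><code>env:
- name: API_URL
  # DNS name of the "api" service in the "default" namespace
  value: "http://api.default.svc.cluster.local:5000"
</code></pre> <p>Keep in mind that this name only resolves inside the cluster; if the requests are issued by the user's browser rather than by the UI container itself, the browser has to use an externally reachable address instead, e.g. the NodePort from the question.</p>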
<p>I've been trying to run a deployment using a private init container image with little success, I always get this error:</p> <pre><code>Failed to pull image "private/app": Error: image private/app:latest not found Error syncing pod, skipping: failed to "StartContainer" for "app" with ErrImagePull: "Error: image private/app:latest not found" </code></pre> <p>Here is my deployment:</p> <pre><code>"kind": "Deployment" "apiVersion": "extensions/v1beta1" "metadata": "name": "tomcat" "creationTimestamp": null "spec": "replicas": 1 "template": "metadata": "creationTimestamp": null "labels": "service": "tomcat" "annotations": "pod.beta.kubernetes.io/init-containers": '[ { "name": "app", "image": "private/app", "imagePullPolicy": "IfNotPresent" } ]' "spec": "containers": - "name": "tomcat" "image": "private/tomcat" "ports": - "containerPort": 8080 "protocol": "TCP" "imagePullSecrets": - "name": "my-secret" "restartPolicy": "Always" "strategy": {} "status": {} </code></pre> <p>I also tried it with the change suggested here <a href="https://stackoverflow.com/questions/42422892/kubernetes-init-containers-using-a-private-repo">kubernetes init containers using a private repo</a>:</p> <pre><code>"pod.beta.kubernetes.io/init-containers": '[ { "name": "app", "image": "private/app", "imagePullPolicy": "IfNotPresent", "imagePullSecrets": [ { "name": "my-secret" } ] } ]' </code></pre> <p>But still nothing...</p> <p>Note that I have tested this deployment without the init container and the image pulling was successful.</p> <p>Also note that this is a simplified version of my actual configuration, in the real configuration there is volume mounting for both containers and some env variables.</p> <p>How do I configure "imagePullSecrets" for an init-container?</p> <p><strong>Edit:</strong> I was asking around in the kubernetes slack channel and it seems I forgot to give permissions to the cluster docker user (CI docker user if you would) permissions to this hub repository, once I did that the "imagePullPolicy" on the init container was redundant, the one on the "template" > "spec" was enough.</p> <p>Thanks @koki, wherever you might be.</p>
<p>You should use a <code>secret</code> object.</p> <p>Something like this:</p> <pre><code> kubectl create secret docker-registry myregistry \ --docker-server=https://example.io \ --docker-username=foo \ --docker-password=boosecret \ --docker-email=foo@example.io </code></pre> <p>And use it in another object like this:</p> <pre><code> imagePullSecrets: - name: myregistry </code></pre>
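<p>To show where that fragment goes: <code>imagePullSecrets</code> is a pod-level field, so it also applies to the init containers defined for the pod. A minimal sketch, reusing the image name from the question:</p> <pre><code>spec:
  imagePullSecrets:
  - name: myregistry
  containers:
  - name: tomcat
    image: private/tomcat
</code></pre>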
<p>Im trying to create a replication Controller based on an image that I created locally. But when I try to create the rc it gives error <code>ImagePullBackOff</code>. I have created a cluster locally using <code>minikube</code></p> <p>Here is my <code>.yaml</code> file:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: example spec: replicas: 1 selector: app: ayonAppserver template: metadata: name: example.com labels: app: ayonAppserver spec: containers: - name: something image: nktest:10 resources: limits: cpu: 500m memory: 1024Mi </code></pre> <p>Command that I run to create the rc:</p> <pre><code>kubectl create -f &lt;file&gt; </code></pre> <p>When Im running <code>docker images</code> I see the image in the list</p> <pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE nktest 10 e60b3c9c3bc6 10 hours ago 425 MB </code></pre> <p>when I run <code>kubectl get pods</code></p> <pre><code>NAME READY STATUS RESTARTS AGE example-gr9v2 0/1 ImagePullBackOff 0 2m </code></pre> <p>I have tried to run the docker image locally, and it runs fine</p> <pre><code>docker run -d --name="testAyonApp1" nktest:10 </code></pre> <p>Can anyone help to solve this?</p>
<p>So thanks to @BMW for helping me with the issue. The problem was that I assumed that, since I created the cluster using <code>minikube</code> (locally), every image I create on my local machine would be visible to the minikube cluster. But an image is only visible when it's present inside the node. That's why, every time I deployed it, it was trying to download the image.</p> <p>I have now created a Docker Hub account and pushed the image to the hub. And now things are working just fine.</p>
<p>my web application is running as a Kubernetes pod behind an nginx reverse proxy for SSL. Both the proxy and my application use Kubernetes services for load balancing (as described <a href="http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html">here</a>).</p> <p>The problem is that all of my HTTP request logs only show the internal cluster IP addresses instead of the addresses of the actual HTTP clients. Is there a way to make Kubernetes services pass this information to my app servers?</p>
<p>As of 1.5, if you are running in GCE (by extension GKE) or AWS, you simply need to add an annotation to your Service to make HTTP source preservation work.</p> <pre><code>... kind: Service metadata: annotations: service.beta.kubernetes.io/external-traffic: OnlyLocal ... </code></pre> <p>It basically exposes the service directly via nodeports instead of providing a proxy--by exposing a health probe on each node, the load balancer can determine which nodes to route traffic to.</p> <p>In 1.7, this config has become GA, so you can set <code>"externalTrafficPolicy": "Local"</code> on your Service spec.</p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="noreferrer">Click here to learn more</a></p>
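<p>For the 1.7+ field mentioned above, a Service spec might look roughly like this (names and ports are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-web
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: my-web
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>Note that with <code>Local</code>, traffic is only routed to nodes that actually run a backing pod, which can make the load spread less even.</p>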
<p>I want to store files in Kubernetes Secrets but I haven't found how to do it using a <code>yaml</code> file.</p> <p>I've been able to make it using the cli with <code>kubectl</code>:</p> <pre><code>kubectl create secret generic some-secret --from-file=secret1.txt=secrets/secret1.txt </code></pre> <p>But when I try something similar in a <code>yaml</code>:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: some-secret type: Opaque data: secret1.txt: secrets/secret1.txt </code></pre> <p>I´ve got this error:</p> <pre><code>[pos 73]: json: error decoding base64 binary 'assets/elasticsearch.yml': illegal base64 data at input byte 20 </code></pre> <p>I'm following this guide <a href="http://kubernetes.io/docs/user-guide/secrets/" rel="noreferrer">http://kubernetes.io/docs/user-guide/secrets/</a>. It explains how to create a secret using a <code>yaml</code> but not how to create a secret from a <strong>file</strong> using <code>yaml</code>.</p> <p>Is it possible? If so, how can I do it?</p>
<p>As answered in the previous post, we need to provide the certificate/key encoded as base64 in the file.</p> <p>Here is a generic example for a certificate (in this case SSL):</p> <p>The <code>secret.yml.tmpl</code>:</p> <pre><code> apiVersion: v1 kind: Secret metadata: name: test-secret namespace: default type: Opaque data: server.crt: SERVER_CRT server.key: SERVER_KEY </code></pre> <p>Pre-process the file to include the certificate/key:</p> <pre><code>sed "s/SERVER_CRT/`cat server.crt|base64 -w0`/g" secret.yml.tmpl | \ sed "s/SERVER_KEY/`cat server.key|base64 -w0`/g" | \ kubectl apply -f - </code></pre> <p>Note that the certificate/key are encoded using base64 without whitespace (-w0).</p> <p>For TLS it can simply be:</p> <pre><code>kubectl create secret tls test-secret-tls --cert=server.crt --key=server.key </code></pre>
<p>I'm running a number of python apps as Replica Sets inside of kubernetes on Google Container Engine (gke). Along side them I've created the Datadog DaemonSet which launches a dd-agent on each node in my cluster.</p> <p>Now I would like to use that agents dogstatsd for metrics logging from python apps as well as try out the new Datadog APM. If I just install the ddtrace python package and use it like documented it fills up my logs with</p> <pre><code>[2017-02-24 14:09:15,199] [5] [ddtrace.writer] [ERROR] cannot send spans: [Errno 110] Connection timed out [2017-02-24 14:11:23,660] [5] [ddtrace.writer] [ERROR] cannot send spans: [Errno 110] Connection timed out </code></pre> <p>Clearly it don't have magical way to guess how to access port 8126/7777 of the ddagent pods.</p> <p>Ive tried creating a Service which expose the ports:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: 'true' labels: app: datadog-statsd name: datadog-statsd spec: ports: - name: dogstatsd port: 8125 targetPort: dogstatsdport protocol: UDP - name: ddapm port: 8126 targetPort: ddtraceport protocol: TCP selector: app: dd-agent </code></pre> <p>but my python pods still don't seem to be able access for example <code>os.environ['DATADOG_STATSD_PORT_8126_TCP_ADDR']</code> and <code>.._PORT</code>. They are defined and all, I just still get the connection timed out. If I connect to the dd-agent pods and enable tcpdump I also don't see any trafic on ports 8126 etc.</p> <p>The dd-agent DaemonSet is defined like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: dd-agent spec: template: metadata: labels: app: dd-agent name: dd-agent spec: containers: - image: datadog/docker-dd-agent:latest imagePullPolicy: Always name: dd-agent ports: - containerPort: 8125 name: dogstatsdport protocol: UDP - containerPort: 8126 name: ddtraceport protocol: TCP env: - name: API_KEY value: ..... - name: KUBERNETES value: "yes" - name: SD_BACKEND value: docker - name: DD_APM_ENABLED value: "true" volumeMounts: - name: dockersocket mountPath: /var/run/docker.sock - name: procdir mountPath: /host/proc readOnly: true - name: cgroups mountPath: /host/sys/fs/cgroup readOnly: true volumes: - hostPath: path: /var/run/docker.sock name: dockersocket - hostPath: path: /proc name: procdir - hostPath: path: /sys/fs/cgroup name: cgroups </code></pre>
<p>So, while trying to debug this, I deleted the deployment + daemonset and service and recreated them. Afterwards it worked...</p>
<p>I am following the Kubernetes guide here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-ram-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-ram-container/</a></p> <p>When I run this command:</p> <p><code>kubectl create -f http://k8s.io/docs/tasks/configure-pod-container/cpu-ram.yaml</code></p> <p>About 30 seconds later, I get this message:</p> <p><code>Unable to connect to the server: dial tcp 172.17.0.1:4321: i/o timeout</code></p> <p>I have tried lots of suggestions on the web (including here on Stack Overflow), but can't figure it out. I am using Google Cloud Shell on Google Cloud Platform and trying to set up Kubernetes (via the official Kubernetes guides on their website). Here is the output of <code>gcloud info</code>:</p> <pre><code>Google Cloud SDK [145.0.0] Platform: [Linux, x86_64] Python Version: [2.7.9 (default, Jun 29 2016, 13:08:31) [GCC 4.9.2]] Python Location: [/usr/bin/python2] Site Packages: [Disabled] Installation Root: [/google/google-cloud-sdk] Installed Components: kubectl: [] app-engine-python: [1.9.50] pubsub-emulator: [2017.02.07] gsutil-nix: [4.18] gsutil: [4.22] cloud-datastore-emulator: [1.2.1] disable_update_check: [True] app-engine-java: [1.9.49] gcloud: [] core: [2017.02.21] datalab: [20170215] gcloud-deps: [2017.02.21] beta: [2016.01.12] bq: [2.0.24] alpha: [2016.01.12] datalab-nix: [20170105] core-nix: [2016.11.07] app-engine-go-linux-x86_64: [1.9.50] app-engine-go: [] app-engine-php: [ ] gcloud-deps-linux-x86_64: [2017.02.21] gcd-emulator: [v1beta3-1.0.0] kubectl-linux-x86_64: [1.5.2] bq-nix: [2.0.24] System PATH: [/google/google-cloud-sdk/bin:/usr/local/nvm/versions/node/v6.9.2/bin:/home/_removed_/gopath/bin:/google/gopath/bin:/usr/local/go/bin:/gradle-2.12/bin:/apache-maven-3.3.9/bin:/google/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/google/go_appengine:/google/google_appengine] Cloud SDK on PATH: [True] Kubectl on PATH: [/google/google-cloud-sdk/bin/kubectl] WARNING: There are old versions of the Google Cloud Platform tools on your system PATH. /google/google_appengine/endpointscfg.py /google/go_appengine/dev_appserver.py /google/google_appengine/dev_appserver.py /google/go_appengine/endpointscfg.py Installation Properties: [/google/google-cloud-sdk/properties] User Config Directory: [/tmp/tmp.XKaUThqtJm] Active Configuration Name: [cloudshell-29694] Active Configuration Path: [/tmp/tmp.XKaUThqtJm/configurations/config_cloudshell-29694] Account: [[email protected]] Project: [_removed_] Current Properties: [metrics] environment: [devshell] [core] check_gce_metadata: [False] project: [_removed_] account: [[email protected]] disable_usage_reporting: [False] [compute] region: [us-central1] gce_metadata_read_timeout_sec: [5] zone: [us-central1-b] [component_manager] disable_update_check: [True] Logs Directory: [/tmp/tmp.XKaUThqtJm/logs] Last Log File: [/tmp/tmp.XKaUThqtJm/logs/2017.02.27/21.51.45.088860.log] </code></pre>
<p>You're trying to schedule a <a href="https://kubernetes.io/docs/user-guide/pods/" rel="nofollow noreferrer"><code>Pod</code></a> in a cluster which does not exist. Prior to creating resources you need to provision the underlying cluster.</p> <p>I kindly suggest you read the documentation at <a href="https://k8s.io" rel="nofollow noreferrer">https://k8s.io</a> in order to understand Kubernetes and its building blocks. The documentation is also a good place to find out how to provision a Kubernetes cluster. On Google Cloud this is pretty trivial.</p>
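<p>If you want to stay on Google Cloud, provisioning a cluster with Container Engine and pointing <code>kubectl</code> at it can be sketched as follows (cluster name, zone and size are arbitrary placeholders):</p> <pre><code># Create a small GKE cluster
gcloud container clusters create my-cluster --zone us-central1-b --num-nodes 3

# Fetch credentials so kubectl talks to the new cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-b

# Verify connectivity before creating any Pods
kubectl get nodes
</code></pre>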
<p>I have a service exposed of type=LoadBalancer and when I do a</p> <p><code>kubectl describe services servicename</code>,</p> <p>I get this output :</p> <pre><code>Name: ser1 Namespace: default Labels: app=online1 Selector: app=online1 Type: LoadBalancer IP: 10.0.0.32 External IPs: 192.168.99.100 Port: &lt;unset&gt; 8080/TCP NodePort: &lt;unset&gt; 30545/TCP Endpoints: 172.17.0.10:8080,172.17.0.11:8080,172.17.0.8:8080 + 1 more... Session Affinity: None </code></pre> <p>Can someone please guide on the following doubts :</p> <p>1.) I can't understand what <code>&lt;unset&gt;</code> means in Port and NodePort. Also, how does it affect my service?</p> <p>2.) When I want to hit a service, I should hit the service using <code>&lt;external-ip:NodePort&gt;</code> right? Then what's the use of Port?</p>
<p><strong>Port unset</strong> means: you didn't specify a name for the port in the service definition.</p> <p>Service YAML excerpt (note <code>name: grpc</code>):</p> <pre><code>spec: ports: - port: 26257 targetPort: 26257 name: grpc type: NodePort </code></pre> <p><code>kubectl describe services servicename</code> output excerpt:</p> <pre><code>Type: NodePort IP: 10.101.87.248 Port: grpc 26257/TCP NodePort: grpc 31045/TCP Endpoints: 10.20.12.71:26257,10.20.12.73:26257,10.20.8.81:26257 </code></pre> <p><strong>Port</strong> is the port the Service itself listens on inside the cluster; traffic arriving there is forwarded to the <code>targetPort</code> on the pods (the actual endpoints). <strong>NodePort</strong> is what you hit from outside the cluster via <code>&lt;node-ip&gt;:&lt;NodePort&gt;</code>.</p>
<p>I'm defining this autoscaler with kubernetes and GCE and I'm wondering what exactly should I specify for <code>targetCPUUtilizationPercentage</code>. That target points to what exactly? Is it the total CPU in my cluster? When the pods referenced in this autoscaler consume more than <code>targetCPUUtilizationPercentage</code> what happens?</p>
<p>The CPU utilization is the average CPU usage of all pods in a deployment across the last minute divided by the requested CPU of this deployment. If the mean of the pods' CPU utilization is higher than the target you defined, then the number of replicas will be adjusted accordingly.</p> <p>You can read more about this topic <a href="https://github.com/kubernetes/website/blob/main/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md#algorithm-details" rel="nofollow noreferrer">here</a>.</p>
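<p>A minimal sketch of an autoscaler using this field (deployment name and thresholds are placeholders):</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  # Scale out when the pods' average usage exceeds 80% of their requested CPU
  targetCPUUtilizationPercentage: 80
</code></pre> <p>With this spec, if the pods of <code>my-app</code> average more than 80% of their requested CPU, the autoscaler adds replicas (up to 10); when usage falls, it scales back down, but never below 2.</p>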
<p>I'm running multiple web pods for the same application and I have this folder in my application where clients can upload their files. the folder is sitting in this directory(if my war file name is sample.war) /usr/local/tomcat/webapps/sample/uploadfiles and I want it to be shared between all the web pods, in case the client login to a random session within a specific web POD 1 and upload some files there, can find them next time he logs in to another random session within another web POD N I'm using Google Cloud Platform container engine and Google persistent disk as persistent volume</p> <p>my dockerfile looks something like this</p> <pre><code>FROM tomcat:8-jre8 ADD sample.war /usr/local/tomcat/webapps/ CMD ["catalina.sh", "run"] </code></pre> <p>I have this in my Kubernetes Deployment YAML file </p> <pre><code>volumeMounts: - mountPath: /usr/local/tomcat/webapps/ </code></pre> <p>but I get an empty /usr/local/tomcat/webapps/ directory inside my pod with only lost+found and even when I mountpath any sub-directory of /usr/local/tomcat/webapps/ that sub-directory is empty too.</p> <p>So should I edit my application in a way the files gets uploaded to a directory outside /usr/local/tomcat/webapps/ like /usr/local/tomcat/uploadfiles, or there is a way to share /usr/local/tomcat/webapps/sample/uploadfiles between the pods.</p>
<p>Sharing the same writeable volume between different pods is not something that is supported by all volume types. Have a read of the <a href="https://kubernetes.io/docs/user-guide/persistent-volumes/" rel="nofollow noreferrer">Persistent Volumes manual</a> and look for volume types that support <code>ReadWriteMany</code></p>
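<p>Purely as an illustration of a <code>ReadWriteMany</code>-capable type (not necessarily the best choice on GKE): an NFS-backed volume can be mounted read-write by several web pods at once. The NFS server address and export path below are assumptions:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: uploads-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    # Hypothetical NFS server exporting the shared upload directory
    server: nfs-server.default.svc.cluster.local
    path: /exports/uploadfiles
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>The claim could then be mounted at a dedicated path such as <code>/usr/local/tomcat/webapps/sample/uploadfiles</code> rather than the whole <code>webapps</code> directory, which also avoids the empty-directory effect from the question: mounting a volume over <code>/usr/local/tomcat/webapps/</code> hides the WAR that was baked into the image.</p>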
<p>I deploy a kubernetes cluster following the guide: <a href="https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/" rel="nofollow noreferrer">https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/</a>. It basically uses hypriotOS and kubernetes from the debian repository.</p> <p>After the deployment, all the pods were running and no faults were shown. However, the dns server was not working properly on the worker node.</p> <p><strong>master</strong></p> <pre><code>$ kubectl -n kube-system get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 34m kubernetes-dashboard 10.103.97.112 &lt;nodes&gt; 80:30518/TCP 31m # I installed the dnsutils to have the dig command $ dig @10.96.0.10 || echo "FAIL" # shows a valid response (note that we are not resolving anything) </code></pre> <p><strong>worker</strong></p> <pre><code>$ dig @10.96.0.10 || echo "FAIL" .... FAIL </code></pre>
<p>It turned out that the answer was in one of the <a href="https://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/#comment-3142210207" rel="nofollow noreferrer">comments</a> on that guide, but it was not clear that this was my issue.</p> <p>As the author of the comment stated, it is due to the iptables policies of Docker versions > 1.13.</p> <p>To solve it, execute the following on both nodes:</p> <pre><code>sudo iptables -A FORWARD -i cni0 -j ACCEPT sudo iptables -A FORWARD -o cni0 -j ACCEPT </code></pre>
<p>I'm wondering how people are deploying a production-caliber Kubernetes cluster in AWS and, more importantly, how they chose their approach.</p> <p>The <a href="https://kubernetes.io/docs/getting-started-guides/aws/#supported-production-grade-tools-with-high-availability-options" rel="nofollow noreferrer">k8s documentation</a> points towards <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a> for Debian, Ubuntu, CentOS, and RHEL or <a href="https://github.com/coreos/kube-aws/" rel="nofollow noreferrer">kube-aws</a> for CoreOS/Container Linux. Among these choices it's not clear how to pick one over the others. CoreOS seems like the most compelling option since it's designed for container workloads.</p> <p>But wait, there's more.</p> <p><a href="https://github.com/kubernetes-incubator/bootkube" rel="nofollow noreferrer">bootkube</a> seems to be next iteration of the CoreOS deployment technology and is on the <a href="https://github.com/coreos/kube-aws/blob/master/ROADMAP.md#v09x" rel="nofollow noreferrer">roadmap</a> for inclusion within kube-aws. Should I wait until kube-aws uses bootkube?</p> <p><a href="https://www.heptio.com/" rel="nofollow noreferrer">Heptio</a> recently announced a <a href="https://aws.amazon.com/quickstart/architecture/heptio-kubernetes/" rel="nofollow noreferrer">Quickstart architecture</a> for deploying k8s in AWS. This is the newest approach and so probably the least mature approach but it does seem to have gained traction from within AWS.</p> <p>Lastly <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">kubeadm</a> is a thing and I'm not really sure where it fits into all of this.</p> <p>There are probably more approaches that I'm missing too.</p> <p>Given the number of options with overlapping intent it's very difficult to choose a path forward. I'm not interested in a proof-of-concept. I want to be able to deploy a secure, highly-available cluster for production use and be able to upgrade the cluster (host OS, etcd, and k8s system components) over time.</p> <p>What did you choose and how did you decide? </p>
<p>I'd say pick anything which fit's your needs (see also <a href="https://kubernetes.io/docs/getting-started-guides/#table-of-solutions" rel="nofollow noreferrer">Picking the right solution</a>)...</p> <p>Which could be:</p> <ul> <li>Speed of the cluster setup</li> <li>Integration in your existing toolchain <ul> <li>e.g. kops integrates with Terraform which might be a good fit for some prople</li> </ul></li> <li>Experience within your team/company/... <ul> <li>e.g. how comfortable are you with the related Linux distribution</li> </ul></li> <li>Required maturity of the tool itself <ul> <li>some tools are very alpha, are you willing to play to role of an early adaptor?</li> </ul></li> <li>Ability to upgrade between Kubernetes versions <ul> <li>kubeadm has this on their agenda, some others prefer to throw away clusters instead of upgrading</li> </ul></li> <li>Required integration into external tools (monitoring, logging, auth, ...)</li> <li>Supported cloud providers</li> </ul> <p>With your specific requirements I'd pick the Heptio or kubeadm approach.</p> <ul> <li><a href="https://www.heptio.com/" rel="nofollow noreferrer">Heptio</a> if you can live with the given constraints (e.g. predefined OS)</li> <li><a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">kubeadm</a> if you need more flexibility, everything done with kubeadm can be transferred to other cloud providers</li> </ul> <p>Other options for AWS lower on my list:</p> <ul> <li><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">Kubernetes the hard way</a> - using this might be the only true way to setup a production cluster as this is the only way you can fully understand each moving part of the system. Lower on the list, because often the result from any of the tools might just be more than enough, even for production.</li> <li><a href="https://kubernetes.io/docs/getting-started-guides/aws/#kube-up-bash-script" rel="nofollow noreferrer">kube-up.sh</a> - is deprecated by the community, so I'd not use it for new projects</li> <li><a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a> - my team had some strange experiences with it which seemed due to our (custom) needs back then (existing VPC), that's why it's lower on my list - it would be #1 for an environment where Terraform is used too.</li> <li><a href="https://github.com/kubernetes-incubator/bootkube" rel="nofollow noreferrer">bootkube</a> - lower on my list, because it's limitation to CoreOS</li> <li><a href="http://rancher.com/" rel="nofollow noreferrer">Rancher</a> - interesting toolchain, seems to be too much for a single cluster</li> </ul> <hr> <p>Offtopic: If you don't <em>have</em> to run on AWS, I'd also always consider to rather run on GCE for production workloads, as this is a well managed platform rather than something you've to build yourself.</p>
<p>After deploying the kubernetes cluster using kargo, I found out that kubedns pod is not working properly:</p> <pre><code>$ kcsys get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE dnsmasq-alv8k 1/1 Running 2 1d 10.233.86.2 kubemaster dnsmasq-c9y52 1/1 Running 2 1d 10.233.82.2 kubeminion1 dnsmasq-sjouh 1/1 Running 2 1d 10.233.76.6 kubeminion2 kubedns-hxaj7 2/3 CrashLoopBackOff 339 22h 10.233.76.3 kubeminion2 </code></pre> <p><em>PS :</em> <code>kcsys</code> <em>is an alias of</em> <code>kubectl --namespace=kube-system</code></p> <p>Logs for each container (kubedns, dnsmasq) seems OK except healthz container as following:</p> <pre><code>2017/03/01 07:24:32 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local' error exit status 1 </code></pre> <p><strong>Update</strong></p> <p><strong>kubedns rc description</strong></p> <pre class="lang-none prettyprint-override"><code>apiVersion: v1 kind: ReplicationController metadata: creationTimestamp: 2017-02-28T08:31:57Z generation: 1 labels: k8s-app: kubedns kubernetes.io/cluster-service: "true" version: v19 name: kubedns namespace: kube-system resourceVersion: "130982" selfLink: /api/v1/namespaces/kube-system/replicationcontrollers/kubedns uid: 5dc9f9f2-fd90-11e6-850d-005056a020b4 spec: replicas: 1 selector: k8s-app: kubedns version: v19 template: metadata: creationTimestamp: null labels: k8s-app: kubedns kubernetes.io/cluster-service: "true" version: v19 spec: containers: - args: - --domain=cluster.local. - --dns-port=10053 - --v=2 image: gcr.io/google_containers/kubedns-amd64:1.9 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: kubedns ports: - containerPort: 10053 name: dns-local protocol: UDP - containerPort: 10053 name: dns-tcp-local protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readiness port: 8081 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: limits: cpu: 100m memory: 170Mi requests: cpu: 70m memory: 70Mi terminationMessagePath: /dev/termination-log - args: - --log-facility=- - --cache-size=1000 - --no-resolv - --server=127.0.0.1#10053 image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3 imagePullPolicy: IfNotPresent name: dnsmasq ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP resources: limits: cpu: 100m memory: 170Mi requests: cpu: 70m memory: 70Mi terminationMessagePath: /dev/termination-log - args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 &gt;/dev/null &amp;&amp; nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 &gt;/dev/null - -port=8080 - -quiet image: gcr.io/google_containers/exechealthz-amd64:1.1 imagePullPolicy: IfNotPresent name: healthz ports: - containerPort: 8080 protocol: TCP resources: limits: cpu: 10m memory: 50Mi requests: cpu: 10m memory: 50Mi terminationMessagePath: /dev/termination-log dnsPolicy: Default restartPolicy: Always securityContext: {} terminationGracePeriodSeconds: 30 status: fullyLabeledReplicas: 1 observedGeneration: 1 replicas: 1` </code></pre> <p><strong>kubedns svc description:</strong></p> <pre class="lang-none prettyprint-override"><code>apiVersion: v1 kind: Service metadata: creationTimestamp: 2017-02-28T08:31:58Z labels: k8s-app: kubedns kubernetes.io/cluster-service: "true" kubernetes.io/name: kubedns name: kubedns namespace: 
kube-system resourceVersion: "10736" selfLink: /api/v1/namespaces/kube-system/services/kubedns uid: 5ed4dd78-fd90-11e6-850d-005056a020b4 spec: clusterIP: 10.233.0.3 ports: - name: dns port: 53 protocol: UDP targetPort: 53 - name: dns-tcp port: 53 protocol: TCP targetPort: 53 selector: k8s-app: kubedns sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <p>I catch some errors in kubedns container:</p> <pre><code>1 reflector.go:199] pkg/dns/dns.go:145: Failed to list *api.Endpoints: Get https://10.233.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.233.0.1:443: i/o timeout 1 reflector.go:199] pkg/dns/dns.go:148: Failed to list *api.Service: Get https://10.233.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.233.0.1:443: i/o timeout </code></pre> <h1>UPDATE 2</h1> <ol> <li>iptables rules created by kube-proxy when creating hostnames service with 3 pods:</li> </ol> <p><a href="https://i.stack.imgur.com/bbwYX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bbwYX.png" alt="enter image description here"></a></p> <ol start="2"> <li><p>flags of controller-manager pod: <a href="https://i.stack.imgur.com/5nFWI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5nFWI.png" alt="enter image description here"></a></p></li> <li><p>pods status </p></li> </ol> <p><a href="https://i.stack.imgur.com/nljzz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nljzz.png" alt="enter image description here"></a></p>
<p>According to the error you posted, <code>kubedns</code> can not communicate with the API server:</p> <pre><code>dial tcp 10.233.0.1:443: i/o timeout </code></pre> <p>This can mean three things:</p> <hr> <p><strong>Your network fabric for containers is not configured properly</strong></p> <ul> <li>Look for errors in the logs of the network solution you're using</li> <li>Make sure every Docker deamon is using its own IP range</li> <li>Verify that the container network does not overlap with the host network</li> </ul> <hr> <p><strong>You have a problem with your <code>kube-proxy</code> and the network traffic is not forwarded to the API server when using the <code>kubernetes</code> internal Service (10.233.0.1)</strong></p> <ul> <li>Check the <code>kube-proxy</code> logs on your nodes (kubeminion{1,2}) and update your question with any error you may find</li> </ul> <hr> <p>If you are also seeing authentication errors:</p> <p><strong><code>kube-controller-manager</code> does not produce valid Service Account tokens</strong></p> <ul> <li><p>Check that the <code>--service-account-private-key-file</code> and <code>--root-ca-file</code> flags of <code>kube-controller-manager</code> are set to a valid key/cert and restart the service</p></li> <li><p>Delete the <code>default-token-xxxx</code> secret in the <code>kube-system</code> namespace and recreate the <code>kube-dns</code> Deployment</p></li> </ul>
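<p>A few commands that may help with the checks above; the pod name is a placeholder and, depending on how your deployment tool set up kube-proxy, its logs may instead live in the journal on the node:</p> <pre><code># Find and read the kube-proxy logs
kubectl -n kube-system get pods -o wide | grep kube-proxy
kubectl -n kube-system logs &lt;kube-proxy-pod-name&gt;

# On a node: check that kube-proxy created rules for the kubernetes Service IP
sudo iptables-save | grep 10.233.0.1

# From a node or a pod: test raw connectivity to the API server Service
curl -k https://10.233.0.1:443/version
</code></pre>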
<p>I have several docker images that I want to use with <code>minikube</code>. I don't want to first have to upload and then download the same image instead of just using the local image directly. How do I do this?</p> <p>Stuff I tried: <br>1. I tried running these commands (separately, deleting the instances of minikube both times and starting fresh)</p> <pre><code>kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 kubectl run hdfs --image=fluxcapacitor/hdfs:latest --port=8989 imagePullPolicy=Never </code></pre> <p>Output:</p> <pre><code>NAME READY STATUS RESTARTS AGE hdfs-2425930030-q0sdl 0/1 ContainerCreating 0 10m </code></pre> <p>It just gets stuck on some status but never reaches the ready state.</p> <p><br>2. I tried creating a registry and then putting images into it but that didn't work either. I might've done that incorrectly but I can't find proper instructions to do this task.</p> <p>Please provide instructions to use local docker images in local kubernetes instance. <br>OS: ubuntu 16.04 <br>Docker : Docker version 1.13.1, build 092cba3 <br>Kubernetes :</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;5&quot;, GitVersion:&quot;v1.5.3&quot;, GitCommit:&quot;029c3a408176b55c30846f0faedf56aae5992e9b&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2017-02-15T06:40:50Z&quot;, GoVersion:&quot;go1.7.4&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;5&quot;, GitVersion:&quot;v1.5.2&quot;, GitCommit:&quot;08e099554f3c31f6e6f07b448ab3ed78d0520507&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;1970-01-01T00:00:00Z&quot;, GoVersion:&quot;go1.7.1&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>If someone could help me get a solution that uses docker-compose to do this, that'd be awesome.</p> <p><strong>Edit:</strong></p> <p>Images loaded in <code>eval $(minikube docker-env)</code>:</p> <pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE fluxcapacitor/jupyterhub latest e5175fb26522 4 weeks ago 9.59 GB fluxcapacitor/zeppelin latest fe4bc823e57d 4 weeks ago 4.12 GB fluxcapacitor/prediction-pmml latest cae5b2d9835b 4 weeks ago 973 MB fluxcapacitor/scheduler-airflow latest 95adfd56f656 4 weeks ago 8.89 GB fluxcapacitor/loadtest latest 6a777ab6167c 5 weeks ago 899 MB fluxcapacitor/hdfs latest 00fa0ed0064b 6 weeks ago 1.16 GB fluxcapacitor/sql-mysql latest 804137671a8c 7 weeks ago 679 MB fluxcapacitor/metastore-1.2.1 latest ea7ce8c5048f 7 weeks ago 1.35 GB fluxcapacitor/cassandra latest 3cb5ff117283 7 weeks ago 953 MB fluxcapacitor/apachespark-worker-2.0.1 latest 14ee3e4e337c 7 weeks ago 3.74 GB fluxcapacitor/apachespark-master-2.0.1 latest fe60b42d54e5 7 weeks ago 3.72 GB fluxcapacitor/package-java-openjdk-1.8 latest 1db08965289d 7 weeks ago 841 MB gcr.io/google_containers/kubernetes-dashboard-amd64 v1.5.1 1180413103fd 7 weeks ago 104 MB fluxcapacitor/stream-kafka-0.10 latest f67750239f4d 2 months ago 1.14 GB fluxcapacitor/pipeline latest f6afd6c5745b 2 months ago 11.2 GB gcr.io/google-containers/kube-addon-manager v6.1 59e1315aa5ff 3 months ago 59.4 MB gcr.io/google_containers/kubedns-amd64 1.9 26cf1ed9b144 3 months ago 47 MB gcr.io/google_containers/kube-dnsmasq-amd64 1.4 3ec65756a89b 5 months ago 5.13 MB gcr.io/google_containers/exechealthz-amd64 1.2 93a43bfb39bf 5 months ago 8.37 MB gcr.io/google_containers/pause-amd64 </code></pre>
<p>As the <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env" rel="noreferrer">handbook</a> describes, you can reuse the Docker daemon from Minikube with <code>eval $(minikube docker-env)</code>.</p> <p>So to use an image without uploading it, you can follow these steps:</p> <ol> <li>Set the environment variables with <code>eval $(minikube docker-env)</code></li> <li>Build the image with the Docker daemon of Minikube (eg <code>docker build -t my-image .</code>)</li> <li>Set the image in the pod spec like the build tag (eg <code>my-image</code>)</li> <li>Set the <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="noreferrer"><code>imagePullPolicy</code></a> to <code>Never</code>, otherwise Kubernetes will try to download the image.</li> </ol> <p><strong>Important note:</strong> You have to run <code>eval $(minikube docker-env)</code> on each terminal you want to use, since it only sets the environment variables for the current shell session.</p>
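<p>Putting the steps together, a minimal sketch (image, tag and names are placeholders):</p> <pre><code># 1. Point this shell's Docker client at Minikube's Docker daemon
eval $(minikube docker-env)

# 2. Build the image directly inside the Minikube VM
docker build -t my-image:local .
</code></pre> <p>Then reference it in the pod spec without pulling from any registry:</p> <pre><code>containers:
- name: my-app
  image: my-image:local
  imagePullPolicy: Never
</code></pre>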
<p>I have a YAML file that I use to create a Deployment in Kubernetes which includes:</p> <pre><code> volumeMounts: - name: git-volume mountPath: /code volumes: - name: git-volume gitRepo: repository: "git@bitbucket.org:organization/bot.git" revision: "b686122a44aa271117b602e4eba4cc02f5e56044" </code></pre> <p>I have a public Bitbucket Git repo (I can view it without logging in to Bitbucket).</p> <p>I get this error when starting the Deployment in Kubernetes:</p> <pre><code>failed to exec 'git clone git@bitbucket.org:organization/bot.git': Cloning into 'bot'... Could not create directory '/root/.ssh'. Failed to add the host to the list of known hosts (/root/.ssh/known_hosts). Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. : exit status 128 </code></pre> <p>How can I get this repo to clone?</p>
<p>The solution is to switch to HTTPS instead of SSH (available in the dropdown in the top-right section of Bitbucket).</p> <p>I.e.:</p> <p><a href="https://bitbucket.org/organization/bot.git" rel="nofollow noreferrer">https://bitbucket.org/organization/bot.git</a></p>
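<p>For completeness, the <code>gitRepo</code> volume from the question would then look something like this (revision kept from the question; the plain HTTPS URL assumes the repository is public):</p> <pre><code>volumes:
- name: git-volume
  gitRepo:
    repository: "https://bitbucket.org/organization/bot.git"
    revision: "b686122a44aa271117b602e4eba4cc02f5e56044"
</code></pre>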
<p>We have a docker image. And am trying to deploy it using kubernetes. My doubt is can I deploy a pod with a single container but not run any process in the container while the container comes up? But run it after it starts. That is, after the container starts, go into the bash of the container, and run the process(lets say a java process)? Is that possible?</p> <p>Right now, when I am trying to deploy a pod with no process running, I get this error : </p> <blockquote> <p>Back-off restarting failed docker container Error syncing pod, skipping: failed to "StartContainer" for "containerName" with CrashLoopBackOff:</p> </blockquote> <p>But when I start the container with a java process, it works. Am not sure if its because of no process in container? Please guide.</p>
<p>What you're trying to do sounds like an antipattern. Anyway, you can do it with a sleep process, e.g. like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: app spec: containers: - name: app-container image: app-image:version command: [ "/bin/bash", "-c", "--" ] args: [ "while true; do sleep 30; done;" ] </code></pre> <p>You could then run your process manually (a sketch follows below), <strong>BUT</strong>:</p> <ul> <li>Your container will not be bound to the second process and will not end when your second process ends</li> <li>You have to do manual work</li> <li>You could save yourself this work by just running your application in the <code>command</code> of the container</li> </ul>
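<p>For illustration, once the pod above is running you could start the process by hand; the jar path is purely a placeholder:</p> <pre><code># Open a shell in the running container
kubectl exec -it app -- /bin/bash

# ...or start the (hypothetical) Java process directly
kubectl exec app -- java -jar /opt/app/my-app.jar
</code></pre>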