prompt | response
---|---|
<p>I successfully created a Cassandra PetSet on a Kubernetes cluster in AWS. However, this is a single-DC Cassandra cluster, and I'm thinking of adding multi-DC support: say, one Kubernetes cluster in US West, one in US East, one in Europe. I'd like to have Cassandra installed in each of those DCs and be able to replicate data between the 3 clusters.
The question I don't have an answer to is: how do I let a Cassandra PetSet node in US West know about another Cassandra PetSet node in Europe? Before the DNS zones are connected, they can't resolve each other's DNS names, right? Has anyone had experience with this?</p>
| <p>You need to take into consideration the snitch, seed provider and topology (replication) strategy for Kubernetes, so the nodes are aware of their geographical location and copies of the data are kept in all the regions; for this you should set a replication factor that ensures enough copies. </p>
<p>There is already a project to integrate with Kubernetes <a href="https://github.com/kubernetes/kubernetes/issues/24286" rel="nofollow">here</a> but they mention it is still experimental.</p>
<p>Finally, when working across different geographical locations, latency can be a killer, so for those cases it is always advisable to use local consistency levels (i.e. LOCAL_QUORUM, LOCAL_ONE, LOCAL_SERIAL).</p>
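<p>As a concrete illustration (not from the original answer), here is a minimal sketch of how the per-DC settings could be wired into each PetSet's Cassandra container via environment variables. The image and variable names are assumptions based on the common Kubernetes Cassandra examples and may differ for your image; the seed list has to contain at least one address per DC that is resolvable and reachable across clusters:</p>
<pre><code># sketch only - adjust to the env vars your Cassandra image actually understands
containers:
- name: cassandra
  image: gcr.io/google-samples/cassandra:v11   # placeholder image
  env:
  - name: CASSANDRA_ENDPOINT_SNITCH
    value: GossipingPropertyFileSnitch
  - name: CASSANDRA_DC
    value: us-west                             # set per region
  - name: CASSANDRA_SEEDS
    value: "cassandra-0.cassandra.default.svc.cluster.local,seed.europe.example.com"
</code></pre>
<p>The keyspaces then need <code>NetworkTopologyStrategy</code> with a replication factor per DC so that each region holds its own copies.</p>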
|
<p>In my Kubernetes cluster I have several kinds of pods. Some pods have to wait for other pods to start. To create a cluster I have to run all the pods in a particular order, which requires me to continuously check the states of previously started pods. I want to reduce the time taken to create the cluster.</p>
<p>I want to explore 2 different solutions here:</p>
<ol>
<li><p>Is there a way I can add conditions like: create pod 'a' only if pod 'b' is in the 'running' state?</p></li>
<li><p>Is there a way I can pull all the images when creating the pods and run them later in order, since most of the time taken to create a pod is spent pulling the image?</p></li>
</ol>
| <p>Pet Sets might help you with this.</p>
<p><a href="http://kubernetes.io/docs/user-guide/petset/" rel="nofollow">http://kubernetes.io/docs/user-guide/petset/</a></p>
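<p>If you specifically need "start pod 'a' only after 'b' is up", a common workaround (not mentioned in the linked docs) is an init container that blocks until the dependency is reachable. A minimal sketch, assuming pod 'b' is exposed through a service named <code>b-service</code> and that your cluster version supports <code>initContainers</code> (older versions expressed this via an annotation):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  initContainers:
  - name: wait-for-b
    image: busybox
    # block until the b-service DNS name resolves, i.e. the service for 'b' exists
    command: ['sh', '-c', 'until nslookup b-service; do echo waiting for b; sleep 2; done']
  containers:
  - name: a
    image: my-a-image   # placeholder
</code></pre>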
|
<p>I'd like to use regex in the path of an Ingress rule, but I haven't been able to get it to work. <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/complete-example/cafe-ingress.yaml" rel="noreferrer">For example</a>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
</code></pre>
<p>I tried putting <code>/t[a-z]a</code> for the first path, but then any path I tried that should match that regex took me to the default backend instead of the service I expected.</p>
<p>Note: I'm using an nginx ingress controller, which should be able to support regex.</p>
| <p>Apparently this question is still getting traffic, so I feel like I should update it. I'm no longer using the nginx ingress, so I can't verify this works. According to <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/</a>:</p>
<blockquote>
<p>The ingress controller supports <strong>case insensitive</strong> regular expressions in the <code>spec.rules.http.paths.path</code> field. This can be enabled by setting the <code>nginx.ingress.kubernetes.io/use-regex</code> annotation to <code>true</code> (the default is false).</p>
</blockquote>
<p>The example they provide on the page would cover it:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress-3
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /foo/bar/bar
        backend:
          serviceName: test
          servicePort: 80
      - path: /foo/bar/[A-Z0-9]{3}
        backend:
          serviceName: test
          servicePort: 80
</code></pre>
<p><strong>Original answer that no longer works.</strong></p>
<p>It appears that the solution is ridiculously simple (at least with an nginx ingress controller) - you just need to prepend the path with <code>"~ "</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: ~ /t[a-z]a
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
</code></pre>
|
<p>When running a Kubernetes cluster on Google Cloud Platform is it possible to somehow have the IP address from service endpoints automatically assigned to a Google CloudDNS record? If so can this be done declaratively within the service YAML definition?</p>
<p>Simply put, I don't trust the IP address of my <code>type: LoadBalancer</code> service to stay the same.</p>
| <p>GKE uses <a href="https://cloud.google.com/deployment-manager/docs/" rel="nofollow">deployment manager</a> to spin up new clusters, as well as other resources like load balancers. At the moment deployment manager does not allow you to integrate Cloud DNS functionality. Nevertheless, there is a <a href="https://code.google.com/p/google-compute-engine/issues/detail?id=445" rel="nofollow">feature request to support that</a>. If this feature is implemented in the future, it might allow further integration between Cloud DNS, Kubernetes and GKE.</p>
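<p>Until then, one manual way to get a DNS name onto the load balancer IP is the <code>gcloud dns</code> CLI; the zone, record name and IP below are placeholders:</p>
<pre><code>gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add --zone=my-zone \
    --name="myservice.example.com." --type=A --ttl=300 "203.0.113.10"
gcloud dns record-sets transaction execute --zone=my-zone
</code></pre>
<p>This is not declarative from the service YAML, though, so it has to be re-run (or scripted) if the load balancer IP ever changes.</p>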
|
<p>I run new modules of my system in Google Container Engine. I would like to bring stdout and stderr from them (running in pods) to my centralised logstash. Is there an easy way to forward logs from pods to an external logging service, e.g., logstash or elasticsearch?</p>
| <p>I decided to log directly to <em>elasticsearch</em>, an external virtual machine that can be accessed at <code>elasticsearch.c.my-project.internal</code> (I am on Google-Cloud-Platform). It is quite easy:</p>
<ol>
<li><p>Set up an ExternalName service, <em>elasticsearch-logging</em>, that points to the elasticsearch instance:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
    kubernetes.io/name: "elasticsearch"
spec:
  type: ExternalName
  externalName: elasticsearch.c.my-project.internal
  ports:
  - port: 9200
    targetPort: 9200
</code></pre>
</li>
<li><p>Deploy fluentd-elasticsearch as a DaemonSet. fluentd-elasticsearch will automatically connect to the service named <code>elasticsearch-logging</code> (based on a <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml" rel="nofollow noreferrer">fluentd-elasticsearch deployment definition</a>):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    tier: monitoring
    app: fluentd-logging
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google_containers/fluentd-elasticsearch:1.19
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
</code></pre>
</li>
</ol>
<p>Use <code>kubectl logs fluentd-elasticsearch-...</code> to check whether you were able to connect to the elasticsearch instance.</p>
<ol start="3">
<li>Now, you can access kibana and see the logs.</li>
</ol>
|
<p>I have deployed my application on Google <code>gcloud</code> container engine. My application requires <code>MySQL</code>. The application is running fine and connecting to <code>MySQL</code> correctly.
But I want to connect to the <code>MySQL</code> database from my local machine using a <code>MySQL</code> client (<code>Workbench</code>, or the command line). Can someone help me expose this to my local machine? And how can I open a <code>MySQL</code> command line on the <code>gcloud</code> shell?</p>
<p>I have run the commands below, but there is no external IP:</p>
<pre><code>$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
app-mysql 1 1 1 1 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-mysql-3323704556-nce3w 1/1 Running 0 2m
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-mysql 11.2.145.79 <none> 3306/TCP 23h
</code></pre>
<p><strong>EDIT</strong></p>
<p>I am using the <code>yml</code> file below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-mysql
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: mysql
        image: mysql:5.6.22
        env:
        - name: MYSQL_USER
          value: root
        - name: MYSQL_DATABASE
          value: appdb
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql/
---
apiVersion: v1
kind: Service
metadata:
  name: app-mysql
spec:
  selector:
    app: app-mysql
  ports:
  - port: 3306
</code></pre>
| <p>Try the <code>kubectl port-forward</code> command.</p>
<p>In your case: <code>kubectl port-forward app-mysql-3323704556-nce3w 3306:3306</code></p>
<p>See <a href="https://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward" rel="noreferrer">The documentation</a> for all available options.</p>
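<p>While the port-forward is running, a local client can connect through it; for example (the user and database are whatever your MySQL container was configured with):</p>
<pre><code># in another terminal, while the port-forward is running
mysql -h 127.0.0.1 -P 3306 -u root -p appdb
</code></pre>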
|
<ol>
<li>In <code>kubernetes</code> I can expose services with <code>service</code>. This is fine.</li>
<li>Let's say I have 1 web instance and 10 Java server instances.</li>
<li>I have a Windows gateway that I'm used to accessing those 10 Java server instances from, via the jconsole installed on it.</li>
<li>Obviously I do not expose all of the apps' JMX ports via a Kubernetes service.</li>
</ol>
<p>What are my options here? How should I allow this Windows gateway, which is external to the Kubernetes cluster, to access those 10 servers' JMX ports? Any best practices here?</p>
| <p>Another option is to forward the JMX port from the K8s pod to your local PC with <strong>kubectl port-forward</strong>. </p>
<p>I do it like this:</p>
<p>1). Add following JVM options to your app:</p>
<pre><code>-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.rmi.port=1099
-Djava.rmi.server.hostname=127.0.0.1
</code></pre>
<p>The critical part here is that:</p>
<ul>
<li><p>The same port should be used as 'jmxremote.port' and 'jmxremote.rmi.port'. This is needed to forward one port only.</p></li>
<li><p>127.0.0.1 should be passed as rmi server hostname. This is needed for JMX connection to work via port-forwarding.</p></li>
</ul>
<p>2). Forward the JMX port (1099) to your local PC via kubectl:</p>
<pre><code>kubectl port-forward <your-app-pod> 1099
</code></pre>
<p>3). Open jconsole connection to your local port 1099:</p>
<pre><code>jconsole 127.0.0.1:1099
</code></pre>
<p>This way makes it possible to debug any Java pod via JMX without having to publicly expose JMX via a K8s service (which is better from a security perspective).</p>
<p>Another option that also may be useful is to attach the Jolokia (<a href="https://jolokia.org/" rel="noreferrer">https://jolokia.org/</a>) agent to the Java process inside the container so it proxies the JMX over HTTP port and expose or port-forward this HTTP port to query JMX over HTTP.</p>
|
<p>I followed the <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html" rel="noreferrer">Kubernetes docs for setting up a single node cluster as a Docker container</a>. I now have Kubernetes running on a remote Linux VM (say, <code>mykube01.example.com</code>).</p>
<p>I then downloaded and installed <code>kubectl</code> locally on my Mac laptop. I can run <code>kubectl version</code> and it verifies I have installed the correct version.</p>
<p>I then went to configure <code>kubectl</code> by <a href="http://kubernetes.io/v1.0/docs/user-guide/kubeconfig-file.html" rel="noreferrer">following this doc</a> and created the following <code>~/.kube/config</code> file:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    api-version: v1
    server: http://mykube01.example.com:8080
  name: testkube
</code></pre>
<p>When I run <code>kubectl cluster-info</code> I get:</p>
<pre><code>Kubernetes master is running at http://mykuber01.example.com:8080
</code></pre>
<p>But when I run <code>kubectl get nodes</code> I get:</p>
<pre><code>The connection to the server mykube01.example.com:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Any ideas where I'm going awry? I want to get to the point where I can keep going with that first Kubernetes doc and deploy nginx to the 1-node cluster via:</p>
<pre><code>kubectl -s http://mykube01.example.com:8080 run-container nginx --image=nginx --port=80
</code></pre>
<p>But I can't do that until I get <code>kubectl</code> configured properly and correctly connecting to my remote "cluster".</p>
| <p>When you created the connection to your master, a file should be created:</p>
<pre><code>/etc/kubernetes/kubelet.conf
</code></pre>
<p>By default, your personal config is empty or missing. So, I copied the above mentioned kubelet.conf file as my <code>~/.kube/config</code> file. Worked perfectly.</p>
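<p>If copying a file from the master is not an option, a minimal <code>~/.kube/config</code> for an unsecured HTTP endpoint like this one could look roughly like the sketch below (the cluster and context names are placeholders); the key point is that it needs a context and <code>current-context</code>, not just a cluster entry:</p>
<pre><code>apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://mykube01.example.com:8080
  name: testkube
contexts:
- context:
    cluster: testkube
    user: ""
  name: testkube-context
current-context: testkube-context
users: []
</code></pre>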
|
<p>I've tried to add fsGroup to a Job with no luck; the generated Pod doesn't include the fsGroup in the security context. Does a Kube Job allow the fsGroup to be specified and then propagated to the generated Pod?</p>
| <p>I'm using Kubernetes 1.4 and successfully created a job with <code>fsGroup</code>. Maybe you misplaced <code>fsGroup</code> in your manifest? This is my manifest:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    metadata:
      name: hello-world
    spec:
      containers:
      - name: hello-world-container
        image: hello-world
      securityContext:
        fsGroup: 1234
      restartPolicy: OnFailure
</code></pre>
<p>Output from <code>kubectl describe job hello-world</code>:</p>
<pre><code> FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {job-controller } Normal SuccessfulCreate Created pod: hello-world-yzyz7
</code></pre>
<p>Output from <code>kubectl get pod hello-world-yzyz7 -o yaml | grep fsGroup</code>:</p>
<pre><code> fsGroup: 1234
</code></pre>
|
<p>We are running a hosted Kubernetes cluster on Google Cloud (GKE) and scraping it with Prometheus.</p>
<p>My Question is similar to <a href="https://stackoverflow.com/questions/39349744/kubernetes-prometheus-metrics-for-running-pods-and-nodes">this</a> one, but I'd like to know what are the most important metrics to look out for in the K8s Cluster and possibly alert on?</p>
<p>This is rather a K8s than a Prometheus question, but I'd really appreciate some hints. Please let me know if my question is too vague, so I can refine it.</p>
| <p>etcd is the foundation of Kubernetes, so having a good set of alerts for it is important.
We wrote <a href="https://coreos.com/blog/developing-prometheus-alerts-for-etcd.html" rel="noreferrer">this blog post</a> on creating alerting rules for it and provided a base set at the end.</p>
<p>Further sources of important metrics in the Prometheus format are the Kubelet and cAdvisor, API servers, and the fairly new <a href="https://github.com/kubernetes/kube-state-metrics" rel="noreferrer">kube-state-metrics</a>.
For those, I'm not aware of any public alerting rule sets as for etcd, unfortunately.</p>
<p>Generally, you want to ensure that the components as applications work flawlessly, e.g:</p>
<ul>
<li>Are my kubelets/API servers running/reachable? (<code>up</code> metric)</li>
<li>Are their response latency and error rates within bounds?</li>
<li>Can the API servers reach etcd?</li>
</ul>
<p>Then there's the Kubernetes business logic aspect, e.g:</p>
<ul>
<li>Are there pods that have been in non-ready/crashloop state forever?</li>
<li>Do I have enough CPU/memory capacity in my cluster?</li>
<li>Are my deployment replica expectations fulfilled?</li>
</ul>
<p>That's no drop-in solution unfortunately, but writing alerting rules roughly covering the scope of the above examples should get you quite far.</p>
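<p>To make the first couple of bullets concrete, here are two illustrative alerting rules in the (newer) Prometheus YAML rule-file format; the job names, metric availability (the second rule needs kube-state-metrics) and thresholds are assumptions that depend on your scrape configuration:</p>
<pre><code>groups:
- name: kubernetes-basics
  rules:
  - alert: KubeletDown
    expr: up{job="kubelet"} == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      description: "Kubelet {{ $labels.instance }} has been unreachable for 5 minutes."
  - alert: PodRestartingTooOften
    expr: rate(kube_pod_container_status_restarts_total[15m]) > 0
    for: 15m
    labels:
      severity: warning
    annotations:
      description: "Pod {{ $labels.namespace }}/{{ $labels.pod }} keeps restarting."
</code></pre>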
|
<p>I tried to create a Kubernetes cluster (v1.2.3) on Azure with a CoreOS cluster. I followed the documentation (<a href="http://kubernetes.io/docs/getting-started-guides/coreos/azure/" rel="nofollow">http://kubernetes.io/docs/getting-started-guides/coreos/azure/</a>). </p>
<p>Then I cloned the repo (git clone <a href="https://github.com/kubernetes/kubernetes" rel="nofollow">https://github.com/kubernetes/kubernetes</a>) and made a minor change in the file (docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml): I changed the Kube version from v1.1.2 to v1.2.3.</p>
<p>And then I created the cluster by running the file (./create-kubernetes-cluster.js); the cluster was successfully created for me. But on the master node the API server didn't start.</p>
<p>I checked the log and it was showing <strong>Cloud provider could not be initialized: unknown cloud provider "vagrant"</strong>. I could not figure out why this issue was occurring.</p>
<p><strong>This is my Log of -> kube-apiserver.service</strong> </p>
<pre><code>-- Logs begin at Sat 2016-07-23 12:41:36 UTC, end at Sat 2016-07-23 12:44:19 UTC. --
Jul 23 12:43:06 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:06 anudemon-master-00 kube-apiserver[1964]: I0723 12:43:06.299966 1964 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:06 anudemon-master-00 kube-apiserver[1964]: F0723 12:43:06.300057 1964 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:16 anudemon-master-00 kube-apiserver[2015]: I0723 12:43:16.428476 2015 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:16 anudemon-master-00 kube-apiserver[2015]: F0723 12:43:16.428534 2015 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:16 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:26 anudemon-master-00 kube-apiserver[2024]: I0723 12:43:26.756551 2024 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:26 anudemon-master-00 kube-apiserver[2024]: F0723 12:43:26.756654 2024 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:36 anudemon-master-00 kube-apiserver[2039]: I0723 12:43:36.872849 2039 server.go:188] Will report 172.16.0.4 as public IP address.
</code></pre>
| <p>Have you had a look at kubernetes-anywhere (<a href="https://github.com/kubernetes/kubernetes-anywhere" rel="nofollow">https://github.com/kubernetes/kubernetes-anywhere</a>)? Much work has been done there and it now probably has all the right bits to deploy your cluster with Azure-specific cloud provider integrations. </p>
|
<p>I'm running a bare metal Kubernetes cluster and trying to use a Load Balancer to expose my services. I know typically that the Load Balancer is a function of the underlying public cloud, but with recent support for Ingress Controllers it seems like it should now be possible to use nginx as a self-hosted load balancer.</p>
<p>So far, I've been following the example <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="noreferrer">here</a> to set up an nginx Ingress Controller and some test services behind it. However, I am unable to follow Step 6, which displays the external IP for the node that the load balancer is running on, as my node does not have an ExternalIP in the addresses section, only a LegacyHostIP and InternalIP.</p>
<p>I've tried manually assigning an ExternalIP to my cluster by specifying it in the service's specification. However, this appears to be mapped as the externalID instead.</p>
<p>How can I manually set my node's ExternalIP address?</p>
| <p>This is something that is tested and works for an nginx service created on a particular node.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    name: https
  externalIPs:
  - '{{external_ip}}'
  selector:
    app: nginx
</code></pre>
<p>Assumes an nginx deployment upstream listening on port 80, 443.
The externalIP is the public IP of the node.</p>
|
<p>I have a Kubernetes cluster on Google Cloud, I have a database service, which is running in front of a mongodb deployment. I also have a series of microservices, which are attempting to connect to that datastore. </p>
<p>However, they can't seem to find the host. </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo
</code></pre>
<p>Here's my mongo deployment... </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: mongo-disk
          fsType: ext4
</code></pre>
<p>And an example of one of my services... </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bandzest-artists
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: bandzest-artists
    spec:
      containers:
      - name: artists-container
        image: gcr.io/<omitted>/artists:41040e8
        ports:
        - containerPort: 7000
        imagePullPolicy: Always
        env:
        - name: DB_HOST
          value: mongo
        - name: AWS_BUCKET_NAME
          value: <omitted>
        - name: AWS_ACCESS_KEY_ID
          value: <omitted>
        - name: AWS_SECRET_KEY
          value: <omitted>
</code></pre>
| <p>First, check that the service is created</p>
<p><code>kubectl describe svc mongo</code></p>
<p>You should see it show that it is both created and routing to your pod's IP. If you're wondering what your pod's IP is you can check it out via</p>
<p><code>kubectl get po | grep mongo</code></p>
<p>Which should return something like: <code>mongo-deployment-<guid>-<guid></code>, then do</p>
<p><code>kubectl describe po mongo-deployment-<guid>-<guid></code></p>
<p>You should make sure the pod is started correctly and says <code>Running</code> not something like <code>ImagePullBackoff</code>. It looks like you're mounting a volume from a <code>gcePersistentDisk</code>. If you're seeing your pod just hanging out in the <code>ContainerCreating</code> state it's very likely you're not mounting the disk correctly. Make sure you <a href="http://kubernetes.io/docs/user-guide/volumes/#creating-a-pd" rel="noreferrer">create the disk</a> before you try and <a href="http://kubernetes.io/docs/user-guide/volumes/#example-pod-2" rel="noreferrer">mount it as a volume</a>.</p>
<p>If it looks like your service is routing correctly, then you can check the logs of your pod to make sure it started mongo correctly:</p>
<p><code>kubectl logs mongo-deployment-<guid>-<guid></code></p>
<p>If it looks like the pod and logs are correct, you can exec into the pod and make sure mongo is actually starting and working:
<code>kubectl exec -it mongo-deployment-<guid>-<guid> sh</code></p>
<p>Which should get you into the container (Pod) and then you can try <a href="https://stackoverflow.com/a/5521155/316857">something like this</a> to see if your DB is running.</p>
|
<p>Where can I find an official template which describes how to create your <code>.yaml</code> file to setup services/pods in Kubernetes?</p>
| <p>You can find the specification for a pod here <a href="http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_pod" rel="nofollow noreferrer">http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_pod</a></p>
<p>A good starting point is also the examples <a href="https://github.com/kubernetes/kubernetes/tree/master/examples" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples</a></p>
<p>Additionally, you can create the resource via kubectl and export it to YAML.</p>
<p>e.g. for a pod you can run these commands:</p>
<p><code>kubectl run nginx --image=nginx
kubectl get pods nginx -o yaml > pod.yaml</code></p>
|
<p>I'm now trying to run a simple container with shell (/bin/bash) on a Kubernetes cluster.</p>
<p>I thought that there was a way to keep a container running on Docker by using the <code>pseudo-tty</code> and detach options (the <code>-td</code> options on the <code>docker run</code> command).</p>
<p>For example,</p>
<pre><code>$ sudo docker run -td ubuntu:latest
</code></pre>
<p>Is there an option like this in Kubernetes?</p>
<p>I've tried running a container by using a <code>kubectl run-container</code> command like:</p>
<pre><code>kubectl run-container test_container ubuntu:latest --replicas=1
</code></pre>
<p>But the container exits after a few seconds (just like launching with the <code>docker run</code> command without the options I mentioned above), and the ReplicationController launches it again repeatedly.</p>
<p>Is there a way to keep a container running on Kubernetes like the <code>-td</code> options in the <code>docker run</code> command?</p>
| <p>Containers are meant to run to completion. You need to provide your container with a task that will never finish. Something like this should work:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    # Just spin & wait forever
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
</code></pre>
|
<p>Dear Kubernetes gurus!</p>
<p>I have spun up a Kube 1.4.1 cluster on manually created AWS hosts using the 'contrib' Ansible playbook (<a href="https://github.com/kubernetes/contrib/tree/master/ansible" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/ansible</a>). </p>
<p>My problem is that Kube doesn't attach EBS drives to minion hosts. If I define the pod as follows:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka1
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: kafka1
        image: daniilyar/kafka
        ports:
        - containerPort: 9092
          name: clientconnct
          protocol: TCP
        volumeMounts:
        - mountPath: /kafka
          name: storage
      volumes:
      - name: storage
        awsElasticBlockStore:
          volumeID: vol-56676d83
          fsType: ext4
</code></pre>
<p>I get the following error in kubelet.log:</p>
<pre><code>Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-56676d83 /var/lib/kubelet/pods/db213783-9477-11e6-8aa9-12f3d1cdf81a/volumes/kubernetes.io~aws-ebs/storage [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-56676d83 does not exist
</code></pre>
<p>The EBS volume stays in the 'Available' state during all of this, so I am sure that Kube <strong>doesn't attach</strong> the volume to the host at all and therefore <strong>doesn't mount</strong> it.
I am 100% sure that this is a Kubernetes issue and not a permissions issue, because I can mount the same volume manually from within this minion to this minion just fine:</p>
<pre><code>$ aws ec2 --region us-east-1 attach-volume --volume-id vol-56676d83 --instance-id $(wget -q -O - http://instance-data/latest/meta-data/instance-id) --device /dev/sdc
{
"AttachTime": "2016-10-18T15:02:41.672Z",
"InstanceId": "i-603cfb50",
"VolumeId": "vol-56676d83",
"State": "attaching",
"Device": "/dev/sdc"
}
</code></pre>
<p>Googling, hacking and trying older K8s versions didn't help me solve this.
Could anyone please point me to what else I could do to understand the problem so I can fix it? Any help is greatly appreciated.</p>
| <p>Nobody helped me in the K8s Slack channels, so after a day of pulling my hair out I found the solution myself:</p>
<p>To get the K8s cluster installed by the 'contrib' Ansible playbook (<a href="https://github.com/kubernetes/contrib/tree/master/ansible" rel="noreferrer">https://github.com/kubernetes/contrib/tree/master/ansible</a>) to mount EBS volumes properly, in addition to the IAM roles setup you need to add the <strong>--cloud-provider=aws</strong> flag to your existing cluster: all kubelets, the apiserver, and the controller manager.</p>
<p>Without the <strong>--cloud-provider=aws</strong> flag, Kubernetes will give you an unfriendly 'mount: special device xxx does not exist' error instead of the real cause.</p>
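<p>As an illustration only (the exact unit names, drop-in paths and argument variables depend on how the Ansible playbook laid out the services), wiring the flag into a systemd-managed kubelet could look roughly like this, and analogously for kube-apiserver and kube-controller-manager on the master:</p>
<pre><code># hypothetical drop-in, e.g. /etc/systemd/system/kubelet.service.d/20-aws.conf:
#   [Service]
#   Environment="KUBELET_ARGS=... --cloud-provider=aws"
# then on each host:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>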
|
<p>After reading <a href="https://stackoverflow.com/questions/29837531/changing-permissions-of-google-container-engine-cluster/">this earlier question</a>, I have some follow-up questions. I have a Google Container Engine cluster which lacks the Cloud Monitoring API Access permission. According to this post I cannot enable it.</p>
<p>The referenced post is one year old. Just to be sure: Is it still correct? To enable (for example) the Cloud Monitoring API for my GKE cluster, we would have to recreate the entire cluster because there is no way to change these permissions after cluster creation?</p>
<p>Also, if I have to do this, it seems to me that it would be best to enable all APIs with the broadest possible permissions, just in case I want to start using one of them in the future on my production cluster when it's in use and I can't very well take the entire thing down and recreate it then. Are there any drawbacks to this approach?</p>
| <p>You can keep the same cluster, but create a new <a href="https://cloud.google.com/container-engine/docs/node-pools" rel="noreferrer">Node Pool</a> with the new scopes you need (and then delete your old "default" Node Pool):</p>
<pre><code>gcloud container node-pools create new-np --cluster $CLUSTER --scopes monitoring
</code></pre>
<p>The drawback to enabling all permissions is if you use the same service account in many different places. For example, if my <code>service-account-1</code> needs to access Cloud Monitoring from this GKE cluster, but it is also being used on an unrelated GCE VM, I might not want that GCE VM to have access to my Cloud Monitoring data.</p>
|
<p>I am working on a Mac machine and installed the latest Kubernetes and followed the <a href="https://medium.com/@claudiopro/getting-started-with-kubernetes-via-minikube-ada8c7a29620#.xfrpcgv50" rel="nofollow">example here</a> (this is for dev purposes). All went smoothly, but I was hoping that Kubernetes would provide me an IP address and port number where my service will be listening, so that I can access it from anywhere. </p>
<p>Please correct me if I am wrong. </p>
<p>I was able to run <code>ifconfig</code> as well as <code>curl $(minikube service hello-minikube --url)</code> and I was able to see the IP address and port, but I wasn't able to access it outside the command line where Kubernetes lives.</p>
<p>The reason I am trying to access it outside the VM is because we have other projects that run on other machines and I wanted to call the REST service I installed while we are in the dev env. This way we don't have to wait until the service is pushed to production. </p>
<p>FYI: This is my first microservice project and I would appreciate your feedback.</p>
| <p>I followed the steps in the article you linked and it works as expected.</p>
<p>Just do:</p>
<pre><code>minikube service hello-minikube --url
</code></pre>
<p>You will get a url like <code>http://192.168.99.100:32382/</code> - the port and IP could and will change for you. Also note that the exposed port will be a random port like the <code>32382</code> and not <code>8080</code> that the pod uses.</p>
<p>Use the url in your browser and you should be able to see the output of the service.</p>
|
<p>So we have a deployment that is using rolling updates. We need it to pause 180 seconds between each pod it brings up. My understanding is that I need to set <code>MinReadySeconds: 180</code> and to set the <code>RollingUpdateStrategy.MaxUnavailable: 1</code> and <code>RollingUpdateStrategy.MaxSurge: 1</code> for the deployment to wait. With those settings it still brings the pods up as fast as it can... What am I missing?</p>
<p>relevant part of my deployment</p>
<pre><code>spec:
  minReadySeconds: 180
  replicas: 9
  revisionHistoryLimit: 20
  selector:
    matchLabels:
      deployment: standard
      name: standard-pod
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
</code></pre>
| <p>Assuming that a pod is ready after a certain delay is not very idiomatic within an orchestrator like Kubernetes, as there may be something that prevents the pod from successfully starting, or maybe delays the start by another few seconds.</p>
<p>Instead, you could use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">liveness and readiness probes</a> to make sure that the pod is there and ready to serve traffic before taking down the old pod.</p>
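<p>A minimal sketch of what that could look like on the container in this deployment (the endpoint, port and timings are placeholders); a new pod only counts towards availability once its readiness probe succeeds:</p>
<pre><code>containers:
- name: standard-container          # placeholder name
  image: registry.example.com/standard:latest
  readinessProbe:
    httpGet:
      path: /healthz                # assumed health endpoint
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
    failureThreshold: 3
</code></pre>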
|
<p>I am trying to automate the update to the deployment using </p>
<pre><code>kubectl set
</code></pre>
<p>I have no issues using the kubectl set image command to push a new version of the docker image out, but I also need to add a new persistent disk for the new image to use. I don't believe I can set 2 different options using the set command. What would be the best option to do this?</p>
| <p><a href="http://kubernetes.io/docs/user-guide/managing-deployments/#in-place-updates-of-resources" rel="nofollow">http://kubernetes.io/docs/user-guide/managing-deployments/#in-place-updates-of-resources</a> has the different options you have.</p>
<hr>
<p>You can use <a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_apply/" rel="nofollow"><code>kubectl apply</code></a> to modify multiple fields at once.</p>
<blockquote>
<p>Apply a configuration to a resource by filename or stdin. This
resource will be created if it doesn’t exist yet. To use ‘apply’,
always create the resource initially with either ‘apply’ or ‘create
–save-config’. JSON and YAML formats are accepted.</p>
</blockquote>
<p>Alternately, one can use <a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_patch/" rel="nofollow"><code>kubectl patch</code></a>.</p>
<blockquote>
<p>Update field(s) of a resource using strategic merge patch JSON and
YAML formats are accepted.</p>
</blockquote>
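<p>For example (a sketch with hypothetical resource and volume names; both commands quoted above accept YAML as well as JSON patches), a single strategic merge patch could bump the image and add the persistent disk in one step:</p>
<pre><code>kubectl patch deployment my-deployment -p '
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: gcr.io/my-project/my-image:v2
        volumeMounts:
        - name: my-data
          mountPath: /data
      volumes:
      - name: my-data
        gcePersistentDisk:
          pdName: my-new-disk
          fsType: ext4
'
</code></pre>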
|
<p>I'm trying to get an openzipkin server running in a k8s cluster, starting with testing in a minikube. I'm beginner with k8s config, but here's what I've done so far:</p>
<pre><code>$ minikube start
$ eval $(minikube docker-env)
$ kubectl run zipkin --image=openzipkin/zipkin --port=9411
$ kubectl expose deployment zipkin --port=9411 --type="NodePort" --name=zipkin-http
</code></pre>
<p>What I think I'm doing is starting a new pod and deploying the zipkin image, then exposing the Web UI at port 9411 via zipkin-http. After doing this:</p>
<pre><code>$ kubectl run -i --tty busybox --image=busybox -- sh
$ nslookup zipkin-http
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: zipkin-http
Address 1: 10.0.0.101
$ wget -qO- zipkin-http:9411
<!DOCTYPE html>
...
$ wget -qO- zipkin-http:9411/config.json
{"environment":"","queryLimit":10,"defaultLookback":3600000,"instrumented":".*"}
</code></pre>
<p>Then I run the kubectl proxy so I can access the Web UI from my browser:</p>
<pre><code>$ kubectl proxy --accept-hosts=".*"
</code></pre>
<p>Now if I browse to <a href="http://localhost:8001/api/v1/proxy/namespaces/default/services/zipkin-http/config.json" rel="nofollow">http://localhost:8001/api/v1/proxy/namespaces/default/services/zipkin-http/config.json</a> I get the config file contents:</p>
<pre><code>{"environment":"","queryLimit":10,"defaultLookback":3600000,"instrumented":".*"}
</code></pre>
<p>But if I browse to the root at <a href="http://localhost:8001/api/v1/proxy/namespaces/default/services/zipkin-http/" rel="nofollow">http://localhost:8001/api/v1/proxy/namespaces/default/services/zipkin-http/</a> I receive an error:</p>
<pre><code>Error loading config.json: undefined
</code></pre>
<p>The config.json it's attempting to load is the one at :9411/config.json. The request to load /config.json comes from a JS file that was loaded by the html in the root page.</p>
<p>Since it looks like I can get to the json file directly from both inside and outside the cluster, I'm confused as to why the JS file isn't able to load it. What am I doing wrong here?</p>
<p>Thanks!</p>
| <p>The web app is trying to access <code>config.json</code> at root (accessing as <code>/config.json</code> vs just <code>config.json</code> ) - that is <a href="http://localhost:8001/config.json" rel="nofollow">http://localhost:8001/config.json</a> . This would obviously be wrong as it should be <a href="http://localhost:8001/api/v1/proxy/namespaces/default/services/zipkin-http/config.json" rel="nofollow">http://localhost:8001/api/v1/proxy/namespaces/default/services/zipkin-http/config.json</a> </p>
<p>There is a very simple solution for this - just run:</p>
<pre><code>kubectl port-forward <name of the pod> 9411
</code></pre>
<p>Now just go to <a href="http://localhost:9411" rel="nofollow">http://localhost:9411</a> and the UI should be up (tried and verified.)</p>
<p>You can get the name of the pod by doing <code>kubectl get pods</code></p>
<p>PS: <code>kubectl proxy</code> is generally meant for accessing the Kubernetes API, and <code>kubectl port-forward</code> is the right tool in this case. </p>
|
<p>I installed minikube to use kubernetes locally. I was able to create pods and services locally.</p>
<p>However, pods (and the containers running in them) cannot resolve services using service names. </p>
<p>Example: I have redis service running that acts a proxy for redis pods. </p>
<p><code>kubectl get services</code> shows taht redis service has been created.</p>
<p>However, when my web application tries to connect to <code>redis-service</code>, I get connection timeout, because web application (pod) cannot resolve <code>redis-service</code>.</p>
<p>Is there anything special that needs to be installed to get service resolution working locally?</p>
<p>this is the output of running <code>kubectl get services</code></p>
<pre><code> frontend 10.0.0.250 80/TCP 3h
kubernetes 10.0.0.1 <none> 443/TCP 3h
redis-service 10.0.0.156 <none> 6379/TCP 3h
rethinkdb-service 10.0.0.89 <none> 28015/TCP 3h
kubectl describe services --namespace=kube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns,kubernetes.io/name=KubeDNS
Selector: <none>
Type: ClusterIP
IP: 10.0.0.10
Port: dns 53/UDP
Endpoints: 10.0.2.15:53
Port: dns-tcp 53/TCP
Endpoints: 10.0.2.15:53
Session Affinity: None
No events.
Name: kubernetes-dashboard
Namespace: kube-system
Labels: app=kubernetes-dashboard,kubernetes.io/cluster-service=true
Selector: app=kubernetes-dashboard
Type: NodePort
IP: 10.0.0.156
Port: <unset> 80/TCP
NodePort: <unset> 30000/TCP
Endpoints: 172.17.0.2:9090
Session Affinity: None
No events.
</code></pre>
| <p>You need to expose your services as <code>type: NodePort</code> in minikube instead of <code>ClusterIP</code></p>
<p>You can then access them with <code>minikube service <servicename></code> </p>
<p>(or you can find out which port they were mapped to with <code>kubectl get services</code>)</p>
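<p>For example, a NodePort version of the redis service could look roughly like this (a sketch; the selector must match your redis pods' labels, and the nodePort value is arbitrary within 30000-32767):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  selector:
    app: redis            # assumed pod label
  ports:
  - port: 6379
    targetPort: 6379
    nodePort: 30679       # optional; picked from the NodePort range
</code></pre>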
|
<h2>Trying to set up PetSet using Kube-Solo</h2>
<p>In my local dev environment, I have set up Kube-Solo with CoreOS. I'm trying to deploy a Kubernetes PetSet that includes a Persistent Volume Claim Template as part of the PetSet configuration. This configuration fails and none of the pods are ever started. Here is my PetSet definition:</p>
<pre><code>apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: marklogic
spec:
  serviceName: "ml-service"
  replicas: 2
  template:
    metadata:
      labels:
        app: marklogic
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: 'marklogic'
        image: {ip address of repo}:5000/dcgs-sof/ml8-docker-final:v1
        imagePullPolicy: Always
        command: ["/opt/entry-point.sh", "-l", "/opt/mlconfig.sh"]
        ports:
        - containerPort: 7997
          name: health-check
        - containerPort: 8000
          name: app-services
        - containerPort: 8001
          name: admin
        - containerPort: 8002
          name: manage
        - containerPort: 8040
          name: sof-sdl
        - containerPort: 8041
          name: sof-sdl-xcc
        - containerPort: 8042
          name: ml8042
        - containerPort: 8050
          name: sof-sdl-admin
        - containerPort: 8051
          name: sof-sdl-cache
        - containerPort: 8060
          name: sof-sdl-camel
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        lifecycle:
          preStop:
            exec:
              command: ["/etc/init.d/MarkLogic stop"]
        volumeMounts:
        - name: ml-data
          mountPath: /var/opt/MarkLogic
  volumeClaimTemplates:
  - metadata:
      name: ml-data
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 1Gi
</code></pre>
<p>In the Kubernetes dashboard, I see the following error message:</p>
<pre><code>SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "ml-data-marklogic-0", which is unexpected.
</code></pre>
<p>It seems that being unable to create the Persistent Volume Claim is also preventing the image from ever being pulled from my local repository. Additionally, the Kubernetes Dashboard shows the request for the Persistent Volume Claims, but the state is continuously "pending".
I have verified the issue is with the Persistent Volume Claim. If I remove that from the PetSet configuration the deployment succeeds.</p>
<p>I should note that I was using MiniKube prior to this and would see the same message, but once the image was pulled and the pod(s) started the claim would take hold and the message would go away.</p>
<p>I am using</p>
<ul>
<li>Kubernetes version: 1.4.0</li>
<li>Docker version: 1.12.1 (on my mac) & 1.10.3 (inside the CoreOS vm)</li>
<li>Corectl version: 0.2.8</li>
<li>Kube-Solo version: 0.9.6</li>
</ul>
| <p><strong>I am not familiar with kube-solo.</strong></p>
<p>However, the issue here might be that you are attempting to use a feature, <a href="http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html" rel="nofollow">dynamic volume provisioning</a>, which is in beta, which does not have specific support for volumes in your environment. </p>
<p>The best way around this would be to create the persistent volumes that it expects to find manually, so that the PersistentVolumeClaim can find them.</p>
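<p>For example, a manually created hostPath PersistentVolume that the claim template above could bind to might look roughly like this (a sketch; the storage-class annotation and path are assumptions for a single-node dev VM, and you would need one such volume per replica):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: ml-data-pv-0
  annotations:
    volume.alpha.kubernetes.io/storage-class: anything
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /data/ml-data-0   # directory on the VM
</code></pre>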
|
<p>I am following guide [1] to create a multi-node K8s cluster which has 1 master and 2 nodes. Also, a label needs to be set on each node respectively.</p>
<pre><code>Node 1 - label name=orders
Node 2 - label name=payment
</code></pre>
<p>I know that the above could be achieved by running kubectl commands:</p>
<pre><code>kubectl get nodes
kubectl label nodes <node-name> <label-key>=<label-value>
</code></pre>
<p>But I would like to know how to set a label when creating a node. Node creation guidance is in [2]. </p>
<p>Appreciate your input.</p>
<p>[1] <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="noreferrer">https://coreos.com/kubernetes/docs/latest/getting-started.html</a></p>
<p>[2] <a href="https://coreos.com/kubernetes/docs/latest/deploy-workers.html" rel="noreferrer">https://coreos.com/kubernetes/docs/latest/deploy-workers.html</a></p>
| <p>In fact there is a trivial way to achieve that since 1.3 or something like that. </p>
<p>What is responsible for registering your node is the kubelet process launched on it, all you need to do is pass it a flag like this <code>--node-labels 'role=kubemaster'</code>. This is how I differentiate nodes between different autoscaling groups in my AWS k8s cluster.</p>
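<p>For example (a sketch; where exactly the flag goes depends on how your worker's kubelet unit is defined):</p>
<pre><code># on the worker, add the flag to the kubelet invocation:
kubelet ... --node-labels=name=orders

# then verify from a machine with kubectl access:
kubectl get nodes --show-labels
kubectl get nodes -l name=orders
</code></pre>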
|
<p>I have the default GCE ingress controller working, with Ingress resources set to respond to hostnames.</p>
<p>The advantage of having a static ip (in my very current point of view) is that you never wonder where to configure your domain to, it will always remain the same ip; and on the other side you can stick as much service as you want behind it</p>
<p>I'm quite new using this gce loadbalancer, can I rely on it as I would with a static ip (meaning it'll <em>never</em> change) ? Or is there a layer to add to point a static ip to a loadbalancer ?</p>
<p>I'm asking because you can set the IP of a Service resource, but I have no clue yet about doing the same with this LBC/Ingress combo: assigning a static IP to an Ingress?</p>
<p>I've checked around; there seems to be some 'forwarding rule' concept (static IP to load balancer)… but I'd really appreciate some experienced help on this one, at least to end up understanding it all clearly.</p>
<p>Best</p>
| <p>Finally I have a working solution. You gotta add an L4 Service using <code>loadBalancerIP: x.x.x.x</code> where you put a previously reserved static IP, and then put a selector that the deployment/RC already has, like this:</p>
<blockquote>
<p>UPDATE [Nov-2017]: Static IP should be regional and in the same region as cluster</p>
</blockquote>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
spec:
  type: LoadBalancer
  loadBalancerIP: 104.155.55.37 # static IP pre-allocated.
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    k8s-app: nginx-ingress-lb
</code></pre>
<p>Controller:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
    spec:
      containers:
      - image: eu.gcr.io/infantium-platform-20/nginx-ingress
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - -nginx-configmaps=staging/nginx-staging-config
</code></pre>
<p>Solution hint was sourced from this example: <a href="https://beroux.com/english/articles/kubernetes/?part=3" rel="nofollow noreferrer">https://beroux.com/english/articles/kubernetes/?part=3</a></p>
<p>Hope this helps.</p>
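<p>For completeness, reserving the regional static IP referenced above could look like this (the name and region are placeholders; the region must match the cluster's):</p>
<pre><code>gcloud compute addresses create nginx-ingress-ip --region europe-west1
gcloud compute addresses describe nginx-ingress-ip --region europe-west1   # prints the reserved address
</code></pre>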
|
<p>Is there a way to tell kubectl that my pods should only be deployed on a certain instance pool?</p>
<p>For example:</p>
<pre><code>nodeSelector:
  pool: poolname
</code></pre>
<p>Assuming I already created my pool with something like:</p>
<pre><code>gcloud container node-pools create poolname --cluster=cluster-1 --num-nodes=10 --machine-type=n1-highmem-32
</code></pre>
| <p>OK, I found a solution:</p>
<p>gcloud creates a label for the pool name. In my manifest I just dropped that under the node selector. Very easy.</p>
<p>Here comes my manifest.yaml: I deploy ipyparallel with Kubernetes.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ipengine
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ipengine
    spec:
      containers:
      - name: ipengine
        image: <imageaddr.>
        args:
        - ipengine
        - --ipython-dir=/tmp/config/
        - --location=ipcontroller.default.svc.cluster.local
        - --log-level=0
        resources:
          requests:
            cpu: 1
            #memory: 3Gi
      nodeSelector:
        #<labelname>: value
        cloud.google.com/gke-nodepool: pool-highcpu32
</code></pre>
|
<p>Is it reasonable to use Kubernetes for a clustered database such as MySQL in a production environment? </p>
<p>There are example configurations such as the <a href="https://github.com/eBay/Kubernetes/tree/master/examples/mysql-galera" rel="noreferrer">mysql galera example</a>. However, most examples do not make use of persistent volumes. As far as I've understood, persistent volumes must reside on some shared file system, as defined here: <a href="http://kubernetes.io/docs/user-guide/persistent-volumes/#types-of-persistent-volumes" rel="noreferrer">Kubernetes types of persistent volumes</a>. A shared file system will not guarantee that the database files of the pod will be local to the machine hosting the pod. They will be accessed over the network, which is rather slow. Moreover, there are issues with MySQL and NFS, for example. </p>
<p>This might be acceptable for a test environment. However, what should I do in a production environment? Is it better to run the database cluster outside Kubernetes and run only application servers with Kubernetes? </p>
| <p>The Kubernetes project introduced <a href="http://kubernetes.io/docs/user-guide/petset/" rel="nofollow">PetSets</a>, a new pod management abstraction, intended to run stateful applications. <strong>It is an alpha feature at present</strong> (as of version 1.4) and moving rapidly. A list of the various issues as we move to beta are listed <a href="https://github.com/kubernetes/kubernetes/issues/28718" rel="nofollow">here</a>. Quoting from the section on <a href="http://kubernetes.io/docs/user-guide/petset/#when-to-use-petset" rel="nofollow">when to use petsets</a>:</p>
<p>A PetSet ensures that a specified number of "pets" with unique identities are running at any given time. The identity of a Pet is comprised of: </p>
<ul>
<li>a stable hostname, available in DNS </li>
<li>an ordinal index </li>
<li>stable storage: linked to the ordinal & hostname</li>
</ul>
<p>In addition to the above, it can be coupled with several other features which help one deploy clustered stateful applications and manage them. Coupled with <a href="http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html" rel="nofollow">dynamic volume provisioning</a> for example, it can be used to provision storage automatically. </p>
<p>There are several YAML configuration files available (such as the ones you referenced) using ReplicaSets and Deployments for MySQL and other databases which may be run in production and are probably being run that way as well. However, PetSets are expected to make it a lot easier to run these types of workloads, while supporting upgrades, maintenance, scaling and so on.</p>
<p>You can find some examples of distributed databases with petsets <a href="https://github.com/kubernetes/contrib/tree/master/pets" rel="nofollow">here</a>. </p>
<hr>
<p>The advantage of provisioning persistent volumes which are networked and non-local (such as GlusterFS) is realized at scale. However, for relatively small clusters, there is a proposal to allow for <a href="https://github.com/kubernetes/kubernetes/issues/7562" rel="nofollow">local storage persistent volumes</a> in the future.</p>
<hr>
|
<p>I recently started learning Kubernetes by using Minikube locally on my Mac. Previously, I was able to start a local Kubernetes cluster with Minikube 0.10.0, create a deployment and view the Kubernetes dashboard. </p>
<p>Yesterday I tried to delete the cluster and redo everything from scratch. However, I found I cannot get the assets deployed and cannot view the dashboard. From what I saw, everything seemed to get stuck during container creation. </p>
<p>After I ran <code>minikube start</code>, it reported </p>
<pre><code>Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
</code></pre>
<p>When I ran <code>kubectl get pods --all-namespaces</code>, it reported (pay attention to the STATUS column):</p>
<pre><code>kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-addon-manager-minikube 0/1 ContainerCreating 0 51s
</code></pre>
<p><code>docker ps</code> showed nothing:</p>
<pre><code>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
</code></pre>
<p><code>minikube status</code> tells me the VM and cluster are running:</p>
<pre><code>minikubeVM: Running
localkube: Running
</code></pre>
<p>If I tried to create a deployment and an autoscaler, I was told they were created successfully:</p>
<pre><code>kubectl create -f configs
deployment "hello-minikube" created
horizontalpodautoscaler "hello-minikube-autoscaler" created
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-minikube-661011369-1pgey 0/1 ContainerCreating 0 1m
default hello-minikube-661011369-91iyw 0/1 ContainerCreating 0 1m
kube-system kube-addon-manager-minikube 0/1 ContainerCreating 0 21m
</code></pre>
<p>When exposing the service, it said:</p>
<pre><code>$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube 10.0.0.32 <nodes> 8080/TCP 6s
kubernetes 10.0.0.1 <none> 443/TCP 22m
</code></pre>
<p>When I tried to access the service, I was told:</p>
<pre><code>curl $(minikube service hello-minikube --url)
Waiting, endpoint for service is not ready yet...
</code></pre>
<p><code>docker ps</code> still showed nothing. It looked to me like everything got stuck when creating a container. I tried some other ways to work around this issue:</p>
<ol>
<li>Upgraded to minikube 0.11.0</li>
<li>Use the xhyve driver instead of the Virtualbox driver</li>
<li>Delete everything cached, like ~/.minikube, ~/.kube, and the cluster, and re-try</li>
</ol>
<p>None of them worked for me.</p>
<p>Kubernetes is still new to me and I would like to know:</p>
<ol>
<li>How can I troubleshoot this kind of issue? </li>
<li>What could be the cause of this issue?</li>
</ol>
<p>Any help is appreciated. Thanks.</p>
| <p>It turned out to be a network problem in my case.</p>
<p>The pod status is "ContainerCreating", and I found that during container creation the docker image is pulled from gcr.io, which is inaccessible in China (blocked by the GFW). The previous time it worked for me because I happened to be connected to it via a VPN.</p>
|
<p>I have the following image created by a Dockerfile:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
ruby/lab latest f1903b1508cb 2 hours ago 729.6 MB
</code></pre>
<p>And I have my following YAML file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ruby-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: ruby
    spec:
      containers:
      - name: ruby-app
        image: ruby/lab
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4567
</code></pre>
<p>When I create the deployment I got the following info in the pods:</p>
<pre><code>ruby-deployment-3830038651-sa4ii 0/1 ImagePullBackOff 0 7m
ruby-deployment-3830038651-u1tvc 0/1 ImagePullBackOff 0 7m
</code></pre>
<p>And the error <code>Failed to pull image "ruby/lab:latest": Error: image ruby/lab not found</code> from below:</p>
<pre><code> 8m 2m 6 {kubelet minikube} spec.containers{ruby} Normal Pulling pulling image "ruby/lab:latest"
8m 2m 6 {kubelet minikube} spec.containers{ruby} Warning Failed Failed to pull image "ruby/lab:latest": Error: image ruby/lab not found
8m 2m 6 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ruby" with ErrImagePull: "Error: image ruby/lab not found"
</code></pre>
<p>Is it really necessary to have a Docker registry for this? I just want to test locally and pass my code/repo to a friend for testing purposes.</p>
<p>Thanks</p>
| <p>You can point your docker client to the VM's docker daemon by running </p>
<pre><code>eval $(minikube docker-env)
</code></pre>
<p>Then you can build your image normally and create your kubernetes resources normally using kubectl. Make sure that you have </p>
<pre><code>imagePullPolicy: IfNotPresent
</code></pre>
<p>in your YAML or JSON specs. </p>
<p>Additionally, there is a flag to pass in insecure registries to the minikube VM. However, this must be specified the first time you create the machine.</p>
<pre><code>minikube start --insecure-registry
</code></pre>
<p>You may also want to read this when using a private registry
<a href="http://kubernetes.io/docs/user-guide/images/" rel="noreferrer">http://kubernetes.io/docs/user-guide/images/</a></p>
|
<p>I'm working on a setup where we run our Java services in docker containers hosted on a kubernetes platform. </p>
<p>I want to create a dashboard where I can monitor the heap usage of all instances of a service in my Grafana. Writing metrics to statsd with the pattern:</p>
<p><code><servicename>.<containerid>.<processid>.heapspace</code> works well, I can see all heap usages in my chart. </p>
<p>After a redeployment, the container names change, so new values are added to the existing graph. My problem is that the old lines continue to exist at the position of the last value received, even though the containers are already dead.</p>
<p>Is there any simple solution for this in grafana? Can I just say: if you didn't receive data for a metric for more than X seconds, abort the chart line?</p>
<p><img src="https://i.stack.imgur.com/VDKgv.png" alt="Many containers exit at 14:00, but the chart continues"></p>
<p>Update:
Upgrading to the newest Grafana version and setting "null" as the value for "Null value" under Stacking & Null value didn't work.</p>
<p>Maybe it's a problem with statsd?</p>
<p>I'm sending data to statsd in form of:</p>
<p><code>felix.javaclient.machine<number>-<pid>.heap:<heapvalue>|g</code></p>
<p>Is anything wrong with this?</p>
| <p>This can happen for 2 reasons, because grafana is using the "connected" setting for null values, and/or (as is the case here) because statsd is sending the previously-seen value for the gauge when there are no updates in the current period.</p>
<h2>Grafana Config</h2>
<p>You'll want to make 2 adjustments to your graph config:</p>
<p>First, go to the "Display" tab and under "Stacking & Null value" change "Null value" to "null", that will cause Grafana to stop showing the lines when there is no data for a series.</p>
<p>Second, if you're using a legend you can go to the "Legend" tab and under "Hide series" check the "With only nulls" checkbox, that will cause items to only be displayed in the legend if they have a non-null value during the graph period.</p>
<h2>statsd Config</h2>
<p>The <a href="https://github.com/etsy/statsd/blob/master/docs/metric_types.md#gauges" rel="nofollow">statsd documentation for gauge metrics</a> tells us:</p>
<blockquote>
<p>If the gauge is not updated at the next flush, it will send the
previous value. You can opt to send no metric at all for this gauge,
by setting <code>config.deleteGauges</code></p>
</blockquote>
<p>So, the grafana changes alone aren't enough in this case, because the values in graphite aren't actually null (since statsd keeps sending the last reading). If you change the statsd config to have <code>deleteGauges: true</code> then statsd won't send anything and graphite will contain the null values we expect.</p>
<h2>Graphite Note</h2>
<p>As a side note, a setup like this will cause your data folder to grow continuously as you create new series each time a container is launched. You'll definitely want to look into removing old series after some period of inactivity to avoid filling up the disk. If you're using graphite with whisper that can be as simple as a cron task running <code>find /var/lib/graphite/whisper/ -name '*.wsp' -mtime +30 -delete</code> to remove whisper files that haven't been modified in the last 30 days. </p>
|
<p>I'm repeatedly seeing something like;</p>
<blockquote>
<p>Warning FailedSync Error syncing pod, skipping: failed to
"StartContainer" for "some-service" with RunContainerError:
"GenerateRunContainerOptions: Couldn't find key app-id in ConfigMap
default/intercom"</p>
</blockquote>
<p>Where the deployment tries to set env. vars from a configmap, that is:</p>
<pre><code>apiVersion: v1
data:
intercom: |
app-id=some-id
api-key=some-key
kind: ConfigMap
metadata:
creationTimestamp: 2016-10-23T13:09:58Z
name: intercom
namespace: default
resourceVersion: "3836"
selfLink: /api/v1/namespaces/default/configmaps/intercom
uid: ffeea5f0-9921-11e6-b2b7-0acff65e44c3
</code></pre>
<p>And the deployment looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myservice
spec:
replicas: 2
template:
metadata:
labels:
run: myservice
spec:
containers:
- name: myservice
image: somerepo/myservice:v1.0
env:
- name: INTERCOM_APPID
valueFrom:
configMapKeyRef:
name: intercom
key: app-id
- name: INTERCOM_APIKEY
valueFrom:
configMapKeyRef:
name: intercom
key: api-key
ports:
- containerPort: 9000
imagePullSecrets:
- name: docker-hub-key
</code></pre>
<p>What could be wrong here?</p>
| <p>Your configmap only contains a single key, <code>intercom</code>, while the deployment's <code>configMapKeyRef</code> entries look up the keys <code>app-id</code> and <code>api-key</code>, which do not exist in it.</p>
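<p>A sketch of a ConfigMap that would match the <code>configMapKeyRef</code> entries in the deployment (each value becomes its own top-level key under <code>data</code>):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: intercom
  namespace: default
data:
  app-id: some-id
  api-key: some-key
</code></pre>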
|
<blockquote>
<p>FailedMount MountVolume.SetUp failed for volume
"kubernetes.io/glusterfs/f8c3bcce-42010a80007d-glusterfsmilogvol"
(spec.Name: "glusterfsmilogvol") pod "f8c3bcce-42010a80007d" (UID:
"f8c3bcce--42010a80007d") with: glusterfs: mount failed: mount failed:
exit status 32 Mounting arguments: IPAddress:GlusterTestVol
/var/lib/kubelet/pods/f8c3bcce-42010a80007d/volumes/kubernetes.io~glusterfs/glusterfstestlogvol</p>
<p>glusterfs [log-level=ERROR
log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/glusterfstestlogvol/pod-22ohg-glusterfs.log]
Output: mount: unknown filesystem type 'glusterfs'</p>
</blockquote>
| <p>Further analysis revealed that I am on Kubernetes 1.4, which uses gci as the image type for the nodes (see
<a href="https://cloud.google.com/container-engine/docs/node-image-migration" rel="nofollow">https://cloud.google.com/container-engine/docs/node-image-migration</a>).
Since Kubernetes 1.4 the gci image is the default for GKE container clusters, and it does not support GlusterFS as a PV (Persistent Volume). </p>
<p>So, I converted the image type from gci to container_vm:</p>
<p><code>gcloud container clusters upgrade --image-type=container_vm "test-cluster"</code></p>
<p>Also, the documentation further notes that the following Kubernetes volume types, which were supported in the deprecated <strong>container-vm</strong> image, are not yet supported in <strong>gci</strong>:</p>
<p>NFS</p>
<p>GlusterFS</p>
<p>iSCSI</p>
|
<p>I'm in need of learning how to use Kubernetes. I've read the first sentences of a couple of introductory tutorials, and have never found one which explains to me, step by step, how to build a simulated real-world example on a single computer.</p>
<p>Is Kubernetes by nature so distributed that even the 101-level tutorials can only be performed on clusters?</p>
<p>Or can I learn (execute important examples) the important stuff there is to know by just using my Laptop without needing to use a stack of Raspberry Pi's, AWS or GCP?</p>
| <p>The easiest might be <a href="https://github.com/kubernetes/minikube" rel="nofollow">minikube</a>.</p>
<blockquote>
<p>Minikube is a tool that makes it easy to run Kubernetes locally.
Minikube runs a single-node Kubernetes cluster inside a VM on your
laptop for users looking to try out Kubernetes or develop with it
day-to-day.</p>
</blockquote>
<p>For a resource that explains how to use this, try <a href="https://medium.com/@claudiopro/getting-started-with-kubernetes-via-minikube-ada8c7a29620#.pdmrqs5fb" rel="nofollow">this getting started guide</a>. It runs through an entire example application using a local development environment.</p>
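<p>To get a feel for it, a first session can be as short as this (the echoserver image is the one used in the minikube docs; everything runs on your laptop):</p>
<pre><code>minikube start
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
minikube service hello-minikube --url
</code></pre>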
<p>If you are okay with using Google Cloud Platform (I think one gets free credits initially), there is <a href="http://kubernetes.io/docs/hellonode/" rel="nofollow">hello-node</a>.</p>
<hr>
<p>If you want to run the latest and greatest (not necessary stable) and you're using Linux, is also possible to spin up a local cluster on Linux from a cloned copy of the <a href="https://github.com/kubernetes/kubernetes" rel="nofollow">kubernetes sources</a>, using <code>hack/local_up_cluster.sh</code>. </p>
|
<p>Very simple and common case: I have 2 pods in the cluster:</p>
<ol>
<li>wordpress blog + mysql</li>
<li>nginx serving static web site</li>
</ol>
<p>I want to show the static web site when a user loads <a href="http://my-site.com" rel="nofollow">http://my-site.com</a> and show the blog when a user goes to <a href="http://my-site/blog" rel="nofollow">http://my-site/blog</a>.
Without Kubernetes I would just use haproxy with rules analysing the request path; I don't have enough experience with Kubernetes to build it the right way.
Should pods 1 and 2 be services as well?</p>
| <p>One way to accomplish this may be to use an ingress resource. You could make the two services and then point at them from the ingress resource.</p>
<p>Reference: <a href="http://kubernetes.io/docs/user-guide/ingress/#simple-fanout" rel="nofollow">Simple fanout using ingress resources.</a></p>
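<p>To answer the last part of the question: yes, expose both pods as services (the names <code>wordpress</code> and <code>static-site</code> below are just placeholders), then fan out by path. A rough sketch of such an ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-site
spec:
  rules:
  - host: my-site.com
    http:
      paths:
      - path: /blog
        backend:
          serviceName: wordpress
          servicePort: 80
      - path: /
        backend:
          serviceName: static-site
          servicePort: 80
</code></pre>
<p>Note that an ingress controller (for example the nginx one, or the GCE controller on GKE) must be running in the cluster for the resource to take effect.</p>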
|
<p>I am trying to mount a GCE persistent disk in a kubernetes pod via the deployment object yaml.
I am observing that as long as the node (on which the pod resides) is in the same zone as the persistent disk (say us-central1-a), the mounting succeeds.
However, if they are in different zones (say the node in us-central1-a and the disk in us-central1-b), then the mounting times out. </p>
<p>Is this behavior valid? I could not find anything in the documentation that verifies that it is.</p>
<p><a href="http://kubernetes.io/docs/user-guide/volumes/#gcePersistentDisk" rel="nofollow">http://kubernetes.io/docs/user-guide/volumes/#gcePersistentDisk</a></p>
<p>We are using multi-zone clusters which is making it cumbersome to load the right disk. </p>
| <p>You can use this nodeSelector:</p>
<pre><code> nodeSelector:
failure-domain.beta.kubernetes.io/zone: us-central1-b
</code></pre>
|
<p>I have a pod that is being run as an individual pod created from the API directly and not from kubectl. I can confirm the only container within the pod is running and it is logging when I go directly to the node and run <code>docker logs -f <container id></code> but when I do a <code>kubectl logs -f <pod name></code> no logs are outputted. I've been running Kubernetes for awhile now, and this is the first time I've run into this. I am running the latest stable version (1.4.x).</p>
| <p>The issue was that the container in the pod was set as a TTY enabled container which caused the process inside the container to have a prompt that was blocking any logs from being sent out to the connection kubectl opens up.</p>
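<p>If the pod spec was written by hand, the fields to look at are <code>tty</code> and <code>stdin</code> on the container; leaving them unset (they default to false) avoids the behaviour described above. A minimal sketch:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    # tty: true / stdin: true were the culprit here; omit them or set them to false
    tty: false
    stdin: false
</code></pre>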
|
<p>I'm trying to get a good understanding of container technologies but am somewhat confused. It seems like certain technologies overlap different portions of the stack and different pieces of different technologies can be used as the DevOps team sees fit (e.g., can use Docker containers but don't have to use the Docker engine, could use engine from cloud provider instead). My confusion lies in understanding what each layer of the "Container Stack" provides and who the key providers are of each solution.</p>
<p>Here's my layman's understanding; would appreciate any corrections and feedback on holes in my understanding</p>
<ol>
<li>Containers: self-contained package including application, runtime environment, system libraries, etc.; like a mini-OS with an application
<ul>
<li>It seems like Docker is the de-facto standard. Any others that are notable and widely used?</li>
</ul></li>
<li>Container Clusters: groups of containers that share resources</li>
<li>Container Engine: groups containers into clusters, manages resources</li>
<li>Orchestrator: is this any different from a container engine? How?
<ul>
<li>Where do Docker Engine, rkt, Kubernetes, Google Container Engine, AWS Container Service, etc. fall between #s 2-4?</li>
</ul></li>
</ol>
| <p>This may be a bit long and present some oversimplification but should be sufficient to get the idea across.</p>
<h1>Physical machines</h1>
<p>Some time ago, the best way to deploy simple applications was to simply buy a new webserver, install your favorite operating system on it, and run your applications there. </p>
<p><a href="https://i.stack.imgur.com/OyEgq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OyEgq.png" alt="Traditional model"></a></p>
<p>The cons of this model are:</p>
<ul>
<li><p>The processes may interfere with each other (because they share CPU and file system resources), and one may affect the other's performance. </p></li>
<li><p>Scaling this system up/down is difficult as well, taking a lot of effort and time in setting up a new physical machine. </p></li>
<li><p>There may be differences in the hardware specifications, OS/kernel versions and software package versions of the physical machines, which make it difficult to manage these application instances in a hardware-agnostic manner.</p></li>
</ul>
<p>Applications, being directly affected by the physical machine specifications, may need specific tweaking, recompilation, etc, which means that the cluster administrator needs to think of them as instances at an individual machine level. Hence, this approach does not scale. These properties make it undesirable for deploying modern production applications. </p>
<h1>Virtual Machines</h1>
<p>Virtual machines solve some of the problems of the above:</p>
<ul>
<li>They provide isolation even while running on the same machine.</li>
<li>They provide a standard execution environment (the guest OS) irrespective of the underlying hardware.</li>
<li>They can be brought up on a different machine (replicated) quite quickly when scaling (order of minutes).</li>
<li>Applications typically do not need to be rearchitected for moving from physical hardware to virtual machines.</li>
</ul>
<p><a href="https://i.stack.imgur.com/b2Jrk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b2Jrk.png" alt="vms"></a></p>
<p>But they introduce some problems of their own:</p>
<ul>
<li>They consume large amounts of resources in running an entire instance of an operating system.</li>
<li>They may not start/go down as fast as we want them to (order of seconds).</li>
<li><p>Even with hardware assisted virtualization, application instances may see significant performance degradation over an application running directly on the host.
(This may be an issue only for certain kinds of applications)</p></li>
<li><p>Packaging and distributing VM images is not as simple as it could be.
(This is not as much a drawback of the approach, as it is of the existing tooling for virtualization.)</p></li>
</ul>
<h1>Containers</h1>
<p>Then, somewhere along the line, <a href="https://en.wikipedia.org/wiki/Cgroups" rel="nofollow noreferrer">cgroups (control groups)</a> were added to the linux kernel. This feature lets us isolate processes in groups, decide what other processes and file system they can see, and perform resource accounting at the group level. </p>
<p>Various container runtimes and engines came along which make the process of creating a "container", an environment within the OS, like a namespace which has limited visibility, resources, etc, very easy. Common examples of these include docker, rkt, runC, LXC, etc. </p>
<p><a href="https://i.stack.imgur.com/k0Ij7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k0Ij7.png" alt="containers"></a></p>
<p><a href="https://i.stack.imgur.com/LCpTS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LCpTS.png" alt="docker/rkt/..."></a></p>
<p>Docker, for example, includes a daemon which provides interactions like creating an "image", a reusable entity that can be launched into a container instantly. It also lets one manage individual containers in an intuitive way. </p>
<p>The advantages of containers:</p>
<ul>
<li>They are light-weight and run with very little overhead, as they do not have their own instance of the kernel/OS and are running on top of a single host OS.</li>
<li>They offer some degree of isolation between the various containers and the ability to impose limits on various resources consumed by them (using the cgroup mechanism).</li>
<li>The tooling around them has evolved rapidly to allow easy building of reusable units (images), repositories for storing image revisions (container registries) and so on, largely due to docker.</li>
<li>It is encouraged that a single container run a single application process, in order to maintain and distribute it independently. The light-weight nature of a container makes this preferable, and leads to faster development due to decoupling.</li>
</ul>
<p>There are some cons as well:</p>
<ul>
<li>The level of isolation provided is a less than that in case of VMs.</li>
<li>They are easiest to use with stateless <a href="https://12factor.net/" rel="nofollow noreferrer">12-factor</a> applications being built afresh and a slight struggle if one tries to deploy legacy applications, clustered distributed databases and so on.</li>
<li>They <em>need</em> orchestration and higher level primitives to be used effectively and at scale.</li>
</ul>
<h1>Container Orchestration</h1>
<p>When running applications in production, as complexity grows an application tends to have many different components, some of which need to scale up/down as necessary. The containers themselves do not solve all our problems. We need a system that solves problems associated with real large-scale applications, such as:</p>
<ul>
<li>Networking between containers</li>
<li>Load balancing</li>
<li>Managing storage attached to these containers</li>
<li>Updating containers, scaling them, spreading them across nodes in a multi-node cluster and so on.</li>
</ul>
<p>When we want to manage a cluster of containers, we use a container orchestration engine. Examples of these are Kubernetes, Mesos, Docker Swarm etc. They provide a host of functionality in addition to those listed above and the goal is to reduce the effort involved in dev-ops.</p>
<p><a href="https://i.stack.imgur.com/wabH1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wabH1.png" alt="orchestration"></a></p>
<hr>
<p>GKE (Google Container Engine) is hosted Kubernetes on Google Cloud Platform. It lets a user simply specify that they need an n-node kubernetes cluster and exposes the cluster itself as a managed instance. <a href="https://github.com/kubernetes/" rel="nofollow noreferrer">Kubernetes is open source</a> and if one wanted to, one could also set it up on Google Compute Engine, a different cloud provider, or their own machines in their own data-center.</p>
<p>ECS is a proprietary container management/orchestration system built and operated by Amazon and available as part of the AWS suite.</p>
|
<p>Having a 'working' cluster based on the following example : <a href="https://github.com/jetstack/kube-lego/tree/master/examples/nginx" rel="nofollow">https://github.com/jetstack/kube-lego/tree/master/examples/nginx</a></p>
<p>With the above config for the RC, I keep getting the following error when looking at the loadbalancer's backend health check: <code>This load balancer has no health check, so traffic will be sent to all instances regardless of their status.</code> Although they do have a healthcheck, and the default backend is up (deployment & service).</p>
<p>There is a service atop nginx in order to benefit from using static IPs on LB ingresses; should this one be health-checked too?</p>
<p>Captured here in case : <a href="https://i.stack.imgur.com/f5Ngh.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/f5Ngh.jpg" alt="enter image description here"></a></p>
<p>I might lack of basic knowledge about health checks; but reading the doc did not helped on this to have a clear setup and solve this issue</p>
<p>Help appreciated; best</p>
| <p>Note that LB health checks are part of the gce infra and differ from k8s internal pod healthchecks.</p>
<p>see <a href="https://stackoverflow.com/questions/34648176/is-the-google-container-engine-kubernetes-service-loadbalancer-sending-traffic-t">Is the Google Container Engine Kubernetes Service LoadBalancer sending traffic to unresponsive hosts?</a></p>
|
<p>As the title says, I am looking for a way to force a LoadBalancer service to use a predefined security group in AWS. I do not want to have to manually edit the inbound/outbound rules of the security group that is created for the ELB by Kubernetes. I have not been able to find anything within the documentation, nor have I located anything that works elsewhere online. Here is my current template:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ds-proxy
spec:
type: LoadBalancer
ports:
- port: 8761 # the port that this service should serve on
targetPort: 8761
protocol: TCP
selector:
app: discovery-service
</code></pre>
| <p>EDIT: 2021 - I am told my answer is now out of date, refer to stackoverflow.com/a/70162565/699493 instead.</p>
<p>You cannot prevent Kubernetes from creating a new security group. But since Andonaeus' answer was submitted a new feature has been added which allows for explicitly defining inbound permissions via your service's configuration file.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service" rel="nofollow noreferrer">the user guide details</a> for the specifics. The example provided there shows that by using <code>spec.loadBalancerSourceRanges</code> you can restrict which source IPs are allowed inbound:</p>
<blockquote>
<p>In the following example, a load balancer will be created that is only accessible to clients with IP addresses from 130.211.204.1 and 130.211.204.2.</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
ports:
- port: 8765
targetPort: 9376
selector:
app: example
type: LoadBalancer
loadBalancerSourceRanges:
- 130.211.204.1/32
- 130.211.204.2/32
</code></pre>
|
<p>I have been looking into Docker containerization for a while now but few things are still confusing to me. I understand that all the containers are grouped into a cluster and cluster management tools like Docker Swarm, DC/OS, Kubernetes or Rancher can be used to manage docker containers. I have been testing out Container cluster management with DC/OS and Kubernetes, but still a few questions remain unanswered to me.</p>
<p><strong>How does auto scaling in container level help us in production servers?</strong> How does the application serve traffic from multiple containers? </p>
<p>Suppose we have deployed a web application using containers and they have auto scaled. How does the traffic flow to the containers? How are the sessions managed?</p>
<p>What metrics are calculated for autoscaling containers?</p>
| <p>Kubernetes has a concept called a <code>service</code>. A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them. Kubernetes uses services to serve traffic from multiple containers. You can read more about services <a href="http://kubernetes.io/docs/user-guide/services/#virtual-ips-and-service-proxies" rel="nofollow">here</a>.</p>
<p>AFAIK, sessions are managed outside Kubernetes, but client-IP based session affinity can be selected by setting <code>service.spec.sessionAffinity</code> to <code>"ClientIP"</code>. You can read more about services and session affinity <a href="http://nishadikirielle.blogspot.in/2016/03/load-balancing-kubernetes-services-and.html" rel="nofollow">here</a>.</p>
<p>Multiple metrics like CPU and memory can be used for autoscaling containers. There is a good <a href="http://blog.kubernetes.io/2016/07/autoscaling-in-kubernetes.html" rel="nofollow">blog post</a> about autoscaling in Kubernetes that explains when and how it happens.</p>
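<p>As a concrete example, a Horizontal Pod Autoscaler targeting CPU utilization can be created with a single command (the deployment name and numbers are illustrative):</p>
<pre><code># keep the "webapp" deployment between 2 and 10 replicas,
# aiming for 70% average CPU utilization across its pods
kubectl autoscale deployment webapp --min=2 --max=10 --cpu-percent=70
</code></pre>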
|
<p>As part of the PetSet definition, the volumeClainTemplates are defined for Kubernetes to dynamically generate Persistent Volume Claims. For example:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 24Gi
</code></pre>
<p>However, I already has a few of the Persistent Volumes defined:</p>
<pre><code>#kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv-1-rw 24Gi RWO Retain Bound rnd/pvc-1-rw 1h
pv-2-rw 24Gi RWO Retain Bound rnd/pvc-2-rw 6d
pv-3-rw 24Gi RWO Retain Bound rnd/pvc-3-rw 6d
...
</code></pre>
<p>I would like Kubernetes to choose the persistent volumes from the existing ones rather than dynamically creating new ones. </p>
<p>I'm using Kubernetes 1.4.3. Does anyone know how to do that?</p>
| <p><code>volumeClaimTemplates</code> is an array of <code>PersistentVolumeClaim</code>. You can try to define them using a <code>selector</code> and label the existing volumes accordingly, e.g.:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: pv0001
labels:
foo: foo
bar: bar
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 5Gi
hostPath:
path: /data/pv0001/
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: pv0002
labels:
foo: foo
bar: bar
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 5Gi
hostPath:
path: /data/pv0002/
---
kind: Service
apiVersion: v1
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
selector:
app: nginx
---
kind: PetSet
apiVersion: apps/v1alpha1
metadata:
name: nginx
spec:
serviceName: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: gcr.io/google_containers/nginx-slim:0.8
ports:
- containerPort: 80
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: html
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
selector:
matchLabels:
foo: foo
bar: bar
</code></pre>
<p>Of course, the volumes must be available for binding.</p>
<pre><code>$ kubectl get pvc html-nginx-0
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
html-nginx-0 Bound pv0002 5Gi RWO 1m
$ kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv0001 5Gi RWO Retain Available 2m
pv0002 5Gi RWO Retain Bound default/html-nginx-0 2m
</code></pre>
|
<p>I need a bash script to delete only the pods on a specific node before shutdown or reboot, using the Kubernetes API.</p>
| <p>Found a solution that works for what I'm trying to achieve:</p>
<pre><code>for each in $(curl -XGET https://URL/api/v1/namespaces | jq -r '.items[].metadata.name');
do arr=($(curl -XGET https://URL/api/v1/namespaces/$each/pods | jq --arg node `hostname` -r '.items[] | select(.spec.nodeName == $node) | .metadata.name'));
for i in ${arr[@]};
do curl -XDELETE https://URL/api/v1/namespaces/$each/pods/$i ;
done
done
</code></pre>
<p>This is a script present on each Worker node and it's executed before each reboot or shutdown and deletes all the pods from all namespaces that are present on that node only. Before this, there is another command that marks the node as "unschedulable". If anyone is interested in this, I can post the complete solution</p>
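<p>For the "unschedulable" part, a sketch of what that command can look like, either with kubectl or against the same API endpoint pattern used above (URL is again a placeholder, and the node name is assumed to match the hostname):</p>
<pre><code># mark the node this script runs on as unschedulable
kubectl cordon $(hostname)

# or do the same thing through the API
curl -X PATCH -H "Content-Type: application/strategic-merge-patch+json" \
     -d '{"spec":{"unschedulable":true}}' \
     https://URL/api/v1/nodes/$(hostname)
</code></pre>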
|
<p>I have been using init-containers since they became available and find them super useful. My core image (below as web-dev) does not change much, but my init-container image (below as web-data-dev) does change often.</p>
<p>The init-container uses a container image with a version number. I change this version number to the latest value, and then do <strong>kubectl apply -f deployment.yaml</strong></p>
<p>For instance, i change <strong>eu.gcr.io/project/web-data-dev:187</strong> to <strong>eu.gcr.io/project/web-data-dev:188</strong> before running kubectl apply.</p>
<p>When I do this, however, no deployment happens; if I make any changes to the image the init-container uses, the deployment will still not happen. I assume this is because the init-container changes are not being detected.</p>
<p>I then tried to just put some garbage in the image field, like this: <strong>"image": "thisIsNotAnImage"</strong> and run kubectl apply -f again, but the update is still not applied.</p>
<p><strong>My question is</strong> - How do I make kubectl apply -f detect an image tag change in an init-container? Am I doing something wrong, is this a bug, or is this simply not implemented yet because init-containers are Alpha?</p>
<p>The full deployment YAML is below.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-deployment
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
app: web
tier: frontend
annotations:
pod.alpha.kubernetes.io/init-containers: '[
{
"name": "initialiser1",
"image": "eu.gcr.io/project/web-data-dev:187",
"command": ["cp", "-r", "/data-in/", "/opt/"],
"volumeMounts": [
{
"name": "file-share",
"mountPath": "/opt/"
}
]
}
]'
spec:
containers:
- image: eu.gcr.io/project/web-dev:20
name: web
resources:
requests:
cpu: 10m
memory: 40Mi
ports:
- containerPort: 80
name: http
- containerPort: 443
name: https
volumeMounts:
- name: file-share
mountPath: /opt/
volumes:
- name: file-share
emptyDir: {}
</code></pre>
| <p>If you are using Kubernetes 1.4, try to change <code>pod.alpha.kubernetes.io/init-containers</code> to <code>pod.beta.kubernetes.io/init-containers</code>.</p>
<p>I can't find a proper issue on GitHub, but behaviour of these two annotations is different. I can do <code>kubectl apply -f</code> with the second one and the deployment will be updated.</p>
<p>You can test it using the example below:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: nginx
spec:
template:
metadata:
labels:
app: nginx
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "install",
"image": "busybox",
"command": ["/bin/sh", "-c", "echo foo > /work-dir/index.html"],
"volumeMounts": [
{
"name": "workdir",
"mountPath": "/work-dir"
}
]
}
]'
spec:
volumes:
- name: workdir
emptyDir: {}
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
</code></pre>
<p>Try to change <code>foo</code> to <code>bar</code> and see the result:</p>
<pre><code>$ cat nginx.yaml | kubectl apply -f -
deployment "nginx" created
$ curl $(minikube service nginx --url)
Waiting, endpoint for service is not ready yet...
foo
$ cat nginx.yaml | sed -e 's/foo/bar/g' | kubectl apply -f -
deployment "nginx" configured
$ curl $(minikube service nginx --url)
Waiting, endpoint for service is not ready yet...
bar
</code></pre>
<p>The same thing using <code>pod.alpha.kubernetes.io/init-containers</code>:</p>
<pre><code>$ curl $(minikube service nginx --url)
Waiting, endpoint for service is not ready yet...
foo
$ cat nginx.yaml | sed -e 's/foo/bar/g' | kubectl apply -f -
deployment "nginx" configured
$ curl $(minikube service nginx --url)
foo
</code></pre>
|
<p>I am having an issue where some (but not all) HPAs in my cluster stop updating their CPU utilization. This appears to happen after a different HPA scales its target deployment.</p>
<p>Running <code>kubectl describe hpa</code> on the affected HPA yields these events:</p>
<pre><code> 56m <invalid> 453 {horizontal-pod-autoscaler } Warning FailedUpdateStatus Operation cannot be fulfilled on horizontalpodautoscalers.autoscaling "sync-api": the object has been modified; please apply your changes to the latest version and try again
</code></pre>
<p>The <code>controller-manager</code> logs show affected HPAs start having problems right after a scaling event on another HPA:</p>
<pre><code>I0920 03:50:33.807951 1 horizontal.go:403] Successfully updated status for sync-api
I0920 03:50:33.821044 1 horizontal.go:403] Successfully updated status for monolith
I0920 03:50:34.982382 1 horizontal.go:403] Successfully updated status for aurora
I0920 03:50:35.002736 1 horizontal.go:403] Successfully updated status for greyhound-api
I0920 03:50:35.014838 1 horizontal.go:403] Successfully updated status for sync-api
I0920 03:50:35.035785 1 horizontal.go:403] Successfully updated status for monolith
I0920 03:50:48.873503 1 horizontal.go:403] Successfully updated status for aurora
I0920 03:50:48.949083 1 horizontal.go:403] Successfully updated status for greyhound-api
I0920 03:50:49.005793 1 horizontal.go:403] Successfully updated status for sync-api
I0920 03:50:49.103726 1 horizontal.go:346] Successfull rescale of monolith, old size: 7, new size: 6, reason: All metrics below t
arget
I0920 03:50:49.135993 1 horizontal.go:403] Successfully updated status for monolith
I0920 03:50:49.137008 1 event.go:216] Event(api.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"monolith", UID:"086
bfbee-7ec7-11e6-a6f5-0240c833a143", APIVersion:"extensions", ResourceVersion:"4210077", FieldPath:""}): type: 'Normal' reason: 'Scaling
ReplicaSet' Scaled down replica set monolith-1803096525 to 6
E0920 03:50:49.169382 1 deployment_controller.go:400] Error syncing deployment default/monolith: Deployment.extensions "monolith"
is invalid: status.unavailableReplicas: Invalid value: -1: must be greater than or equal to 0
I0920 03:50:49.172986 1 replica_set.go:463] Too many "default"/"monolith-1803096525" replicas, need 6, deleting 1
E0920 03:50:49.222184 1 deployment_controller.go:400] Error syncing deployment default/monolith: Deployment.extensions "monolith" is invalid: status.unavailableReplicas: Invalid value: -1: must be greater than or equal to 0
I0920 03:50:50.573273 1 event.go:216] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"monolith-1803096525", UID:"086e56d0-7ec7-11e6-a6f5-0240c833a143", APIVersion:"extensions", ResourceVersion:"4210080", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: monolith-1803096525-gaz5x
E0920 03:50:50.634225 1 deployment_controller.go:400] Error syncing deployment default/monolith: Deployment.extensions "monolith" is invalid: status.unavailableReplicas: Invalid value: -1: must be greater than or equal to 0
I0920 03:50:50.666270 1 horizontal.go:403] Successfully updated status for aurora
I0920 03:50:50.955971 1 horizontal.go:403] Successfully updated status for greyhound-api
W0920 03:50:50.980039 1 horizontal.go:99] Failed to reconcile greyhound-api: failed to update status for greyhound-api: Operation cannot be fulfilled on horizontalpodautoscalers.autoscaling "greyhound-api": the object has been modified; please apply your changes to the latest version and try again
I0920 03:50:50.995372 1 horizontal.go:403] Successfully updated status for sync-api
W0920 03:50:51.017321 1 horizontal.go:99] Failed to reconcile sync-api: failed to update status for sync-api: Operation cannot be fulfilled on horizontalpodautoscalers.autoscaling "sync-api": the object has been modified; please apply your changes to the latest version and try again
I0920 03:50:51.032596 1 horizontal.go:403] Successfully updated status for aurora
W0920 03:50:51.084486 1 horizontal.go:99] Failed to reconcile monolith: failed to update status for monolith: Operation cannot be fulfilled on horizontalpodautoscalers.autoscaling "monolith": the object has been modified; please apply your changes to the latest version and try again
</code></pre>
<p>Manually updating affected HPAs using <code>kubectl edit</code> fixes the problem, but this makes me worry about how reliable HPAs are for autoscaling.</p>
<p>Any help is appreciated. I am running v1.3.6.</p>
| <p>It is not correct to set up more than one HPA pointing to the same target deployment. When two different HPAs point to the same target (as described here), the behavior of the system is undefined.</p>
|
<p>When creating a headless service in Kubernetes, it auto-generates the CNAME for each pod. I need to access this hostname somehow on pod boot. I can't seem to find it in the downward API or set in any kind of environment variable. Where can I get this value from within the pod itself, or is it even possible?</p>
<p>Right now running <code>dig</code> on the service returns the following:</p>
<pre><code>_etcd-server._tcp.etcd.databases.svc.cluster.local. 30 IN SRV 10 100 2380 3730623862383630.etcd.databases.svc.cluster.local.
</code></pre>
<p>At the very least I need the <code>3730623862383630</code> portion of the URL.</p>
| <p>It sounds like you want to treat your pods as pets, not cattle. Maybe you can try to use <a href="http://kubernetes.io/docs/user-guide/petset/" rel="nofollow"><code>PetSet</code></a>s and headless <code>Service</code>? Then you will have the <a href="http://kubernetes.io/docs/user-guide/petset/#network-identity" rel="nofollow">DNS entry</a> like <code>etcd-0.databases.svc.cluster.local</code> which <a href="http://kubernetes.io/docs/user-guide/petset/#peer-discovery" rel="nofollow">can be used during startup</a>.</p>
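<p>Inside such a pod the stable name is also available locally, so a startup script does not have to query DNS at all. Assuming a PetSet named <code>etcd</code> governed by a headless service also named <code>etcd</code>, something like:</p>
<pre><code># inside the container of pet "etcd-0"
hostname        # -> etcd-0
hostname -f     # -> etcd-0.etcd.databases.svc.cluster.local
</code></pre>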
|
<p>I have a web app running Kubernetes behind an nginx ingress controller, it works fine for request browsing, but any AJAX/XMLHTTPRequest from the browser gets a 502 error from nginx.</p>
<p>I captured the HTTP headers for both regular and AJAX requests and they look fine: correct Host header, protocol, etc. I am confused why only XMLHttpRequest requests get the 502 from nginx. There is no delay/hang, the 502 is immediate. The requests appear to never reach the app, but get rejected by nginx itself. Switching nginx out for a direct load balancer makes the problem go away.</p>
<p>I am going to dig further but I wondered if anyone else using the nginx ingress controller seen this problem before and solved it?</p>
<p>I picked this error out of the nginx log, which suggests the container returned a response with a header too large for an nginx buffer. However, I checked nginx.conf and buffering is disabled: 'proxy_buffering off;'</p>
<pre><code>2016/10/27 19:55:51 [error] 309#309: *43363 upstream sent too big header while reading response header from upstream, client: 10.20.51.1, server: foo.example.com, request: "GET /admin/pages/listview HTTP/2.0", upstream: "http://10.20.66.97:80/admin/pages/listview", host: "foo.example.com", referrer: "https://foo.example.com/admin/pages"
</code></pre>
<p>Strangely, you get the 502 error only if an XmlHttpRequest requests the URL. If I request the same URL with curl it works fine and the response header is as below. What is it about the AJAX/XmlHttpRequest version of the same URL that would make the response headers too large?</p>
<pre><code>HTTP/1.1 200 OK
Server: nginx/1.11.3
Date: Thu, 27 Oct 2016 20:15:16 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 6596
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
X-Powered-By: PHP/5.5.9-1ubuntu4.19
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: max-age=0, must-revalidate, no-transform, no-cache, no-store
Pragma: no-cache
X-Controller: CMSPagesController
X-Title: Example+Site+-+Pages
X-Frame-Options: SAMEORIGIN
Vary: X-Requested-With,Accept-Encoding,User-Agent
Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
</code></pre>
| <p>I resolved the issue. The reason that only XmlHttpRequests failed was because the application has a special behavior when it saw an XmlHttpRequest request, where it dumped about 3000 bytes of extra headers into the response. This made the total header size larger than the default nginx header buffer.</p>
<p>nginx choking on large HTTP header payloads is common, because its default buffer size is smaller than that of most other web servers, only 4k or 8k. The solution to the errors was to increase the buffer used for headers to 16k by adding these settings.</p>
<pre><code>proxy_buffers 8 16k; # Buffer pool = 8 buffers of 16k
proxy_buffer_size 16k; # 16k of buffers from pool used for headers
</code></pre>
<p>The nginx documentation is pretty murky and these settings are ambiguously named. As I understand it there is a pool of buffers for each connection, in this case 8 buffers of 16k each. These buffers are used for receiving data from the upstream web server and passing data back to the client.</p>
<p>So <code>proxy_buffers</code> determines the pool. Then <code>proxy_buffer_size</code> determines how much of that buffer pool is available for receiving the HTTP headers from the upstream web server (rounded up to a whole buffer size, I think). A third setting, <code>proxy_busy_buffers_size</code>, determines how much of the buffer pool can be busy being sent to the client (also rounded up to a whole buffer size, I think). By default <code>proxy_busy_buffers_size</code> is automatically set to the number of buffers in the pool minus 1.</p>
<p>So the <code>proxy_buffers</code> pool must be big enough to fit the <code>proxy_busy_buffers_size</code> and still have enough buffers left over to fit at least the <code>proxy_buffer_size</code> for HTTP headers from the upstream web server.</p>
<p>The net result is that if you increase <code>proxy_busy_buffers_size</code> at all you'll probably immediately get the confusing error: <code>"proxy_busy_buffers_size" must be less than the size of all "proxy_buffers" minus one buffer</code>, and then you have to increase the size of the pool.</p>
<p>And what about the <code>proxy_buffering off</code> setting, you ask? Well, that does not disable proxy buffering! Rather, it controls whether nginx will buffer up to the whole response (to the buffer pool or disk) while sending it to the browser, or whether it will buffer only what fits in the buffer pool. So even if you turn <code>proxy_buffering</code> <code>off</code>, proxy buffering still happens.</p>
<p><a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size" rel="noreferrer">http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size</a></p>
<p><a href="https://stackoverflow.com/questions/686217/maximum-on-http-header-values">Maximum on http header values?</a></p>
<p>I saw a bunch of recommendations for setting large buffer pools, e.g. big numbers like 8 x 512kb buffers (= 4MB). There is a buffer pool for <em>each connection</em>, so the smaller you keep the buffer pool, the more connections you can handle.</p>
|
<p>I'm running an nginx service in a docker container with Google Container Engine which forwards specific domain names to other services, like API, Frontend, etc. I have a simple cluster for that with configured services. The nginx service is exposed as a load balancer.</p>
<p>The REMOTE_ADDR environment variable always contains an internal address of the Kubernetes cluster. I looked for HTTP_X_FORWARDED_FOR but it's missing from the request headers. Is it possible to configure the service to preserve the external client IP in the requests?</p>
| <p>With the current implementation of L3 balancing (as of Kubernetes 1.4) it isn't possible to get the source IP address for a connection to your service. </p>
<p>It sounds like your use case might be well served by using an <a href="http://kubernetes.io/docs/user-guide/ingress/" rel="nofollow">Ingress</a> object (or by manually creating an <a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="nofollow">HTTP/S load balancer</a>), which will put the source IP address into a the <code>X-Forwarded-For</code> HTTP header for easy retrieval by your backends. </p>
|
<p>I have been looking into Docker containerization for a while now but few things are still confusing to me. I understand that all the containers are grouped into a cluster and cluster management tools like Docker Swarm, DC/OS, Kubernetes or Rancher can be used to manage docker containers. I have been testing out Container cluster management with DC/OS and Kubernetes, but still a few questions remain unanswered to me.</p>
<p><strong>How does auto scaling in container level help us in production servers?</strong> How does the application serve traffic from multiple containers? </p>
<p>Suppose we have deployed a web application using containers and they have auto scaled. How does the traffic flow to the containers? How are the sessions managed?</p>
<p>What metrics are calculated for autoscaling containers?</p>
| <p>Autoscaling in DC/OS (note: Mesosphere is the company, DC/OS the open source project) is described in detail in the <a href="https://dcos.io/docs/1.8/usage/tutorials/autoscaling/" rel="nofollow">docs</a>. Essentially the same as with Kubernetes, you can use either low-level metrics such as CPU utilization to decide when to increase the number of instances of an app, or higher-level signals like app throughput, for example using the <a href="https://dcos.io/docs/1.8/usage/tutorials/autoscaling/microscaling-queue/" rel="nofollow">Microscaling</a> approach.</p>
<p>Regarding your question about how the routing works (how requests are forwarded to an instance, that is, a single running container): you need a load balancer and again, DC/OS provides you with this out of the box. And again, the options are detailed in the <a href="https://dcos.io/docs/1.8/usage/service-discovery/load-balancing-vips/" rel="nofollow">docs</a>, essentially: HAProxy-based North-South or IPtables-based, East-West (cluster-internal) load balancers.</p>
|
<p>Currently playing with kubernetes, I need to deploy a cluster by myself on my own hardware or cloud provider (I would love to use GCE but it is not possible in a near future).</p>
<p>I saw kubeadm allow a quick and easy cluster bootstrapping except it does only provide one kubernetes master. </p>
<p>As I'm looking for a solution I can use in production: </p>
<ul>
<li>What would happen if the master reboot for an unknown reason ? </li>
<li>Using kubeadm, is it possible to enable cloud provider features such as LB or persistent volumes plugins ?</li>
</ul>
| <p>I am also trying to bring up some experimental setups using Ubuntu 16.04 and kubeadm, with the following experience:</p>
<p>the master reboot situation is the most critical point with kubeadm, as the cluster is not booting up properly after a reboot. Another SO user reported the issue <a href="https://stackoverflow.com/questions/39872332/how-to-fix-weave-net-crashloopbackoff-for-the-second-node/">here</a>, where I shared my scripts to relaunch the cluster if the weave net is stuck in CrashLoopBackOff.</p>
<p>I also left it alone for a while, and after a lot of restarts it started to work... but this means quite a long downtime for your cluster.</p>
<p>When wiping the cluster, you lose all your configuration. The only way to prevent this is to restore etcd somehow... however I didn't find any acceptable solution yet.</p>
<p>About cloud providers, there is experimental support since 1.4.3 AFAIK. I didn't try it, but it is a way to go. However, if you are planning something more generic, bare metal for example, you should take a look at <a href="http://larmog.github.io/2016/10/15/kubernetes-on-scaleway---part-3/" rel="nofollow noreferrer">part 3 of this article series about deploying kube on scaleway with kubeadm</a>, which covers installing glusterfs as a PV. In part 2 he also describes using traefik as an ingress controller.</p>
<p>As you see, operating a production cluster with kubeadm is not an easy task, but as they say, it's still alpha. I am watching this project excitedly, hoping it gets production ready soon.</p>
|
<p>Imagine I have a Kubernetes cluster which has 3 pods, and each pod has its own value for the label 'name'. What I actually want is that each time I run/open my app, I see the name of the pod (i.e. its label's value) in the console output.
Of course the output/value may be different each time, based on which pod is being used. </p>
<p>So, is there any way to access, from code, the value of a label on the pod the application is running in? </p>
<p>My app is based on Java (Spring Boot) by the way.</p>
<p>Thanks.</p>
| <p>You can use Kubernetes' <a href="http://kubernetes.io/docs/user-guide/downward-api/" rel="nofollow">Downward API</a> to convey pod properties to the containers inside the pods without having to use Kubernetes' REST API. From the docs:</p>
<blockquote>
<p>It is sometimes useful for a container to have information about itself, but we want to be careful not to over-couple containers to Kubernetes. The downward API allows containers to consume information about themselves or the system and expose that information how they want it, without necessarily coupling to the Kubernetes client or REST API.</p>
</blockquote>
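<p>A rough sketch of exposing the pod's labels through a downward API volume, which the Spring Boot app can then read as a plain file (names and paths are illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    name: my-app-blue
spec:
  containers:
  - name: my-app
    image: my-registry/my-app:latest
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
      readOnly: true
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
</code></pre>
<p>The file <code>/etc/podinfo/labels</code> then contains lines like <code>name="my-app-blue"</code>, which the application can read at startup and print to the console.</p>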
|
<p>My apologies if this question sounds obvious, but the Kubernetes and Google cloud documentation is extremely confusing and contradictory in places.</p>
<p>Anyway, I have pushed a Dockerized web-server to my private Google Container Registry. I want this container to be restarted if it dies, but I only need one single instance to be running at any given moment. Moreover, there's a bunch of environment variables that need to be defined for the server to be correctly configured.</p>
<p>I have already created a new cluster. But where do I go from here? Some tutorials say one should declare pod and service files, but then the next tutorial says one shouldn't declare pods directly but use deployments instead. The result is that I'm terribly confused.</p>
<p>What's the best approach for this simple use case? Also, what is the recommended documentation for using Kubernetes in Google Cloud? (Google's official docs seem out of date.)</p>
| <p>Based on your description, I would suggest you use a <a href="http://kubernetes.io/docs/user-guide/deployments/" rel="noreferrer">Deployment</a> with <code>replicas</code> set to 1. The Deployment will ensure that there is always one instance of your pod running. You can define your <a href="http://kubernetes.io/docs/user-guide/configuring-containers/#environment-variables-and-variable-expansion" rel="noreferrer">environment variables</a> in the <a href="http://kubernetes.io/docs/user-guide/deployments/#pod-template" rel="noreferrer">pod template</a> spec of your Deployment manifest.</p>
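<p>A minimal sketch of such a manifest (image, names and variables are placeholders for your own):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-webserver
    spec:
      containers:
      - name: my-webserver
        image: gcr.io/my-project/my-webserver:v1
        ports:
        - containerPort: 8080
        env:
        - name: SOME_SETTING
          value: "some-value"
        - name: ANOTHER_SETTING
          value: "another-value"
</code></pre>
<p>Create it with <code>kubectl create -f deployment.yaml</code>; the Deployment's replica set will then keep exactly one pod running and replace it if it dies.</p>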
<p>In the documentation, you might also see suggestions to use <a href="http://kubernetes.io/docs/user-guide/replication-controller/" rel="noreferrer">replication controllers</a> for the same purpose. This is definitely an option but Deployments are considered the <a href="http://kubernetes.io/docs/user-guide/deployments/#what-is-a-deployment" rel="noreferrer">successor</a> to replication controllers and are usually <a href="http://kubernetes.io/docs/user-guide/replication-controller/#deployment-recommended" rel="noreferrer">recommended</a> at this point.</p>
<p>A bare pod is not intended to be <a href="http://kubernetes.io/docs/user-guide/pods/#durability-of-pods-or-lack-thereof" rel="noreferrer">durable</a> and will not be restarted in the case of a node failure or other type of eviction.</p>
<p>The documentation is out-of-date in many places but, as far as I know, the authoritative location (even for GKE) is <a href="http://kubernetes.io/docs/" rel="noreferrer">http://kubernetes.io/docs/</a>.</p>
|
<p>I have got 2 VMs nodes. Both see each other either by hostname (through /etc/hosts) or by ip address. One has been provisioned with kubeadm as a master. Another as a worker node. Following the instructions (<a href="http://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="noreferrer">http://kubernetes.io/docs/getting-started-guides/kubeadm/</a>) I have added weave-net. The list of pods looks like the following:</p>
<pre><code>vagrant@vm-master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-vm-master 1/1 Running 0 3m
kube-system kube-apiserver-vm-master 1/1 Running 0 5m
kube-system kube-controller-manager-vm-master 1/1 Running 0 4m
kube-system kube-discovery-982812725-x2j8y 1/1 Running 0 4m
kube-system kube-dns-2247936740-5pu0l 3/3 Running 0 4m
kube-system kube-proxy-amd64-ail86 1/1 Running 0 4m
kube-system kube-proxy-amd64-oxxnc 1/1 Running 0 2m
kube-system kube-scheduler-vm-master 1/1 Running 0 4m
kube-system kubernetes-dashboard-1655269645-0swts 1/1 Running 0 4m
kube-system weave-net-7euqt 2/2 Running 0 4m
kube-system weave-net-baao6 1/2 CrashLoopBackOff 2 2m
</code></pre>
<p>CrashLoopBackOff appears for each worker node connected. I have spent several hours playing with network interfaces, but the network seems fine. I found a similar question, where the answer advised looking into the logs, with no follow-up. So, here are the logs:</p>
<pre><code>vagrant@vm-master:~$ kubectl logs weave-net-baao6 -c weave --namespace=kube-system
2016-10-05 10:48:01.350290 I | error contacting APIServer: Get https://100.64.0.1:443/api/v1/nodes: dial tcp 100.64.0.1:443: getsockopt: connection refused; trying with blank env vars
2016-10-05 10:48:01.351122 I | error contacting APIServer: Get http://localhost:8080/api: dial tcp [::1]:8080: getsockopt: connection refused
Failed to get peers
</code></pre>
<p><strong>What I am doing wrong? Where to go from there?</strong></p>
| <p>I ran into the same issue too. It seems Weave wants to connect to the Kubernetes cluster IP address, which is virtual. Just run this to find the cluster IP:
<code>kubectl get svc</code>. It should give you something like this:</p>
<pre><code>$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 100.64.0.1 <none> 443/TCP 2d
</code></pre>
<p>Weave picks up this IP and tries to connect to it, but the worker nodes do not know anything about it. A simple route will solve this issue. On all your worker nodes, execute:</p>
<pre><code>route add 100.64.0.1 gw <your real master IP>
</code></pre>
|
<p>The <code>env</code> element added in <code>spec.containers</code> of a pod using K8 dashboard's <em>Edit</em> doesn't get saved. Does anyone know what the problem is? </p>
<p>Is there any other way to add environment variables to pods/containers? </p>
<p>I get this error when doing the same by editing the file using nano:</p>
<pre><code># pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
</code></pre>
<p>Thanks.</p>
| <p>Not all fields can be updated. This fact is sometimes mentioned in the <code>kubectl explain</code> output for the object (and the error you got lists the fields that can be changed, so the others probably cannot):</p>
<pre><code>$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
</code></pre>
<p>If you deploy your Pods using a <a href="http://kubernetes.io/docs/user-guide/deployments/" rel="noreferrer">Deployment object</a>, then you can change the environment variables in <em>that</em> object with <code>kubectl edit</code> since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.</p>
<p>Another option for you may be to use <a href="http://kubernetes.io/docs/user-guide/configmap/" rel="noreferrer">ConfigMaps</a>. If you use the volume plugin method for mounting the ConfigMap <em>and</em> your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you). </p>
|
<p>I want to calculate the cpu usage of all pods in a kubernetes cluster. I found two metrics in prometheus may be useful:</p>
<pre><code>container_cpu_usage_seconds_total: Cumulative cpu time consumed per cpu in seconds.
process_cpu_seconds_total: Total user and system CPU time spent in seconds.
Cpu Usage of all pods = increment per second of sum(container_cpu_usage_seconds_total{id="/"})/increment per second of sum(process_cpu_seconds_total)
</code></pre>
<p>However, I found every second's increment of <code>container_cpu_usage{id="/"}</code> larger than the increment of <code>sum(process_cpu_seconds_total)</code>. So the usage may be larger than 1...</p>
| <p>This I'm using to get CPU usage at cluster level:</p>
<pre><code>sum (rate (container_cpu_usage_seconds_total{id="/"}[1m])) / sum (machine_cpu_cores) * 100
</code></pre>
<p>I also track the CPU usage for each pod.</p>
<pre><code>sum (rate (container_cpu_usage_seconds_total{image!=""}[1m])) by (pod_name)
</code></pre>
<p>I have a complete kubernetes-prometheus solution on GitHub, maybe can help you with more metrics: <a href="https://github.com/camilb/prometheus-kubernetes" rel="noreferrer">https://github.com/camilb/prometheus-kubernetes</a></p>
<p><a href="https://i.stack.imgur.com/Qr7St.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qr7St.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/uxbXv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uxbXv.png" alt="enter image description here"></a></p>
|
<p>I have a deployment with a single pod, using my custom docker image like:</p>
<pre><code>containers:
- name: mycontainer
image: myimage:latest
</code></pre>
<p>During development I want to push a new latest version and have the Deployment updated.
I can't find how to do that without explicitly defining a tag/version, incrementing it for each build, and running</p>
<pre><code>kubectl set image deployment/my-deployment mycontainer=myimage:1.9.1
</code></pre>
| <p>You can configure your pod with a grace period (for example 30 seconds or more, depending on container startup time and image size) and set <code>imagePullPolicy: "Always"</code>. Then use <code>kubectl delete pod pod_name</code>.
A new container will be created and the latest image automatically downloaded, then the old container terminated.</p>
<p>Example:</p>
<pre><code>spec:
terminationGracePeriodSeconds: 30
containers:
- name: my_container
image: my_image:latest
imagePullPolicy: "Always"
</code></pre>
<p>I'm currently using Jenkins for automated builds and image tagging and it looks something like this:</p>
<pre><code>kubectl --user="kube-user" --server="https://kubemaster.example.com" --token=$ACCESS_TOKEN set image deployment/my-deployment mycontainer=myimage:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
</code></pre>
<p>Another trick is to initially run:</p>
<pre><code>kubectl set image deployment/my-deployment mycontainer=myimage:latest
</code></pre>
<p>and then:</p>
<pre><code>kubectl set image deployment/my-deployment mycontainer=myimage
</code></pre>
<p>It will actually be triggering the rolling-update but be sure you have also <code>imagePullPolicy: "Always"</code> set.</p>
<p><strong>Update:</strong></p>
<p>another trick I found, where you don't have to change the image name, is to change the value of a field that will trigger a rolling update, like <code>terminationGracePeriodSeconds</code>. You can do this using <code>kubectl edit deployment your_deployment</code> or <code>kubectl apply -f your_deployment.yaml</code> or using a patch like this:</p>
<pre><code>kubectl patch deployment your_deployment -p \
'{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'
</code></pre>
<p>Just make sure you always change the number value.</p>
|
<p>I want to deploy a simple containerized app on GCE. I've created a deployment file and a service file. The latter includes <code>type: NodePort</code> and <code>"ports": [{"port": 443, "targetPort": "myapp-port", "protocol": "TCP"}]</code> declarations.</p>
<p>After running <code>kubectl create -f deployment.json</code> and <code>kubectl create -f service.json</code>, the deployment (including pods and replica sets) and service are created. However, the service is not visibly externally. How do I make it so? Preferably I would want to make this change in the <code>service.json</code> file, so it's under revision control.</p>
| <p>Probably because you missed</p>
<pre><code>spec:
type: LoadBalancer
</code></pre>
<p><a href="http://kubernetes.io/docs/user-guide/load-balancer/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/load-balancer/</a></p>
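<p>For reference, a minimal sketch of a <code>service.json</code> of type LoadBalancer (the name, selector, and named target port are placeholders based on the question):</p>
<pre><code>{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "myapp" },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "myapp" },
    "ports": [
      { "port": 443, "targetPort": "myapp-port", "protocol": "TCP" }
    ]
  }
}
</code></pre>
<p>After <code>kubectl create -f service.json</code>, <code>kubectl get svc</code> will eventually show the provisioned external IP.</p>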
|
<p>I have a Kubernetes service (a Python Flask application) exposed publicly on port 30000 (All Kubernetes NodePorts have to be in the range 30000-32767 from what I understand) using the LoadBalancer type. I need for my public-facing service to be accessible on the standard HTTP port 80. What's the best way to go about doing this?</p>
| <p>If you don't use any cloud provider, you can just set the <code>externalIPs</code> option in the service and bring that IP up on a node; kube-proxy will then route traffic from this IP to your pod for you.</p>
<pre><code>{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "my-service"
},
"spec": {
"selector": {
"app": "MyApp"
},
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": 9376
}
],
"externalIPs" : [
"80.11.12.10"
]
}
}
</code></pre>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#external-ips</a></p>
|
<p>Is there a way to obtain hardware information (e.g. number of CPU cores, capacity of RAM) of an OpenShift 3.0 node programmatically? I could not find anything useful in the API references for <a href="https://docs.openshift.com/enterprise/3.0/rest_api/openshift_v1.html" rel="nofollow">OpenShift</a> or <a href="https://docs.openshift.com/enterprise/3.0/rest_api/kubernetes_v1.html#v1-nodesysteminfo" rel="nofollow">Kubernetes</a> (except for <code>NodeSystemInfo</code> in the Kubernetes API, which does not contain most of the hardware-level specs).</p>
| <p>There is a read-only stats endpoint exposed by the kubelet on both OpenShift and Kubernetes nodes. Normally it is exposed as <a href="https://node-host:10250/stats" rel="nofollow noreferrer">https://node-host:10250/stats</a> on each node.</p>
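<p>A sketch of querying it (the host name is a placeholder, and depending on how the kubelet is secured you may need to pass a bearer token or client certificate instead of skipping verification):</p>
<pre><code>curl -k https://NODE_HOST:10250/stats
</code></pre>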
|
<p>I am setting up a Kubernetes deployment using auto-scaling groups and Terraform. The kube master node is behind an ELB to get some reliability in case of something going wrong. The ELB has the health check set to <code>tcp 6443</code>, and tcp listeners for 8080, 6443, and 9898. All of the instances and the load balancer belong to a security group that allows all traffic between members of the group, plus public traffic from the NAT Gateway address. I created my AMI using the following script (from the getting started guide)...</p>
<pre><code># curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# # Install docker if you don't have it already.
# apt-get install -y docker.io
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni
</code></pre>
<p>I use the following user data scripts...</p>
<h1>kube master</h1>
<pre><code>#!/bin/bash
rm -rf /etc/kubernetes/*
rm -rf /var/lib/kubelet/*
kubeadm init \
--external-etcd-endpoints=http://${etcd_elb}:2379 \
--token=${token} \
--use-kubernetes-version=${k8s_version} \
--api-external-dns-names=kmaster.${master_elb_dns} \
--cloud-provider=aws
until kubectl cluster-info
do
sleep 1
done
kubectl apply -f https://git.io/weave-kube
</code></pre>
<h1>kube node</h1>
<pre><code>#!/bin/bash
rm -rf /etc/kubernetes/*
rm -rf /var/lib/kubelet/*
until kubeadm join --token=${token} kmaster.${master_elb_dns}
do
sleep 1
done
</code></pre>
<p>Everything seems to work properly. The master comes up and responds to kubectl commands, with pods for discovery, dns, weave, controller-manager, api-server, and scheduler. kubeadm has the following output on the node...</p>
<pre><code>Running pre-flight checks
<util/tokens> validating provided token
<node/discovery> created cluster info discovery client, requesting info from "http://kmaster.jenkins.learnvest.net:9898/cluster-info/v1/?token-id=eb31c0"
node/discovery> failed to request cluster info, will try again: [Get http://kmaster.jenkins.learnvest.net:9898/cluster-info/v1/?token-id=eb31c0: EOF]
<node/discovery> cluster info object received, verifying signature using given token
<node/discovery> cluster info signature and contents are valid, will use API endpoints [https://10.253.129.106:6443]
<node/bootstrap> trying to connect to endpoint https://10.253.129.106:6443
<node/bootstrap> detected server version v1.4.4
<node/bootstrap> successfully established connection with endpoint https://10.253.129.106:6443
<node/csr> created API client to obtain unique certificate for this node, generating keys and certificate signing request
<node/csr> received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:ip-10-253-130-44 | CA: false
Not before: 2016-10-27 18:46:00 +0000 UTC Not After: 2017-10-27 18:46:00 +0000 UTC
<node/csr> generating kubelet configuration
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
</code></pre>
<p>Unfortunately, running <code>kubectl get nodes</code> on the master only returns itself as a node. The only interesting thing I see in /var/log/syslog is </p>
<pre><code>Oct 27 21:19:28 ip-10-252-39-25 kubelet[19972]: E1027 21:19:28.198736 19972 eviction_manager.go:162] eviction manager: unexpected err: failed GetNode: node 'ip-10-253-130-44' not found
Oct 27 21:19:31 ip-10-252-39-25 kubelet[19972]: E1027 21:19:31.778521 19972 kubelet_node_status.go:301] Error updating node status, will retry: error getting node "ip-10-253-130-44": nodes "ip-10-253-130-44" not found
</code></pre>
<p>I am really not sure where to look...</p>
| <p>The Hostnames of the two machines (master and the node) should be different. You can check them by running <code>cat /etc/hostname</code>. If they do happen to be the same, edit that file to make them different and then do a <code>sudo reboot</code> to apply the changes. Otherwise kubeadm will not be able to differentiate between the two machines and it will show as a single one in kubectl get nodes.</p>
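<p>For example, a sketch of renaming one of the machines (the new name is arbitrary; the commands assume a systemd-based distribution such as the Ubuntu Xenial used above):</p>
<pre><code># on the node whose hostname collides with the master's
sudo hostnamectl set-hostname k8s-node-1
# or edit /etc/hostname directly, then reboot so kubelet picks up the new name
sudo reboot
</code></pre>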
|
<p>I'm having a bit of a hard time figuring out whether the Guestbook example is working in Minikube. My main issue is possibly that the example description <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook" rel="nofollow">here</a> details all the steps, but <em>there is no indication about how to connect to the web application</em> once it's running from the default YAML files.</p>
<p>I'm using Minikube v. <code>0.10.0</code> in Mac OS X 10.9.5 (Mavericks) and this is what I eventually ended up with (which seems pretty good according to what I read from the example document):</p>
<pre><code>PolePro:all-in-one poletti$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend 10.0.0.140 <none> 80/TCP 8s
kubernetes 10.0.0.1 <none> 443/TCP 2h
redis-master 10.0.0.165 <none> 6379/TCP 53m
redis-slave 10.0.0.220 <none> 6379/TCP 37m
PolePro:all-in-one poletti$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
frontend 3 3 3 3 20s
redis-master 1 1 1 1 42m
redis-slave 2 2 2 2 37m
PolePro:all-in-one poletti$ kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-708336848-0h2zj 1/1 Running 0 29s
frontend-708336848-ds8pn 1/1 Running 0 29s
frontend-708336848-v8wp9 1/1 Running 0 29s
redis-master-2093957696-or5iu 1/1 Running 0 43m
redis-slave-109403812-12k68 1/1 Running 0 37m
redis-slave-109403812-c7zmo 1/1 Running 0 37m
</code></pre>
<p>I thought that I might connect to <code>http://10.0.0.140:80/</code> (i.e. the <code>frontend</code> address and port as returned by <code>kubectl get svc</code> above) and see the application running, but I'm getting a <code>Connection refused</code>:</p>
<pre><code>PolePro:all-in-one poletti$ curl -v http://10.0.0.140:80
* About to connect() to 10.0.0.140 port 80 (#0)
* Trying 10.0.0.140...
* Adding handle: conn: 0x7fb0f9803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fb0f9803a00) send_pipe: 1, recv_pipe: 0
* Failed connect to 10.0.0.140:80; Connection refused
* Closing connection 0
curl: (7) Failed connect to 10.0.0.140:80; Connection refused
</code></pre>
<p>It's somewhat suspicious that the example description misses such an important step, though. What am I missing?</p>
| <p>Quick and dirty:</p>
<pre><code>kubectl port-forward frontend-708336848-0h2zj 80:80
</code></pre>
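<p>If the <code>frontend</code> service is (or is changed to be) of type <code>NodePort</code>, a sufficiently recent Minikube can also print a reachable URL directly; this is a sketch assuming the service name from the example:</p>
<pre><code>minikube service frontend --url
# prints something like http://192.168.99.100:3XXXX, which you can open or curl
</code></pre>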
|
<p>In Google Cloud Platform I have a Container-Cluster with three running instances. I now want to connect from my terminal to be able to run <code>kubectl</code> commands. For this I ran the command </p>
<pre><code>gcloud container clusters get-credentials cluster-1 --zone europe-west1-b --project project-id
</code></pre>
<p>I am using the real project name of course. This is the command shown by the dashboard when clicking on 'connect with the cluster'. The output of this command is:</p>
<pre><code>Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
</code></pre>
<p>But when I run <code>kubectl</code>commands afterwards like <code>kubectl cluster-info</code> I always get:</p>
<pre><code>Unable to connect to the server: oauth2: cannot fetch token: 400 Bad Request
Response: {
"error" : "invalid_grant",
"error_description" : "Token has been revoked."
}
</code></pre>
<p>What am I missing here? <code>gcloud</code> commands like <code>gcloud container clusters list</code> work</p>
| <p>I tried from a different machine at home, and there it worked after installing and setting up gcloud. I think my work machine still had a stored OAuth token from a different Google account I had used for a test.</p>
<p><strong>Edit:</strong> I got it running now. The problem was that I missed the second of the two necessary calls:</p>
<pre><code>gcloud auth login
gcloud auth application-default login
</code></pre>
|
<p>I was wondering if there is a way to continuously print logs from an HTTP request to the Kubernetes API. I am using Python to query the K8s API, something like:</p>
<p><code>r = requests.get(self.url + "namespaces/" + namespace + "/pods/" + pod_name + "/log", cert=(self.cert, self.key), verify=False)</code></p>
<p>and I would like to use the <code>follow=true</code> parameter, probably together with <code>tailLines=100</code>, to make it behave more like the <code>tail</code> command.</p>
<p>When I use the <code>follow</code> parameter, the request collects the response, but I don't know how to forward its output to the console.
Is it possible?</p>
| <p>What I was missing is <code>stream=True</code> in the <code>requests.get</code> call, which allows iterating through the response content, so my code looks like this:</p>
<pre><code>import requests
class Logs():
def __init__(self, url='https://192.168.0.1:6443/api/v1/',
cert='./client.crt',
key='./client.key'):
self.url = url
self.cert = cert
self.key = key
requests.packages.urllib3.disable_warnings()
def get_pod_logs(self, namespace, pod_name):
params = dict(
follow="true",
tailLines="100"
)
r = requests.get(self.url + "namespaces/" + namespace + "/pods/" + pod_name + "/log", params=params,
cert=(self.cert, self.key), verify=False, stream=True)
for chunk in r.iter_content(chunk_size=256):
if chunk:
print(chunk)
logs = Logs()
logs.get_pod_logs(namespace="my-ns",pod_name="my-pod")
</code></pre>
|
<p>The kubectl command always returns the error <code>yaml: line 2: mapping values are not allowed in this context</code>, even when I call a normal <code>version</code> or <code>config</code> command. I'm not sure what's causing this.</p>
<pre><code>tessact@tessact-sys-1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4",
GitVersion:"v1.4.4",
GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56",
GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z",
GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
error: yaml: line 2: mapping values are not allowed in this context
tessact@tessact-sys-1:~/[some path]$ kubectl create -f kubernetes_configs/frontend.yaml
error: yaml: line 2: mapping values are not allowed in this context
</code></pre>
<p>The only YAML file I used is:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: frontend
labels:
name: frontend
spec:
replicas: 3
template:
metadata:
labels:
name: frontend
spec:
containers:
- name: trigger
# Replace with your project ID or use `make template`
image: asia.gcr.io/trigger-backend/trigger-backend
# This setting makes nodes pull the docker image every time before
# starting the pod. This is useful when debugging, but should be turned
# off in production.
imagePullPolicy: Always
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: frontend
labels:
name: frontend
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
selector:
name: frontend
</code></pre>
<p>Whatever I try with <strong>kubectl</strong> it returns this error. What should I do to solve this?</p>
<pre><code>tessact@tessact-sys-1:~/developer/trigger-backend-dev/trigger-backend$ kubectl get service
error: yaml: line 2: mapping values are not allowed in this context
</code></pre>
<p>Output of :</p>
<pre><code>strace kubectl version
</code></pre>
<p>is <a href="http://pastebin.com/E53yHrwD" rel="noreferrer">here</a></p>
| <p>That the version command already throws an error indicates that there is some default YAML file that gets loaded.</p>
<p>You can use <code>strace kubectl version</code> to see what file was opened; hopefully this happens just before <code>kubectl</code> throws the error. I assume there is some global config that it reads (or alternatively a default file in your current directory).</p>
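<p>For example (a sketch; the syscall filter and the config location are assumptions), you can narrow the trace to file opens and then inspect the file kubectl actually reads:</p>
<pre><code># show only the files kubectl opens and filter for likely config files
strace -f -e trace=open kubectl version 2>&1 | grep -i -E 'kube|yaml|json'
# line 2 of this file is what the error message points at
cat ~/.kube/config
</code></pre>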
<p>It is of course sloppy programming in kubernetes not to catch such an error, display the name of the file, and then re-raise it.</p>
|
<p>I have an application that uses raft to elect a leader out of multiple instances. These instances use the gossip protocol, so it just needs to know another instance to discover the rest.</p>
<p>I plan to run each instance as a kubernetes pod, with replication manage by a replication controller. I will also put a service in front of these nodes so that other apps in the cluster can talk to it.</p>
<p>My problem is: how can I get the pods within the replica set to discover each other without the Kubernetes API? Is this possible through DNS, or does Kubernetes provide some environment variables?</p>
| <p>The solution is to use a headless service. For example, we can deploy a headless service called <code>myservice-discovery</code>. Because the service is headless, it does not do any load balancing or get a cluster IP address. To get the IP addresses of the pods, you then query the DNS server for the service's cluster DNS name (e.g. <code>myservice-discovery.default.svc.cluster.local</code> when it lives in the <code>default</code> namespace) and read the returned A records.</p>
<p>You can also set up a second normal (non-headless) service if the pods also need to be accessible to other services and pods.</p>
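<p>A minimal sketch of such a headless service (the name, selector label, and port are assumptions and must be adapted to your pods):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myservice-discovery
spec:
  clusterIP: None          # headless: DNS returns the pod IPs as A records
  selector:
    app: myservice         # must match the labels on the pods in the replica set
  ports:
  - port: 7000             # gossip port; value is illustrative
</code></pre>
<p>An <code>nslookup</code> of the service name from inside a pod should then return one A record per ready pod, which is enough to seed the gossip protocol.</p>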
|
<p>I have a replication controller creating 10 instances of my pod. The pod runs a Zeppelin notebook which should be accessed by users over the web. However I need the possibility to access a specific notebook/pod over the web. If I expose the pods using a service of type LoadBalancer I will automatically be routed to any pod. </p>
<p>Is there a way to expose an extra IP per pod or another way to access specific pods over the web? Or is the only way to create 10 replication controllers and 10 services?</p>
| <p>To get 1:1 IP addresses you'd need a replication controller for each site. Replication controllers are for managing identically configured resources.</p>
<p>However, you could always do <a href="http://kubernetes.io/docs/user-guide/ingress/#simple-fanout" rel="nofollow noreferrer">host-based or path-based</a> routing to get to the right service / pod. That way you could just have a subdomain + mapping for each user.</p>
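<p>A sketch of what a host-based mapping could look like, assuming one replication controller and service per notebook/user (all names are illustrative):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zeppelin-ingress
spec:
  rules:
  - host: user1.zeppelin.example.com
    http:
      paths:
      - backend:
          serviceName: zeppelin-user1   # service selecting only user1's pod
          servicePort: 8080
  - host: user2.zeppelin.example.com
    http:
      paths:
      - backend:
          serviceName: zeppelin-user2
          servicePort: 8080
</code></pre>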
|
<p>Our application is based on an API first architecture and is currently based on a single domain / service:</p>
<p><code>api.todos.com</code></p>
<p>Consumers of the API are :</p>
<ul>
<li>Our web-frontend</li>
<li>Our mobile-apps</li>
<li>Other business / public</li>
</ul>
<p>We will be building new <em>micro-services</em> written in different languages for the same application. For example we might develop API services for:</p>
<ul>
<li>Statistics</li>
<li>Blog / Content</li>
<li>RSS Feed</li>
<li>Search</li>
</ul>
<p>My question is around dealing with domains. Would it be best to split each service into a different subdomain e.g.</p>
<ul>
<li><code>api.todos.com</code></li>
<li><code>stats.todos.com</code></li>
<li><code>content.todos.com</code></li>
<li><code>rss.todos.com</code></li>
<li><code>search.todos.com</code></li>
</ul>
<p>Or is it better to have a single unified API domain where we do HTTP (layer 7) routing to reach our endpoints. e.g.</p>
<ul>
<li><code>api.todos.com/todos</code></li>
<li><code>api.todos.com/stats</code></li>
<li><code>api.todos.com/content</code></li>
<li><code>api.todos.com/rss</code></li>
<li><code>api.todos.com/search</code></li>
</ul>
<p>Not sure which is preferable for a public API? It would be easier to have multiple sub-domains and not have to deal with an intermediate routing layer / proxy.</p>
| <p>As a system architect I think <code>it is better to have a single unified API domain where we do HTTP (layer 7) routing to reach our endpoints</code>. You can make your system more flexible without requiring any changes from your clients. For example, suppose you have a microservice with the routes:</p>
<ul>
<li>api.todos.com/route1</li>
<li>api.todos.com/route2</li>
</ul>
<p>In the future you can split the microservice along these routes.</p>
<p>But mostly, it depends on which API gateway you use. The API gateway is the single entry point into your system that proxies requests to the correct microservice. It can also handle authentication and caching. You can read more about this microservices pattern <a href="http://microservices.io/patterns/apigateway.html" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have an admin.conf file containing info about a cluster, so that the following command works fine:</p>
<pre><code>kubectl --kubeconfig ./admin.conf get nodes
</code></pre>
<p>How can I <code>config</code> kubectl to use the cluster, user and authentication from this file as default in one command? I only see separate set-cluster, set-credentials, set-context, use-context etc. I want to get the same output when I simply run:</p>
<pre><code>kubectl get nodes
</code></pre>
| <p>Here is the official documentation for how to configure kubectl:</p>
<p><a href="http://kubernetes.io/docs/user-guide/kubeconfig-file/" rel="noreferrer">http://kubernetes.io/docs/user-guide/kubeconfig-file/</a></p>
<p>You have a few options; specifically for this question, you can just copy your <code>admin.conf</code> to <code>~/.kube/config</code>.</p>
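<p>For example (paths are the defaults; adjust as needed):</p>
<pre><code># option 1: make it the default config
mkdir -p ~/.kube && cp ./admin.conf ~/.kube/config

# option 2: point kubectl at the file for the current shell only
export KUBECONFIG=$PWD/admin.conf
kubectl get nodes
</code></pre>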
|
<p>I have set up a kubernetes cluster running on AWS. What I want to do now is to control the cluster from remote machines, for example, my macbook pro.</p>
<p>I've learned that Kubernetes has RESTful apis, and Kubectl can serve as a proxy.
By running :
<code>kubectl proxy --port=8001 &</code>
I can access the RESTful api with curl, for example:</p>
<pre><code>curl http://localhost:8001/api
</code></pre>
<p>Then I found that I can only curl the localhost. If I curl from a remote machine with the following command:</p>
<pre><code>curl http://dns-to-the-k8-machine:8080/api
</code></pre>
<p>I will get a "Connection refused." I wonder what is happening here? And is there a way to easily access the apis remotely?</p>
<p>Thanks in advance.</p>
| <p>Depending on how you provisioned your cluster, the API server may be listening over a different port. Have a look at your <code>kubeconfig</code> file (<code>~/.kube/config</code>), in there should be a section which has the server you are connecting to.</p>
<p>Also, your cluster may be using certs or some other type of authentication which you'll need to pass as well. Those will be outlined in the same kubeconfig file. </p>
<p>When using <code>kubectl proxy</code>, kubectl is handling the pieces mentioned above automatically, and proxying back to your laptop over localhost. </p>
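<p>If you do want to hit the API remotely without the proxy, you can pass the credentials from the kubeconfig yourself. A sketch assuming certificate-based authentication and the default secure port (take the real file names, host, and port from <code>~/.kube/config</code>):</p>
<pre><code>curl --cacert ca.pem --cert admin.pem --key admin-key.pem \
     https://API_SERVER_HOST:6443/api
</code></pre>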
|
<p>I have a simple ingress resource and two services: ess-index and ess-query. Services has been exposed with type <code>NodePort</code> with <code>--session-afinity=None</code>. Ingress resource has the following structure:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ess-ingress
spec:
backend:
serviceName: ess-query
servicePort: 2280
rules:
- http:
paths:
- path: /api/index
backend:
serviceName: ess-index
servicePort: 2280
</code></pre>
<p>The created services will use the iptables proxy mode. When I expose these services as a <code>NodePort</code>, the Kubernetes master allocates a port from a flag-configured range, and each node proxies that port to the ess-index or ess-query service respectively.
So, when I POST the ingress with
<code>kubectl create -f ingress.yaml</code> it causes the following behaviour: a GLBC controller is automatically created that manages the following GCE resource graph (Global Forwarding Rule -> TargetHttpProxy -> Url Map -> Backend Service -> Instance Group). According to the documentation it should appear as a pod, but I can't see it in the output of <code>kubectl get pods --namespace=kube-system</code>. Here's the <a href="https://i.stack.imgur.com/W4Y9K.png" rel="nofollow noreferrer">sample output</a>. My question is: what is the default load balancing algorithm for this load balancer? What happens when traffic is routed to the appropriate backend? Is my understanding correct that the default algorithm is not round robin and that, according to the <code>Service</code> docs, traffic is distributed randomly (maybe based on some hash of source/destination IP, etc.)? This is important because in my case all traffic comes from a small number of machines with fixed IPs, so I can see a non-uniform traffic distribution across my backend instances. If so, what is the proper way to get round-robin behaviour? As far as I understand, I can choose from two variants:</p>
<ol>
<li>Custom ingress controller. Pros: it can automatically detect pod restarts, etc.; cons: it can't support advanced L7 features that I may need in the future (like session persistence).</li>
<li>Delete the ingress and use a build-it-yourself solution like the one mentioned here: <a href="https://www.nginx.com/blog/load-balancing-kubernetes-services-nginx-plus/" rel="nofollow noreferrer">https://www.nginx.com/blog/load-balancing-kubernetes-services-nginx-plus/</a>.
Pros: fully customisable; cons: you have to take care of pod restarts, etc. yourself.</li>
</ol>
| <p>Getting kube-proxy and the cloud load balancer's algorithms to cooperate toward a common goal is still a work in progress. Right now traffic ends up being sprayed across backends; over time you get a roughly equal distribution, but it will not be strictly round-robin.</p>
<p>If you really want fine-grained control over the algorithm, you can deploy the <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow noreferrer">nginx ingress controller</a> and expose it as a Service of Type=LoadBalancer (or even stick a GCE L7 in front of it). This gives you Ingress semantics, but allows an escape hatch for areas where cloud providers aren't fully integrated with Kube just yet, like algorithm control. The escape hatch is exposed as <a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/configuration.md#annotations" rel="nofollow noreferrer">annotations</a> or a full <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/custom-configuration" rel="nofollow noreferrer">config map</a> for the template. </p>
|
<p>I am working with Kubernetes 1.4.3 and my nodes look like the following:</p>
<blockquote>
<pre><code>ip-10-0-0-105.eu-central-1.compute.internal Ready 1d
ip-10-0-0-50.eu-central-1.compute.internal Ready,SchedulingDisabled 1d
ip-10-0-1-126.eu-central-1.compute.internal Ready 1d
</code></pre>
</blockquote>
<p>Even though the master node is set to <code>SchedulingDisabled</code>, <code>DaemonSets</code> are still being scheduled on it.</p>
<p>Firstly, why? This did not happen prior to K8s 1.4. And if this is new behaviour, how do I disable it, or perhaps use pod affinity to exclude the master node from running DaemonSet pods?</p>
<p>Thanks.</p>
| <p>This was answered in <a href="https://github.com/kubernetes/kubernetes/issues/29108#issuecomment-233432397" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/29108#issuecomment-233432397</a></p>
<p>Basically, this is working as intended. DaemonSet pods will get scheduled on unschedulable nodes. In the future (not v1.4), this behavior will be selectable at the pod level (e.g., see <a href="https://github.com/kubernetes/kubernetes/issues/29178" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/29178</a>). For now, you can choose not to register your master node to avoid this problem.</p>
|
<p>Sometimes a pod takes some time to "warm up" (for example, to load data into a cache). During that time it should not be exposed.</p>
<p>How can I prevent a pod from being added to a kube-service until its initialization is complete?</p>
| <p>You should use health checks. More specifically in Kubernetes, you need a <code>ReadinessProbe</code></p>
<blockquote>
<p>ReadinessProbe: indicates whether the container is ready to service requests. If the ReadinessProbe fails, the endpoints controller will remove the pod’s IP address from the endpoints of all services that match the pod. The default state of Readiness before the initial delay is Failure. The state of Readiness for a container when no probe is provided is assumed to be Success.</p>
</blockquote>
<p>Also, difference from <code>LivenessProbe</code>:</p>
<blockquote>
<p>If you’d like to start sending traffic to a pod only when a probe succeeds, specify a ReadinessProbe. In this case, the ReadinessProbe may be the same as the LivenessProbe, but the existence of the ReadinessProbe in the spec means that the pod will start without receiving any traffic and only start receiving traffic once the probe starts succeeding.</p>
</blockquote>
<p><a href="http://kubernetes.io/docs/user-guide/pod-states/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/pod-states/</a></p>
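<p>A minimal sketch of a pod with a readiness probe (the path, port, and timings are assumptions -- tune them to your warm-up time):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: warmup-example
spec:
  containers:
  - name: app
    image: example/app:latest
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz        # should only return 200 once the cache is loaded
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
</code></pre>
<p>Until the probe succeeds, the pod's IP is simply left out of the service endpoints, so no traffic reaches it.</p>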
|
<p>my containers are killed on a new installation of Fedora 24 atomic:</p>
<pre><code>#docker run -ti nginx /bin/bash
docker: Error response from daemon: Cannot start container f24104b29f7f1f1614024414e8346e1a98c722c027f4122e6c70f7ace0cc5353: [9] System error: exit status 1.
</code></pre>
<p>After some debugging sessions, I think the process gets killed right after trying to assign an address on the network bridge (see the docker daemon logs in the links below), but I have no idea why or how to debug further.</p>
<p>Some additional info is reported here:</p>
<ul>
<li>SELinux is disabled:
<ul>
<li>bash-4.3# getenforce -> Permissive</li>
</ul></li>
<li>docker <a href="https://gist.github.com/mrceresa/f7cb3bc42958f5446b7baf2aa0246df5#file-gistfile1-txt" rel="nofollow noreferrer">info</a> </li>
<li>docker daemon log in <a href="https://gist.github.com/mrceresa/895efaefe14c89298ee75a56ad8bb6e7#file-docker-d" rel="nofollow noreferrer">debug</a> mode. At line 9 it receives the KILL signal.</li>
</ul>
<p>Any help would really be appreciated!</p>
<p>Best,</p>
<p>Mario</p>
| <p>I've just found the problem:
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1320601" rel="nofollow noreferrer">https://bugzilla.redhat.com/show_bug.cgi?id=1320601</a></p>
<p>Adding the systemd cgroup driver flag to the daemon's launch options solved the problem:</p>
<pre><code>--exec-opt native.cgroupdriver=systemd
</code></pre>
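<p>For reference, on Fedora the daemon options usually come from <code>/etc/sysconfig/docker</code>; this is only a sketch, since the exact file and the options already present depend on the Docker package/version:</p>
<pre><code># /etc/sysconfig/docker -- append to the existing OPTIONS line
OPTIONS='--exec-opt native.cgroupdriver=systemd'
</code></pre>
<p>Then restart the daemon with <code>sudo systemctl restart docker</code>.</p>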
<p>Thanks, Federkun, for help!</p>
<p>Best,</p>
<p>Mario</p>
|
<p>I have a deployment configuration as follows:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
--- REMOVED FOR BREVITY ---
volumes:
- gcePersistentDisk: {fsType: pd-ssd, pdName: devtasker-disk}
name: devtasker-disk
- gcePersistentDisk: {fsType: pd-ssd, pdName: devtasker-pg}
name: devtasker-pg
</code></pre>
<p>This works fine, however it requires the persistent volumes to be created manually and then the deployment can take place.</p>
<p>I saw that in Kubernetes 1.4 they have released "Dynamic Provisioning & Storage Classes". </p>
<p>I have added a storage class as follows:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
</code></pre>
<p>I now want to add a PVC to my deployment configuration file mentioned above. The standard PVC for the above storage class goes like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {volume.beta.kubernetes.io/storage-class: ssd-storage}
name: claim1
spec:
accessModes: [ReadWriteOnce]
resources:
requests: {storage: 30Gi}
</code></pre>
<p>I'm struggling to get the above PVC into my deployment configuration mentioned in the first code block above.</p>
<p>I tried this:</p>
<pre><code> volumes:
- gcePersistentDisk: {fsType: pd-ssd, pdName: devtasker-disk}
name: devtasker-disk
- gcePersistentDisk: {fsType: pd-ssd, pdName: devtasker-pg}
name: devtasker-pg
- persistentVolumeClaim: {claimName: ssd-storage, annotations: {volume.beta.kubernetes.io/storage-class: ssd-storage}}
name: ssd-storage
</code></pre>
<p>.. but I haven't had any luck with many different combinations. I get the following:</p>
<pre><code>error validating "kubernetes/deployment.yml": error validating data: found invalid field annotations for v1.PersistentVolumeClaimVolumeSource; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>Could anyone please point me in the right direction here?</p>
| <p>The storage class tells Kubernetes how to create the PV. The PVC then requests an actual PV from the underlying infrastructure. </p>
<p>Your deployment should only know about the PVC, so using your example you would end up with the following and remove the <code>gcePersistentDisk</code> entries: </p>
<pre><code>volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: claim1
  - name: storage2
    persistentVolumeClaim:
      claimName: claim2
</code></pre>
|
<p>I'm trying to install kubernetes with kubelet 1.4.5 on CoreOS beta (1192.2.0).</p>
<p>I'm using a slightly modified version of the controller and worker install scripts from <a href="https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic" rel="nofollow noreferrer">https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic</a></p>
<p>so in general I created the licenses on Gentoo Linux using the following bash script:</p>
<pre><code>#!/bin/bash
export MASTER_HOST=coreos-2.tux-in.com
export K8S_SERVICE_IP=10.3.0.1
export WORKER_IP=10.79.218.3
export WORKER_FQDN=coreos-3.tux-in.com
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
openssl genrsa -out ${WORKER_FQDN}-worker-key.pem 2048
openssl req -new -key ${WORKER_FQDN}-worker-key.pem -out ${WORKER_FQDN}-worker.csr -subj "/CN=${WORKER_FQDN}" -config worker-openssl.cnf
openssl x509 -req -in ${WORKER_FQDN}-worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out ${WORKER_FQDN}-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
echo done
</code></pre>
<p>and this is <code>openssl.cnf</code></p>
<pre><code>[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = coreos-2.tux-in.com
DNS.2 = coreos-3.tux-in.com
IP.1 = 10.3.0.1
IP.2 = 10.79.218.2
IP.3 = 10.79.218.3
</code></pre>
<p>and this is my <code>worker-openssl.cnf</code></p>
<pre><code>[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 10.79.218.3
DNS.1 = coreos-3.tux-in.com
</code></pre>
<p>My controller machine is <code>coreos-2.tux-in.com</code> which resolves to the lan ip <code>10.79.218.2</code></p>
<p>my worker machine is <code>coreos-3.tux-in.com</code> which resolves to lan ip <code>10.79.218.3</code></p>
<p>it created the licenses just fine. but when I use them and install the controller script on the main machine, i see that when I run <code>journalctl -xef -u kubelet</code> and I noticed the following messages:</p>
<pre><code>Nov 08 21:24:06 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:06.805868 2018 event.go:208] Unable to write event: 'x509: certificate signed by unknown authority' (may retry after sleeping)
Nov 08 21:24:06 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:06.950827 2018 reflector.go:203] pkg/kubelet/kubelet.go:384: Failed to list *api.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: x509: certificate signed by unknown authority
Nov 08 21:24:07 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:07.461042 2018 reflector.go:203] pkg/kubelet/config/apiserver.go:43: Failed to list *api.Pod: Get https://coreos-2.tux-in.com:443/api/v1/pods?fieldSelector=spec.nodeName%3D10.79.218.2&resourceVersion=0: x509: certificate signed by unknown authority
Nov 08 21:24:07 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:07.461340 2018 reflector.go:203] pkg/kubelet/kubelet.go:403: Failed to list *api.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D10.79.218.2&resourceVersion=0: x509: certificate signed by unknown authority
Nov 08 21:24:08 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:08.024366 2018 reflector.go:203] pkg/kubelet/kubelet.go:384: Failed to list *api.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: x509: certificate signed by unknown authority
Nov 08 21:24:08 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:08.171170 2018 eviction_manager.go:162] eviction manager: unexpected err: failed GetNode: node '10.79.218.2' not found
Nov 08 21:24:08 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:08.543619 2018 reflector.go:203] pkg/kubelet/kubelet.go:403: Failed to list *api.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D10.79.218.2&resourceVersion=0: x509: certificate signed by unknown authority
Nov 08 21:24:08 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:08.543926 2018 reflector.go:203] pkg/kubelet/config/apiserver.go:43: Failed to list *api.Pod: Get https://coreos-2.tux-in.com:443/api/v1/pods?fieldSelector=spec.nodeName%3D10.79.218.2&resourceVersion=0: x509: certificate signed by unknown authority
</code></pre>
| <p>The kubelet <a href="http://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">documentation</a> says that the <code>--tls-cert-file</code> flag needs the CA to be concatenated after the certificate. In your case it is the <code>apiserver.pem</code>:</p>
<blockquote>
<p><strong>--tls-cert-file</strong> File containing x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir.</p>
</blockquote>
<p>If I read your certificate generation correctly, the <code>apiserver.pem</code> doesn't contain the root CA.</p>
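<p>A sketch of fixing that with the files produced by the generation script above, matching the "CA cert concatenated after server cert" wording, is to append the CA and point the flag at the combined file:</p>
<pre><code>cat apiserver.pem ca.pem > apiserver-bundle.pem   # server cert first, CA after it
</code></pre>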
|
<p>Kubernetes version: 1.4.5</p>
<p>I have a very simple service with <code>type: NodePort</code>. It only returns some text on <code>/info</code>. I am using the default GKE ingress controller (the L7 Google load balancer) with TLS. If I use the following ingress everything works as expected:</p>
<h2>Working ingress</h2>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: global-ingress
namespace: global
annotations:
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- secretName: tls-secret
backend:
serviceName: gate-front
servicePort: 80
</code></pre>
<hr>
<pre><code>curl -k https://130.211.39.140/info
POD: gate-front-1871107570-ue07p
IP: 10.0.2.26
REQ: /info
</code></pre>
<hr>
<pre><code>$ kubectl describe ing
Name: global-ingress
Namespace: global
Address: 130.211.39.140
Default backend: gate-front:80 (10.0.2.25:8080,10.0.2.26:8080)
TLS:
tls-secret terminates
Rules:
Host Path Backends
---- ---- --------
* * gate-front:80 (10.0.2.25:8080,10.0.2.26:8080)
Annotations:
backends: {"k8s-be-31966--f3f0bf21d171a625":"HEALTHY"}
https-forwarding-rule: k8s-fws-global-global-ingress--f3f0bf21d171a625
https-target-proxy: k8s-tps-global-global-ingress--f3f0bf21d171a625
url-map: k8s-um-global-global-ingress--f3f0bf21d171a625
</code></pre>
<h2>Broken ingress</h2>
<p>However, if I introduce a rule and leave out the default backend, all requests return <code>default backend - 404</code>. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: global-ingress
namespace: global
annotations:
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- secretName: tls-secret
rules:
- http:
paths:
- path: /gate
backend:
serviceName: gate-front
servicePort: 80
</code></pre>
<hr>
<pre><code>curl -k https://130.211.33.150/gate/info
default backend - 404
</code></pre>
<hr>
<pre><code>$ kubectl describe ing
Name: global-ingress
Namespace: global
Address: 130.211.33.150
Default backend: default-http-backend:80 (10.0.2.3:8080)
TLS:
tls-secret terminates
Rules:
Host Path Backends
---- ---- --------
*
/gate gate-front:80 (<none>)
Annotations:
https-forwarding-rule: k8s-fws-global-global2-ingress--f3f0bf21d171a625
https-target-proxy: k8s-tps-global-global2-ingress--f3f0bf21d171a625
url-map: k8s-um-global-global2-ingress--f3f0bf21d171a625
backends: {"k8s-be-31966--f3f0bf21d171a625":"HEALTHY","k8s-be-32552--f3f0bf21d171a625":"HEALTHY"}
</code></pre>
<p>If I add hosts and use <code>curl -k --resolve ...</code> I get the same behaviour. </p>
<p>I went through the following documentation and examples:</p>
<ul>
<li><a href="http://kubernetes.io/docs/user-guide/ingress/" rel="noreferrer">http://kubernetes.io/docs/user-guide/ingress/</a></li>
<li><a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/README.md" rel="noreferrer">https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/README.md</a></li>
</ul>
<p>Can anyone shed some light on this? </p>
| <p><a href="https://github.com/kubernetes/ingress-gce/blob/master/README.md#paths" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/blob/master/README.md#paths</a></p>
<p>Can you look over this part and see whether it explains the issue: </p>
<blockquote>
<p>Note what just happened, the endpoint exposes /hostname, and the loadbalancer forwarded the entire matching url to the endpoint. This means if you had '/foo' in the Ingress and tried accessing /hostname, your endpoint would've received /foo/hostname and not known how to route it. Now update the Ingress to access static content via the /fs endpoint:</p>
</blockquote>
|
<p>Does anyone have experience running scheduled jobs? According to the <a href="http://kubernetes.io/docs/user-guide/scheduled-jobs/" rel="noreferrer">guide</a>, ScheduledJobs are available since 1.4 with the <strong>batch/v2alpha1</strong> runtime config enabled.</p>
<p>So I checked with the <code>kubectl api-versions</code> command:</p>
<pre><code>autoscaling/v1
batch/v1
batch/v2alpha1
extensions/v1beta1
storage.k8s.io/v1beta1
v1
</code></pre>
<p>But when I tried the sample template below with the command <code>kubectl apply -f job.yaml</code></p>
<pre><code>apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
name: hello
spec:
schedule: 0/1 * * * ?
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
<p>I got this error:</p>
<pre><code>error validating "job.yaml": error validating data: couldn't find type: v2alpha1.ScheduledJob; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>Is it possible that the feature is still not implemented? Or did I make some error during template creation?</p>
<p>Thank you in advance.</p>
| <p>Okay, I think I resolved this issue. ScheduledJobs are currently in alpha state and Google Container Engine supports this feature only for clusters with the additional alpha APIs enabled. I was able to create such a cluster with the command:</p>
<p><code>gcloud alpha container clusters create my-cluster --enable-kubernetes-alpha</code></p>
<p>As a result I now have a 30-day limited cluster with full feature support. I can see scheduled jobs with <code>kubectl get scheduledjobs</code> as well as create new ones with templates.</p>
<p>You can find more info about alpha clusters <a href="https://cloud.google.com/sdk/gcloud/reference/alpha/container/clusters/create" rel="noreferrer">here</a>.</p>
|
<p>The best source for restart policies in Kubernetes I have found is this:</p>
<p><a href="http://kubernetes.io/docs/user-guide/pods/multi-container/#restartpolicy" rel="noreferrer">http://kubernetes.io/docs/user-guide/pods/multi-container/#restartpolicy</a></p>
<p>But it only lists the possible <code>restartPolicy</code> values and does not explain them.</p>
<p>What is the difference between <code>Always</code> and <code>OnFailure</code>? Mustn't the thing fail before it can be restarted?</p>
| <p><strong>Always</strong> means that the container will be restarted even if it exited with a zero exit code (i.e. successfully). This is useful when you don't care why the container exited, you just want to make sure that it is always running (e.g. a web server). This is the default.</p>
<p><strong>OnFailure</strong> means that the container will only be restarted if it exited with a non-zero exit code (i.e. something went wrong). This is useful when you want accomplish a certain task with the pod, and ensure that it completes successfully - if it doesn't it will be restarted until it does.</p>
<p><strong>Never</strong> means that the container will not be restarted regardless of why it exited. </p>
<p>These different restart policies basically map to the different controller types as you can see from <code>kubectl run --help</code>:</p>
<blockquote>
<p>--restart="Always": The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to 'Always' a deployment is created for this pod, if set to 'OnFailure', a job is created for this pod, if set to 'Never', a regular pod is created. For the latter two --replicas must be 1. Default 'Always'</p>
</blockquote>
<p>And the <a href="http://kubernetes.io/docs/user-guide/pod-states/#restartpolicy" rel="noreferrer">pod user-guide</a>:</p>
<blockquote>
<p>ReplicationController is only appropriate for pods with RestartPolicy = Always. Job is only appropriate for pods with RestartPolicy equal to OnFailure or Never.</p>
</blockquote>
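<p>For illustration, the policy is set at the pod spec level; this is a sketch with placeholder names:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: one-off-task
spec:
  restartPolicy: OnFailure      # rerun the container until it exits with code 0
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "echo doing some work"]
</code></pre>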
|
<p>I am putting together a proof of concept to help identify gotchas using Spring Boot/Netflix OSS and Kubernetes together. This is also to prove out related technologies such as Prometheus and Grafana.</p>
<p>I have a Eureka service set up which starts with no trouble within my Kubernetes cluster. It is named discovery and was given the name "discovery-1551420162-iyz2c" when added to Kubernetes using</p>
<pre><code></code></pre>
<p>For my config server, I am trying to use Eureka based on a logical URL so in my bootstrap.yml I have</p>
<pre><code>server:
port: 8889
eureka:
instance:
hostname: configserver
client:
registerWithEureka: true
fetchRegistry: true
serviceUrl:
defaultZone: http://discovery:8761/eureka/
spring:
cloud:
config:
server:
git:
uri: https://github.com/xyz/microservice-config
</code></pre>
<p>and I am starting this using</p>
<pre><code>kubectl run configserver --image=xyz/config-microservice --replicas=1 --port=8889
</code></pre>
<p>This service ends up running as configserver-3481062421-tmv4d. I then see exceptions in the config server logs as it tries and fails to locate the Eureka instance.</p>
<p>I have the same setup for this using docker-compose locally with links and it starts the various containers with no trouble.</p>
<pre><code>discovery:
image: xyz/discovery-microservice
ports:
- "8761:8761"
configserver:
image: xyz/config-microservice
ports:
- "8888:8888"
links:
- discovery
</code></pre>
<p>How can I setup something like eureka.client.serviceUri so my microservices can locate their peers without knowing fixed IP addresses within the K8 cluster?</p>
| <blockquote>
<p>How can I setup something like eureka.client.serviceUri?</p>
</blockquote>
<p>You have to have a Kubernetes <a href="http://kubernetes.io/docs/user-guide/services/" rel="noreferrer">service</a> on top of the Eureka pods/deployments, which will then provide you a referable IP address and port number. Then use that referable address to look up the Eureka service, instead of "8761".</p>
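<p>A sketch of such a service for the discovery deployment from the question (the selector label is an assumption -- it must match the labels actually set on the Eureka pods):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: discovery
spec:
  selector:
    run: discovery        # assumed label on the Eureka pod(s)
  ports:
  - port: 8761
    targetPort: 8761
</code></pre>
<p>With a service like this in place, the <code>http://discovery:8761/eureka/</code> URL from the bootstrap.yml resolves inside the cluster (via KubeDNS) to a stable address.</p>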
<h1>To address further question about HA configuration of Eureka</h1>
<p>You shouldn't have more than one pod/replica of Eureka per k8s service (remember, pods are ephemeral, you need a referable IP address/domain name for eureka service registry). To achieve high availability (HA), spin up more k8s services with one pod in each.</p>
<ul>
<li>Eureka service 1 --> a single pod</li>
<li>Eureka Service 2 --> another single pod</li>
<li>..</li>
<li>..</li>
<li>Eureka Service n --> another single pod</li>
</ul>
<p>So now you have a referable IP/domain name (the IP of the k8s service) for each of your Eureka instances, and they can register with each other.</p>
<p>Feeling like it's overkill?
<em>If all your services are in the same Kubernetes namespace</em> you can achieve everything (well, almost everything, except client-side load balancing) that Eureka offers through a k8s Service + the KubeDNS add-on. Read this <a href="http://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/" rel="noreferrer">article</a> by Christian Posta.</p>
<h1>Edit</h1>
<p>Instead of Services with one pod each, you can make use of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSets</a> as <a href="https://stackoverflow.com/a/47490410/6785908">Stefan Ocke</a> pointed out.</p>
<blockquote>
<p>Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.</p>
</blockquote>
|
<p>I have a problem when connecting to a mysql instance with a go app using standard package.
This is my connection string/log</p>
<pre><code> [13 Nov 16 13:53 +0000] [INFO] connecting to MySQL.. root:awsomepass@tcp(a-mysql-0:3340)/db?charset=utf8&parseTime=True&loc=Local
2016/11/13 13:53:25 dial tcp 10.108.1.35:3340: getsockopt: connection refused
</code></pre>
<p>I tried </p>
<pre><code>GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
</code></pre>
<p>here is how I make connection, just basic, with string concatenation only</p>
<pre><code>db, err := sql.Open("mysql", "root:awsomepass@tcp(a-mysql-0:3340)/db?charset=utf8&parseTime=True&loc=Local")
if err != nil {
log.Fatal(err)
}
</code></pre>
<p>I can ping the service, connect to it with mysql-client from a different pod. </p>
<pre><code> # can connect without port for service
/ # mysql -u root -h a-mysql-0 -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 5.7.16 MySQL Community Server (GPL)
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> Ctrl-C -- exit!
Aborted
# can't' connect with port for service
/ # mysql -u root -h a-mysql-0:3340 -p
Enter password:
ERROR 2005 (HY000): Unknown MySQL server host 'a-mysql-0:3340' (-3)
</code></pre>
<p>and the mysql-service </p>
<pre><code> ➜ stg git:(develop) ✗ kubectl describe svc a-mysql-0
Name: a-mysql-0
Namespace: default
Labels: name=a-mysql-0
tier=database
type=mysql
Selector: name=a-mysql-0,tier=database
Type: ClusterIP
IP: None
Port: a-mysql-0 3340/TCP
Endpoints: 10.108.1.35:3340
Session Affinity: None
No events.
</code></pre>
<p>Is there anything I have missed, perhaps a permission issue?</p>
| <p>I got a response on the Kubernetes Slack, from mav: I was accessing the <code>mysql-service</code> on the wrong <code>container-port</code>. The default MySQL port is <code>3306</code>; I mistakenly thought I was using a custom container that exposes <code>3340</code>.</p>
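<p>In other words, if the service should listen on 3340 while the stock MySQL container listens on 3306, the service needs a <code>targetPort</code> mapping; a sketch of the relevant <code>ports</code> section:</p>
<pre><code>ports:
- name: a-mysql-0
  port: 3340        # port clients inside the cluster connect to
  targetPort: 3306  # port the MySQL container actually listens on
</code></pre>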
|
<p>I have a kubernetes cluster (hosted at the university, not in gcloud) and I'm trying to use Jenkins with the jenkinsci/kubernetes plugin to launch the slaves. However, it seems they cannot register with the master, no matter what I do. (k8s 1.2, jenkins 2.19.2, kub-plugin 0.9)</p>
<p>This is the configuration I use:</p>
<p><a href="https://i.stack.imgur.com/DZjRN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DZjRN.png" alt="This is the configuration I use"></a>
Now:</p>
<ul>
<li><p>If I set tty:true the container starts but is never able to connect to the master. The logs are unreadable and I cannot attach to the slave to inspect what is happening:</p>
<pre><code>$ kubectl logs jnpl-slave-ec16b9ae7bbd --namespace=jenkins
Error from server: Unrecognized input header
$ kubectl attach -ti jnpl-slave-ec16b9ae7bbd --namespace=jenkins
error: pod jnpl-slave-ec16b9ae7bbd is not running and cannot be attached to; current phase is Succeeded
</code></pre></li>
<li><p>If I set tty:false the container starts and correctly executes the entrypoint /usr/local/bin/jenkins-slave, but it seems that the secret and the slaveName command-line args are not passed, as the process dies asking for them:</p>
<pre><code>$ kubectl logs jnpl-slave-ecfd3a6cbaba --namespace=jenkins
Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior
two arguments required, but got []
...
</code></pre></li>
<li><p>If I manually set the parameters (the secret and the slave name) to fake values, it starts correctly, but then dies complaining that /home/jenkins is not writable:</p>
<pre><code>Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior
hudson.remoting.jnlp.Main createEngine
Setting up slave: http://10.254.151.87
hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Exception in thread "main" java.lang.RuntimeException: Root directory not writable
...
</code></pre></li>
<li><p>However, if I create a slave manually on the web page and set it up, it works and I can see the slave online:</p>
<pre><code>node$ sudo docker run -ti docker.io/jenkinsci/jnlp-slave:latest /bin/bash
pod$ java -jar /usr/share/jenkins/slave.jar -jnlpUrl http://10.254.151.87/computer/slave1/slave-agent.jnlp
...
INFO: Connected
</code></pre></li>
</ul>
<p>So... I don't know what to test further. I would really appreciate it if someone could give me a hint!</p>
<p>With best regards,</p>
<p>Mario</p>
| <p>The arguments field should be <code>${computer.jnlpmac} ${computer.name}</code> and should be set by default when adding new containers to the Pod definition</p>
|
<p>While getting familiar with Kubernetes I see tons of tools that should help me install Kubernetes anywhere, but I don't understand exactly what they do under the hood, and as a result I don't understand how to troubleshoot issues.</p>
<p>Can someone provide me a link to a tutorial on how to install Kubernetes without any tools? </p>
| <p>There are two good guides on setting up Kubernetes manually:</p>
<ul>
<li>Kelsey Hightower's <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="noreferrer">Kubernetes the hard way</a></li>
<li>Kubernetes guide on <a href="http://kubernetes.io/docs/getting-started-guides/scratch/" rel="noreferrer">getting started from scratch</a></li>
</ul>
<p>Kelsey's guide assumes you are using GCP or AWS as the infrastructure, while the Kubernetes guide is a bit more agnostic.</p>
<p>I wouldn't recommend running either of these in production unless you really know what you're doing. However, they are great for learning what is going on under the hood. Even if you just read the guides and don't use them to setup any infrastructure you should gain a better understanding of the pieces that make up a Kubernetes cluster. You can then use one of the helpful setup tools to create your cluster, but now you will understand what it is actually doing and can debug when things go wrong.</p>
|
<p>I'm using Node.js and want to upload files to a bucket of mine. I've set up the secret:</p>
<pre><code>NAME TYPE DATA AGE
cloudsql-oauth-credentials Opaque 1 5d
default-token-dv9kj kubernetes.io/service-account-token 3 5d
</code></pre>
<p>The service account does have access to my Google Cloud Storage API, as I've already set that up and tested it locally (on my own computer). I'm unsure how I can reference the location of the <strong>service account</strong> JSON file.</p>
<p>Here is my volumes mount:</p>
<pre><code>"volumes": [{
"name": "cloudsql-oauth-credentials",
"secret": {
"secretName": "cloudsql-oauth-credentials"
}
}
</code></pre>
<p>Here is the code where I'm setting up the google-cloud storage variable:</p>
<pre><code>var gcs = require('@google-cloud/storage')({
projectId: 'projectID-38838',
keyFilename: process.env.NODE_ENV == 'production'
? JSON.parse(process.env.CREDENTIALS_JSON) // Parsing js doesn't work
: '/Users/james/auth/projectID-38838.json' // This works locally
});
var bucket = gcs.bucket('bucket-name');
</code></pre>
<p>Now if I want to use this inside my docker container on kubernetes, I'll have to reference the json file location...But I don't know where it is?!</p>
<p>I've tried setting the Credentials file as an environment variable, but I cannot pass a js object to the <strong>keyFilename</strong> option. I have to pass a file location. I set the env variable up like so:</p>
<pre><code>{
"name": "CREDENTIALS_JSON",
"valueFrom": {
"secretKeyRef": {
"name": "cloudsql-oauth-credentials",
"key": "credentials.json"
}
}
},
</code></pre>
<p>How can I reference the location of the service_account json file inside my kubernetes pod?!</p>
| <p>Look <a href="http://kubernetes.io/docs/user-guide/secrets/#using-secrets-as-files-from-a-pod" rel="noreferrer">here</a> in the section <strong>Using Secrets as Files from a Pod</strong>. </p>
<p>Basically, you need to specify two things when mounting a secret volume. The bit that you have + some extra info. <em>There might be some redundancies with the key but this is what I do and it works.</em></p>
<p>When creating a secret, create it with a key:<br>
<code>kubectl create secret generic cloudsql-oauth-credentials --from-file=creds=path/to/json</code></p>
<p>Then</p>
<pre><code>"volumes": [{
    "name": "cloudsql-oauth-credentials",
    "secret": {
        "secretName": "cloudsql-oauth-credentials",
        "items": [{
            "key": "creds",
            "path": "cloudsql-oauth-credentials.json"
        }]
    }
}]
</code></pre>
<p>But then also specify where it goes in the container definition (in Pod, Deployment, Replication Controller - whatever you use):</p>
<pre><code>"spec": {
"containers": [{
"name": "mypod",
"image": "myimage",
"volumeMounts": [{
"name": "cloudsql-oauth-credentials",
"mountPath": "/etc/credentials",
"readOnly": true
}]
}],
</code></pre>
<p>The file will be mapped to <code>/etc/credentials/cloudsql-oauth-credentials.json</code>. </p>
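<p>In the Node.js snippet from your question, you could then (for example) point <code>keyFilename</code> at that mounted path when running in production, e.g. <code>keyFilename: '/etc/credentials/cloudsql-oauth-credentials.json'</code>, instead of trying to parse the JSON out of an environment variable.</p>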
|
<p>I have a Kubernetes v1.4 cluster running in AWS with auto-scaling nodes.
I also have a Mongo Replica Set cluster with SSL-only connections (FQDN common-name) and public DNS entries:</p>
<ul>
<li>node1.mongo.example.com -> 1.1.1.1</li>
<li>node2.mongo.example.com -> 1.1.1.2</li>
<li>node3.mongo.example.com -> 1.1.1.3</li>
</ul>
<p>The Kubernetes nodes are part of a security group that allows access to the mongo cluster, but only via their private IPs.</p>
<p>Is there a way of creating A records in the Kubernetes DNS with the private IPs when the public FQDN is queried?</p>
<p>The first thing I tried was a script & ConfigMap combination to update /etc/hosts on startup (ref. <a href="https://stackoverflow.com/questions/37166822/is-it-a-way-to-add-arbitrary-record-to-kube-dns">Is it a way to add arbitrary record to kube-dns?</a>), but that is problematic as other Kubernetes services may also update the hosts file at different times.</p>
<p>I also tried a Services & Endpoints configuration:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: node1.mongo.example.com
spec:
ports:
- protocol: TCP
port: 27017
targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
name: node1.mongo.example.com
subsets:
- addresses:
- ip: 192.168.0.1
ports:
- port: 27017
</code></pre>
<p>But this fails as the Service name cannot be a FQDN...</p>
| <p>While not so obvious at first, the solution is quite simple. The kube-dns image in recent versions includes <code>dnsmasq</code> as one of its components. If you look into its man page, you will see some useful options. Based on those, you can choose a path similar to this:</p>
<p>Create a ConfigMap to store your dns mappings:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
myhosts: |
10.0.0.1 foo.bar.baz
</code></pre>
<p>Having that ConfigMap applied in your cluster, you can now make some changes to the <code>kube-dns-vXX</code> deployment you use in your cluster.</p>
<p>Define a volume that will expose your CM to <code>dnsmasq</code>:</p>
<pre><code> volumes:
- name: hosts
configMap:
name: kube-dns
</code></pre>
<p>and mount it in the <code>dnsmasq</code> container of your <code>kube-dns</code> deployment/rc template</p>
<pre><code> volumeMounts:
- name: hosts
mountPath: /etc/hosts.d
</code></pre>
<p>and finally, add a small config flag to your dnsmasq arguments:</p>
<pre><code> args:
- --hostsdir=/etc/hosts.d
</code></pre>
<p>Now, as you apply these changes to the <code>kube-dns-vXX</code> deployment in your cluster, it will mount the configmap and use the files mounted in /etc/hosts.d/ (with typical hosts file format) as a source of knowledge for <code>dnsmasq</code>. Hence if you now query for foo.bar.baz in your pods, it will resolve to the respective IP. These entries take precedence over public DNS, so it should perfectly fit your case.</p>
<p>Mind that <code>dnsmasq</code> does not watch for changes to the ConfigMap, so it has to be restarted manually whenever the ConfigMap changes.</p>
<p>Tested and validated this on a live cluster just a few minutes ago.</p>
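<p>To double-check the mapping from inside the cluster, you can (for example) resolve the name from any running pod whose image ships <code>nslookup</code>:</p>
<pre><code>kubectl exec -ti <some-pod> -- nslookup foo.bar.baz
</code></pre>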
|
<h3>Overview</h3>
<p><em>kube-dns</em> can't start (SetupNetworkError) after <em>kubeadm init</em> and network setup:</p>
<pre><code>Error syncing pod, skipping: failed to "SetupNetwork" for
"kube-dns-654381707-w4mpg_kube-system" with SetupNetworkError:
"Failed to setup network for pod
\"kube-dns-654381707-w4mpg_kube-system(8ffe3172-a739-11e6-871f-000c2912631c)\"
using network plugins \"cni\": open /run/flannel/subnet.env:
no such file or directory; Skipping pod"
</code></pre>
<h3>Kubernetes version</h3>
<pre><code>Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:42:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<h3>Environment</h3>
<p>VMWare Fusion for Mac</p>
<h3>OS</h3>
<pre><code>NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
</code></pre>
<h3>Kernel (e.g. uname -a)</h3>
<pre><code>Linux ubuntu-master 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<h3>What is the problem</h3>
<pre><code>kube-system kube-dns-654381707-w4mpg 0/3 ContainerCreating 0 2m
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 3m 1 {default-scheduler } Normal Scheduled Successfully assigned kube-dns-654381707-w4mpg to ubuntu-master
2m 1s 177 {kubelet ubuntu-master} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-w4mpg_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-w4mpg_kube-system(8ffe3172-a739-11e6-871f-000c2912631c)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
</code></pre>
<h3>What I expected to happen</h3>
<p>kube-dns Running</p>
<h3>How to reproduce it</h3>
<pre><code>root@ubuntu-master:~# kubeadm init
Running pre-flight checks
<master/tokens> generated token: "247a8e.b7c8c1a7685bf204"
<master/pki> generated Certificate Authority key and certificate:
Issuer: CN=kubernetes | Subject: CN=kubernetes | CA: true
Not before: 2016-11-10 11:40:21 +0000 UTC Not After: 2026-11-08 11:40:21 +0000 UTC
Public: /etc/kubernetes/pki/ca-pub.pem
Private: /etc/kubernetes/pki/ca-key.pem
Cert: /etc/kubernetes/pki/ca.pem
<master/pki> generated API Server key and certificate:
Issuer: CN=kubernetes | Subject: CN=kube-apiserver | CA: false
Not before: 2016-11-10 11:40:21 +0000 UTC Not After: 2017-11-10 11:40:21 +0000 UTC
Alternate Names: [172.20.10.4 10.96.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local]
Public: /etc/kubernetes/pki/apiserver-pub.pem
Private: /etc/kubernetes/pki/apiserver-key.pem
Cert: /etc/kubernetes/pki/apiserver.pem
<master/pki> generated Service Account Signing keys:
Public: /etc/kubernetes/pki/sa-pub.pem
Private: /etc/kubernetes/pki/sa-key.pem
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 14.053453 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 0.508561 seconds
<master/apiclient> attempting a test deployment
<master/apiclient> test deployment succeeded
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 1.503838 seconds
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns
Kubernetes master initialised successfully!
You can now join any number of machines by running the following on each node:
kubeadm join --token=247a8e.b7c8c1a7685bf204 172.20.10.4
root@ubuntu-master:~#
root@ubuntu-master:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dummy-2088944543-eo1ua 1/1 Running 0 47s
kube-system etcd-ubuntu-master 1/1 Running 3 51s
kube-system kube-apiserver-ubuntu-master 1/1 Running 0 49s
kube-system kube-controller-manager-ubuntu-master 1/1 Running 3 51s
kube-system kube-discovery-1150918428-qmu0b 1/1 Running 0 46s
kube-system kube-dns-654381707-mv47d 0/3 ContainerCreating 0 44s
kube-system kube-proxy-k0k9q 1/1 Running 0 44s
kube-system kube-scheduler-ubuntu-master 1/1 Running 3 51s
root@ubuntu-master:~#
root@ubuntu-master:~# kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
root@ubuntu-master:~#
root@ubuntu-master:~#
root@ubuntu-master:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dummy-2088944543-eo1ua 1/1 Running 0 47s
kube-system etcd-ubuntu-master 1/1 Running 3 51s
kube-system kube-apiserver-ubuntu-master 1/1 Running 0 49s
kube-system kube-controller-manager-ubuntu-master 1/1 Running 3 51s
kube-system kube-discovery-1150918428-qmu0b 1/1 Running 0 46s
kube-system kube-dns-654381707-mv47d 0/3 ContainerCreating 0 44s
kube-system kube-proxy-k0k9q 1/1 Running 0 44s
kube-system kube-scheduler-ubuntu-master 1/1 Running 3 51s
kube-system weave-net-ja736 2/2 Running 0 1h
</code></pre>
| <p>It looks like you have configured flannel before running <code>kubeadm init</code>. You can try to fix this by removing flannel (it may be sufficient to remove config file <code>rm -f /etc/cni/net.d/*flannel*</code>), but it's best to start fresh.</p>
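<p>A rough sketch of "starting fresh", assuming you want to keep weave as the pod network (and that your kubeadm version provides <code>kubeadm reset</code>; otherwise tear the master down manually):</p>
<pre><code># on the master
kubeadm reset                      # if available in your kubeadm version
rm -f /etc/cni/net.d/*flannel*     # remove leftover flannel CNI config
kubeadm init
kubectl apply -f https://git.io/weave-kube
</code></pre>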
|
<p><a href="http://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a> is an open-source system for automating deployment, scaling, and management of containerized applications.</p>
<p>In the official site of Kubernetes it says "Google runs billions of containers a week", my question is: if a container here means a containerized application, does that mean Google have billions of applications? It simply sounds ridiculous, what am I misunderstanding?</p>
| <p>Gmail by itself could account for millions of containers (just to throw a number out). Billions of containers a week does not mean billions of applications: a single application typically runs as many container instances, and there are also short-lived batch jobs, etc.</p>
|
<p>If I have 10 different services, each of which is independent of the others and runs from its own container, can I get kubernetes to run all of those services on, say, 1 host?</p>
<p>This is unclear in the kubernetes documentation. It states that you can force it to schedule containers from the same pod onto one host, using a "multi-container pod", but it doesn't seem to approach the subject of whether you can have multiple pods running on one host.</p>
| <p>In fact kubernetes will do exactly what you want by default. It is capable of running dozens if not hundreds of containers on a single host (depending on its specs).</p>
<p>If you want very advanced control over scheduling pods, there is an alpha feature for that, which introduces the concept of node/pod (anti-)affinities. But I would say it is a rather advanced k8s topic at the moment, so you are probably good with what is in stable/beta for most use cases.</p>
<p>Honorable mention: there is a nasty trick that allows you to control when pods can <strong>not</strong> be collocated on the same node. And that is when they both declare the same hostPort in their ports section. It can be useful in some cases, but be aware that it affects, e.g., how rolling deployments happen in some situations.</p>
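<p>For illustration, a minimal fragment of that hostPort trick (the port number is arbitrary): two pods whose containers both request the same <code>hostPort</code> can never be scheduled onto the same node:</p>
<pre><code>ports:
- containerPort: 8080
  hostPort: 8080     # only one pod per node can claim this host port
</code></pre>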
|
<p>I want my deploy configuration to use an image that was the output of a build configuration.</p>
<p>I am currently using something like this:</p>
<pre><code>- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp
name: myapp
spec:
replicas: 1
selector:
app: myapp
deploymentconfig: myapp
strategy:
resources: {}
template:
metadata:
annotations:
openshift.io/container.myapp.image.entrypoint: '["python3"]'
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp
deploymentconfig: myapp
spec:
containers:
- name: myapp
image: 123.123.123.123/myproject/myapp-staging:latest
resources: {}
command:
- scripts/start_server.sh
ports:
- containerPort: 8000
test: false
triggers: []
status: {}
</code></pre>
<p>I had to hard-code the integrated docker registry's IP address; otherwise Kubernetes/OpenShift is not able to find the image to pull down. I would like to not hard-code the integrated docker registry's IP address, and instead use something like this:</p>
<pre><code>- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp
name: myapp
spec:
replicas: 1
selector:
app: myapp
deploymentconfig: myapp
strategy:
resources: {}
template:
metadata:
annotations:
openshift.io/container.myapp.image.entrypoint: '["python3"]'
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp
deploymentconfig: myapp
spec:
containers:
- name: myapp
from:
kind: "ImageStreamTag"
name: "myapp-staging:latest"
resources: {}
command:
- scripts/start_server.sh
ports:
- containerPort: 8000
test: false
triggers: []
status: {}
</code></pre>
<p>But this causes Kubernetes/OpenShift to complain with:</p>
<pre><code>The DeploymentConfig "myapp" is invalid.
spec.template.spec.containers[0].image: required value
</code></pre>
<p>How can I specify the output of a build configuration as the image to use in a deploy configuration?</p>
<p>Thank you for your time!</p>
<p>Also, oddly enough, if I link the deploy configuration to the build configuration with a trigger, Kubernetes/OpenShift knows to look in the integrated docker registry for the image:</p>
<pre><code>- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp-staging
name: myapp-staging
spec:
replicas: 1
selector:
app: myapp-staging
deploymentconfig: myapp-staging
strategy:
resources: {}
template:
metadata:
annotations:
openshift.io/container.myapp.image.entrypoint: '["python3"]'
openshift.io/generated-by: OpenShiftNewApp
creationTimestamp: null
labels:
app: myapp-staging
deploymentconfig: myapp-staging
spec:
containers:
- name: myapp-staging
image: myapp-staging:latest
resources: {}
command:
- scripts/start_server.sh
ports:
- containerPort: 8000
test: false
triggers:
- type: "ImageChange"
imageChangeParams:
automatic: true
containerNames:
- myapp-staging
from:
kind: ImageStreamTag
name: myapp-staging:latest
status: {}
</code></pre>
<p>But I don't want the automated triggering...</p>
<p>Update 1 (11/21/2016):
Configuring the trigger but having the trigger disabled (hence manually triggering the deploy), still left the deployment unable to find the image:</p>
<pre><code>$ oc describe pod myapp-1-oodr5
Name: myapp-1-oodr5
Namespace: myproject
Security Policy: restricted
Node: node.url/123.123.123.123
Start Time: Mon, 21 Nov 2016 09:20:26 -1000
Labels: app=myapp
deployment=myapp-1
deploymentconfig=myapp
Status: Pending
IP: 123.123.123.123
Controllers: ReplicationController/myapp-1
Containers:
myapp:
Container ID:
Image: myapp-staging:latest
Image ID:
Port: 8000/TCP
Command:
scripts/start_server.sh
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-goe98 (ro)
Environment Variables:
ALLOWED_HOSTS: myapp-myproject.url
Conditions:
Type Status
Ready False
Volumes:
default-token-goe98:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-goe98
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
42s 42s 1 {scheduler } Scheduled Successfully assigned myapp-1-oodr5 to node.url
40s 40s 1 {kubelet node.url} implicitly required container POD Pulled Container image "openshift3/ose-pod:v3.1.1.7" already present on machine
40s 40s 1 {kubelet node.url} implicitly required container POD Created Created with docker id d3318e880e4a
40s 40s 1 {kubelet node.url} implicitly required container POD Started Started with docker id d3318e880e4a
40s 24s 2 {kubelet node.url} spec.containers{myapp} Pulling pulling image "myapp-staging:latest"
38s 23s 2 {kubelet node.url} spec.containers{myapp} Failed Failed to pull image "myapp-staging:latest": Error: image library/myapp-staging:latest not found
35s 15s 2 {kubelet node.url} spec.containers{myapp} Back-off Back-off pulling image "myapp-staging:latest"
</code></pre>
<p>Update 2 (08/23/2017):
In case this helps others, here's a summary of the solution.</p>
<pre><code>triggers:
- type: "ImageChange"
imageChangeParams:
automatic: true # this is required to link the build and deployment
containerNames:
- myapp-staging
from:
kind: ImageStreamTag
name: myapp-staging:latest
</code></pre>
<p>With the trigger and <code>automatic</code> set to <code>true</code>, the deployment should use the build's image in the internal registry.</p>
<p>The other comments relating to making the build not trigger a deploy relate to a separate requirement: wanting to manually deploy images from the internal registry. Here's more information about that portion:</p>
<p>The build needs to trigger the deployment at least once before <code>automatic</code> is set to <code>false</code>. So for a while, I was:</p>
<ol>
<li>setting <code>automatic</code> to <code>true</code></li>
<li>initiate a build and deploy</li>
<li>after deployment finishes, manually change <code>automatic</code> to <code>false</code></li>
<li>manually, trigger a deployment later (though I did not verify if this deployed the older, out-of-date image or not)</li>
</ol>
<p>I was initially trying to use this manual deployment as a way for a non-developer to go into the web console and make deployments. But this requirement has since been removed, so having the build trigger deployments each time works just fine for us now. Builds can build from different branches and then tag the images differently. Deployments can then just use the appropriately tagged images.</p>
<p>Hope that helps!</p>
| <p>Are you constructing the resource definitions by hand?</p>
<p>It would be easier to use <code>oc new-build</code> and then <code>oc new-app</code> if you really need to set this up as two steps for some reason. If you just want to setup the build and deployment in one go, just use <code>oc new-app</code>.</p>
<p>For example, to setup build and deployment in one go use:</p>
<pre><code>oc new-app --name myapp <repository-url>
</code></pre>
<p>To do it in two steps use:</p>
<pre><code>oc new-build --name myapp <repository-url>
oc new-app myapp
</code></pre>
<p>If you would still rather use hand-created resources, at least use the single-step variant with the <code>--dry-run -o yaml</code> options to see what it would create for the image stream, plus build and deployment configuration. That way you can learn from it how to do it. The bit you are currently missing is an image stream.</p>
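<p>For example (flag availability can vary between <code>oc</code> versions, so treat this as a sketch):</p>
<pre><code>oc new-app --name myapp <repository-url> --dry-run -o yaml
</code></pre>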
<p>BTW. It looks a bit suspicious that you have the entry point set to <code>python3</code>. That is highly unusual. What are you trying to do? Right now it looks like you may be trying to do something in a way which may not work with how OpenShift works. OpenShift is mainly about long-running processes and not for doing a single <code>docker run</code>. You can do the latter, but not the way you are currently doing it.</p>
|
<p>I have a Kubernetes Cluster running and have multiple Services fronting a few Pods. When I expose each service as a LoadBalancer, it creates a unique endpoint for public consumption. Is there a way to configure this to expose 1 common endpoint and then have filters that redirect traffic to the correct Pod based on request path?
e.g.
External endpoint: www.common-domain/v1/api/</p>
<p>Service 1: /account
Pods 1: account-related-pods</p>
<p>Service 2: /customer
Pods 2: customer-related-pods</p>
<p>Service 3: /profile
Pods 3: profile-related-pods</p>
<p>Then a request comes in for "www.common-domain/v1/api/account", it should invoke the account-related-pods.</p>
<p>Thanks</p>
| <p>I think you're looking for something like ingress</p>
<p>Running an ingress controller can serve as a frontend for routing to different services based on http rules<br>
<a href="http://kubernetes.io/docs/user-guide/ingress/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/ingress/</a></p>
<p>And here are the docs on spinning up an nginx ingress controller
<a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/tree/master/ingress/controllers</a></p>
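<p>As a hedged sketch for the paths in your question (the service names and ports are assumptions - point them at your real Services, and it presumes an ingress controller is already running in the cluster):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - host: www.common-domain        # illustrative host from the question
    http:
      paths:
      - path: /v1/api/account
        backend:
          serviceName: account-svc
          servicePort: 80
      - path: /v1/api/customer
        backend:
          serviceName: customer-svc
          servicePort: 80
      - path: /v1/api/profile
        backend:
          serviceName: profile-svc
          servicePort: 80
</code></pre>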
|
<p>So I've got my backend and front-end as seperate containers in the one <strong>Kubernetes Deployment</strong>.</p>
<p>At the moment I'm having to access the front-end & backend via different <strong>ports</strong>. </p>
<p>E.g <code>example.com:5000 = frontend</code> & <code>example.com:7000 = backend</code></p>
<p>I'm wondering how I can setup my front-end container to run on <code>www.example.com</code> & my backend container to run on <code>api.example.com</code></p>
<p>I'm using gcp (google cloud), have setup my dns properly & am having to access the services (web apps) using the ports I assigned to each of them (5000=frontend,7000=backend).</p>
<p>I'm thinking of a possible solution which is manual, but am wondering whether there is something built into <strong>Kubernetes</strong>. This solution would be:</p>
<p>I'd setup an nginx container in my Kubernetes cluster that would run on port 80, so any request that comes through would be redirected to the appropriate ports:</p>
<p>E.g. I could have <code>api.example.com point to <my_cluster_ip>/backend</code> & the same for my front-end <code><my_cluster_ip>/frontend</code> and let nginx point /backend to <strong>port 7000</strong> and /frontend to <strong>port 5000</strong></p>
<p>I'm hoping there is something built into kubernetes that I can use? Here is my deployment config as it stands:</p>
<pre><code>{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "my_container"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"app": "my_app"
}
},
"spec": {
"containers": [
{
"name": "backend",
"image": "backend_url",
"ports": [
{
"containerPort": 7000
}
],
"imagePullPolicy": "Always",
"env": [
{
"name": "NODE_PORT",
"value": "7000"
},
{
"name": "NODE_ENV",
"value": "production"
}
]
},
{
"name": "frontend",
"image": "frontend_url",
"ports": [
{
"containerPort": 5000
}
],
"imagePullPolicy": "Always",
"env": [
{
"name": "PORT",
"value": "5000"
},
{
"name": "NODE_ENV",
"value": "production"
}
]
}
]
}
}
}
}
</code></pre>
| <p>Well, for starters, you should not base exposing your service on Deployment. To do that, you should cover your Deployment(s) with Service(s). Read up on <a href="http://kubernetes.io/docs/user-guide/services/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/services/</a> for that.</p>
<p>When you go through those docs, you might notice that it is perfectly possible to set up two services that match the same backing pods (Endpoints) but on different ports (e.g. front:80->5000, api:80->7000). The problem is that this still exposes your work only inside the k8s cluster. To publish it externally you can use a Service of type NodePort or LoadBalancer (the first one has the disadvantage of using high ports to expose your services to the public, the second one will be a separate LB (hence IP) per service).</p>
<p>What I personally prefer for publicly exposing services is using an Ingress/IngressController <a href="http://kubernetes.io/docs/user-guide/ingress/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/ingress/</a></p>
<p>Finally, when you split your solution into two services (front/api) you will see that there is no real reason to keep them together in one deployment/pod. If you separate them as two distinct deployments, you will get a more flexible architecture, and more fine-grained control over your solution.</p>
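<p>To make that concrete for the hostnames in your question, a hedged sketch (it assumes you have split things into two Services, here called <code>frontend-svc</code> and <code>backend-svc</code>, exposing ports 5000 and 7000, and that an ingress controller is deployed):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: frontend-svc
          servicePort: 5000
  - host: api.example.com
    http:
      paths:
      - backend:
          serviceName: backend-svc
          servicePort: 7000
</code></pre>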
|
<p>I'm trying to create a local Kubernetes deployment using Minikube, Docker Registry, and a demo node project.</p>
<p>The first thing I did was install Docker v1.12.3, then Minikube v0.12.2.</p>
<p>Then I created a Docker Registry container by running this command (via <a href="https://docs.docker.com/registry/" rel="noreferrer">this tutorial</a>, only running the first command below)</p>
<pre><code>docker run -d -p 5000:5000 --name registry registry:2
</code></pre>
<p>Next I ran this minikube command to create a local kubernetes cluster:</p>
<pre><code>minikube start --vm-driver="virtualbox" --insecure-registry="0.0.0.0:5000"
</code></pre>
<p>My project structure looks like this:</p>
<pre><code>.
├── Dockerfile
└── server.js
</code></pre>
<p>and my Dockerfile looks like this:</p>
<pre><code>FROM node:7.1.0
EXPOSE 8080
COPY server.js .
CMD node server.js
</code></pre>
<p>Then I built my own docker image and pushed it to my private repository:</p>
<pre><code>docker build -t hello-node .
docker tag hello-node localhost:5000/hello-node
docker push localhost:5000/hello-node
</code></pre>
<p>Then I tried to run a deployment with this command:</p>
<pre><code>kubectl run hello-node --image=localhost:5000/hello-node --port=8888
</code></pre>
<p>But then I get this:</p>
<pre><code>sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-node-3745105022-gzs5a 0/1 ErrImagePull 0 11m
kube-system kube-addon-manager-minikube 1/1 Running 4 10d
kube-system kube-dns-v20-2x64k 3/3 Running 12 10d
kube-system kubernetes-dashboard-mjpjv 1/1 Running 4 10d
</code></pre>
<p>I think I might be missing some kind of docker registry authentication, but as I'm googling I can't find something that I understand. Could someone please point me in the right direction?</p>
<p><strong>Edit</strong></p>
<p>After using ssh to access <code>bash</code> on the kubernetes VM and pull the <code>hello-node</code> image from my private registry by using this command:</p>
<pre><code>minikube ssh
Boot2Docker version 1.11.1, build master : 901340f - Fri Jul 1
22:52:19 UTC 2016
Docker version 1.11.1, build 5604cbe
docker@minikube:~$ sudo docker pull localhost:5000/hello-node
Using default tag: latest
Pulling repository localhost:5000/hello-node
Error while pulling image: Get http://localhost:5000/v1/repositories/hello-node/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused
</code></pre>
<p>Is <code>localhost:5000</code> the correct address to use within the kubernetes host VM?</p>
| <p>It looks like you're running the registry on the host. In fact, you need to run the registry inside the VM. You can point your docker client to the docker daemon inside the minikube VM by running this command first
<code>
eval $(minikube docker-env)
</code>
in your shell.</p>
<p>Then, you can run the docker build command on your host, but it will build inside the VM. </p>
<p>In fact, if your goal is to simply run the local version of your images, you should run the <code>eval $(minikube docker-env)</code> to point towards the docker daemon in your VM, and set the <code>imagePullPolicy: IfNotPresent</code> in your pod YAML. Then, kubernetes will use a locally built image if available.</p>
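<p>A minimal sketch of that last approach (the tag <code>hello-node:v1</code> is hypothetical - it is whatever you built against the VM's docker daemon):</p>
<pre><code>spec:
  containers:
  - name: hello-node
    image: hello-node:v1          # built inside the minikube VM, so no registry pull needed
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8888
</code></pre>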
|
<p>I recently installed Kubernetes using Kubernetes Operations tool, but when I installed Kubernetes Dashboard using <a href="https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml" rel="nofollow noreferrer">this</a> script, the dashboard endpoints were in a private cluster.</p>
<p>Is there a way I can expose this dashboard over a public network using something like a service type <code>LoadBalancer</code> and put it behind a password or a secure authentication?</p>
<p>There is a lot that can be done with such a Dashboard, which is why I would like it behind a secure endpoint.</p>
| <p>You can easily accomplish that with <a href="http://kubernetes.io/docs/user-guide/ingress/" rel="nofollow noreferrer">Ingress</a> coupled with <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow noreferrer">NginX IngressController</a> </p>
<p>If you use something like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dashboard.mydomain.tld
namespace: kube-system
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-realm: "Auth required"
ingress.kubernetes.io/auth-secret: htpasswd
spec:
rules:
- host: dashboard.mydomain.tld
http:
paths:
- path: /
backend:
serviceName: <dashsvc>
servicePort: <dashport>
</code></pre>
<p>alongside a proper <code>htpasswd</code> secret, as indicated by the auth-secret annotation:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: htpasswd
namespace: kube-system
type: Opaque
data:
auth: <your htpasswd base64>
</code></pre>
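<p>To produce that secret, one option (assuming <code>htpasswd</code> from apache2-utils is installed) is to generate a file named <code>auth</code> and create the secret from it, which yields the same <code>auth</code> key as above:</p>
<pre><code>htpasswd -c auth myuser
kubectl create secret generic htpasswd --from-file=auth --namespace=kube-system
</code></pre>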
<p>note: You need a working ingress controller set up prior to using this for exposing your service to the world. You can also easily combine it with <code>kube-lego</code> for automated <code>https</code> support so your service is exposed over a secured channel.</p>
|
<p>I have a service that directs all traffic to a UI-pod. I want to redirect traffic from my localhost:80 to that service.
I have tried : </p>
<pre><code>kubectl port-forward my_service 6379:6379
</code></pre>
<p>This doesn't work because that service is actually supposed to be a pod.
I have tried:</p>
<pre><code>kubectl proxy --port=8080 --www=./local/www/
</code></pre>
<p>which looks for a pod too. Any suggestions?</p>
| <p>Unfortunately in kubernetes you can't port forward a service yet - <a href="https://github.com/kubernetes/kubernetes/issues/15180" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/15180</a></p>
<p>However, in minikube, you can use ssh port forwarding for the VM to achieve the same result</p>
<p><code>ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L 30000:localhost:30000
</code></p>
|
<p>I'm starting to experiment with Kubernetes on my Windows 10 dev machine. I've got minikube running on my machine, with some "canned" test services, so it looks like Kubernetes is working properly.</p>
<p>Now I'm trying to create my first service by following this: <a href="http://kubernetes.io/docs/hellonode/" rel="nofollow noreferrer">http://kubernetes.io/docs/hellonode/</a></p>
<p>The problem is I can't build the docker image. I get an error that basically says docker isn't running. I've installed the docker toolkit, and I've looked at docker for windows, but it needs hyper-v which doesn't work with Kubernetes (it requires VirtualBox). So is there any way I can get docker running on windows using VirtualBox?</p>
| <p>Once you have the docker client on your host windows machine, you can run </p>
<p><code>minikube docker-env --shell powershell</code> </p>
<p>That will point the docker client on your host to the docker daemon inside the minikube VM.</p>
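<p>To actually apply the variables it prints in a PowerShell session, you can (for example) pipe the output through <code>Invoke-Expression</code>:</p>
<pre><code>minikube docker-env --shell powershell | Invoke-Expression
</code></pre>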
|
<p>there,</p>
<p>According to the doc:</p>
<p><code>ReadWriteOnce – the volume can be mounted as read-write by a single node</code></p>
<p>I created a PV based on nfs:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: tspv01
spec:
capacity:
storage: 15Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
nfs:
path: /gpfs/fs01/shared/prod/democluster01/dashdb/gamestop/spv01
server: 169.55.11.79
</code></pre>
<p>a PVC for this PV:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: sclaim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 15Gi
</code></pre>
<p>After creating the PVC, it binds to the PV:</p>
<pre><code>root@hydra-cdsdev-dal09-0001:~/testscript# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
sclaim Bound tspv01 15Gi RWO 4m
</code></pre>
<p>Then I created 2 PODs using the same PVC:</p>
<p>POD1:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: mypodshared1
labels:
name: frontendhttp
spec:
containers:
- name: myfrontend
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: sclaim
</code></pre>
<p>POD2:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: mypodshared2
labels:
name: frontendhttp
spec:
containers:
- name: myfrontend
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: sclaim
</code></pre>
<p>After I create the 2 PODs, they are assigned to 2 different nodes, and I can exec into each container and read & write in the nfs-mounted folder.</p>
<pre><code>root@hydra-cdsdev-dal09-0001:~/testscript# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
mypodshared1 1/1 Running 0 18s 172.17.52.7 169.45.189.108
mypodshared2 1/1 Running 0 36s 172.17.83.9 169.45.189.116
</code></pre>
<p>Anybody know why this happened?</p>
| <p>The accessModes are dependent upon the storage provider. For NFS they are not really enforced at mount time (they are mainly used to match claims to volumes), which is why both pods can use the volume read-write; a HostPath should use the modes correctly. </p>
<p>See the following table for all the various options: <a href="http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes</a></p>
|
<p>I have a PetSet with </p>
<pre><code> volumeClaimTemplates:
- metadata:
name: content
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
- metadata:
name: database
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
- metadata:
name: file
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
- metadata:
name: repository
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
</code></pre>
<p>If I annotate it with dynamic volume provisioning, it will create volume claims and volumes in random availability zones, and pets won't be able to start because in this example each pet requires exactly four 2Gi volumes to actually be scheduled.
If I create volumes manually I can have them labeled with <em>failure-domain.beta.kubernetes.io/zone: us-east-1d</em> for example, and this way I can create PVCs with a selector that matchLabels by failure-domain. But how do I do something similar with volumeClaimTemplates? I mean, sure, I don't want to stick them all to one failure domain. But for some reason the volume claim template won't create all the volumes for one pet in the same failure domain.</p>
<p>Ideas?</p>
| <p>You can create a storage class and add the failure zone there. For example, create a storage class like this:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: gp2storage
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
zone: us-east-1b
encrypted: "true"
</code></pre>
<p>In the example above, we're creating PV's in the zone <code>us-east-1b</code> on AWS. Then in your template reference that storage class:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: data
annotations:
        volume.beta.kubernetes.io/storage-class: gp2storage
</code></pre>
|
<p>How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?</p>
<hr>
<p>I know there's been talk about the ability to automatically restart pods when a config map changes, but to my knowledge this is not yet available in Kubernetes 1.2. </p>
<p>So what (I think) I'd like to do is a "rolling restart" of the <a href="http://kubernetes.io/docs/user-guide/deployments/" rel="noreferrer">deployment</a> resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?</p>
| <p>The current best solution to this problem (referenced deep in <a href="https://github.com/kubernetes/kubernetes/issues/22368" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/22368</a> linked in the sibling answer) is to use Deployments, and consider your ConfigMaps to be immutable.</p>
<p>When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.</p>
<p>Not quite as quick as just editing the ConfigMap in place, but much safer.</p>
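<p>A minimal sketch of that workflow (all names here are hypothetical):</p>
<pre><code># create a second, versioned ConfigMap alongside the old one
kubectl create configmap my-config-v2 --from-file=config.properties
# edit the Deployment's pod template so the configMap volume (or env reference)
# points at my-config-v2, then re-apply it; changing the pod template triggers a rolling update
kubectl apply -f deployment.yaml
</code></pre>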
|
<p>I'm using google cloud to store my <strong>Docker</strong> images & host my <strong>kubernetes</strong> cluster. I'm wondering how I can have <strong>kubernetes</strong> pull down the image which has the <strong>latest</strong> tag each time a new one is pushed.</p>
<p>I thought <strong>imagePullPolicy</strong> was the way to go, but it doesn't seem to be doing the job (I may be missing something). Here is my container spec:</p>
<pre><code>"name": "blah",
"image": "gcr.io/project-id/container-name:latest",
"imagePullPolicy": "Always",
"env": [...]
</code></pre>
<p>At the moment I'm having to delete and recreate the deployments when I upload a new docker image.</p>
| <p>Kubernetes itself will never trigger on a container image update in the repository. You need some sort of CI/CD pipeline in your tooling. Furthermore, I strongly advise avoiding <code>:latest</code> as it makes your container change over time. It is much better in my opinion to use some sort of versioning. Be it semantic like <code>image:1.4.3</code>, commit-based like <code>image:<gitsha></code>, or, as I use, <code>image:<gitsha>-<pushid></code> where pushid is a sequentially updated value for each push to the repo (so that the label changes even if I re-upload from the same build).</p>
<p>With such versioning, if you change image in your manifest, the deployment will get a rolling update as expected.</p>
<p>If you want to stick to <code>image:latest</code>, you can add a label with a version to your pod template, so if you bump it, it will roll. You can also just kill pods manually one by one, or (if you can afford downtime) you can scale the deployment to 0 replicas and back to N.</p>
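<p>With a versioned tag, rolling to a new build is then a one-liner, e.g. (deployment name and tag are placeholders; the container name and image path come from your spec):</p>
<pre><code>kubectl set image deployment/<your-deployment> blah=gcr.io/project-id/container-name:v1.0.1
</code></pre>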
|
<p>We have an application deployed on Kubernetes. One of the services is an app running in the JVM.</p>
<p>Our application is faulty, it consumes too much memory. We are hitting the limit set in the Replication Controller, which makes it restart the pod.</p>
<p>Is it a good idea to use the replication controller for this? Or is it a better idea to limit the memory on the JVM (set it to something below the replication controller limit) and use something else inside the pod to restart our application?</p>
<p>If the JVM would stop with an out-of-memory exception, I could use memory dumps written by the JVM. Now I'm blind as to what occupied the memory.</p>
<p>Thank you for your reply!</p>
| <p>In fact this is not a responsibility of the Deployment / ReplicationController. The restarts are handled within the Pod itself. As for handling memory, this is not a trivial issue and you should control it on both levels (things like heap sizes etc.) so that your app avoids hitting the limit; but if it does go over, the limits on the pod will handle it, and that is perfectly ok (especially if you run more than one pod so you have HA).</p>
<p>The thing here is that it is a bit tricky to tune memory limits well, and it can probably best be done with trial and error + monitoring of pod/container metrics (Prometheus/Grafana to the rescue :) ).</p>
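<p>As a rough, illustrative sketch (the numbers are made up, and it assumes your entrypoint honours <code>JAVA_OPTS</code>): keep the JVM max heap comfortably below the pod memory limit, so the JVM throws its own OutOfMemoryError (and can write a heap dump) before the kubelet OOM-kills the container:</p>
<pre><code>containers:
- name: app
  image: my-jvm-app:1.0              # hypothetical image
  env:
  - name: JAVA_OPTS
    value: "-Xmx768m -XX:+HeapDumpOnOutOfMemoryError"
  resources:
    limits:
      memory: 1Gi                    # pod limit kept above the JVM heap ceiling
</code></pre>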
|
<p>New to kubernetes. Can I use kubectl scale --replicas=N and start pods on different nodes?</p>
| <p>By default the scheduler attempts to spread pods across nodes, so that you don't have multiple pods of the same type on the same node. So there's nothing special required if you're just aiming for best-effort pod spreading.</p>
<p>If you want to express the requirement that the pod must not run on a node that already has a pod of that type on it you can use <a href="http://kubernetes.io/docs/user-guide/node-selection/" rel="nofollow noreferrer">pod anti-affinity</a>, which is currently an Alpha feature.</p>
<p>If you want to ensure that all nodes (or all nodes matching a certain selector) have that pod on them you can use a <a href="http://kubernetes.io/docs/admin/daemons/" rel="nofollow noreferrer">DaemonSet</a>.</p>
|
<p>I am running a cluster on GKE and sometimes I get into a hanging state. Right now I was working with just two nodes and allowed the cluster to autoscale. One of the nodes has a NotReady status and simply stays in it. Because of that, half of my pods are Pending, because of insufficient CPU. </p>
<h2>How I got there</h2>
<p>I deployed a pod which has quite high CPU usage from the moment it starts. When I scaled it to 2, I noticed CPU usage was at 1.0; the moment I scaled the Deployment to 3 replicas, I expected to have the third one in Pending state until the cluster adds another node, then schedule it there.<br>
What happened instead is the node switched to a <code>NotReady</code> status and all pods that were on it are now Pending.
However, the node does not restart or anything - it is just not used by Kubernetes. The GKE then thinks that there are enough resources as the VM has 0 CPU usage and won't scale up to 3.
I cannot manually SSH into the instance from console - it is stuck in the loading loop. </p>
<p>I can manually delete the instance and then it starts working - but I don't think that's the idea of fully managed. </p>
<p>One thing I noticed - <em>not sure if related</em>: in GCE console, when I look at VM instances, the Ready node is being used by the instance group and the load balancer (which is the service around an nginx entry point), but the NotReady node is only in use by the instance group - not the load balancer. </p>
<p>Furthermore, in <code>kubectl get events</code>, there was a line:</p>
<pre><code>Warning CreatingLoadBalancerFailed {service-controller } Error creating load balancer (will retry): Failed to create load balancer for service default/proxy-service: failed to ensure static IP 104.199.xx.xx: error creating gce static IP address: googleapi: Error 400: Invalid value for field 'resource.address': '104.199.xx.xx'. Specified IP address is already reserved., invalid
</code></pre>
<p>I specified <code>loadBalancerIP: 104.199.xx.xx</code> in the definition of the proxy-service to make sure that on each restart the service gets the same (reserved) static IP. </p>
<p>Any ideas on how to prevent this from happening? So that if a node gets stuck in NotReady state it at least restarts - but ideally doesn't get into such state to begin with?</p>
<p>Thanks.</p>
| <p>The first thing I would do is to define Resources and Limits for those pods. </p>
<p>Resources tell the cluster how much memory and CPU you think that the pod is going to use. You do this to help the scheduler to find the best location to run those pods.</p>
<p>Limits are crucial here: they are set to prevent your pods damaging the stability of the nodes. It's better to have a pod killed by an OOM than a pod bringing a node down because of resource starvation.</p>
<p>For example, in this case you're saying that you want 200m CPU (20% of a core) for your pod, with a cap at 300m (30%). Note that CPU usage above the limit is throttled, while a container that exceeds its memory limit is killed and restarted.</p>
<pre><code>spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
resources:
limits:
cpu: 300m
memory: 200Mi
requests:
cpu: 200m
memory: 100Mi
</code></pre>
<p>You can read more here: <a href="http://kubernetes.io/docs/admin/limitrange/" rel="noreferrer">http://kubernetes.io/docs/admin/limitrange/</a></p>
|