prompt | response
---|---
<p>I've been running an app on Kubernetes very smoothly for the past few weeks, until today, when I found that my Kubernetes master is no longer available.</p>
<p>But <strong>the application is still up</strong>, seemingly because the cluster's nodes are still available in Google Compute Engine: </p>
<p><a href="https://i.stack.imgur.com/S2Dd5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S2Dd5.png" alt="gcloud compute engine nodes"></a></p>
<p>(We don't use any compute instances aside from our K8s cluster.)</p>
<p>My <code>gcloud</code> CLI has all the right configuration:</p>
<pre><code>$ gcloud config list
Your active configuration is: [default]
[compute]
...
[container]
use_client_certificate = True
[core]
account = {email}
disable_usage_reporting = True
project = {myproject}
</code></pre>
<p>Even my <code>kubectl</code> CLI is configured to use the cluster that once existed:</p>
<pre><code>$ kubectl config current-context
gke_{myproject}_us-central1-a_{clustername}
</code></pre>
<p>But any <code>kubectl</code> call results in a timeout.</p>
<pre><code>$ kubectl get pods
Unable to connect to the server: dial tcp {ipv4 address}:443: i/o timeout
</code></pre>
<p><code>gcloud</code> and the Google Container Engine web UI both agree that there are no clusters:</p>
<pre><code>$ gcloud container clusters list
*crickets*
</code></pre>
| <p>I got in touch with Google Support to solve this problem. The cluster's master instance was erroneously marked as abusive, and the master was taken down. Google was able to bring it back up. After that, there was a process to return the cluster to its normal state.</p>
<p>I suppose the solution here is to contact support.</p>
|
<p>Currently, I'm updating the version of an image to be deployed using the <code>set image</code> command:</p>
<pre><code>$ kubectl set image deployments myapp myapp=caarlos0/myapp:v2
</code></pre>
<p>And then I watch the changes with <code>rollout status</code>:</p>
<pre><code>$ kubectl rollout status deployments myapp
</code></pre>
<p>The problems I found while doing it this way are:</p>
<ul>
<li><p>Sometimes it seems that a deployment is not triggered at all, and when I call <code>rollout status</code>, I get errors like this:</p>
<pre><code>$ kubectl rollout status deployments myapp
Waiting for deployment spec update to be observed...
error: timed out waiting for the condition
</code></pre></li>
<li><p>The <code>rollout history</code> command shows the <code>CHANGE-CAUSE</code> as <code><none></code>, and I can't find a way to make it show anything useful there.</p></li>
</ul>
<p>So, am I doing something wrong (or not in the best way)? How can I improve this workflow?</p>
| <p>You're doing the right thing. Within the <a href="https://kubernetes.io/docs/user-guide/deployments/#updating-a-deployment" rel="noreferrer">Updating a deployment</a> documentation you'll find this:</p>
<blockquote>
<p>Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. <code>.spec.template</code>) is changed, e.g. updating labels or container images of the template. Other updates, such as scaling the Deployment, will not trigger a rollout.</p>
</blockquote>
<p>So running <code>$ kubectl set image deployments/app <image></code> will only trigger a rollout if <code><image></code> is not already configured for your containers.</p>
<p>The change cause can be used to record the command which was used to trigger the rollout by appending the <code>--record</code> flag to your commands (see <a href="https://kubernetes.io/docs/user-guide/deployments/#checking-rollout-history-of-a-deployment" rel="noreferrer">Checking rollout history</a>).</p>
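<p>For example, reusing the command from the question (a minimal sketch; the image and tag are just the ones shown above):</p>
<pre><code>$ kubectl set image deployments myapp myapp=caarlos0/myapp:v2 --record
$ kubectl rollout history deployments myapp
</code></pre>
<p>The <code>CHANGE-CAUSE</code> column should then show the recorded <code>set image</code> command instead of <code><none></code>.</p>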
|
<p>I'm trying to install ElasticSearch (latest) on the cluster nodes on Google Container Engine, but ElasticSearch needs the variable <code>vm.max_map_count</code> to be >= 262144.</p>
<p>If I SSH into every node and manually run:</p>
<pre><code>sysctl -w vm.max_map_count=262144
</code></pre>
<p>Then all goes fine, but any new node will not have the specified configuration.</p>
<p>So my question is:</p>
<p>Is there a way to load a system configuration on every node at boot time?
A DaemonSet would not be a good solution, because inside a Docker container the system variables are read-only.</p>
<p>I'm using a fresh created cluster with the <code>gci</code> node image.</p>
| <p>I found another solution while looking at <a href="https://github.com/pires/kubernetes-elasticsearch-cluster/blob/master/es-client.yaml" rel="noreferrer" title="this repository">this repository</a>.</p>
<p>It relies on the use of <a href="https://kubernetes.io/docs/concepts/abstractions/init-containers/" rel="noreferrer">an init container</a>; the plus side is that only the init container runs with privileges:</p>
<pre><code>annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "sysctl",
"image": "busybox",
"imagePullPolicy": "IfNotPresent",
"command": ["sysctl", "-w", "vm.max_map_count=262144"],
"securityContext": {
"privileged": true
}
}
]'
</code></pre>
<p>There is a new syntax available since Kubernetes 1.6 which still works in 1.7; starting with 1.8 it is required. The init containers are declared under an <code>initContainers</code> field in the pod <code>spec</code>:</p>
<pre><code> - name: init-sysctl
image: busybox
command:
- sysctl
- -w
- vm.max_map_count=262144
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
</code></pre>
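<p>For context, here is a minimal (illustrative) pod <code>spec</code> showing where the init container sits relative to the regular containers; the Elasticsearch image and names are placeholders, not taken from the question:</p>
<pre><code>spec:
  initContainers:
  - name: init-sysctl
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true
  containers:
  - name: elasticsearch
    image: elasticsearch:5.6.4   # placeholder image/tag
    ports:
    - containerPort: 9200
</code></pre>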
|
<p>Is there any way to pass the image version from a variable/config when passing a manifest .yaml to the kubectl command?</p>
<p>Example : </p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: nginx
spec:
replicas: 1
selector:
app: nginx
template:
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:${IMAGE_VERSION}
imagePullPolicy: Always
resources:
limits:
cpu: "1.2"
memory: 100Mi
ports:
- containerPort: 80
</code></pre>
<p>The use case is to launch a specific image version that is set at the Kubernetes level, with the variable resolved by Kubernetes itself on the server side.</p>
<p>Thanks and Regards,
Ravi</p>
| <p>k8s manifest files are static yaml/json.</p>
<p>If you would like to template the manifests (and manage multiple resources in a bundle-like fashion), I strongly recommend you have a look at <a href="https://github.com/kubernetes/helm" rel="noreferrer">Helm</a>.</p>
<p>I've recently created <a href="https://github.com/cloud-native-singapore/helm-workshop#templating-and-release-management" rel="noreferrer">a Workshop which focuses precisely on the "Templating" features of Helm</a>.</p>
<p>Helm does a lot more than just templating; it is built as a full-fledged package manager for Kubernetes applications (think Apt/Yum/Homebrew).</p>
<p>If you want to handle everything client side, have a look at <a href="https://github.com/errordeveloper/kubegen" rel="noreferrer">https://github.com/errordeveloper/kubegen</a></p>
<p>That said, at some point you will likely need the other features of Helm, and a migration will be needed when that time comes, so I recommend biting the bullet and going for Helm straight away.</p>
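<p>As a minimal sketch of what that templating looks like with Helm (the chart layout, value name, and tag below are illustrative):</p>
<pre><code># templates/deployment.yaml (excerpt)
    spec:
      containers:
      - name: nginx
        image: "nginx:{{ .Values.imageVersion }}"

# values.yaml
imageVersion: "1.11.6"
</code></pre>
<p>You can then override the value at deploy time, e.g. <code>helm install ./mychart --set imageVersion=1.11.6</code>, and Helm renders the final manifest before sending it to the API server.</p>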
|
<p>I have a Kubernetes cluster running locally via Minikube.</p>
<p>I have a local folder on my Mac mounted via the hostPath volume config and accessible to the Docker container at the location: /code/bot</p>
<p>I can create a service and load the webpage just fine. But when I change the code and reload the page, it does not update. I have to delete and re-create the deployment for the code to refresh.</p>
<p>Is there any way to have the mount be "live", so it refreshes automatically?</p>
| <p>A remark: (compiled) code should be part of the container. You are not following the container design laid out for this, so you should expect side effects.</p>
<p>Having said that: you might have to restart your container when you change your code, since your code is clearly not reloaded by your container.</p>
<p>Auto-refreshing on the mount depends on the application you use. Some applications notice that a file on the file system has changed and will reload; with others you will have to code this yourself, which effectively brings us back to my restart remark.</p>
|
<p>I'm trying to deploy a Docker container that exposes a simple HTTP server (a clone of the httpbin.org service) on Google Container Engine (Kubernetes).</p>
<p>This is the Service definition I'm using:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: httpbin
labels:
app: httpbin
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
protocol: TCP
name: http
selector:
app: httpbin
</code></pre>
<p>The ingress is defined as: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: httpbin-tls
annotations:
kubernetes.io/ingress.global-static-ip-name: "http-bin-static-ip"
spec:
tls:
- secretName: positive-ssl
backend:
serviceName: httpbin
servicePort: 80
</code></pre>
<p>In the Services/Ingress dashboards I can see two IPs, the one bound directly to the Service (ephemeral) and the Static IP bound to the Ingress. Calling them both directly on port 80 works like a charm.</p>
<p>After that was done, I created an A record for the static IP and ensured the load balancer was OK in the GKE dashboard:</p>
<p><a href="https://i.stack.imgur.com/hWKEO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hWKEO.png" alt="GKE Load Balancers Dashboard"></a></p>
<p>The HTTPS endpoint should be working according to a lot of tutorials and manuals I checked, but it's not! Every call to HTTPS gets redirected (301) to the HTTP port.</p>
<p>The output from the curl -v:</p>
<pre><code>* Rebuilt URL to: https://httpbin-1.mydomain.com/
* Trying XXX.XX.XX.XX...
* TCP_NODELAY set
* Connected to httpbin-1.mydomain.com (XXX.XX.XX.XX) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
* Server certificate: mydomain.com
* Server certificate: COMODO ECC Domain Validation Secure Server CA 2
* Server certificate: COMODO ECC Certification Authority
> GET / HTTP/1.1
> Host: httpbin-1.mydomain.com
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Fri, 03 Mar 2017 18:01:23 GMT
< Transfer-Encoding: chunked
< Connection: keep-alive
< Cache-Control: max-age=3600
< Expires: Fri, 03 Mar 2017 19:01:23 GMT
< Location: http://httpbin-1.mydomain.com/
<
* Curl_http_done: called premature == 0
* Connection #0 to host httpbin-1.mydomain.com left intact
</code></pre>
<p>I did not create any automatic redirection of any kind and strictly followed the official manuals (from Kubernetes and GKE), but I'm unable to get beyond this. I even tried to recreate the whole stack from the ground up, with a new cluster and new load balancers, but got the same results.</p>
<p>What am I missing? Tks!</p>
| <p>AFAIK the GLB (Google load balancer) cannot do that forwarding for you. You will have to use a web server in the service behind the Ingress to handle that scenario.</p>
<p>Your current behavior seems to be caused by this annotation:</p>
<pre><code>annotations:
  kubernetes.io/ingress.global-static-ip-name: "http-bin-static-ip"
</code></pre>
<p>Remove that part from your Ingress and you should see your HTTPS session terminated at the Ingress.</p>
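<p>Applied to the manifest from the question, that would look like this (only the annotation removed, everything else unchanged):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: httpbin-tls
spec:
  tls:
  - secretName: positive-ssl
  backend:
    serviceName: httpbin
    servicePort: 80
</code></pre>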
|
<p>In a container inside a pod, how can I run a command using kubectl? For example, if I need to do something like this inside a container:</p>
<blockquote>
<p>kubectl get pods</p>
</blockquote>
<p>I have tried this: in my Dockerfile, I have these commands:</p>
<pre><code>RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN sudo mv ./kubectl /usr/local/bin/kubectl
</code></pre>
<blockquote>
<p>EDIT: I was using the OS X file; I have corrected it to the Linux binary file. (Corrected by @svenwltr.)</p>
</blockquote>
<p>Building the Docker image succeeds, but when I run <code>kubectl get pods</code> inside a container,</p>
<pre><code>kubectl get pods
</code></pre>
<p>I get this error :</p>
<blockquote>
<p>The connection to the server : was refused - did you specify the right host or port? </p>
</blockquote>
<p>When I was deploying locally, I was encountering this error if my docker-machine was not running, but inside a container how can a docker-machine be running? </p>
<p>Locally, I get around this error by running the following commands:
(dev is the name of the docker-machine)</p>
<pre><code>docker-machine env dev
eval $(docker-machine env dev)
</code></pre>
<p>Can someone please tell me what is it that I need to do?</p>
| <p>I would use the Kubernetes API; you just need to install curl instead of <code>kubectl</code>, and the rest is RESTful.</p>
<pre><code>curl http://localhost:8080/api/v1/namespaces/default/pods
</code></pre>
<p>I'm running the above command on one of my API servers. Change <strong>localhost</strong> to the <strong>API server's IP address/DNS name</strong>.</p>
<p>Depending on your configuration, you may need to use SSL or provide a client certificate.</p>
<p>In order to find API endpoints, you can use <code>--v=8</code> with <code>kubectl</code>.</p>
<p>example:</p>
<pre><code>kubectl get pods --v=8
</code></pre>
<p>Resources:</p>
<p>Kubernetes <a href="https://kubernetes.io/docs/api-reference/v1.5/" rel="noreferrer">API documentation</a></p>
<p><strong>Update for RBAC:</strong></p>
<p>I assume you have already configured RBAC, created a service account for your pod, and are running the pod with it. This service account should have list permissions on pods in the required namespace. In order to do that, you need to create a role and a role binding for that service account.</p>
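<p>A minimal sketch of such a role and binding (the names <code>pod-reader</code> and <code>read-pods</code> are illustrative, and it assumes your pod runs with the <code>default</code> service account in the <code>default</code> namespace):</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>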
<p>Every container in a cluster is populated with a token that can be used for authenticating to the API server. To verify, run the following inside the container:</p>
<pre><code>cat /var/run/secrets/kubernetes.io/serviceaccount/token
</code></pre>
<p>To make request to apiserver, inside the container run:</p>
<pre><code>curl -ik \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
</code></pre>
|
<p>I'm trying to expose my Deployment to a port which I can access through my local computer via Minikube.</p>
<p>I have tried two YAML configurations (one a load balancer, one just a service exposing a port).
I: <a href="http://pastebin.com/gL5ZBZg7" rel="nofollow noreferrer">http://pastebin.com/gL5ZBZg7</a></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: bot
labels:
app: bot
spec:
type: LoadBalancer
ports:
- port: 8000
targetPort: 8000
protocol: TCP
selector:
app: bot
</code></pre>
<p>II: <a href="http://pastebin.com/sSuyhzC5" rel="nofollow noreferrer">http://pastebin.com/sSuyhzC5</a></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: bot
labels:
app: bot
spec:
ports:
- port: 8000
targetPort: 8000
protocol: TCP
selector:
app: bot
</code></pre>
<p>The Deployment and the Docker container image both expose port 8000, and the Pod is labeled with <code>app: bot</code>.</p>
<p>The first results in a service whose external IP never gets assigned (it stays pending forever).
The second results in ports of bot:8000 TCP, bot:0 TCP in my dashboard, and when I try "minikube service bot" nothing happens. The same happens if I type "kubectl expose service bot".</p>
<p>I am on Mac OS X.</p>
<p>How can I set this up properly?</p>
| <p>The <code>LoadBalancer</code> service is meant for Cloud providers and not really relevant for minikube.</p>
<p>From the <a href="https://kubernetes.io/docs/user-guide/services/#type-loadbalancer" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.</p>
</blockquote>
<p>Using a <code>Service</code> of type <code>NodePort</code> (see <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="nofollow noreferrer">documentation</a>) as mentioned in the <a href="https://kubernetes.io/docs/getting-started-guides/minikube/#networking" rel="nofollow noreferrer">Networking part of the minikube documentation</a> is the intended way of exposing services on minikube.</p>
<p>So your configuration should look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: bot
labels:
app: bot
spec:
type: NodePort
ports:
- port: 8000
targetPort: 8000
nodePort: 30356
protocol: TCP
selector:
app: bot
</code></pre>
<p>And access your application through:</p>
<pre><code>> IP=$(minikube ip)
> curl "http://$IP:30356"
</code></pre>
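<p>As a convenience (assuming your minikube version supports the flag), <code>minikube service</code> can also print that URL for you:</p>
<pre><code>> minikube service bot --url
</code></pre>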
<p>Hope that helps.</p>
|
<p>My company uses its own root CA, and when I'm trying to pull images, even from a private registry, I'm getting this error:</p>
<blockquote>
<p>1h 3m 22 {kubelet minikube} Warning FailedSync Error syncing
pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull:
"image pull failed for gcr.io/google_containers/pause-amd64:3.0, this
may be because there are no credentials on this request. </p>
<p>details:
(Error response from daemon: Get <a href="https://gcr.io/v1/_ping" rel="noreferrer">https://gcr.io/v1/_ping</a>: x509:
certificate signed by unknown authority)"
1h 10s 387 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with
ImagePullBackOff: "Back-off pulling image
\"gcr.io/google_containers/pause-amd64:3.0\""</p>
</blockquote>
<p>How can I install the root CA into minikube, or avoid this message altogether, i.e. use only the private registry and don't pull anything from <code>gcr.io</code>?</p>
| <p>The only solution I've found so far is adding the <code>--insecure-registry gcr.io</code> option when starting minikube.</p>
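<p>For example (a sketch; note that you may need to delete and re-create the minikube VM for the Docker daemon flag to take effect):</p>
<pre><code>$ minikube start --insecure-registry gcr.io
</code></pre>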
|
<p>I'm working on setting up a Kubernetes pod for a service. There is a need to use persistent volumes for logs, certs, etc., which should be available at some sub-path on the host, like <code>service/log</code>. Now I want to make this folder unique by changing it to something like <code>service/log_podID</code> or <code>log_podName</code>. How can I get a pod name or pod ID within the k8s deployment YAMLs?</p>
| <p>Using something like this implies some kind of state which is contrary to the stateless nature of a <code>Deployment</code>. You could use the <code>hostname</code> within the container to have a unique ID, but that can't be used in the YAML.</p>
<p>If you need reliable IDs for your Pods, you should use a <code>StatefulSet</code> (<a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">documentation</a>), which brings predictable pod names that you could then use within your YAML.</p>
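<p>A minimal sketch of what that could look like (all names and the image are illustrative; each replica gets its own volume via <code>volumeClaimTemplates</code> instead of a hostPath sub-path):</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: service
spec:
  serviceName: service
  replicas: 2
  template:
    metadata:
      labels:
        app: service
    spec:
      containers:
      - name: service
        image: myrepo/service:latest   # placeholder image
        volumeMounts:
        - name: logs
          mountPath: /var/log/service
  volumeClaimTemplates:
  - metadata:
      name: logs
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
</code></pre>
<p>The pods are then predictably named <code>service-0</code>, <code>service-1</code>, ..., and each one gets its own claim (<code>logs-service-0</code>, <code>logs-service-1</code>, ...).</p>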
|
<p>Is it possible to have my development machine to be part of Minikube's network? </p>
<p>Ideally, it should work both ways:</p>
<ul>
<li>While developing an application in my IDE, I can access k8s resources inside Minikube using the same addressing that pods would use.</li>
<li>Pods running in Minikube can access my application running in the IDE, for example via HTTP requests.</li>
</ul>
<p><a href="https://stackoverflow.com/questions/35738500/google-container-engine-and-vpn">This</a> sounds like the first part is feasible on GCE using network routes, so I wonder if it's doable locally using Minikube.</p>
| <p>There is an issue open upstream (<a href="https://github.com/kubernetes/minikube/issues/38" rel="noreferrer">kubernetes/minikube#38</a>) in order to discuss that particular use case. </p>
<p><code>kube-proxy</code> already adds the IPtables rules needed for IP forwarding inside the <code>minikube</code> VM (this is not specific to minikube), so all you have to do is add a static route to the container network via the IP of minikube's eth1 interface on your local machine:</p>
<pre><code>ip route add 10.0.0.0/24 via 192.168.42.58 (Linux)
route -n add 10.0.0.0/24 192.168.42.58 (macOS)
</code></pre>
<p>Where <code>10.0.0.0/24</code> is the container network CIDR and <code>192.168.42.58</code> is the IP of your minikube VM (obtained with the <code>minikube ip</code> command).</p>
<p>You can then reach Kubernetes services from your local environment using their cluster IP. Example:</p>
<pre><code>❯ kubectl get svc -n kube-system kubernetes-dashboard
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.0.0.56 <nodes> 80:30000/TCP 35s
</code></pre>
<p><a href="https://i.stack.imgur.com/N01di.png" rel="noreferrer"><img src="https://i.stack.imgur.com/N01di.png" alt="kubedash"></a></p>
<p>This also allows you to resolve names in the <code>cluster.local</code> domain via the cluster DNS (<code>kube-dns</code> addon):</p>
<pre><code>❯ nslookup kubernetes-dashboard.kube-system.svc.cluster.local 10.0.0.10
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: kubernetes-dashboard.kube-system.svc.cluster.local
Address: 10.0.0.56
</code></pre>
<p>If you also happen to have a local <code>dnsmasq</code> running on your local machine, you can easily take advantage of this and forward all DNS requests for the <code>cluster.local</code> domain to <code>kube-dns</code>:</p>
<pre><code>server=/cluster.local/10.0.0.10
</code></pre>
|
<p>I have Container Linux by CoreOS alpha (1325.1.0) installed on a PC at home.</p>
<p>I played with Kubernetes for a couple of months, but now, after reinstalling Container Linux and trying to install Kubernetes using my fork at <a href="https://github.com/kfirufk/coreos-kubernetes" rel="noreferrer">https://github.com/kfirufk/coreos-kubernetes</a>, I fail to install Kubernetes properly.</p>
<p>I use hyperkube image <code>v1.6.0-beta.0_coreos.0</code>. </p>
<p>The problem is that hyperkube doesn't seem to start any of the manifests in <code>/etc/kubernetes/manifests</code>. I configured kubelet to run with rkt.</p>
<p>When I run <code>journalctl -xef -u kubelet</code> after restarting kubelet, I get the following output:</p>
<pre><code>Feb 26 20:17:33 coreos-2.tux-in.com kubelet-wrapper[3673]: + exec /usr/bin/rkt run --uuid-file-save=/var/run/kubelet-pod.uuid --volume dns,kind=host,source=/run/systemd/resolve/resolv.conf --mount volume=dns,target=/etc/resolv.conf --volume rkt,kind=host,source=/opt/bin/host-rkt --mount volume=rkt,target=/usr/bin/rkt --volume var-lib-rkt,kind=host,source=/var/lib/rkt --mount volume=var-lib-rkt,target=/var/lib/rkt --volume stage,kind=host,source=/tmp --mount volume=stage,target=/tmp --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log --volume cni-bin,kind=host,source=/opt/cni/bin --mount volume=cni-bin,target=/opt/cni/bin --trust-keys-from-https --volume etc-kubernetes,kind=host,source=/etc/kubernetes,readOnly=false --volume etc-ssl-certs,kind=host,source=/etc/ssl/certs,readOnly=true --volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true --volume var-lib-docker,kind=host,source=/var/lib/docker,readOnly=false --volume var-lib-kubelet,kind=host,source=/var/lib/kubelet,readOnly=false --volume os-release,kind=host,source=/usr/lib/os-release,readOnly=true --volume run,kind=host,source=/run,readOnly=false --mount volume=etc-kubernetes,target=/etc/kubernetes --mount volume=etc-ssl-certs,target=/etc/ssl/certs --mount volume=usr-share-certs,target=/usr/share/ca-certificates --mount volume=var-lib-docker,target=/var/lib/docker --mount volume=var-lib-kubelet,target=/var/lib/kubelet --mount volume=os-release,target=/etc/os-release --mount volume=run,target=/run --stage1-from-dir=stage1-fly.aci quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0 --exec=/kubelet -- --require-kubeconfig --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml --register-schedulable=true --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin=kubenet --container-runtime=rkt --rkt-path=/usr/bin/rkt --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=192.168.1.2 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: Flag --register-schedulable has been deprecated, will be removed in a future version
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.260305 3673 feature_gate.go:170] feature gates: map[]
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.332539 3673 manager.go:143] cAdvisor running in container: "/system.slice/kubelet.service"
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.355270 3673 fs.go:117] Filesystem partitions: map[/dev/mapper/usr:{mountpoint:/usr/lib/os-release major:254 minor:0 fsType:ext4 blockSize:0} /dev/sda9:{mountpoint:/var/lib/docker major:8 minor:9 fsType:ext4 blockSize:0} /dev/sdb1:{mountpoint:/var/lib/rkt major:8 minor:17 fsType:ext4 blockSize:0}]
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.359173 3673 manager.go:198] Machine: {NumCores:8 CpuFrequency:3060000 MemoryCapacity:4145344512 MachineID:b07a180a2c8547f7956e9a6f93a452a4 SystemUUID:00000000-0000-0000-0000-1C6F653E6F72 BootID:c03de69b-c9c8-4fb7-a3df-de4f70a74218 Filesystems:[{Device:/dev/mapper/usr Capacity:1031946240 Type:vfs Inodes:260096 HasInodes:true} {Device:/dev/sda9 Capacity:113819422720 Type:vfs Inodes:28536576 HasInodes:true} {Device:/dev/sdb1 Capacity:984373800960 Type:vfs Inodes:61054976 HasInodes:true} {Device:overlay Capacity:984373800960 Type:vfs Inodes:61054976 HasInodes:true}] DiskMap:map[254:0:{Name:dm-0 Major:254 Minor:0 Size:1065345024 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:120034123776 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1000204886016 Scheduler:cfq} 8:32:{Name:sdc Major:8 Minor:32 Size:3000592982016 Scheduler:cfq} 8:48:{Name:sdd Major:8 Minor:48 Size:2000398934016 Scheduler:cfq} 8:64:{Name:sde Major:8 Minor:64 Size:1000204886016 Scheduler:cfq}] NetworkDevices:[{Name:enp3s0 MacAddress:1c:6f:65:3e:6f:72 Speed:1000 Mtu:1500} {Name:flannel.1 MacAddress:be:f8:31:12:15:f5 Speed:0 Mtu:1450}] Topology:[{Id:0 Memory:4145344512 Cores:[{Id:0 Threads:[0 4] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[1 5] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[2 6] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[3 7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:8388608 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.359768 3673 manager.go:204] Version: {KernelVersion:4.9.9-coreos-r1 ContainerOsVersion:Container Linux by CoreOS 1325.1.0 (Ladybug) DockerVersion:1.13.1 CadvisorVersion: CadvisorRevision:}
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.362754 3673 kubelet.go:253] Adding manifest file: /etc/kubernetes/manifests
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.362800 3673 kubelet.go:263] Watching apiserver
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: W0226 20:17:41.366369 3673 kubelet_network.go:63] Hairpin mode set to "promiscuous-bridge" but container runtime is "rkt", ignoring
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.366427 3673 kubelet.go:494] Hairpin mode set to "none"
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.379778 3673 server.go:790] Started kubelet v1.6.0-beta.0+coreos.0
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.379803 3673 kubelet.go:1143] Image garbage collection failed: unable to find data for container /
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.379876 3673 server.go:125] Starting to listen on 0.0.0.0:10250
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.380252 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.381083 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.381120 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.381658 3673 server.go:288] Adding debug handlers to kubelet server.
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382281 3673 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382310 3673 status_manager.go:140] Starting to sync pod status with apiserver
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382326 3673 kubelet.go:1711] Starting kubelet main sync loop.
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382354 3673 kubelet.go:1722] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.382616 3673 volume_manager.go:248] Starting Kubelet Volume Manager
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.386643 3673 kubelet.go:2028] Container runtime status is nil
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.436430 3673 event.go:208] Unable to write event: 'Post https://coreos-2.tux-in.com:443/api/v1/namespaces/default/events: dial tcp 192.168.1.2:443: getsockopt: connection refused' (may retry after sleeping)
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.436547 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://coreos-2.tux-in.com:443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.436547 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:380: Failed to list *v1.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.436557 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:388: Failed to list *v1.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.482996 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.483717 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.483774 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.483907 3673 kubelet_node_status.go:78] Attempting to register node 192.168.1.2
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.556064 3673 kubelet_node_status.go:102] Unable to register node "192.168.1.2" with API server: Post https://coreos-2.tux-in.com:443/api/v1/nodes: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.756398 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.757047 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.757087 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:41.757152 3673 kubelet_node_status.go:78] Attempting to register node 192.168.1.2
Feb 26 20:17:41 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:41.833244 3673 kubelet_node_status.go:102] Unable to register node "192.168.1.2" with API server: Post https://coreos-2.tux-in.com:443/api/v1/nodes: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:42.233574 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.234232 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.234266 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:42.234324 3673 kubelet_node_status.go:78] Attempting to register node 192.168.1.2
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.306213 3673 kubelet_node_status.go:102] Unable to register node "192.168.1.2" with API server: Post https://coreos-2.tux-in.com:443/api/v1/nodes: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.512768 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:388: Failed to list *v1.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.512810 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://coreos-2.tux-in.com:443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:42 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:42.512905 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:380: Failed to list *v1.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:43.106559 3673 kubelet_node_status.go:238] Setting node annotation to enable volume controller attach/detach
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.107210 3673 kubelet.go:1631] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.107244 3673 kubelet.go:1639] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: I0226 20:17:43.107304 3673 kubelet_node_status.go:78] Attempting to register node 192.168.1.2
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.186848 3673 kubelet_node_status.go:102] Unable to register node "192.168.1.2" with API server: Post https://coreos-2.tux-in.com:443/api/v1/nodes: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.580259 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:380: Failed to list *v1.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.580286 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:388: Failed to list *v1.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
Feb 26 20:17:43 coreos-2.tux-in.com kubelet-wrapper[3673]: E0226 20:17:43.580285 3673 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://coreos-2.tux-in.com:443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.1.2&resourceVersion=0: dial tcp 192.168.1.2:443: getsockopt: connection refused
</code></pre>
<p>My <code>kubelet.service</code> content (I tried with <code>--network-plugin=kubenet</code> and <code>cni</code>; it makes no difference):</p>
<pre><code>[Service]
Environment=KUBELET_IMAGE_TAG=v1.6.0-beta.0_coreos.0
Environment=KUBELET_IMAGE_URL=quay.io/coreos/hyperkube
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume dns,kind=host,source=/run/systemd/resolve/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
--volume rkt,kind=host,source=/opt/bin/host-rkt \
--mount volume=rkt,target=/usr/bin/rkt \
--volume var-lib-rkt,kind=host,source=/var/lib/rkt \
--mount volume=var-lib-rkt,target=/var/lib/rkt \
--volume stage,kind=host,source=/tmp \
--mount volume=stage,target=/tmp \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume cni-bin,kind=host,source=/opt/cni/bin --mount volume=cni-bin,target=/opt/cni/bin"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--require-kubeconfig \
--kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml \
--register-schedulable=true \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=kubenet \
--container-runtime=rkt \
--rkt-path=/usr/bin/rkt \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--hostname-override=192.168.1.2 \
--cluster_dns=10.3.0.10 \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
</code></pre>
<p>my <code>/var/lib/coreos-install/user_data</code> file:</p>
<pre><code>#cloud-config
hostname: "coreos-2.tux-in.com"
write_files:
- path: "/etc/ssh/sshd_config"
permissions: 0600
owner: root:root
content: |
# Use most defaults for sshd configuration.
UsePrivilegeSeparation sandbox
Subsystem sftp internal-sftp
ClientAliveInterval 180
UseDNS no
UsePAM no
PrintLastLog no # handled by PAM
PrintMotd no # handled by PAMa
PasswordAuthentication no
- path: "/etc/kubernetes/ssl/ca.pem"
permissions: "0666"
content: |
XXXX
- path: "/etc/kubernetes/ssl/apiserver.pem"
permissions: "0666"
content: |
XXXX
- path: "/etc/kubernetes/ssl/apiserver-key.pem"
permissions: "0666"
content: |
XXXX
- path: "/etc/ssl/etcd/ca.pem"
permissions: "0666"
owner: "etcd:etcd"
content: |
XXXX
- path: "/etc/ssl/etcd/etcd1.pem"
permissions: "0666"
owner: "etcd:etcd"
content: |
XXXX
- path: "/etc/ssl/etcd/etcd1-key.pem"
permissions: "0666"
owner: "etcd:etcd"
content: |
XXXX
ssh_authorized_keys:
- "XXXX ufk@ufk-osx-music"
users:
- name: "ufk"
passwd: "XXXX"
groups:
- "sudo"
ssh-authorized-keys:
- "ssh-rsa XXXX ufk@ufk-osx-music"
coreos:
etcd2:
# generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
# specify the initial size of your cluster with ?size=X
discovery: https://discovery.etcd.io/XXXX
advertise-client-urls: https://coreos-2.tux-in.com:2379
initial-advertise-peer-urls: https://coreos-2.tux-in.com:2380
# listen on both the official ports and the legacy ports
# legacy ports can be omitted if your application doesn't depend on them
listen-client-urls: https://0.0.0.0:2379,http://127.0.0.1:4001
listen-peer-urls: https://coreos-2.tux-in.com:2380
locksmith:
endpoint: "http://127.0.0.1:4001"
update:
reboot-strategy: etcd-lock
units:
- name: 00-enp3s0.network
runtime: true
content: |
[Match]
Name=enp3s0
[Network]
Address=192.168.1.2/16
Gateway=192.168.1.1
DNS=8.8.8.8
- name: mnt-storage.mount
enable: true
command: start
content: |
[Mount]
What=/dev/disk/by-uuid/e9df7e62-58da-4db2-8616-8947ac835e2c
Where=/mnt/storage
Type=btrfs
Options=loop,discard
- name: var-lib-rkt.mount
enable: true
command: start
content: |
[Mount]
What=/dev/sdb1
Where=/var/lib/rkt
Type=ext4
- name: etcd2.service
command: start
drop-ins:
- name: 30-certs.conf
content: |
[Service]
Restart=always
Environment="ETCD_CERT_FILE=/etc/ssl/etcd/etcd1.pem"
Environment="ETCD_KEY_FILE=/etc/ssl/etcd/etcd1-key.pem"
Environment="ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"
Environment="ETCD_CLIENT_CERT_AUTH=true"
Environment="ETCD_PEER_CERT_FILE=/etc/ssl/etcd/etcd1.pem"
Environment="ETCD_PEER_KEY_FILE=/etc/ssl/etcd/etcd1-key.pem"
Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"
Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"
</code></pre>
<p>Welp, I'm pretty lost; it's the first time this kind of problem has happened to me. Any information regarding the issue would be greatly appreciated.</p>
<p>Just in case, these are the manifests in <code>/etc/kubernetes/manifests</code> that aren't being started. <code>rkt list --full</code> doesn't show any pod starting besides the regular hyperkube.</p>
<p>kube-apiserver.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0
command:
- /hyperkube
- apiserver
- --bind-address=0.0.0.0
- --etcd-servers=http://127.0.0.1:4001
- --allow-privileged=true
- --service-cluster-ip-range=10.3.0.0/24
- --secure-port=443
- --advertise-address=192.168.1.2
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --client-ca-file=/etc/kubernetes/ssl/ca.pem
- --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --runtime-config=extensions/v1beta1/networkpolicies=true,batch/v2alpha1=true
- --anonymous-auth=false
livenessProbe:
httpGet:
host: 127.0.0.1
port: 8080
path: /healthz
initialDelaySeconds: 15
timeoutSeconds: 15
ports:
- containerPort: 443
hostPort: 443
name: https
- containerPort: 8080
hostPort: 8080
name: local
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
</code></pre>
<p>kube-controller-manager.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- name: kube-controller-manager
image: quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0
command:
- /hyperkube
- controller-manager
- --master=http://127.0.0.1:8080
- --leader-elect=true
- --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --root-ca-file=/etc/kubernetes/ssl/ca.pem
resources:
requests:
cpu: 200m
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
initialDelaySeconds: 15
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
</code></pre>
<p>kube-proxy.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
annotations:
rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0
command:
- /hyperkube
- proxy
- --master=http://127.0.0.1:8080
- --cluster-cidr=10.2.0.0/16
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- mountPath: /var/run/dbus
name: dbus
readOnly: false
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- hostPath:
path: /var/run/dbus
name: dbus
</code></pre>
<p>kube-scheduler.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-scheduler
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-scheduler
image: quay.io/coreos/hyperkube:v1.6.0-beta.0_coreos.0
command:
- /hyperkube
- scheduler
- --master=http://127.0.0.1:8080
- --leader-elect=true
resources:
requests:
cpu: 100m
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10251
initialDelaySeconds: 15
timeoutSeconds: 15
</code></pre>
| <p>Thanks to @AntoineCotten, the problem was easily resolved.</p>
<p>First, I downgraded hyperkube from <code>v1.6.0-beta.0_coreos.0</code> to <code>v1.5.3_coreos.0</code>. Then I noticed an error in the kubelet log that made me realize I had a major typo in <code>/opt/bin/host-rkt</code>.</p>
<p>I had <code>exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "\$@"</code> instead of <code>exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@"</code>.</p>
<p>I had escaped the <code>$</code> when pasting the script, so the command-line arguments were never passed through to rkt. So: not using 1.6.0-beta.0 for now, that's OK, and the script is fixed. Now everything works again. Thanks!</p>
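<p>For reference, the corrected wrapper ends up looking like this (a sketch; only the <code>exec</code> line comes from the fix above, the shebang and comment are assumed):</p>
<pre><code>#!/bin/sh
# /opt/bin/host-rkt: run rkt in the host's namespaces from inside the kubelet container
exec nsenter -m -u -i -n -p -t 1 -- /usr/bin/rkt "$@"
</code></pre>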
|
<p>I'm trying to set node affinity using nodeSelector as discussed here: <a href="https://kubernetes.io/docs/user-guide/node-selection/" rel="nofollow noreferrer">https://kubernetes.io/docs/user-guide/node-selection/</a></p>
<p>However, no matter whether I use a Pod, a ReplicationController, or a Deployment, I can't get <code>kubectl create</code> to work properly. This is the error I get, and it happens the same way everywhere:</p>
<blockquote>
<p>Error from server (BadRequest): error when creating "test-pod.yaml": Pod in version "v1" cannot be handled as a Pod: [pos 222]: json: expect char '"' but got char 't'</p>
</blockquote>
<p>Substitute "Deployment" or "ReplicationController" for "Pod" and it's the same error everywhere. Here is my yaml file for the test pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
env: test
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
nodeSelector:
ingress: yes
</code></pre>
<p>If I remove the nodeSelector part of the file, the pod is created successfully. The same goes for Deployments and ReplicationControllers. I made sure that the proper label was added to the node.</p>
<p>Any help would be appreciated!</p>
| <p>In yaml, the token <code>yes</code> evaluates to a boolean <code>true</code> (<a href="http://yaml.org/type/bool.html" rel="nofollow noreferrer">http://yaml.org/type/bool.html</a>)</p>
<p>Internally, <code>kubectl</code> converts yaml to json as a preprocessing step. Your node selector is converting to <code>"nodeSelector":{"ingress":true}</code>, which fails when trying to decode into a string-to-string map.</p>
<p>You can quote the string like this to force it to be treated as a string:
<code>ingress: "yes"</code></p>
|
<p>I am trying to use the cinder plugin for kubernetes to create both statically defined PVs as well as StorageClasses, but I see no activity between my cluster and cinder for creating/mounting the devices.</p>
<p>Kubernetes Version:</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:19:49Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"33cf7b9acbb2cb7c9c72a10d6636321fb180b159", GitTreeState:"clean", BuildDate:"2016-10-10T18:13:36Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>The command kubelet was started with and its status:</p>
<pre><code>systemctl status kubelet -l
● kubelet.service - Kubelet service
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-10-20 07:43:07 PDT; 3h 53min ago
Process: 2406 ExecStartPre=/usr/local/bin/install-kube-binaries (code=exited, status=0/SUCCESS)
Process: 2400 ExecStartPre=/usr/local/bin/create-certs (code=exited, status=0/SUCCESS)
Main PID: 2408 (kubelet)
CGroup: /system.slice/kubelet.service
├─2408 /usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests --api-servers=https://172.17.0.101:6443 --logtostderr=true --v=12 --allow-privileged=true --hostname-override=jk-kube2-master --pod-infra-container-image=pause-amd64:3.0 --cluster-dns=172.31.53.53 --cluster-domain=occloud --cloud-provider=openstack --cloud-config=/etc/cloud.conf
</code></pre>
<p>Here is my cloud.conf file:</p>
<pre><code># cat /etc/cloud.conf
[Global]
username=<user>
password=XXXXXXXX
auth-url=http://<openStack URL>:5000/v2.0
tenant-name=Shadow
region=RegionOne
</code></pre>
<p>It appears that k8s is able to communicate successfully with openstack. From /var/log/messages:</p>
<pre><code>kubelet: I1020 11:43:51.770948 2408 openstack_instances.go:41] openstack.Instances() called
kubelet: I1020 11:43:51.836642 2408 openstack_instances.go:78] Found 39 compute flavors
kubelet: I1020 11:43:51.836679 2408 openstack_instances.go:79] Claiming to support Instances
kubelet: I1020 11:43:51.836688 2408 openstack_instances.go:124] NodeAddresses(jk-kube2-master) called
kubelet: I1020 11:43:52.274332 2408 openstack_instances.go:131] NodeAddresses(jk-kube2-master) => [{InternalIP 172.17.0.101} {ExternalIP 10.75.152.101}]
</code></pre>
<p>My PV/PVC yaml files, and cinder list output:</p>
<pre><code># cat persistentVolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: jk-test
labels:
type: test
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
cinder:
volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
fsType: ext4
# cat persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector:
matchLabels:
type: "test"
# cinder list | grep jk-cinder
| 48d2d1e6-e063-437a-855f-8b62b640a950 | available | jk-cinder | 10 | - | false |
</code></pre>
<p>As seen above, cinder reports that the device with the ID referenced in the pv.yaml file is available. When I create the PV and PVC, things seem to work:</p>
<pre><code>NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv/jk-test 10Gi RWO Retain Bound default/myclaim 5h
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/myclaim Bound jk-test 10Gi RWO 5h
</code></pre>
<p>Then I try to create a pod using the pvc, but it fails to mount the volume:</p>
<pre><code># cat testPod.yaml
kind: Pod
apiVersion: v1
metadata:
name: jk-test3
labels:
name: jk-test
spec:
containers:
- name: front-end
image: example-front-end:latest
ports:
- hostPort: 6000
containerPort: 3000
volumes:
- name: jk-test
persistentVolumeClaim:
claimName: myclaim
</code></pre>
<p>And here is the state of the pod:</p>
<pre><code> 3h 46s 109 {kubelet jk-kube2-master} Warning FailedMount Unable to mount volumes for pod "jk-test3_default(0f83368f-96d4-11e6-8243-fa163ebfcd23)": timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
3h 46s 109 {kubelet jk-kube2-master} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "jk-test3"/"default". list of unattached/unmounted volumes=[jk-test]
</code></pre>
<p>I've verified that my openstack provider is exposing cinder v1 and v2 APIs and the previous logs from openstack_instances show the nova API is accessible. Despite that, I never see any attempts on k8s part to communicate with cinder or nova to mount the volume.</p>
<p>Here are what I think are the relevant log messages regarding the failure to mount:</p>
<pre><code>kubelet: I1020 06:51:11.840341 24027 desired_state_of_world_populator.go:323] Extracted volumeSpec (0x23a45e0) from bound PV (pvName "jk-test") and PVC (ClaimName "default"/"myclaim" pvcUID 51919dfb-96c9-11e6-8243-fa163ebfcd23)
kubelet: I1020 06:51:11.840424 24027 desired_state_of_world_populator.go:241] Added volume "jk-test" (volSpec="jk-test") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.840474 24027 desired_state_of_world_populator.go:241] Added volume "default-token-js40f" (volSpec="default-token-js40f") for pod "f957f140-96cb-11e6-8243-fa163ebfcd23" to desired state.
kubelet: I1020 06:51:11.896176 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896330 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896361 24027 reconciler.go:201] Attempting to start VerifyControllerAttachedVolume for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896390 24027 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/f957f140-96cb-11e6-8243-fa163ebfcd23-default-token-js40f" (spec.Name: "default-token-js40f") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23")
kubelet: I1020 06:51:11.896420 24027 config.go:98] Looking for [api file], have seen map[file:{} api:{}]
kubelet: E1020 06:51:11.896566 24027 nestedpendingoperations.go:253] Operation for "\"kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950\"" failed. No retries permitted until 2016-10-20 06:53:11.896529189 -0700 PDT (durationBeforeRetry 2m0s). Error: Volume "kubernetes.io/cinder/48d2d1e6-e063-437a-855f-8b62b640a950" (spec.Name: "jk-test") pod "f957f140-96cb-11e6-8243-fa163ebfcd23" (UID: "f957f140-96cb-11e6-8243-fa163ebfcd23") has not yet been added to the list of VolumesInUse in the node's volume status.
</code></pre>
<p>Is there a piece I am missing? I've followed the instructions here: <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/mysql-cinder-pd" rel="nofollow">k8s - mysql-cinder-pd example</a>, but haven't been able to get any communication. As another data point, I tried defining a StorageClass as provided by k8s; here are the associated StorageClass and PVC files:</p>
<pre><code># cat cinderStorage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: gold
provisioner: kubernetes.io/cinder
parameters:
availability: nova
# cat dynamicPVC.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: dynamicclaim
annotations:
volume.beta.kubernetes.io/storage-class: "gold"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
</code></pre>
<p>The StorageClass reports success, but when I try to create the PVC it gets stuck in the 'pending' state and reports 'no volume plugin matched':</p>
<pre><code># kubectl get storageclass
NAME TYPE
gold kubernetes.io/cinder
# kubectl describe pvc dynamicclaim
Name: dynamicclaim
Namespace: default
Status: Pending
Volume:
Labels: <none>
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 15s 5867 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
</code></pre>
<p>This contradicts what's in the logs for the plugins that were loaded:</p>
<pre><code>grep plugins /var/log/messages
kubelet: I1019 11:39:41.382517 22435 plugins.go:56] Registering credential provider: .dockercfg
kubelet: I1019 11:39:41.382673 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/aws-ebs"
kubelet: I1019 11:39:41.382685 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/empty-dir"
kubelet: I1019 11:39:41.382691 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/gce-pd"
kubelet: I1019 11:39:41.382698 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/git-repo"
kubelet: I1019 11:39:41.382705 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/host-path"
kubelet: I1019 11:39:41.382712 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/nfs"
kubelet: I1019 11:39:41.382718 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/secret"
kubelet: I1019 11:39:41.382725 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/iscsi"
kubelet: I1019 11:39:41.382734 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/glusterfs"
jk-kube2-master kubelet: I1019 11:39:41.382741 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/rbd"
kubelet: I1019 11:39:41.382749 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cinder"
kubelet: I1019 11:39:41.382755 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/quobyte"
kubelet: I1019 11:39:41.382762 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/cephfs"
kubelet: I1019 11:39:41.382781 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/downward-api"
kubelet: I1019 11:39:41.382798 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/fc"
kubelet: I1019 11:39:41.382804 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/flocker"
kubelet: I1019 11:39:41.382822 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-file"
kubelet: I1019 11:39:41.382839 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/configmap"
kubelet: I1019 11:39:41.382846 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/vsphere-volume"
kubelet: I1019 11:39:41.382853 22435 plugins.go:355] Loaded volume plugin "kubernetes.io/azure-disk"
</code></pre>
<p>And I have the nova and cinder clients installed on my machine:</p>
<pre><code># which nova
/usr/bin/nova
# which cinder
/usr/bin/cinder
</code></pre>
<p>Any help is appreciated, I'm sure I'm missing something simple here.</p>
<p>Thanks!</p>
| <p>Cinder volumes definitely work with Kubernetes 1.5.0 and 1.5.3 (I think they also worked on 1.4.6, which I experimented with first; I don't know about earlier versions).</p>
<h2>Short answer</h2>
<p>In your Pod YAML file you were missing the <code>volumeMounts:</code> section.</p>
<h2>Longer answer</h2>
<h3>First possibility: no PV or PVC</h3>
<p>Actually, when you already have an existing cinder volume, you can just use a Pod (or Deployment); no PV or PVC is needed. Example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
    spec:
      containers:
      - name: nginx
        image: "nginx:1.11.6-alpine"
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: data
        cinder:
          volumeID: e143368a-440a-400f-b8a4-dd2f46c51888
</code></pre>
<p>This will create a Deployment and a Pod. The cinder volume will be mounted into the nginx container. To verify that you are using a volume, you can edit a file inside the nginx container in the <code>/usr/share/nginx/html/</code> directory and stop the container. Kubernetes will create a new container and, inside it, the files in the <code>/usr/share/nginx/html/</code> directory will be the same as they were in the stopped container.</p>
<p>After you delete the Deployment resource, the cinder volume is not deleted, but it is detached from the VM.</p>
<h3>Second possibility: with PV and PVC</h3>
<p>Another possibility, if you already have an existing cinder volume, is to use PV and PVC resources. You said you want to use a storage class, though the Kubernetes docs also allow not using one: </p>
<blockquote>
<p>A PV with no annotation or its class annotation set to "" has no class and can only be bound to PVCs that request no particular class</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/user-guide/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer">source</a></p>
<p>An example storage-class is:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # to be used as value for annotation:
  # volume.beta.kubernetes.io/storage-class
  name: cinder-gluster-hdd
provisioner: kubernetes.io/cinder
parameters:
  # openstack volume type
  type: gluster_hdd
  # openstack availability zone
  availability: nova
</code></pre>
<p>Then, you use your existing cinder volume with ID 48d2d1e6-e063-437a-855f-8b62b640a950 in a PV:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  # name of a pv resource visible in Kubernetes, not the name of
  # a cinder volume
  name: pv0001
  labels:
    pv-first-label: "123"
    pv-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: cinder-gluster-hdd
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  cinder:
    # ID of cinder volume
    volumeID: 48d2d1e6-e063-437a-855f-8b62b640a950
</code></pre>
<p>Then create a PVC whose label selector matches the labels of the PV:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: vol-test
  labels:
    pvc-first-label: "123"
    pvc-second-label: abc
  annotations:
    volume.beta.kubernetes.io/storage-class: "cinder-gluster-hdd"
spec:
  accessModes:
    # the volume can be mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  selector:
    matchLabels:
      pv-first-label: "123"
      pv-second-label: abc
</code></pre>
<p>and then a Deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vol-test
  labels:
    fullname: vol-test
    environment: testing
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        fullname: vol-test
        environment: testing
    spec:
      nodeSelector:
        "is_worker": "true"
      containers:
      - name: nginx-exist-vol
        image: "nginx:1.11.6-alpine"
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - echo "heey-testing" > /usr/share/nginx/html/index.html && nginx "-g daemon off;"
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: vol-test
</code></pre>
<p>After you delete the k8s resources, the cinder volume is not deleted, but it is detached from the VM.</p>
<p>Using a PV lets you set <code>persistentVolumeReclaimPolicy</code>.</p>
<h3>Third possibility: no cinder volume created</h3>
<p>If you don't have a cinder volume created, Kubernetes can create it for you. You have to then provide a PVC resource. I won't describe this variant, since it was not asked for.</p>
<h3>Disclaimer</h3>
<p>I suggest that anyone interested in finding the best option experiments themselves and compares the methods. Also, I used label names like <code>pv-first-label</code> and <code>pvc-first-label</code> only to make the example easier to follow. You can use e.g. <code>first-label</code> everywhere.</p>
|
<p>I'm unable to delete the Kubernetes pod; it keeps getting recreated.<br>
There's no service or deployment associated with the pod. There's a label on the pod though, is that the root cause?<br>
If I edit the label out with <code>kubectl edit pod podname</code> it removes the label from the pod, but creates a new pod with the same label at the same time. Why?</p>
| <p>A Pod can be created by a ReplicationController or a ReplicaSet. The latter might in turn be created by a Deployment. The described behavior strongly indicates that the Pod is managed by one of these two.</p>
<p>You can check for these with this commands:</p>
<pre><code>kubectl get rs
kubectl get rc
</code></pre>
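<p>If one of those shows a controller that owns the Pod, deleting or scaling down that controller (rather than the Pod itself) will stop the recreation. A minimal sketch, assuming the owner turns out to be a ReplicaSet called <code>my-rs</code> (the name is a placeholder):</p>
<pre><code># scale the owner down to zero, or delete it outright
kubectl scale rs my-rs --replicas=0
kubectl delete rs my-rs
</code></pre>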
|
<p>I have the following ingress configuration, but when I call <code>www.domain.com/api</code> it always opens my UI service instead of the API, and the same thing happens if I call something else after <strong>api</strong>, for example <code>www.domain.com/api/v1/projects</code>.</p>
<p>How can I fix that?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
spec:
tls:
- secretName: tls
backend:
serviceName: ui
servicePort: 5003
rules:
- host: www.domain.com
http:
paths:
- path: /
backend:
serviceName: ui
servicePort: 5003
- path: /api
backend:
serviceName: api
servicePort: 5000
</code></pre>
| <p>Here is the way I fixed this problem. I hope this can help others.</p>
<p>Thanks @aleks!!</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kronus
spec:
tls:
- secretName: tls
backend:
serviceName: ui
servicePort: 5003
rules:
- host: domain.com
http:
paths:
- path: /api
backend:
serviceName: api
servicePort: 5000
- path: /api/*
backend:
serviceName: api
servicePort: 5000
- host: www.domain.com
http:
paths:
- path: /api
backend:
serviceName: api
servicePort: 5000
- path: /api/*
backend:
serviceName: api
servicePort: 5000
</code></pre>
|
<p>I'm trying to configure Kubernetes, and in my project I've separated the UI and the API.
I created one Pod and I exposed both as services.</p>
<p>How can I set API_URL inside pod.yaml configuration in order to send requests from user's browser?</p>
<p>I can't use localhost because the communication isn't between containers.</p>
<p>pod.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: project
labels:
name: project
spec:
containers:
- image: 'ui:v1'
name: ui
ports:
- name: ui
containerPort: 5003
hostPort: 5003
env:
- name: API_URL
value: <how can I set the API address here?>
- image: 'api:v1'
name: api
ports:
- name: api
containerPort: 5000
hostPort: 5000
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: postgres-url
key: url
</code></pre>
<p>services.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api
labels:
name: api
spec:
type: NodePort
ports:
- name: 'http'
protocol: 'TCP'
port: 5000
targetPort: 5000
nodePort: 30001
selector:
name: project
---
apiVersion: v1
kind: Service
metadata:
name: ui
labels:
name: ui
spec:
type: NodePort
ports:
- name: 'http'
protocol: 'TCP'
port: 80
targetPort: 5003
nodePort: 30003
selector:
name: project
</code></pre>
| <p>I created an Ingress to solve this issue and point to DNS instead of IP.</p>
<p>ingres.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: project
spec:
tls:
- secretName: tls
backend:
serviceName: ui
servicePort: 5003
rules:
- host: www.project.com
http:
paths:
- backend:
serviceName: ui
servicePort: 5003
- host: api.project.com
http:
paths:
- backend:
serviceName: api
servicePort: 5000
</code></pre>
<p>deployment.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: project
labels:
name: project
spec:
containers:
- image: 'ui:v1'
name: ui
ports:
- name: ui
containerPort: 5003
hostPort: 5003
env:
- name: API_URL
value: https://api.project.com
- image: 'api:v1'
name: api
ports:
- name: api
containerPort: 5000
hostPort: 5000
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: postgres-url
key: url
</code></pre>
|
<p><a href="https://i.stack.imgur.com/TmNQ8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TmNQ8.png" alt="enter image description here"></a></p>
<p>I have installed Kubernetes on my Ubuntu machine. When I run the following command I see the external IP as "pending"; svc.yml is the Service object from linkerd (link below).</p>
<p>link: <a href="https://linkerd.io/getting-started/k8s/" rel="nofollow noreferrer">https://linkerd.io/getting-started/k8s/</a></p>
| <p>The yaml you specified for the Service has <code>type: LoadBalancer</code>. That will only work with a cloud provider - so not on your bare metal Ubuntu installation.</p>
<p>Modify the Service definition and specify a different type, for example <a href="https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types" rel="noreferrer">clusterIP</a></p>
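<p>For example, a sketch of the Service switched to <code>NodePort</code> (the name, selector and ports are placeholders; keep whatever the linkerd definition uses), which stays reachable from outside the cluster on a port of every node without needing a cloud load balancer:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort          # instead of LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080       # optional; must be in the default 30000-32767 range
</code></pre>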
|
<p>Is it possible to map the device port (USB port) of a worker node to a Pod?
Similar to <code>docker create --device=/dev/ttyACM0:/dev/ttyACM0</code></p>
<p>Is it possible? I checked the reference doc, but could not find anything.</p>
<p>In a Docker service, is it possible to map a <code>--device</code> port to a service container (if I am running only 1 container)?</p>
| <p>You can actually get this to work. You need to run the container privileged and use a hostPath like this:</p>
<pre><code> containers:
- name: acm
securityContext:
privileged: true
volumeMounts:
- mountPath: /dev/ttyACM0
name: ttyacm
volumes:
- name: ttyacm
hostPath:
path: /dev/ttyACM0
</code></pre>
|
<p>When provisioning a kubernetes cluster with <code>kubeadm init</code> it creates a cluster which keeps the <code>kube-apiserver</code>, <code>etcd</code>, <code>kube-controller-manager</code> and <code>kube-scheduler</code> processes within docker containers.</p>
<p>Whenever some configuration (e.g. access tokens) for the <code>kube-apiserver</code> is changed, I have to restart the related server. While I could usually run <code>systemctl restart kube-apiserver.service</code> on other installations, on this installation I have to kill the docker container or restart the system to restart it.</p>
<p>So is there a better way to restart the <code>kube-apiserver</code>?</p>
| <p>You can delete the kube-apiserver Pod. It's a <a href="https://kubernetes.io/docs/admin/static-pods/" rel="noreferrer">static Pod</a> (in the case of a kubeadm installation) and will be recreated immediately.</p>
<p>The manifest directory for that installation is <code>/etc/kubernetes/manifests</code>; simply touching the kube-apiserver manifest in that directory (kube-apiserver.json or kube-apiserver.yaml, depending on the kubeadm version) will also cause the kubelet to recreate the Pod.</p>
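<p>A quick sketch of both variants (the Pod name suffix and the manifest file extension depend on your node name and kubeadm version):</p>
<pre><code># let the kubelet recreate the static Pod (the suffix is your node's name)
kubectl -n kube-system delete pod kube-apiserver-<node-name>

# or trigger re-creation by touching the static Pod manifest
touch /etc/kubernetes/manifests/kube-apiserver.yaml   # or .json on older kubeadm versions
</code></pre>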
|
<p>What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.</p>
| <p>In addition to the additional API entities, as mentioned by @SteveS, Openshift also has advanced security concepts.</p>
<p>This can be very helpful when running in an Enterprise context with specific requirements regarding security.
As much as this can be a strength for real-world applications in production, it can be a source of much frustration in the beginning.
One notable example is the fact that, by default, containers run as <code>root</code> in Kubernetes, but run under an <code>arbitrary user</code> with a high ID (e.g. 1000090000) in Openshift. <em>This means that many containers from DockerHub do not work as expected</em>. For some popular applications, the <a href="https://access.redhat.com/containers" rel="noreferrer" title="Red Hat Container Catalog">Red Hat Container Catalog</a> supplies images with this feature/limitation in mind. However, this catalog contains only a subset of popular containers.</p>
<p>To get an idea of the system, I strongly suggest starting out with Kubernetes. <a href="https://github.com/kubernetes/minikube" rel="noreferrer" title="Minikube">Minikube</a> is an excellent way to quickly setup a local, one-node Kubernetes cluster to play with. When you are familiar with the basic concepts, you will better understand the implications of the Openshift features and design decisions.</p>
|
<p>What does DiskPressure really means and how it can be avoided in kubernetes during container creation?</p>
<p>Seemingly, when I create a new container on a node there is a high chance that it crashes the whole node because of the pressure...</p>
| <p>From the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions" rel="nofollow noreferrer">documentation</a> you'll find that <code>DiskPressure</code> raises when:</p>
<blockquote>
<p>Available disk space and inodes on either the node’s root filesytem or image filesystem has satisfied an eviction threshold</p>
</blockquote>
<p>Whenever these issues occur, it is important to learn about the conditions of your nodes (how much space and how many inodes are left, ...) and about the related container images. This can be done with some basic system monitoring (see <a href="https://kubernetes.io/docs/user-guide/monitoring/" rel="nofollow noreferrer">Resource Usage Monitoring</a>).</p>
<p>Once you know about the conditions, you should consider adjusting the <code>--low-diskspace-threshold-mb</code>, <code>--image-gc-high-threshold</code> and <code>--image-gc-low-threshold</code> parameters of your <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">kubelet</a>, so that there's always enough space for normal operation, or consider provisioning more space for your nodes, depending on the requirements.</p>
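<p>As a rough sketch, you could first check the node condition and filesystem usage, and then tune the kubelet flags (the values below are only examples, not recommendations):</p>
<pre><code># see whether DiskPressure is set and what the node reports
kubectl describe node <node-name>
df -h    # disk space on the node
df -i    # inodes on the node

# example kubelet flags (illustrative values)
kubelet --low-diskspace-threshold-mb=1024 \
        --image-gc-high-threshold=90 \
        --image-gc-low-threshold=80 ...
</code></pre>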
|
<p>I am on Suse 12.01 Enterprise and trying to get Minikube working. The VM works already and the minikube shell tool can communicate.</p>
<p>But the kubectl still can't talk to the kubernetes master. I'm trying to debug that, and the best way to get additional info seems to be running a random command with <code>-v 9</code>. Doing that I get the following output:</p>
<pre><code>$ kubectl get pots -v 9
I0310 14:02:27.727767 29330 loader.go:354] Config loaded from file /home/D069407/.kube/config
I0310 14:02:27.728479 29330 round_trippers.go:299] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.5.3 (linux/amd64) kubernetes/029c3a4" https://192.168.99.104:8443/api
I0310 14:03:42.704009 29330 round_trippers.go:318] GET https://192.168.99.104:8443/api in 74975 milliseconds
I0310 14:03:42.704037 29330 round_trippers.go:324] Response Headers:
I0310 14:03:42.704103 29330 helpers.go:221] Connection error: Get https://192.168.99.104:8443/api: Service Unavailable
F0310 14:03:42.704111 29330 helpers.go:116] Unable to connect to the server: Service Unavailable
</code></pre>
<p>Not much info, but my guess would be that <code>curl -k -vvvv ....</code> would give me more info. However, just doing the same curl as in the log results in an authentication error, since the api server does client auth (right?). So that doesn't work.</p>
<p>How can I continue to debug now? Is there a secret ninja way to give curl auth params without adding them to the shell call? Is kubectl actually doing another request and just prints the curl to the log to give some hints about what it's calling?</p>
<p>*edit: Discussing this with a colleague, we both agreed that this must be a problem internal to the minikube VM.</p>
<p><code>minikube logs</code> are starting like this:</p>
<pre><code>-- Logs begin at Fri 2017-03-10 12:43:34 UTC, end at Fri 2017-03-10 14:44:11 UTC. --
Mar 10 12:45:45 minikube systemd[1]: Starting Localkube...
Mar 10 12:45:45 minikube localkube[3496]: I0310 12:45:45.977140 3496 start.go:77] Feature gates:%!(EXTRA string=)
Mar 10 12:45:45 minikube localkube[3496]: localkube host ip address: 10.0.2.15
Mar 10 12:45:45 minikube localkube[3496]: I0310 12:45:45.981395 3496 server.go:215] Using iptables Proxier.
Mar 10 12:45:45 minikube localkube[3496]: W0310 12:45:45.981764 3496 server.go:468] Failed to retrieve node info: Get http://127.0.0.1:8080/api/v1/nodes/minikube: dial tcp 127.0.0.1:8080:
getsockopt: connection refused
Mar 10 12:45:45 minikube localkube[3496]: W0310 12:45:45.981879 3496 proxier.go:249] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
Mar 10 12:45:45 minikube localkube[3496]: W0310 12:45:45.981947 3496 proxier.go:254] clusterCIDR not specified, unable to distinguish between internal and external traffic
Mar 10 12:45:45 minikube localkube[3496]: I0310 12:45:45.982082 3496 server.go:227] Tearing down userspace rules.
Mar 10 12:45:45 minikube localkube[3496]: Starting etcd...
Mar 10 12:45:45 minikube localkube[3496]: E0310 12:45:45.991070 3496 reflector.go:188] pkg/proxy/config/api.go:30: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?r
esourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 10 12:45:45 minikube localkube[3496]: E0310 12:45:45.991108 3496 reflector.go:188] pkg/proxy/config/api.go:33: Failed to list *api.Endpoints: Get http://127.0.0.1:8080/api/v1/endpoint
s?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 10 12:45:45 minikube localkube[3496]: name = kubeetcd
Mar 10 12:45:45 minikube localkube[3496]: data dir = /var/lib/localkube/etcd
Mar 10 12:45:45 minikube localkube[3496]: member dir = /var/lib/localkube/etcd/member
Mar 10 12:45:45 minikube localkube[3496]: heartbeat = 100ms
Mar 10 12:45:45 minikube localkube[3496]: election = 1000ms
Mar 10 12:45:45 minikube localkube[3496]: snapshot count = 10000
Mar 10 12:45:45 minikube localkube[3496]: advertise client URLs = http://0.0.0.0:2379
Mar 10 12:45:45 minikube localkube[3496]: initial advertise peer URLs = http://0.0.0.0:2380
Mar 10 12:45:45 minikube localkube[3496]: initial cluster = kubeetcd=http://0.0.0.0:2380
Mar 10 12:45:45 minikube localkube[3496]: starting member fcf2ad36debdd5bb in cluster 7f055ae3b0912328
Mar 10 12:45:45 minikube localkube[3496]: fcf2ad36debdd5bb became follower at term 0
Mar 10 12:45:45 minikube localkube[3496]: newRaft fcf2ad36debdd5bb [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Mar 10 12:45:45 minikube localkube[3496]: fcf2ad36debdd5bb became follower at term 1
Mar 10 12:45:46 minikube localkube[3496]: starting server... [version: 3.0.14, cluster version: to_be_decided]
Mar 10 12:45:46 minikube localkube[3496]: Starting apiserver...
Mar 10 12:45:46 minikube localkube[3496]: Starting controller-manager...
Mar 10 12:45:46 minikube localkube[3496]: Starting scheduler...
Mar 10 12:45:46 minikube localkube[3496]: Starting kubelet...
Mar 10 12:45:46 minikube localkube[3496]: added member fcf2ad36debdd5bb [http://0.0.0.0:2380] to cluster 7f055ae3b0912328
Mar 10 12:45:46 minikube localkube[3496]: Starting proxy...
Mar 10 12:45:46 minikube localkube[3496]: Starting storage-provisioner...
</code></pre>
<p>inside <code>minikube ssh</code> the service api works, though. Checked with <code>curl 127.0.0.1:8080/api</code> and receiving nice json.</p>
<p>*edit: based on feedback, some more info.</p>
<p>curl <em>inside</em> the minikube vm:</p>
<pre><code>$ curl localhost:8080
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/apps",
"/apis/apps/v1beta1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v2alpha1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1alpha1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1alpha1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/extensions/third-party-resources",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/logs",
"/metrics",
"/swaggerapi/",
"/ui/",
"/version"
]
}$ curl localhost:8080/api
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "10.0.2.15:8443"
}
]
}
</code></pre>
<p>kube config (outside vm):</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/<user>/.minikube/ca.crt
server: https://192.168.99.104:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: /home/<user>/.minikube/apiserver.crt
client-key: /home/<user>/.minikube/apiserver.key
</code></pre>
| <p>You said you tried <code>curl 127.0.0.1:8080/api</code>, but according to the logs it tries to connect via HTTPS. So you should try <code>curl https://127.0.0.1:8080/api</code>.</p>
<p>I searched the source code for the term <a href="https://github.com/kubernetes/kubernetes/search?utf8=%E2%9C%93&q=%22Service+Unavailable%22&type=Code" rel="nofollow noreferrer">Service Unavailable</a> and it is, <a href="https://github.com/kubernetes/kubernetes/blob/8fd414537b5143ab039cb910590237cabf4af783/vendor/github.com/go-openapi/runtime/statuses.go#L79" rel="nofollow noreferrer">for example</a>, the error description for an HTTP 503 return code.</p>
<p>My gut feeling is that you will get an HTTP 503 when you try to <code>curl https://127.0.0.1:8080/api</code>.</p>
<p><strong>Edit:</strong> Since you use minikube, we can assume that it is running correctly. In this case it is more likely that there is a configuration issue. Note that when you start minikube, it doesn't run on the host itself but in a virtual machine, so the server address and certificates in your kube config have to match that VM. Therefore you should take a close look at the kube config.</p>
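<p>If you want to issue the request by hand with the same credentials kubectl uses, curl can take the client certificate, key and CA directly from the paths in your kube config (these are the paths from your <code>~/.kube/config</code> above):</p>
<pre><code>curl --cacert ~/.minikube/ca.crt \
     --cert ~/.minikube/apiserver.crt \
     --key ~/.minikube/apiserver.key \
     https://192.168.99.104:8443/api
</code></pre>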
|
<p>We wanted pod names to be resolved to IPs to configure the seed nodes in an akka cluster. This was happening by using the concept of a headless service and stateful sets in Kubernetes. But how do I expose a headless service externally, to hit an endpoint from outside?</p>
| <p>It is hard to expose a headless Kubernetes service to the outside, since this would require some complex TCP proxies. The reason for this is that a headless service is only a DNS record with an IP for each pod, and these IPs are only reachable from within the cluster.</p>
<p>One solution is to expose this via node ports, which means the ports are opened on the host itself. Unfortunately this makes service discovery harder, because you don't know which host has a pod scheduled on it.</p>
<p>You can set up node ports via (a minimal sketch follows the list):</p>
<ul>
<li>the services: <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/user-guide/services/#type-nodeport</a></li>
<li>or directly in the Pod by defining <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="nofollow noreferrer"><code>spec.containers[].ports[].hostPort</code></a></li>
</ul>
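<p>A minimal sketch of a NodePort Service placed in front of the same pods (the name, selector and ports are placeholders); note that this load-balances over the set of pods on a fixed port of every node rather than addressing individual pods:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  type: NodePort
  selector:
    app: my-app          # must match the pod labels
  ports:
  - port: 2551           # e.g. the akka remoting port
    targetPort: 2551
    nodePort: 30551      # reachable on every node's IP
</code></pre>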
<p>Another alternative is to use a <a href="https://kubernetes.io/docs/user-guide/services/#type-loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>, if your cloud provider supports that. Unfortunately you cannot address each instance itself, since they share the same IP. This might not be suitable for your application.</p>
|
<p>I have a 4-node Kubernetes cluster. My application is running with 2 replica instances. I am using a Deployment resource with a ReplicaSet. As per the documentation, a ReplicaSet always ensures that the specified number of application instances will be running. If I delete one pod instance, it is restarted on the same or a different node. But when I simulated the failure of a pod instance by stopping the Docker engine on one node, kubectl shows the status of the pod instance as error but does not restart the pod on another node. Is this the expected behaviour or am I missing something?</p>
| <p>As far as I can see, Kubernetes changed that behavior with version 1.5. If I interpret the <a href="https://github.com/kubernetes/kubernetes/blob/b0ce93f9be25c762fd4b746077fcda2aaa5b12bd/CHANGELOG.md#notable-changes-to-existing-behavior" rel="nofollow noreferrer">docs</a> correctly, the Pods of the failed node are still registered in the apiserver, since the node died abruptly and wasn't able to unregister them. Because the Pod is still registered, the ReplicaSet doesn't replace it.</p>
<p>The reason for this is that Kubernetes cannot tell whether it is a network error (e.g. split-brain) or a node failure. With StatefulSets being introduced, Kubernetes needs to make sure that no Pod is started more than once.</p>
<p>This may sound like a bug, but if you have a properly configured cloud provider (e.g. for GCE or AWS), Kubernetes can see whether that Node is still running. If you shut down that node, the controller should unregister the Node and its Pods and then create a new Pod on another Node. Together with a Node health check and Node replacement, the cluster is able to heal itself.</p>
<p>How the cloud-provider is configured depends highly on your Kubernetes setup.</p>
|
<p>I annotate my Kubernetes objects with things like version and whom to contact when there are failures. How would I relay this information to Prometheus, knowing that these annotation values will frequently change? I can't capture this information in Prometheus labels, as they serve as the primary key for a target (e.g. if the version changes, it's a new target altogether, which I don't want). Thanks!</p>
| <p>I just wrote a blog post about this exact topic! <a href="https://www.weave.works/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/" rel="nofollow noreferrer">https://www.weave.works/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/</a></p>
<p>The trick is Kubelet/cAdvisor doesn't expose them directly, so I run a little exporter which does, and join this with the pod name in PromQL. The exporter is: <a href="https://github.com/tomwilkie/kube-api-exporter" rel="nofollow noreferrer">https://github.com/tomwilkie/kube-api-exporter</a></p>
<p>You can do a join in Prometheus like this:</p>
<pre><code>sum by (namespace, name) (
sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod_name, namespace)
* on (pod_name) group_left(name)
k8s_pod_labels{job="monitoring/kube-api-exporter"}
)
</code></pre>
<p>Here I'm using a label called "name", but it could be any label. </p>
<p>We use the same trick to get metrics (such as error rate) by version, which we then use to drive our continuous deployment system. kube-api-exporter exports a bunch of useful meta-information about Kubernetes objects to Prometheus.</p>
<p>Hope this helps!</p>
|
<p>I have Kubernetes installed on Container Linux by CoreOS alpha (1353.1.0)
using hyperkube v1.5.5_coreos.0 using my fork of coreos-kubernetes install scripts at <a href="https://github.com/kfirufk/coreos-kubernetes" rel="noreferrer">https://github.com/kfirufk/coreos-kubernetes</a>.</p>
<p>I have two ContainerOS machines.</p>
<ul>
<li>coreos-2.tux-in.com resolved as 192.168.1.2 as controller</li>
<li>coreos-3.tux-in.com resolved as 192.168.1.3 as worker</li>
</ul>
<p><code>kubectl get pods --all-namespaces</code> returns</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
ceph ceph-mds-2743106415-rkww4 0/1 Pending 0 1d
ceph ceph-mon-check-3856521781-bd6k5 1/1 Running 0 1d
kube-lego kube-lego-3323932148-g2tf4 1/1 Running 0 1d
kube-system calico-node-xq6j7 2/2 Running 0 1d
kube-system calico-node-xzpp2 2/2 Running 4560 1d
kube-system calico-policy-controller-610849172-b7xjr 1/1 Running 0 1d
kube-system heapster-v1.3.0-beta.0-2754576759-v1f50 2/2 Running 0 1d
kube-system kube-apiserver-192.168.1.2 1/1 Running 0 1d
kube-system kube-controller-manager-192.168.1.2 1/1 Running 1 1d
kube-system kube-dns-3675956729-r7hhf 3/4 Running 3924 1d
kube-system kube-dns-autoscaler-505723555-l2pph 1/1 Running 0 1d
kube-system kube-proxy-192.168.1.2 1/1 Running 0 1d
kube-system kube-proxy-192.168.1.3 1/1 Running 0 1d
kube-system kube-scheduler-192.168.1.2 1/1 Running 1 1d
kube-system kubernetes-dashboard-3697905830-vdz23 1/1 Running 1246 1d
kube-system monitoring-grafana-4013973156-m2r2v 1/1 Running 0 1d
kube-system monitoring-influxdb-651061958-2mdtf 1/1 Running 0 1d
nginx-ingress default-http-backend-150165654-s4z04 1/1 Running 2 1d
</code></pre>
<p>so I can see that the <code>kube-dns</code> pod keeps restarting.</p>
<p><code>kubectl describe pod kube-dns-3675956729-r7hhf --namespace=kube-system</code> returns:</p>
<pre><code>Name: kube-dns-3675956729-r7hhf
Namespace: kube-system
Node: 192.168.1.2/192.168.1.2
Start Time: Sat, 11 Mar 2017 17:54:14 +0000
Labels: k8s-app=kube-dns
pod-template-hash=3675956729
Status: Running
IP: 10.2.67.243
Controllers: ReplicaSet/kube-dns-3675956729
Containers:
kubedns:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:kubedns
Image: gcr.io/google_containers/kubedns-amd64:1.9
Image ID: rkt://sha512-c7b7c9c4393bea5f9dc5bcbe1acf1036c2aca36ac14b5e17fd3c675a396c4219
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-map=kube-dns
--v=2
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: False
Restart Count: 981
Liveness: http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables:
PROMETHEUS_PORT: 10055
dnsmasq:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:dnsmasq
Image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
Image ID: rkt://sha512-8c5f8b40f6813bb676ce04cd545c55add0dc8af5a3be642320244b74ea03f872
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--no-resolv
--server=127.0.0.1#10053
--log-facility=-
Requests:
cpu: 150m
memory: 10Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Liveness: http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
dnsmasq-metrics:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:dnsmasq-metrics
Image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1
Image ID: rkt://sha512-ceb3b6af1cd67389358be14af36b5e8fb6925e78ca137b28b93e0d8af134585b
Port: 10054/TCP
Args:
--v=2
--logtostderr
Requests:
memory: 10Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
healthz:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:healthz
Image: gcr.io/google_containers/exechealthz-amd64:v1.2.0
Image ID: rkt://sha512-3a85b0533dfba81b5083a93c7e091377123dac0942f46883a4c10c25cf0ad177
Port: 8080/TCP
Args:
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
--url=/healthz-dnsmasq
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
--url=/healthz-kubedns
--port=8080
--quiet
Limits:
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-zqbdp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zqbdp
QoS Class: Burstable
Tolerations: CriticalAddonsOnly=:Exists
No events.
</code></pre>
<p>which shows that <code>kubedns-amd64:1.9</code> is in <code>Ready: false</code></p>
<p>this is my <code>kude-dns-de.yaml</code> file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
spec:
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.9
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthz-kubedns
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
- --config-map=kube-dns
# This should be set to v=2 only after the new image (cut from 1.5) has
# been released, otherwise we will flood the logs.
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
livenessProbe:
httpGet:
path: /healthz-dnsmasq
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
- --log-facility=-
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
# see: https://github.com/kubernetes/kubernetes/issues/29055 for details
resources:
requests:
cpu: 150m
memory: 10Mi
- name: dnsmasq-metrics
image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 10Mi
- name: healthz
image: gcr.io/google_containers/exechealthz-amd64:v1.2.0
resources:
limits:
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
args:
- --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- --url=/healthz-dnsmasq
- --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
- --url=/healthz-kubedns
- --port=8080
- --quiet
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default
</code></pre>
<p>and this is my <code>kube-dns-svc.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.3.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
</code></pre>
<p>any information regarding the issue would be greatly appreciated!</p>
<h1>update</h1>
<p><code>rkt list --full 2> /dev/null | grep kubedns</code> shows:</p>
<pre><code>744a4579-0849-4fae-b1f5-cb05d40f3734 kubedns gcr.io/google_containers/kubedns-amd64:1.9 sha512-c7b7c9c4393b running 2017-03-22 22:14:55.801 +0000 UTC 2017-03-22 22:14:56.814 +0000 UTC
</code></pre>
<p><code>journalctl -m _MACHINE_ID=744a45790849b1f5cb05d40f3734</code> provides:</p>
<pre><code>Mar 22 22:17:58 kube-dns-3675956729-sthcv kubedns[8]: E0322 22:17:58.619254 8 reflector.go:199] pkg/dns/dns.go:145: Failed to list *api.Endpoints: Get https://10.3.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.3.0.1:443: connect: network is unreachable
</code></pre>
<p>I tried to add <code> - --proxy-mode=userspace</code> to <code>/etc/kubernetes/manifests/kube-proxy.yaml</code> but the results are the same.</p>
<p><code>kubectl get svc --all-namespaces</code> provides:</p>
<pre><code>NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ceph ceph-mon None <none> 6789/TCP 1h
default kubernetes 10.3.0.1 <none> 443/TCP 1h
kube-system heapster 10.3.0.2 <none> 80/TCP 1h
kube-system kube-dns 10.3.0.10 <none> 53/UDP,53/TCP 1h
kube-system kubernetes-dashboard 10.3.0.116 <none> 80/TCP 1h
kube-system monitoring-grafana 10.3.0.187 <none> 80/TCP 1h
kube-system monitoring-influxdb 10.3.0.214 <none> 8086/TCP 1h
nginx-ingress default-http-backend 10.3.0.233 <none> 80/TCP 1h
</code></pre>
<p><code>kubectl get cs</code> provides:</p>
<pre><code>NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
</code></pre>
<p>my <code>kube-proxy.yaml</code> has the following content:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
annotations:
rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.5.5_coreos.0
command:
- /hyperkube
- proxy
- --cluster-cidr=10.2.0.0/16
- --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: "ssl-certs"
- mountPath: /etc/kubernetes/controller-kubeconfig.yaml
name: "kubeconfig"
readOnly: true
- mountPath: /etc/kubernetes/ssl
name: "etc-kube-ssl"
readOnly: true
- mountPath: /var/run/dbus
name: dbus
readOnly: false
volumes:
- hostPath:
path: "/usr/share/ca-certificates"
name: "ssl-certs"
- hostPath:
path: "/etc/kubernetes/controller-kubeconfig.yaml"
name: "kubeconfig"
- hostPath:
path: "/etc/kubernetes/ssl"
name: "etc-kube-ssl"
- hostPath:
path: /var/run/dbus
name: dbus
</code></pre>
<p>this is all the valuable information I could find. any ideas? :)</p>
<h1>update 2</h1>
<p>output of <code>iptables-save</code> on the controller ContainerOS at <a href="http://pastebin.com/2GApCj0n" rel="noreferrer">http://pastebin.com/2GApCj0n</a></p>
<h1>update 3</h1>
<p>I ran curl on the controller node</p>
<pre><code># curl https://10.3.0.1 --insecure
Unauthorized
</code></pre>
<p>means it can access it properly, i didn't add enough parameters for it to be authorized right ?</p>
<h1>update 4</h1>
<p>Thanks to @jaxxstorm, I removed the calico manifests, updated their quay/cni and quay/node versions and reinstalled them.</p>
<p>Now kubedns keeps restarting, but I think calico now works, because for the first time it tries to schedule kubedns on the worker node and not on the controller node. Also, when I <code>rkt enter</code> the kubedns pod and try to <code>wget https://10.3.0.1</code> I get:</p>
<pre><code># wget https://10.3.0.1
Connecting to 10.3.0.1 (10.3.0.1:443)
wget: can't execute 'ssl_helper': No such file or directory
wget: error getting response: Connection reset by peer
</code></pre>
<p>which clearly shows that there is some kind of response, which is good, right?</p>
<p>now <code>kubectl get pods --all-namespaces</code> shows:</p>
<pre><code>kube-system kube-dns-3675956729-ljz2w 4/4 Running 88 42m
</code></pre>
<p>So... 4/4 ready, but it keeps restarting.</p>
<p><code>kubectl describe pod kube-dns-3675956729-ljz2w --namespace=kube-system</code> output at <a href="http://pastebin.com/Z70U331G" rel="noreferrer">http://pastebin.com/Z70U331G</a></p>
<p>So it can't connect to <a href="http://10.2.47.19:8081/readiness" rel="noreferrer">http://10.2.47.19:8081/readiness</a>; I'm guessing this is the IP of kubedns, since it uses port 8081. I don't know how to continue investigating this issue further.</p>
<p>thanks for everything!</p>
| <p>kube-dns has a readiness probe that tries resolving through the Service IP of kube-dns. Is it possible that there is a problem with your Service network?</p>
<p>Check out the answer and solution here:
<a href="https://stackoverflow.com/questions/42705432/kubernetes-service-ips-not-reachable">kubernetes service IPs not reachable</a></p>
|
<p>So I've got a Kubernetes cluster up and running using the <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="nofollow noreferrer">Kubernetes on CoreOS Manual Installation Guide</a>.</p>
<pre><code>$ kubectl get no
NAME STATUS AGE
coreos-master-1 Ready,SchedulingDisabled 1h
coreos-worker-1 Ready 54m
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default curl-2421989462-h0dr7 1/1 Running 1 53m 10.2.26.4 coreos-worker-1
kube-system busybox 1/1 Running 0 55m 10.2.26.3 coreos-worker-1
kube-system kube-apiserver-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-controller-manager-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-proxy-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
kube-system kube-proxy-coreos-worker-1 1/1 Running 0 58m 192.168.0.204 coreos-worker-1
kube-system kube-scheduler-coreos-master-1 1/1 Running 0 1h 192.168.0.200 coreos-master-1
$ kubectl get svc --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.3.0.1 <none> 443/TCP 1h
</code></pre>
<p>As in the guide, I've set up a service network <code>10.3.0.0/16</code> and a pod network <code>10.2.0.0/16</code>. The pod network seems fine, as the busybox and curl containers get IPs. But the service network has problems. I originally encountered this when deploying <code>kube-dns</code>: the service IP <code>10.3.0.1</code> couldn't be reached, so kube-dns couldn't start all containers and DNS was ultimately not working.</p>
<p>From within the curl pod, I can reproduce the issue:</p>
<pre><code>[ root@curl-2421989462-h0dr7:/ ]$ curl https://10.3.0.1
curl: (7) Failed to connect to 10.3.0.1 port 443: No route to host
[ root@curl-2421989462-h0dr7:/ ]$ ip route
default via 10.2.26.1 dev eth0
10.2.0.0/16 via 10.2.26.1 dev eth0
10.2.26.0/24 dev eth0 src 10.2.26.4
</code></pre>
<p>It seems OK that there's only a default route in the container. As I understood it, the request (to the default route) should be intercepted by the <code>kube-proxy</code> on the worker node and forwarded to the proxy on the master node, where the IP is translated via iptables to the master's public IP.</p>
<p>There seems to be a common problem with a bridge/netfilter sysctl setting, but that seems fine in my setup:</p>
<pre><code>core@coreos-worker-1 ~ $ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
</code></pre>
<p>I'm having a really hard time troubleshooting this, as I lack an understanding of what the service IP is used for, how the service network is supposed to work in terms of traffic flow, and how to best debug this.</p>
<p>So here're the questions I have:</p>
<ul>
<li>What is the 1st IP of the service network (10.3.0.1 in this case) used for?</li>
<li>Is above description of the traffic flow correct? If not, what steps does it take for a container to reach a service IP?</li>
<li>What are the best ways to debug each step in the traffic flow? (I can't get any idea what's wrong from the logs)</li>
</ul>
<p>Thanks!</p>
| <p>The Service network provides fixed IPs for Services. It is not a routable network (so don't expect <code>ip ro</code> to show anything, nor will ping work) but a collection of iptables rules managed by kube-proxy on each node (see <code>iptables -L; iptables -t nat -L</code> on the nodes, not Pods). These <a href="https://kubernetes.io/docs/user-guide/services/#virtual-ips-and-service-proxies" rel="noreferrer">virtual IPs</a> (see the pics!) act as a load balancing proxy for endpoints (<code>kubectl get ep</code>), which are usually ports of Pods (but not always) with a specific set of labels as defined in the Service.</p>
<p>The first IP on the Service network is for reaching the kube-apiserver itself. It's listening on port 443 (<code>kubectl describe svc kubernetes</code>).</p>
<p>Troubleshooting is different on each network/cluster setup. I would generally check the following (a few example commands are sketched after the list):</p>
<ul>
<li>Is kube-proxy running on each node? On some setups it's run via systemd and on others there is a DaemonSet that schedules a Pod on each node. On your setup it is deployed as static Pods created by the kubelets themselves from <code>/etc/kubernetes/manifests/kube-proxy.yaml</code></li>
<li>Locate logs for kube-proxy and find clues (can you post some?)</li>
<li>Change kube-proxy into <code>userspace</code> mode. Again, the details depend on your setup. For you it's in the file I mentioned above. Append <code>--proxy-mode=userspace</code> as a parameter <strong>on each node</strong></li>
<li>Is the overlay (pod) network functional?</li>
</ul>
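<p>A few commands that usually help with these checks (placeholders in angle brackets):</p>
<pre><code># is kube-proxy running, and what is it logging?
ps aux | grep kube-proxy
docker logs <kube-proxy-container-id>

# are the iptables rules for the Service there?
iptables-save | grep 10.3.0.1
iptables -t nat -L KUBE-SERVICES
</code></pre>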
<p>If you leave comments I will get back to you..</p>
|
<p>I have just started with Kubernetes. I have some queries about Kubernetes load balancing approaches and am unable to find clear answers in the Kubernetes documentation.</p>
<p>First, Lets say we have created a deployment "iis", and scaled it to 3 replicas. Now without creating a service how can we access these endpoints?</p>
<p>Now, we have created a service for this deployment (having 3 replicas) using ClusterIP, so it's only exposed within the cluster. How will the service load-balance the traffic coming to it inside the cluster? Does it use round robin or random selection of endpoints? According to the Kubernetes documentation, there are 2 service proxies, userspace or iptables. How can I know which one my service is using?</p>
<p>Next, we exposed the service publicly using LoadBalancer. It creates a load balancer on the cloud provider and uses that. My question is that how this external loadbalancer balances the traffic to the pods? Does it balances traffic to the services and services redirects it to the endpoint, OR it balances traffic directly to the endpoints (pods)? Also, in this LoadBalancer case, how the internal traffic (coming from inside the cluster) to this service is load balanced?</p>
<p>Kindly try to give detailed answers.</p>
| <blockquote>
<p>First, Lets say we have created a deployment "iis", and scaled it to 3 replicas. Now without creating a service how can we access these endpoints?</p>
</blockquote>
<p>Unless you have an out-of-band solution for this (like a standard load balancer in which you register the Pod IPs), <em>you can't</em>. Services are there to ease connections between pods. Use them!</p>
<blockquote>
<p>Now, how will the service loadbalances the traffic coming to this service inside the cluster?</p>
</blockquote>
<p>In order to understand this, it's worth knowing how services work in Kubernetes.</p>
<p>Services are handled by kube-proxy. Kube-proxy (by default now) creates iptables rules which look a little bit like this:</p>
<pre><code>-A KUBE-SERVICES ! -s 192.168.0.0/16 -d <svc-ip>/32 -p tcp -m comment --comment "namespace/service-name: cluster IP" -m tcp --dport svc-port -j KUBE-MARK-MASQ
</code></pre>
<p>What happens is, iptables looks at all the packets destined for the svc-ip and then <em>directs</em> them to the pod IP which is backing the service.</p>
<p>If you take a look further at the iptables rules, and search for "probability" - you'll see something like this:</p>
<pre><code>-A KUBE-SVC-AGR3D4D4FQNH4O33 -m comment --comment "default/redis-slave:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-25QOXGEMBWAVOAG5
-A KUBE-SVC-GYQQTB6TY565JPRW -m comment --comment "default/frontend:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-JZXZFA7HRDMGR3BA
-A KUBE-SVC-GYQQTB6TY565JPRW -m comment --comment "default/frontend:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-ZW5YSZGA33MN3O6G
</code></pre>
<p>So the answer is, it's random with some probability weighting. A more thorough explanation of how the probability is weighted can be seen in this <a href="https://github.com/kubernetes/kubernetes/issues/37932#issuecomment-264950788" rel="nofollow noreferrer">github comment</a></p>
<blockquote>
<p>According to the documentation of kubernetes, there are 2 service proxies, userspace or iptables, How can I know which one my service is using?</p>
</blockquote>
<p>Again, this is determined by kube-proxy, and is decided when kube-proxy starts up. It's a command line flag on the kube-proxy process. By default, it'll use iptables, and it's highly recommended you stick with that unless you know what you're doing.</p>
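<p>One way to check which mode your kube-proxy uses is to look at its startup flags and logs, for example:</p>
<pre><code># look for a --proxy-mode flag on the kube-proxy process
ps aux | grep kube-proxy

# kube-proxy also logs the chosen proxier on startup, e.g. "Using iptables Proxier."
kubectl -n kube-system logs <kube-proxy-pod-name> | grep -i proxier
</code></pre>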
<blockquote>
<p>My question is that how this external loadbalancer balances the traffic to the pods?</p>
</blockquote>
<p>This is entirely dependent on your cloud provider and the LoadBalancer you've chosen.
What the LoadBalancer service type does is expose the service on a NodePort and then map an external port on the load balancer back to that.
All the LoadBalancer type does differently is register the node IPs serving the service in the external provider's load balancer, e.g. ELB, rather than in an internal clusterIP service. I would recommend reading the docs for your cloud provider to determine this.</p>
<blockquote>
<p>Also, in this LoadBalancer case, how the internal traffic (coming from inside the cluster) to this service is load balanced?</p>
</blockquote>
<p>Again, see the docs for your cloud provider. </p>
|
<p>I deployed a <code>kubernetes</code> 1.5 cluster on AWS and want to deploy a <code>LNMP</code> stack on the k8s cluster, but I find it difficult to write the <code>manifest</code> files, and I can't find any examples in the official k8s docs. So, is there any detailed list of <code>manifest</code> parameters that also explains what the parameters mean?</p>
<p>Sorry for my poor English; I hope somebody can help me, thanks.</p>
| <p>There is a full reference on their website:</p>
<ul>
<li>k8s.io -> Documentation -> Reference -> Select your "Kubernetes API Versions"</li>
<li>For 1.10: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/</a></li>
</ul>
<p>Unfortunately this is hard to read for a beginner. Also, there are many examples on the GitHub repo:</p>
<ul>
<li>For 1.9: <a href="https://github.com/kubernetes/kubernetes/tree/release-1.9/examples" rel="noreferrer">https://github.com/kubernetes/kubernetes/tree/release-1.9/examples</a></li>
<li>Since 1.10: <a href="https://github.com/kubernetes/examples" rel="noreferrer">https://github.com/kubernetes/examples</a></li>
</ul>
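<p>If a concrete starting point helps, here is a minimal sketch of a Deployment plus Service for a single nginx container (all names, images and ports are placeholders you would adapt for each tier of your LNMP stack):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
</code></pre>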
|
<p>Is there a way to specify a specific instance instead of a pool of instances using <code>nodeSelector</code> in Kubernetes?</p>
<p>If not, what would be the best way to provision a Redis Cluster with each node having at least 30GB of memory. Can this be accomplished using the <code>resources</code> attribute?</p>
<p>By the way, I'm currently creating 6 pools with 1 instance in each and then specifying that in the config but it doesn't look right:</p>
<pre><code>nodeSelector:
cloud.google.com/gke-nodepool: my-pool-1
</code></pre>
| <p>The kubelet should automatically add a label for the hostname, using <code>kubernetes.io/hostname</code>.</p>
<p>With that in mind, you can pin a pod to a specific host using:</p>
<pre><code>nodeSelector:
kubernetes.io/hostname: "<hostname>"
</code></pre>
<p>I would question if this is a good idea, however.</p>
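<p>If the goal is simply "at least 30GB of memory per Redis node", an alternative that avoids pinning to specific instances is to let the scheduler pick suitable nodes via resource requests. A rough sketch of the container spec (not a complete Redis manifest):</p>
<pre><code>containers:
- name: redis
  image: redis:3.2
  resources:
    requests:
      memory: "30Gi"   # the scheduler only places this Pod on a node with 30Gi allocatable
    limits:
      memory: "30Gi"
</code></pre>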
|
<p>We have a docker image and a corresponding yaml file for the deployment using Kubernetes. The application we have built is in Scala with akka-http, and we have used akka-cluster. We have a particular setting (the akka-cluster seed-nodes in our case) in the configuration file, used by our application code, that needs the pod IP. But we will not get the pod IP until the deployment is done. How should we go about tackling this issue? Will environment variables help, and if yes, how?</p>
<p>More specifically, once the docker image is deployed in the container in a pod, and the container starts, the pod IP is already assigned. So, can we programmatically or otherwise configure the pod IPs in our code or config file before the process starts in the container?</p>
<p>For reference, this is our configuration file : </p>
<pre><code>akka {
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "127.0.0.1"
port = 0
}
}
cluster {
seed-nodes = [
"akka.tcp://[email protected]:3000",
"akka.tcp://[email protected]:3001",
],
metrics {
enabled = off
}
}
}
service {
validateTokenService {
ml.pubkey.path = "<filePath>"
}
ml_repository {
url = <url address>
}
replication.factor = 3
http.service.interface = "0.0.0.0"
http.service.port = 8080
}
</code></pre>
<p>In the above file, instead of having akka.remote.netty.tcp.hostname as "127.0.0.1", we need to have the pod IP,
so that we can use it in the seed nodes as: </p>
<pre><code>seed-nodes = [
"akka.tcp://our-system@hostname:3000",
"akka.tcp://our-system@hostname:3001",
],
</code></pre>
<p>How can we do so?
Thanks in advance.</p>
| <p>I have tried something very similar. What you can do is use <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">Kubernetes stateful sets</a>. Stateful sets follow a predictable naming convention, so the pods will be named accordingly; you can read more about that in the link. This way, you can hard-code the values for the seed nodes, since you know how the pods are going to be named. They do have a drawback: stateful sets don't yet support rolling updates (primarily because they are designed that way). This article has a great step-by-step explanation: <a href="https://medium.com/google-cloud/clustering-akka-in-kubernetes-with-statefulset-and-deployment-459c0e05f2ea#.as2fq3qsw" rel="nofollow noreferrer">stateful set - akka clustering</a>.
It explains everything together, so I'm posting a link instead of copying the steps here. A rough sketch of what the seed-node configuration might look like follows. Hope this helps.</p>
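<p>For illustration only: if the StatefulSet and its headless service were both named <code>our-system</code> in the <code>default</code> namespace (these names, and the port, are assumptions), each pod gets a stable DNS name such as <code>our-system-0.our-system.default.svc.cluster.local</code>, so the seed nodes could be hard-coded roughly like this:</p>
<pre><code>seed-nodes = [
  "akka.tcp://[email protected]:2551",
  "akka.tcp://[email protected]:2551"
]
</code></pre>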
|
<p>What would be the best setup to run <code>sonatype\nexus3</code> in Kubernetes that allows using the Docker repositories? </p>
<p>Currently I have a basic setup:</p>
<ul>
<li>Deployment of <code>sonatype\nexus3</code></li>
<li>Internal service exposing port 80 and 5000</li>
<li>Ingress + kube-lego provides HTTPS access to the Nexus UI</li>
</ul>
<p>How do I get around the limitation of ingress that doesn't allow more than one port?</p>
| <h1>tl;dr</h1>
<p>Nexus needs to be served over SSL, otherwise docker won't connect to it. This can be achieved with a k8s ingress + <a href="https://github.com/jetstack/kube-lego/" rel="noreferrer">kube-lego</a> for a <a href="https://letsencrypt.org/" rel="noreferrer">Let's Encrypt</a> certificate. Any other real certificate will work as well. However, in order to serve both the nexus UI and the docker registry through one ingress (thus, one port) one needs a reverse proxy behind the ingress to detect the docker user agent and forward the request to the registry.</p>
<pre><code> --(IF user agent docker) --> [nexus service]nexus:5000 --> docker registry
|
[nexus ingress]nexus.example.com:80/ --> [proxy service]internal-proxy:80 -->|
|
--(ELSE ) --> [nexus service]nexus:80 --> nexus UI
</code></pre>
<hr>
<h2>Start nexus server</h2>
<p><em>nexus-deployment.yaml</em>
This makes use of an azureFile volume, but you can use any volume. Also, the secret is not shown, for obvious reasons.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nexus
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: nexus
spec:
containers:
- name: nexus
image: sonatype/nexus3:3.3.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8081
- containerPort: 5000
volumeMounts:
- name: nexus-data
mountPath: /nexus-data
resources:
requests:
cpu: 440m
memory: 3.3Gi
limits:
cpu: 440m
memory: 3.3Gi
volumes:
- name: nexus-data
azureFile:
secretName: azure-file-storage-secret
shareName: nexus-data
</code></pre>
<p>It is always a good idea to add health and readiness probes, so that kubernetes can detect when the app goes down. Hitting the <code>index.html</code> page doesn't always work very well, so I'm using the REST API instead. This requires adding the Authorization header for a user with the <code>nx-script-*-browse</code> permission. Obviously you'll have to first bring the system up without probes to set up the user, then update your deployment later.</p>
<pre><code> readinessProbe:
httpGet:
path: /service/siesta/rest/v1/script
port: 8081
httpHeaders:
- name: Authorization
# The authorization token is simply the base64 encoding of the `healthprobe` user's credentials:
# $ echo -n user:password | base64
value: Basic dXNlcjpwYXNzd29yZA==
initialDelaySeconds: 900
timeoutSeconds: 60
livenessProbe:
httpGet:
path: /service/siesta/rest/v1/script
port: 8081
httpHeaders:
- name: Authorization
value: Basic dXNlcjpwYXNzd29yZA==
initialDelaySeconds: 900
timeoutSeconds: 60
</code></pre>
<p>Because nexus can sometimes take a long time to start, I use a very generous initial delay and timeout.</p>
<p><em>nexus-service.yaml</em> Expose port 80 for the UI, and port 5000 for the registry. This must correspond to the port configured for the registry through the UI.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: nexus
name: nexus
namespace: default
selfLink: /api/v1/namespaces/default/services/nexus
spec:
ports:
- name: http
port: 80
targetPort: 8081
- name: docker
port: 5000
targetPort: 5000
selector:
app: nexus
type: ClusterIP
</code></pre>
<h2>Start reverse proxy (nginx)</h2>
<p><em>proxy-configmap.yaml</em> The <em>nginx.conf</em> is added as ConfigMap data volume. This includes a rule for detecting the docker user agent. This relies on the kubernetes DNS to access the <code>nexus</code> service as upstream.</p>
<pre><code>apiVersion: v1
data:
nginx.conf: |
worker_processes auto;
events {
worker_connections 1024;
}
http {
error_log /var/log/nginx/error.log warn;
access_log /dev/null;
proxy_intercept_errors off;
proxy_send_timeout 120;
proxy_read_timeout 300;
upstream nexus {
server nexus:80;
}
upstream registry {
server nexus:5000;
}
server {
listen 80;
server_name nexus.example.com;
keepalive_timeout 5 5;
proxy_buffering off;
# allow large uploads
client_max_body_size 1G;
location / {
# redirect to docker registry
if ($http_user_agent ~ docker ) {
proxy_pass http://registry;
}
proxy_pass http://nexus;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https";
}
}
}
kind: ConfigMap
metadata:
creationTimestamp: null
name: internal-proxy-conf
namespace: default
selfLink: /api/v1/namespaces/default/configmaps/internal-proxy-conf
</code></pre>
<p><em>proxy-deployment.yaml</em> </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: internal-proxy
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
proxy: internal
spec:
containers:
- name: nginx
image: nginx:1.11-alpine
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command: ["/usr/sbin/nginx","-s","quit"]
volumeMounts:
- name: internal-proxy-conf
mountPath: /etc/nginx/
env:
# This is a workaround to easily force a restart by incrementing the value (numbers must be quoted)
# NGINX needs to be restarted for configuration changes, especially DNS changes, to be detected
- name: RESTART_
value: "0"
volumes:
- name: internal-proxy-conf
configMap:
name: internal-proxy-conf
items:
- key: nginx.conf
path: nginx.conf
</code></pre>
<p><em>proxy-service.yaml</em> The proxy is deliberately of type <code>ClusterIP</code> because the ingress will forward traffic to it. Port 443 is not used in this example.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: internal-proxy
namespace: default
spec:
selector:
proxy: internal
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
type: ClusterIP
</code></pre>
<h2>Create Ingress</h2>
<p><em>nexus-ingress.yaml</em> This step assumes you have an nginx ingress controller. If you have a certificate you don't need an ingress and can instead expose the proxy service, but you won't have the automation benefits of kube-lego.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nexus
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
spec:
tls:
- hosts:
- nexus.example.com
secretName: nexus-tls
rules:
- host: nexus.example.com
http:
paths:
- path: /
backend:
serviceName: internal-proxy
servicePort: 80
</code></pre>
|
<p>I have deployed my <a href="https://github.com/CrunchyData/crunchy-containers/tree/master/examples/kubehelm/crunchy-postgresql-cluster" rel="noreferrer">crunchy db</a> postgresq on my Kubernetes cluster. </p>
<p>However, I am not sure how to connect to the database remotely. </p>
<p>What command can I use to connect remotely so I can create a new database?</p>
<p>Is there a kubectl command to go with psql?</p>
| <p>I was able to find what I needed on another forum. I executed the following with the pod name, which gets me to a bash prompt.</p>
<pre><code>kubectl exec -it <POD_NAME> bash
</code></pre>
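<p>Once inside, <code>psql</code> should be available. A rough example of creating a database, assuming the default <code>postgres</code> superuser (the crunchy images may use different credentials set via environment variables):</p>
<pre><code>psql -U postgres -c 'CREATE DATABASE mydb;'
</code></pre>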
|
<p>I am working with windows + kubernetes cluster on ACS (Azure Container Service). I have few queries about the auto-scaling of agents/worker nodes in ACS. </p>
<p>Suppose I have a cluster of 1 master and 5 worker nodes. I have 200 running pods and these pods are distributed along the 5 worker nodes and the resources of these 5 nodes are used. Now, if I deploy a new pod or scale the running pods which will requires more resources, so is there any way ACS can auto-scale the worker nodes to like 7 worker nodes based on the resource usage?</p>
<p>Same case, if resource usage is reduced, can ACS descale the worker nodes to 3 worker nodes from 7 nodes?</p>
<p>My question is not related to auto-scaling of pods as provided by kubernetes, I am talking about auto-scaling of worker/agent nodes which are managed by ACS.</p>
| <blockquote>
<p>My question is not related to auto-scaling of pods as provided by
kubernetes, I am talking about auto-scaling of worker/agent nodes
which are managed by ACS</p>
</blockquote>
<p>Currently, autoscaling of agent nodes in a container service cluster is <strong>not</strong> supported.</p>
<p>For now, we can use the Azure CLI 2.0 to scale up or down with the <code>az acs scale</code> command.</p>
<p>For example:<br>
<code>az acs scale -g myResourceGroup -n containerservice-myACSName --new-agent-count 10</code></p>
<p>More information about az acs scale command, please refer to this <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-scale#scale-with-the-azure-cli-20" rel="noreferrer">link</a>.</p>
|
<p>Is it possible to use the python API to create a "custom object" in kubernetes?</p>
<p>Edit:</p>
<p>By custom objects I'm refering to <a href="https://kubernetes.io/docs/user-guide/thirdpartyresources/#creating-custom-objects" rel="nofollow noreferrer">this</a>.</p>
<p>Thank you.</p>
| <p>There isn't a ready-to-use API yet. Here is a rudimentary example using <a href="https://github.com/kubernetes-incubator/client-python" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/client-python</a></p>
<pre><code>import json
from kubernetes import client, config
class ThingyApi(object):
def __init__(self):
config = client.Configuration()
if not config.api_client:
config.api_client = client.ApiClient()
self.api_client = config.api_client
def create_thingy(self, body, namespace='default'):
resource_path = '/apis/example.com/v1/namespaces/' + namespace + '/thingies'
header_params = {}
header_params['Accept'] = self.api_client.select_header_accept(['application/json'])
header_params['Content-Type'] = self.api_client.select_header_content_type(['*/*'])
(resp, code, header) = self.api_client.call_api(
resource_path, 'POST', {'namespace': namespace}, {}, header_params, body, [], _preload_content=False)
return json.loads(resp.data.decode('utf-8'))
config.load_kube_config()
# Create the ThirdPartyResource (Thingy.example.com)
resp = client.ExtensionsV1beta1Api().create_third_party_resource(body={
'apiVersion': 'extensions/v1beta1',
'kind': 'ThirdPartyResource',
'metadata': {'name': 'thingy.example.com'},
'description': 'For storage of Thingy objects',
'versions': [{'name': 'v1'}]})
print("ThirdPartyResource created")
print(str(resp))
# Create 3 Thingy objects (mything-{1,2,3})
thingyapi = ThingyApi()
for i in range(1, 4):
obj = {'apiVersion': 'example.com/v1',
'metadata': {'name': 'mythingy-'+str(i)},
'kind': 'Thingy',
# Arbitrary contents
'key': 'value',
'array': [40, 2],
'object': {'foo': 'bar'}}
resp = thingyapi.create_thingy(body=obj, namespace='default')
print(str(resp))
</code></pre>
<p>The output will be something like this:</p>
<pre><code>$ bin/python test.py
ThirdPartyResource created
{'api_version': 'extensions/v1beta1',
'description': 'For storage of Thingy objects',
'kind': 'ThirdPartyResource',
'metadata': {'annotations': None,
'cluster_name': None,
'creation_timestamp': u'2017-03-14T13:57:07Z',
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': None,
'generate_name': None,
'generation': None,
'labels': None,
'name': 'thingy.example.com',
'namespace': None,
'owner_references': None,
'resource_version': '59942',
'self_link': '/apis/extensions/v1beta1/thirdpartyresourcesthingy.example.com',
'uid': '1c596824-08be-11e7-9a5f-5254000f561a'},
'versions': [{'name': 'v1'}]}
{u'kind': u'Thingy', u'object': {u'foo': u'bar'}, u'apiVersion': u'example.com/v1', u'key': u'value', u'array': [40, 2], u'metadata': {u'name': u'mythingy-1', u'namespace': u'default', u'resourceVersion': u'59943', u'creationTimestamp': u'2017-03-14T13:57:07Z', u'selfLink': u'/apis/example.com/v1/namespaces/default/thingies/mythingy-1', u'uid': u'1c59f7ae-08be-11e7-9a5f-5254000f561a'}}
{u'kind': u'Thingy', u'object': {u'foo': u'bar'}, u'apiVersion': u'example.com/v1', u'key': u'value', u'array': [40, 2], u'metadata': {u'name': u'mythingy-2', u'namespace': u'default', u'resourceVersion': u'59944', u'creationTimestamp': u'2017-03-14T13:57:07Z', u'selfLink': u'/apis/example.com/v1/namespaces/default/thingies/mythingy-2', u'uid': u'1c5be2a7-08be-11e7-9a5f-5254000f561a'}}
{u'kind': u'Thingy', u'object': {u'foo': u'bar'}, u'apiVersion': u'example.com/v1', u'key': u'value', u'array': [40, 2], u'metadata': {u'name': u'mythingy-3', u'namespace': u'default', u'resourceVersion': u'59945', u'creationTimestamp': u'2017-03-14T13:57:07Z', u'selfLink': u'/apis/example.com/v1/namespaces/default/thingies/mythingy-3', u'uid': u'1c5c390e-08be-11e7-9a5f-5254000f561a'}}
</code></pre>
<p>Don't forget to run this after:</p>
<pre><code>kubectl delete thingy --all
kubectl delete thirdpartyresource thingy.example.com
</code></pre>
|
<p>I have one pod running with the name 'jenkins-app-2843651954-4zqdp'. I want to install a few packages temporarily on this pod. How can I do this?</p>
<p>I am trying this: <code>kubectl exec -it jenkins-app-2843651954-4zqdp -- /bin/bash</code>
and then running apt-get install commands, but since the user I am accessing with doesn't have sudo access, I am not able to run them.</p>
| <ul>
<li>Use <code>kubectl describe pod ...</code> to find the node running your Pod and the container ID (<code>docker://...</code>)</li>
<li>SSH into the node</li>
<li>Run <code>docker exec -it -u root CONTAINER_ID /bin/bash</code></li>
</ul>
|
| <p>I set up a kubernetes cluster on Azure with the azure-container-service CLI (az acs create). The cluster is up and running and it seems to work fine. Now I want to sign client certificates with my kubernetes CA, which was created during installation. In my understanding I need the CA certificate (which is handed over to the kubernetes API server with --client-ca-file=) and the private key of this CA to sign a new client certificate. The problem is I can't find the private key for my CA file.</p>
<p>Where can i find the private key?</p>
<p>Can i sign client certs for my developer without this private key?</p>
<p>Is the setup process of azure-container-service broken when the private key is lost?</p>
| <p>Are these the ones you are looking for?</p>
<pre><code>azureuser@k8s-master-9XXXXX-0:~$ ls -la /etc/kubernetes/certs/
total 28
drwxr-xr-x 2 root root 4096 Mar 14 20:59 .
drwxr-xr-x 5 root root 4096 Mar 14 20:59 ..
-rw-r--r-- 1 root root 1600 Mar 14 20:58 apiserver.crt
-rw-r--r-- 1 root root 2048 Mar 14 20:59 apiserver.key
-rw-r--r-- 1 root root 1182 Mar 14 20:58 ca.crt
-rw-r--r-- 1 root root 1202 Mar 14 20:58 client.crt
-rw-r--r-- 1 root root 2048 Mar 14 20:59 client.key
</code></pre>
|
<p>I set up the <code>kubernetes</code> cluster, and I found the <code>pause-amd64:3.0</code> container running on the master and minion nodes, like this:</p>
<pre><code>[root@k8s-minion1 kubernetes]# docker ps |grep pause
c3026adee957 gcr.io/google_containers/pause-amd64:3.0 "/pause" 22 minutes ago Up 22 minutes k8s_POD.d8dbe16c_redis-master-343230949-04glm_default_ce3f60a9-095d-11e7-914b-0a77ecd65f3e_66c108d5
202df18d636e gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 hours ago Up 24 hours k8s_POD.d8dbe16c_kube-proxy-js0z0_kube-system_2866cfc2-0891-11e7-914b-0a77ecd65f3e_c8e1a667
072d3414d33a gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 hours ago Up 24 hours k8s_POD.d8dbe16c_kube-flannel-ds-tsps5_default_2866e3fb-0891-11e7-914b-0a77ecd65f3e_be4b719e
[root@k8s-minion1 kubernetes]#
</code></pre>
<p>so what does <code>k8s</code> use this for ? </p>
| <p>It's part of the infrastructure. This container is started first in all Pods to setup the network for the Pod.</p>
<p>It does nothing after the Pod has started.</p>
<p>Here is the <a href="https://github.com/kubernetes/kubernetes/tree/master/build/pause" rel="noreferrer">source code</a>.</p>
|
<p>I am in an environment where we are still using petsets. When I run the following command : </p>
<pre><code>kubectl delete petsets <petset-name>
</code></pre>
<p>The petset is deleted but the pods (pets) associated with it are not. When I tried this locally on minikube using statefulsets, the statefulsets as well as the pods are deleted. Is this the expected behaviour of petsets, or is it a bug with petsets or my environment? Is there a way to debug this? When I checked the documentation here: <a href="https://kubernetes.io/docs/user-guide/petset/#deleting-a-petset" rel="nofollow noreferrer">petsets</a>, it seems the pods should have been deleted.</p>
| <p>You are using an old version of kubernetes that is affected by <a href="https://github.com/kubernetes/kubernetes/pull/27438" rel="nofollow noreferrer">issue 27438</a>. Upgrade to 1.5. (See <code>kubectl version</code>)</p>
|
<p>I'm setting up Jenkins on kubernetes. My current deployment looks like this:</p>
<pre><code> spec:
containers:
- name: jenkins-master
image: jenkins:1.0
ports:
- containerPort: 8080
name: http
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
volumes:
- name: jenkins-home
emptyDir: {}
</code></pre>
<p>It works fine, but the data disappears if the pod gets destroyed, because the folder that jenkins uses to store data is mounted as an <code>emptyDir</code>, which means the data lives as long as the pod lives.</p>
<p>This is obviously not an optimal scenario, since the pod can be destroyed for many reasons, even during normal operations. I know I have to replace the <code>emptyDir</code> with something else, however, I'm not sure what should I use. I could provision a GCE disk, however, if you provision a disk under 200GB, you get a warning that your disk will be throttled. Since GCE disks can only by claimed by one writer at the time, it seems overkill, we would end up with an expensive bill if I used a GCE disk for every service that needs to hold persistent data. </p>
<p>To generalize the question:</p>
<p>You run several services on kubernetes, that need to hold a small amount of persistent, on-disk, data. What kind of storage do you provision for them?</p>
| <p>What you're looking for is <a href="https://kubernetes.io/docs/user-guide/persistent-volumes/" rel="nofollow noreferrer">Persistent Volumes</a> which essentially model an underlying persistent volume (<em>say, AWS EBS or OpenStack Cinder volume</em>) as resource.</p>
<p>This resource, named simply <a href="https://kubernetes.io/docs/user-guide/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer"><code>PersistentVolume</code></a> defines the specification for the volume including e.g. its size and name. To actually use it you need to add <a href="https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer"><code>PersistentVolumeClaim</code></a> to your deployment yml which defines what your deployment <em>expects</em> from the persistent volume you want to attach to it - it is possible to have multiple matching persistent volumes for each claim just as it is possible that there won't be any volumes left to claim which is why this distinction exists.</p>
|
<p>How can I generate ConfigMap from directory without create it?</p>
<p>If I use:</p>
<pre><code>$ kubectl create configmap my-config --from-file=configuration/ -o yaml
apiVersion: v1
data:
...
</code></pre>
<p>ConfigMap yaml output is displayed but <code>my-config</code> is created on current Kubernetes project context.</p>
<p>I would like only generate ConfigMap file, it is possible? Are there a <code>kubectl create</code> "dry" mode?</p>
<p>Best regards,<br/>
Stéphane</p>
| <p>Just add <code>--dry-run</code> so:</p>
<pre><code>$ kubectl create configmap my-config --from-file=configuration/ -o yaml --dry-run
</code></pre>
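<p>To write the generated manifest to a file instead of just printing it:</p>
<pre><code>$ kubectl create configmap my-config --from-file=configuration/ -o yaml --dry-run > my-config.yaml
</code></pre>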
<p>Source: <a href="https://kubernetes.io/docs/user-guide/kubectl/kubectl_create_configmap/" rel="noreferrer">https://kubernetes.io/docs/user-guide/kubectl/kubectl_create_configmap/</a></p>
|
<p>I am new to Kubernetes. I have set up kubernetes with 1 master and 3 slave nodes. I have created mysql and nginx pods and they are running on different nodes successfully. Now I want to deploy my application, which is a combination of nginx, php-fpm, PHP modules and MySQL. How can I achieve this in Kubernetes?</p>
<p>Here is what I have tried for php-fpm,</p>
<p>cat php-fpm.yaml :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: php-fpm
labels:
name: php-fpm
spec:
containers:
- resources:
limits :
cpu: 0.5
image: php-fpm
name: php-fpm
env:
- name: PHPFPM_SERVICE
# change this
ports:
- containerPort: 9000
name: php-fpm
</code></pre>
<p>cat php-fpm-service.yaml : </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: php-fpm
name: php-fpm
spec:
externalIPs:
- 10.128.0.3
ports:
# the port that this service should serve on
- port: 9000
# label keys and values that must match in order to receive traffic for this service
selector:
name: php-fpm
</code></pre>
<p>But php-fpm Pod is not running; the output is below.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 1d
mysql1 1/1 Running 0 18h
nginx 1/1 Running 0 18h
php-fpm 0/1 ErrImagePull 0 1m
</code></pre>
<p>How can I get it running?</p>
| <p>Your Pod fails to start because it can't find the image you specified on docker hub (<code>ErrImagePull</code>).</p>
<p>Change <code>php-fpm.yaml</code> like this:</p>
<pre><code>...
image: php:fpm
...
</code></pre>
<p>See the full list of <a href="https://hub.docker.com/_/php/" rel="nofollow noreferrer">official php image tags</a>.</p>
<p>To get a better overall idea I suggest read the <a href="https://github.com/kubernetes/kubernetes/blob/master/examples/mysql-wordpress-pd/README.md" rel="nofollow noreferrer">tutorial on running WordPress on kubernetes (using mysql)</a> before you try to roll your own solution.</p>
|
<p>I have a running GCP Kubernetes cluster. I managed to deploy some services and exposed them successfully using kubectl expose ... type="LoadBalancer"... However, one particular new service is not working. I know there may be a thousand causes to check, but the Docker images I build are very compact so I can't find useful tools to run via kubectl exec inside a pod or container.</p>
<p><strong>Question</strong>: what might be my diagnosis options using any possible cluster tool only? What kind of logs can I inspect or which environmental variables can I read? </p>
<p>UPDATED:</p>
<p><strong>$ kubectl get pods</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
helianto-mailer-1024769093-6407d 2/2 Running 0 6d
helianto-spring-2246525676-l54p9 2/2 Running 0 6d
iservport-shipfo-12873703-wrh37 2/2 Running 0 13h
</code></pre>
<p><strong>$ kubectl describe pod iservport-shipfo-12873703-wrh37</strong></p>
<pre><code>Name: iservport-shipfo-12873703-wrh37
Namespace: default
Node: gke-iservport01-default-pool-xxx/xx.xx.xx.xx
Start Time: Tue, 14 Mar 2017 17:28:18 -0300
Labels: app=SHIPFO
pod-template-hash=12873703
Status: Running
IP: yy.yy.yy.yy
Controllers: ReplicaSet/iservport-shipfo-12873703
Containers:
iservport-shipfo:
Container ID: docker://...
Image: us.gcr.io/mvps-156214/iservport-xxx
Image ID: docker://...
Port: 8085/TCP
Requests:
cpu: 100m
State: Running
Started: Tue, 14 Mar 2017 17:28:33 -0300
Ready: True
Restart Count: 0
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mmeza (ro)
Environment Variables:
SPRING_PROFILES_ACTIVE: gcp
HELIANTO_MAILER_URL: http://10.35.254.197:8082
cloudsql-proxy:
Container ID: docker://...
Image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
Image ID: docker://...
Port:
Command:
/cloud_sql_proxy
--dir=/cloudsql
-instances=mvps-156214:us-east1-b:helianto01=tcp:3306
-credential_file=/secrets/cloudsql/credentials.json
Requests:
cpu: 100m
State: Running
Started: Tue, 14 Mar 2017 17:28:33 -0300
Ready: True
Restart Count: 0
Volume Mounts:
/cloudsql from cloudsql (rw)
/etc/ssl/certs from ssl-certs (rw)
/secrets/cloudsql from cloudsql-oauth-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mmeza (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
cloudsql-oauth-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-oauth-credentials
ssl-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
cloudsql:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-mmeza:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mmeza
QoS Class: Burstable
Tolerations: <none>
No events.
</code></pre>
<p><strong>$ kubectl get svc</strong></p>
<pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helianto-mailer-service 10.35.254.197 <nodes> 443:32178/TCP,80:30771/TCP 12d
helianto-spring 10.35.241.27 xxx.xxx.xxx.xxx 80:30974/TCP 52d
iservport-shipfo 10.35.240.129 xx.xxx.xxx.xxx 80:32598/TCP 14h
kubernetes 10.35.240.1 <none> 443/TCP 53d
</code></pre>
<p><strong>$ kubectl describe svc iservport-shipfo</strong></p>
<pre><code>Name: iservport-shipfo
Namespace: default
Labels: app=SHIPFO
Selector: app=SHIPFO
Type: LoadBalancer
IP: 10.35.240.129
LoadBalancer Ingress: xx.xxx.xxx.xxx
Port: <unset> 80/TCP
NodePort: <unset> 32598/TCP
Endpoints: 10.32.4.26:8085
Session Affinity: None
No events.
</code></pre>
| <p>You need to make sure your service is responding on the HTTP port. You could do a port-forward from your pod to your local machine. Replace the values pod_name, pod_port and local_port in the command below.</p>
<p><code>kubectl port-forward <pod_name> <local_port>:<pod_port>
</code></p>
<p>After this, access <a href="http://localhost:local_port" rel="noreferrer">http://localhost:local_port</a> and verify whether it returns something. This way, you can make sure your application is responding.</p>
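<p>For example, with the pod name and container port from your <code>kubectl describe</code> output:</p>
<pre><code>kubectl port-forward iservport-shipfo-12873703-wrh37 8085:8085
curl -v http://localhost:8085
</code></pre>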
|
<p>I have installed InfluxDB in a Docker container (on Kubernetes) and I have mounted a persistent volume to that container, but InfluxDB is not writing its data to that volume.
Can anyone please tell me the steps so that InfluxDB will write its data to that particular volume?
Thanks</p>
| <p><strong>Short Answer:</strong> </p>
<pre><code> $ docker run -p 8083:8083 -p 8086:8086 \
-v $PWD:/var/lib/influxdb \
influxdb
</code></pre>
<p>Modify $PWD with the path to external volume.</p>
<p><strong>Long answer:</strong></p>
<pre><code>docker run -p 8083:8083 -p 8086:8086 influxdb
</code></pre>
<p>By default this will store the data in /var/lib/influxdb. All InfluxDB data lives in there. To make that a persistent volume (recommended):</p>
<pre><code>$ docker run -p 8083:8083 -p 8086:8086 \
-v $PWD:/var/lib/influxdb \
influxdb
</code></pre>
<p>Modify $PWD to the directory where you want to store data associated with the InfluxDB container.</p>
<p>For example,</p>
<pre><code> $ docker run -p 8083:8083 -p 8086:8086 \
-v /your/home:/var/lib/influxdb \
influxdb
</code></pre>
<p>This will store the influx data in /your/home on the host.</p>
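<p>Since you are running on Kubernetes, the equivalent is to mount your volume at <code>/var/lib/influxdb</code> in the pod spec. A minimal sketch, assuming a PersistentVolumeClaim named <code>influxdb-data</code> already exists (that name is an assumption):</p>
<pre><code>  containers:
  - name: influxdb
    image: influxdb
    ports:
    - containerPort: 8086
    volumeMounts:
    - name: influxdb-data
      mountPath: /var/lib/influxdb   # InfluxDB keeps all of its data here
  volumes:
  - name: influxdb-data
    persistentVolumeClaim:
      claimName: influxdb-data       # assumed existing claim
</code></pre>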
|
<p>I was wondering if someone could give me a brief overview of the differences/ advantages between all of the different Kubernetes network overlays. The getting started guide (<a href="http://kubernetes.io/docs/getting-started-guides/scratch/#network" rel="noreferrer">http://kubernetes.io/docs/getting-started-guides/scratch/#network</a>) mentions the following:</p>
<ul>
<li>Flannel</li>
<li>Calico</li>
<li>Weave</li>
<li>Romana</li>
<li>Open vSwitch (OVS)</li>
</ul>
<p>But doesn't really explain what the differences between them are or what the advantages and disadvantages each one has. I was wondering if someone could give me an idea of which one of these solutions I should be using for a bare metal CentOS 7 cluster.</p>
<p>Thanks!</p>
| <p><a href="https://docs.google.com/spreadsheets/d/1polIS2pvjOxCZ7hpXbra68CluwOZybsP1IYfr-HrAXc" rel="nofollow noreferrer">This comparison matrix</a> was shared several times on Kubernetes' Slack and may be useful.</p>
<p>However, beware potentially out-of-date information, keep in mind the "devil is in the details" so the reality may not be as simple as it would seem according to this document. All available solutions will have pros and cons, but will also be more suitable for some use-cases than others, so as always, it is a question of trade-offs and YMMV.</p>
|
<p>All</p>
<p>I have Kubernetes running on Google Compute Engine, launching pods, etc.</p>
<p>I recently learned that Jobs were added to Kubernetes as a first-class construct.</p>
<p>Could someone enlighten me as to why they were added? They look like another level of indirection, but exactly what kind of problems do they solve that pods and replication controllers were unable to solve?</p>
| <p>Jobs are for workloads designed for tasks that will run to completion, then exit.</p>
<p>It is technically still a pod, just another kind. Whereas a regular pod starts up and runs continuously until it's terminated or it crashes, a job will run a workload and then exit with a "Completed" status.</p>
<p>There is also a CronJob type, which runs periodically at set times. If you think of how cron jobs work on a standard Linux system, it should give you an idea of how you might utilize Jobs (or CronJobs) on your k8s cluster. A minimal example is sketched below.</p>
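<p>A minimal sketch of a Job manifest (the image and command are placeholders):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job              # placeholder name
spec:
  template:
    spec:
      containers:
      - name: worker
        image: busybox           # placeholder image
        command: ["sh", "-c", "echo doing some work && sleep 10"]
      restartPolicy: Never       # Jobs require Never or OnFailure
</code></pre>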
|
<p>Here's the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/storage/redis" rel="nofollow noreferrer">example</a> I have modeled after.</p>
<p>In the Readme's "Delete our manual pod" section:</p>
<blockquote>
<ol start="3">
<li>The redis sentinels themselves, realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and chose one of the existing redis server replicas to be the new master.</li>
</ol>
</blockquote>
<p>How do I select the new master? All 3 Redis server pods controlled by the <code>redis</code> replication controller from <code>redis-controller.yaml</code> still have the same</p>
<pre><code>labels:
name: redis
</code></pre>
<p>which is what I currently use in my Service to select them. How will the 3 pods be distinguishable so that from Kubernetes I know which one is the master?</p>
| <blockquote>
<p>How will the 3 pods be distinguishable so that from Kubernetes I know
which one is the master?</p>
</blockquote>
<p>Kubernetes isn't aware of which pod is the Redis master. You can find the pod manually by connecting to it and using:</p>
<pre><code>redis-cli info
</code></pre>
<p>You will get lots of information about the server but we need role for our purpose:</p>
<pre><code>redis-cli info | grep ^role
Output:
role:master
</code></pre>
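<p>To check every pod selected by the controller from outside, a rough loop (assuming the pods carry the <code>name=redis</code> label as in the example):</p>
<pre><code>for p in $(kubectl get pods -l name=redis -o jsonpath='{.items[*].metadata.name}'); do
  echo -n "$p: "
  kubectl exec "$p" -- redis-cli info replication | grep ^role
done
</code></pre>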
<p>Please note <code>Replication controllers</code> are replaced by <code>Deployments</code> for stateless services. For stateful services use <code>Statefulsets</code>. </p>
|
<p>We have a Kubernetes cluster which spins up 4 instances of our application. We'd like to have it share a Hazelcast data grid and keep in synch between these nodes. According to <a href="https://github.com/hazelcast/hazelcast-kubernetes" rel="nofollow noreferrer">https://github.com/hazelcast/hazelcast-kubernetes</a> the configuration is straightforward. We'd like to use the DNS approach rather than the kubernetes api.</p>
<p>With DNS we are supposed to be able to add the DNS name of our app as described <a href="https://github.com/kubernetes/kubernetes/tree/v1.0.6/cluster/addons/dns" rel="nofollow noreferrer">here</a>. So this would be something like myservice.mynamespace.svc.cluster.local.</p>
<p>The problem is that although we have 4 VMs spun up, only one Hazelcast network member is found; thus we see the following in the logs:</p>
<pre><code>Members [1] {
Member [192.168.187.3]:5701 - 50056bfb-b710-43e0-ad58-57459ed399a5 this
}
</code></pre>
<p>It seems that there aren't any errors, it just doesn't see any of the other network members.</p>
<p>Here's my configuration. I've tried both using an xml file, like the example on the hazelcast-kubernetes git repo, as well as programmatically. Neither attempt appear to work.</p>
<p>I'm using hazelcast 3.8.</p>
<hr>
<p>Using hazelcast.xml:</p>
<pre><code><hazelcast>
<properties>
<!-- only necessary prior Hazelcast 3.8 -->
<property name="hazelcast.discovery.enabled">true</property>
</properties>
<network>
<join>
<!-- deactivate normal discovery -->
<multicast enabled="false"/>
<tcp-ip enabled="false" />
<!-- activate the Kubernetes plugin -->
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.HazelcastKubernetesDiscoveryStrategy">
<properties>
<!-- configure discovery service API lookup -->
<property name="service-dns">myapp.mynamespace.svc.cluster.local</property>
<property name="service-dns-timeout">10</property>
</properties>
</discovery-strategy>
</discovery-strategies>
</join>
</network>
</hazelcast>
</code></pre>
<p>Using the XmlConfigBuilder to construct the instance.</p>
<pre><code>Properties properties = new Properties();
XmlConfigBuilder builder = new XmlConfigBuilder();
builder.setProperties(properties);
Config config = builder.build();
this.instance = Hazelcast.newHazelcastInstance(config);
</code></pre>
<hr>
<p>And Programmatically (personal preference if I can get it to work):</p>
<pre><code>Config cfg = new Config();
NetworkConfig networkConfig = cfg.getNetworkConfig();
networkConfig.setPort(hazelcastNetworkPort);
networkConfig.setPortAutoIncrement(true);
networkConfig.setPortCount(100);
JoinConfig joinConfig = networkConfig.getJoin();
joinConfig.getMulticastConfig().setEnabled(false);
joinConfig.getTcpIpConfig().setEnabled(false);
DiscoveryConfig discoveryConfig = joinConfig.getDiscoveryConfig();
HazelcastKubernetesDiscoveryStrategyFactory factory = new HazelcastKubernetesDiscoveryStrategyFactory();
DiscoveryStrategyConfig strategyConfig = new DiscoveryStrategyConfig(factory);
strategyConfig.addProperty("service-dns", kubernetesSvcsDnsName);
strategyConfig.addProperty("service-dns-timeout", kubernetesSvcsDnsTimeout);
discoveryConfig.addDiscoveryStrategyConfig(strategyConfig);
this.instance = Hazelcast.newHazelcastInstance(cfg);
</code></pre>
<hr>
<p>Is anyone familiar with this setup? I have ports 5701 - 5800 open. It seems Hazelcast starts up and recognizes that discovery mode is on, but only finds the one (local) node.</p>
<p>Here's a snippet from the logs for what it's worth. This was while using the xml file for config:</p>
<pre><code>2017-03-15 08:15:33,688 INFO [main] c.h.c.XmlConfigLocator [StandardLoggerFactory.java:49] Loading 'hazelcast-default.xml' from classpath.
2017-03-15 08:15:33,917 INFO [main] c.g.a.c.a.u.c.HazelcastCacheClient [HazelcastCacheClient.java:112] CONFIG: Config{groupConfig=GroupConfig [name=dev, password=********], properties={}, networkConfig=NetworkConfig{publicAddress='null', port=5701, portCount=100, portAutoIncrement=true, join=JoinConfig{multicastConfig=MulticastConfig [enabled=true, multicastGroup=224.2.2.3, multicastPort=54327, multicastTimeToLive=32, multicastTimeoutSeconds=2, trustedInterfaces=[], loopbackModeEnabled=false], tcpIpConfig=TcpIpConfig [enabled=false, connectionTimeoutSeconds=5, members=[127.0.0.1, 127.0.0.1], requiredMember=null], awsConfig=AwsConfig{enabled=false, region='us-west-1', securityGroupName='hazelcast-sg', tagKey='type', tagValue='hz-nodes', hostHeader='ec2.amazonaws.com', iamRole='null', connectionTimeoutSeconds=5}, discoveryProvidersConfig=com.hazelcast.config.DiscoveryConfig@3c153a1}, interfaces=InterfacesConfig{enabled=false, interfaces=[10.10.1.*]}, sslConfig=SSLConfig{className='null', enabled=false, implementation=null, properties={}}, socketInterceptorConfig=SocketInterceptorConfig{className='null', enabled=false, implementation=null, properties={}}, symmetricEncryptionConfig=SymmetricEncryptionConfig{enabled=false, iterationCount=19, algorithm='PBEWithMD5AndDES', key=null}}, mapConfigs={default=MapConfig{name='default', inMemoryFormat=BINARY', backupCount=1, asyncBackupCount=0, timeToLiveSeconds=0, maxIdleSeconds=0, evictionPolicy='NONE', mapEvictionPolicy='null', evictionPercentage=25, minEvictionCheckMillis=100, maxSizeConfig=MaxSizeConfig{maxSizePolicy='PER_NODE', size=2147483647}, readBackupData=false, hotRestart=HotRestartConfig{enabled=false, fsync=false}, nearCacheConfig=null, mapStoreConfig=MapStoreConfig{enabled=false, className='null', factoryClassName='null', writeDelaySeconds=0, writeBatchSize=1, implementation=null, factoryImplementation=null, properties={}, initialLoadMode=LAZY, writeCoalescing=true}, mergePolicyConfig='com.hazelcast.map.merge.PutIfAbsentMapMergePolicy', wanReplicationRef=null, entryListenerConfigs=null, mapIndexConfigs=null, mapAttributeConfigs=null, quorumName=null, queryCacheConfigs=null, cacheDeserializedValues=INDEX_ONLY}}, topicConfigs={}, reliableTopicConfigs={default=ReliableTopicConfig{name='default', topicOverloadPolicy=BLOCK, executor=null, readBatchSize=10, statisticsEnabled=true, listenerConfigs=[]}}, queueConfigs={default=QueueConfig{name='default', listenerConfigs=null, backupCount=1, asyncBackupCount=0, maxSize=0, emptyQueueTtl=-1, queueStoreConfig=null, statisticsEnabled=true}}, multiMapConfigs={default=MultiMapConfig{name='default', valueCollectionType='SET', listenerConfigs=null, binary=true, backupCount=1, asyncBackupCount=0}}, executorConfigs={default=ExecutorConfig{name='default', poolSize=16, queueCapacity=0}}, semaphoreConfigs={default=SemaphoreConfig{name='default', initialPermits=0, backupCount=1, asyncBackupCount=0}}, ringbufferConfigs={default=RingbufferConfig{name='default', capacity=10000, backupCount=1, asyncBackupCount=0, timeToLiveSeconds=0, inMemoryFormat=BINARY, ringbufferStoreConfig=RingbufferStoreConfig{enabled=false, className='null', properties={}}}}, wanReplicationConfigs={}, listenerConfigs=[], partitionGroupConfig=PartitionGroupConfig{enabled=false, groupType=PER_MEMBER, memberGroupConfigs=[]}, managementCenterConfig=ManagementCenterConfig{enabled=false, url='http://localhost:8080/mancenter', updateInterval=3}, securityConfig=SecurityConfig{enabled=false, 
memberCredentialsConfig=CredentialsFactoryConfig{className='null', implementation=null, properties={}}, memberLoginModuleConfigs=[], clientLoginModuleConfigs=[], clientPolicyConfig=PermissionPolicyConfig{className='null', implementation=null, properties={}}, clientPermissionConfigs=[]}, liteMember=false}
2017-03-15 08:15:33,949 INFO [main] c.h.i.DefaultAddressPicker [StandardLoggerFactory.java:49] [LOCAL] [dev] [3.8] Prefer IPv4 stack is true.
2017-03-15 08:15:33,960 INFO [main] c.h.i.DefaultAddressPicker [StandardLoggerFactory.java:49] [LOCAL] [dev] [3.8] Picked [192.168.187.3]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2017-03-15 08:15:34,000 INFO [main] c.h.system [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Hazelcast 3.8 (20170217 - d7998b4) starting at [192.168.187.3]:5701
2017-03-15 08:15:34,001 INFO [main] c.h.system [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Copyright (c) 2008-2017, Hazelcast, Inc. All Rights Reserved.
2017-03-15 08:15:34,001 INFO [main] c.h.system [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Configured Hazelcast Serialization version : 1
2017-03-15 08:15:34,507 INFO [main] c.h.s.i.o.i.BackpressureRegulator [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Backpressure is disabled
2017-03-15 08:15:35,170 INFO [main] c.h.i.Node [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Creating MulticastJoiner
2017-03-15 08:15:35,339 INFO [main] c.h.s.i.o.i.OperationExecutorImpl [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Starting 8 partition threads
2017-03-15 08:15:35,342 INFO [main] c.h.s.i.o.i.OperationExecutorImpl [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Starting 5 generic threads (1 dedicated for priority tasks)
2017-03-15 08:15:35,351 INFO [main] c.h.c.LifecycleService [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] [192.168.187.3]:5701 is STARTING
2017-03-15 08:15:37,463 INFO [main] c.h.system [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Cluster version set to 3.8
2017-03-15 08:15:37,466 INFO [main] c.h.i.c.i.MulticastJoiner [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8]
Members [1] {
Member [192.168.187.3]:5701 - 50056bfb-b710-43e0-ad58-57459ed399a5 this
}
</code></pre>
| <p>Could you try with the service DNS name set to:</p>
<p>myapp.mynamespace.endpoints.cluster.local</p>
<p>Please reply whether it works or not, and also post your full log.</p>
|
<p>I'm using KOPs to launch a Kubernetes cluster in the AWS environment. </p>
<ol>
<li>Is there a way to set a predefined SSH key when calling <code>create cluster</code>?</li>
<li>If KOPs autogenerates the SSH key when running <code>create cluster</code>, is there a way to download this key to access the cluster nodes?</li>
</ol>
| <p>Please read the Kops <a href="https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access" rel="noreferrer">SSH docs</a>:</p>
<blockquote>
<p>When using the default images, the SSH username will be admin, and the SSH private key is be the private key corresponding to the public key in kops get secrets --type sshpublickey admin. When creating a new cluster, the SSH public key can be specified with the --ssh-public-key option, and it defaults to ~/.ssh/id_rsa.pub.</p>
</blockquote>
<p>So to answer your questions:</p>
<ol>
<li>Yes, you can set the key using <code>--ssh-public-key</code> (see the example below the list)</li>
<li>When <code>--ssh-public-key</code> is not specified, Kops does not autogenerate a key, but rather uses the key found in <code>~/.ssh/id_rsa.pub</code></li>
</ol>
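<p>For example (the cluster name, zone, key path and state store are placeholders):</p>
<pre><code>kops create cluster \
  --name=mycluster.example.com \
  --zones=us-east-1a \
  --ssh-public-key=~/.ssh/my-key.pub \
  --state=s3://my-kops-state-store
</code></pre>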
|
<p>So I was able to use <code>minikube mount /my-directory</code> to mount a volume in my minikube vm to reflect my local directory. However I found out that I wasn't able to make any changes to this directory inside minikube (called <code>/mount-9p</code>).</p>
<p>I'm attempting to create a container that would rsync the <code>/mount-9p</code> directory with another directory that I can run my executables in, but am running into this error: <code>Couldn't watch /mount-9p/src: Unknown error 526</code></p>
<p>Is there anyway to override this or a workaround?</p>
| <p>This is a known issue and there is currently a PR to fix it here:
<a href="https://github.com/kubernetes/minikube/pull/1293" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/pull/1293</a></p>
<p>If you build minikube from that PR you should be good to go. This patch should also be in minikube's next release v0.18.0.</p>
|
<p>I am following this tutorial at <a href="https://gettech1.wordpress.com/2016/05/26/setting-up-kubernetes-cluster-on-ubuntu-14-04-lts/" rel="nofollow noreferrer">https://gettech1.wordpress.com/2016/05/26/setting-up-kubernetes-cluster-on-ubuntu-14-04-lts/</a> to set up a multi-node Kubernetes cluster with 2 minions and 1 master node on remote Ubuntu machines. After following all the steps everything goes OK, but when I try to run the ./kube-up.sh bash file it returns the following errors:</p>
<blockquote>
<p>ubuntu@ip-XXX-YYY-ZZZ-AAA:~/kubernetes/cluster</p>
<p>$ ./kube-up.sh</p>
<p>Starting cluster in us-central1-b using provider gce ... calling</p>
<p>verify-prereqs Can't find gcloud in PATH, please fix and retry. The</p>
<p>Google Cloud SDK can be downloaded from
<a href="https://cloud.google.com/sdk/" rel="nofollow noreferrer">https://cloud.google.com/sdk/</a>.</p>
</blockquote>
<p><strong>Edit:</strong> I have fixed the above issue after exporting different environment variables like</p>
<pre><code>$ export KUBE_VERSION=2.2.1
$ export FLANNEL_VERSION=0.5.5
$ export ETCD_VERSION=1.1.8
</code></pre>
<p>but after that it generates this issue:</p>
<blockquote>
<p>kubernet gzip: stdin: not in gzip format tar: Child returned status 1
tar: Error is not recoverable: exiting now</p>
</blockquote>
| <p>The command you should be executing is <code>KUBERNETES_PROVIDER=ubuntu ./kube-up.sh</code></p>
<p>Without setting that environment variable kube-up.sh tries to deploy VMs on Google Compute Engine and to do so it needs the gcloud binary that you don't have installed.</p>
|
<p>Although I reserved a static IP, I got the following warning, and the load balancer was not created:</p>
<pre><code>> kubectl describe svc --namespace=api-1dt-dc
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
48m 2m 15 {service-controller } Normal CreatingLoadBalancer Creating load balancer
48m 2m 15 {service-controller } Warning CreatingLoadBalancerFailed Error creating load balancer (will retry): Failed to create load balancer for service api-1dt-dc/review-k8s-4yl6zk: requested ip 35.186.202.220 is neither static nor assigned to LB ad3c982840d0311e7b45942010a84004(api-1dt-dc/review-k8s-4yl6zk): <nil>
</code></pre>
| <p>OK, it seems to work only with regional IPs...</p>
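<p>For example, a regional static IP in the cluster's region can be reserved with (name and region are placeholders):</p>
<pre><code>gcloud compute addresses create my-lb-ip --region us-central1
</code></pre>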
|
<p>GitLab is running in a Kubernetes cluster. The runner can't build a Docker image containing the build artifacts. I've already tried several approaches to fix this, but no luck. Here are some config snippets:</p>
<p>.gitlab-ci.yml</p>
<pre><code>image: docker:latest
services:
- docker:dind
variables:
DOCKER_DRIVER: overlay
stages:
- build
- package
- deploy
maven-build:
image: maven:3-jdk-8
stage: build
script: "mvn package -B --settings settings.xml"
artifacts:
paths:
- target/*.jar
docker-build:
stage: package
script:
- docker build -t gitlab.my.com/group/app .
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
- docker push gitlab.my.com/group/app
</code></pre>
<p>config.toml</p>
<pre><code>concurrent = 1
check_interval = 0
[[runners]]
name = "app"
url = "https://gitlab.my.com/ci"
token = "xxxxxxxx"
executor = "kubernetes"
[runners.kubernetes]
privileged = true
disable_cache = true
</code></pre>
<p>Package stage log:</p>
<pre><code>running with gitlab-ci-multi-runner 1.11.1 (a67a225)
on app runner (6265c5)
Using Kubernetes namespace: default
Using Kubernetes executor with image docker:latest ...
Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending
Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending
Running on runner-6265c5-project-4-concurrent-0h9lg9 via gitlab-runner-3748496643-k31tf...
Cloning repository...
Cloning into '/group/app'...
Checking out 10d5a680 as master...
Skipping Git submodules setup
Downloading artifacts for maven-build (61)...
Downloading artifacts from coordinator... ok id=61 responseStatus=200 OK token=ciihgfd3W
$ docker build -t gitlab.my.com/group/app .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
</code></pre>
<p>What am I doing wrong?</p>
| <p>You don't need to use this:</p>
<pre><code>DOCKER_DRIVER: overlay
</code></pre>
<p>because it seems the overlay storage driver isn't supported on this host's kernel, so the svc-0 container is unable to start with it:</p>
<pre><code>$ kubectl logs -f `kubectl get pod |awk '/^runner/{print $1}'` -c svc-0
time="2017-03-20T11:19:01.954769661Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
time="2017-03-20T11:19:01.955720778Z" level=info msg="libcontainerd: new containerd process, pid: 20"
time="2017-03-20T11:19:02.958659668Z" level=error msg="'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded."
</code></pre>
<p>Also, add <code>export DOCKER_HOST="tcp://localhost:2375"</code> to the docker-build:</p>
<pre><code> docker-build:
stage: package
script:
- export DOCKER_HOST="tcp://localhost:2375"
- docker build -t gitlab.my.com/group/app .
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
- docker push gitlab.my.com/group/app
</code></pre>
|
<p>I created an ACS (Azure Container Service) using Kubernetes by following this link : <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-windows-walkthrough" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-windows-walkthrough</a> & I deployed my .net 4.5 app by following this link : <a href="https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-ui" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-ui</a> . My app needs to access Azure SQL and other resources that are part of some other resource groups in my account, but my container is not able to make any outbound calls to network - both inside azure and to internet. I opened some ports to allow outbound connections, that is not helping either. </p>
<p>When I create an ACS does it come with a gateway or should I create one ? How can I configure ACS so that it allows outbound network calls ? </p>
<p>Thanks,</p>
<p>Ashok.</p>
| <p>Outbound internet access works from an Azure Container Service (ACS) Kubernetes Windows cluster if you are connecting to IP Addresses other than the range 10.0.0.0/16 (that is you are not connecting to another service on your VNET). </p>
<p>Before Feb 22, 2017 there was a bug where internet access was not available.</p>
<p>Please try the latest deployment from ACS-Engine: <a href="https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.windows.md" rel="nofollow noreferrer">https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.windows.md</a>, and open an issue there if you still see this, and we (Azure Container Service) can help you debug.</p>
|
<p>I deployed a few VMs using Vagrant to test kubernetes:<br>
master: 4 CPUs, 4GB RAM<br>
node-1: 4 CPUs, 8GB RAM<br>
Base image: Centos/7.<br>
Networking: Bridged.<br>
Host OS: Centos 7.2 </p>
<p>Deployed kubernetes using kubeadm by following <a href="http://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="noreferrer">kubeadm getting started guide</a>. After adding the node to the cluster and installing Weave Net, I'm unfortunately not able to get kube-dns up and running as it stays in a ContainerCreating state:</p>
<blockquote>
<pre><code>[vagrant@master ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-master 1/1 Running 0 1h
kube-system kube-apiserver-master 1/1 Running 0 1h
kube-system kube-controller-manager-master 1/1 Running 0 1h
kube-system kube-discovery-982812725-0tiiy 1/1 Running 0 1h
kube-system kube-dns-2247936740-46rcz 0/3 ContainerCreating 0 1h
kube-system kube-proxy-amd64-4d8s7 1/1 Running 0 1h
kube-system kube-proxy-amd64-sqea1 1/1 Running 0 1h
kube-system kube-scheduler-master 1/1 Running 0 1h
kube-system weave-net-h1om2 2/2 Running 0 1h
kube-system weave-net-khebq 1/2 CrashLoopBackOff 17 1h
</code></pre>
</blockquote>
<p>I assume the problem is somehow related to the weave-net pod in CrashloopBackoff state which resides on node-1:</p>
<pre><code>[vagrant@master ~]$ kubectl describe pods --namespace=kube-system weave-net-khebq
Name: weave-net-khebq
Namespace: kube-system
Node: node-1/10.0.2.15
Start Time: Wed, 05 Oct 2016 07:10:39 +0000
Labels: name=weave-net
Status: Running
IP: 10.0.2.15
Controllers: DaemonSet/weave-net
Containers:
weave:
Container ID: docker://4976cd0ec6f971397aaf7fbfd746ca559322ab3d8f4ee217dd6c8bd3f6ed4f76
Image: weaveworks/weave-kube:1.7.0
Image ID: docker://sha256:1ac5304168bd9dd35c0ecaeb85d77d26c13a7d077aa8629b2a1b4e354cdffa1a
Port:
Command:
/home/weave/launch.sh
Requests:
cpu: 10m
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 05 Oct 2016 08:18:51 +0000
Finished: Wed, 05 Oct 2016 08:18:51 +0000
Ready: False
Restart Count: 18
Liveness: http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
Volume Mounts:
/etc from cni-conf (rw)
/host_home from cni-bin2 (rw)
/opt from cni-bin (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kir36 (ro)
/weavedb from weavedb (rw)
Environment Variables:
WEAVE_VERSION: 1.7.0
weave-npc:
Container ID: docker://feef7e7436d2565182d99c9021958619f65aff591c576a0c240ac0adf9c66a0b
Image: weaveworks/weave-npc:1.7.0
Image ID: docker://sha256:4d7f0bd7c0e63517a675e352146af7687a206153e66bdb3d8c7caeb54802b16a
Port:
Requests:
cpu: 10m
State: Running
Started: Wed, 05 Oct 2016 07:11:04 +0000
Ready: True
Restart Count: 0
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kir36 (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
weavedb:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
cni-bin:
Type: HostPath (bare host directory volume)
Path: /opt
cni-bin2:
Type: HostPath (bare host directory volume)
Path: /home
cni-conf:
Type: HostPath (bare host directory volume)
Path: /etc
default-token-kir36:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kir36
QoS Class: Burstable
Tolerations: dedicated=master:Equal:NoSchedule
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 3m 19 {kubelet node-1} spec.containers{weave} Normal Pulling pulling image "weaveworks/weave-kube:1.7.0"
1h 3m 19 {kubelet node-1} spec.containers{weave} Normal Pulled Successfully pulled image "weaveworks/weave-kube:1.7.0"
55m 3m 11 {kubelet node-1} spec.containers{weave} Normal Created (events with common reason combined)
55m 3m 11 {kubelet node-1} spec.containers{weave} Normal Started (events with common reason combined)
1h 14s 328 {kubelet node-1} spec.containers{weave} Warning BackOff Back-off restarting failed docker container
1h 14s 300 {kubelet node-1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "weave" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=weave pod=weave-net-khebq_kube-system(d1feb9c1-8aca-11e6-8d4f-525400c583ad)"
</code></pre>
<p>Listing the containers running on node-1 gives</p>
<pre><code>[vagrant@node-1 ~]$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
feef7e7436d2 weaveworks/weave-npc:1.7.0 "/usr/bin/weave-npc" About an hour ago Up About an hour k8s_weave-npc.e6299282_weave-net-khebq_kube-system_d1feb9c1-8aca-11e6-8d4f-525400c583ad_0f0517cf
762cd80d491e gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD.d8dbe16c_weave-net-khebq_kube-system_d1feb9c1-8aca-11e6-8d4f-525400c583ad_cda766ac
8c3395959d0e gcr.io/google_containers/kube-proxy-amd64:v1.4.0 "/usr/local/bin/kube-" About an hour ago Up About an hour k8s_kube-proxy.64a0bb96_kube-proxy-amd64-4d8s7_kube-system_909e6ae1-8aca-11e6-8d4f-525400c583ad_48e7eb9a
d0fbb716bbf3 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD.d8dbe16c_kube-proxy-amd64-4d8s7_kube-system_909e6ae1-8aca-11e6-8d4f-525400c583ad_d6b232ea
</code></pre>
<p>The logs for the first container show some connection errors:</p>
<pre><code>[vagrant@node-1 ~]$ sudo docker logs feef7e7436d2
E1005 08:46:06.368703 1 reflector.go:214] /home/awh/workspace/weave-npc/cmd/weave-npc/main.go:154: Failed to list *api.Pod: Get https://100.64.0.1:443/api/v1/pods?resourceVersion=0: dial tcp 100.64.0.1:443: getsockopt: connection refused
E1005 08:46:06.370119 1 reflector.go:214] /home/awh/workspace/weave-npc/cmd/weave-npc/main.go:155: Failed to list *extensions.NetworkPolicy: Get https://100.64.0.1:443/apis/extensions/v1beta1/networkpolicies?resourceVersion=0: dial tcp 100.64.0.1:443: getsockopt: connection refused
E1005 08:46:06.473779 1 reflector.go:214] /home/awh/workspace/weave-npc/cmd/weave-npc/main.go:153: Failed to list *api.Namespace: Get https://100.64.0.1:443/api/v1/namespaces?resourceVersion=0: dial tcp 100.64.0.1:443: getsockopt: connection refused
E1005 08:46:07.370451 1 reflector.go:214] /home/awh/workspace/weave-npc/cmd/weave-npc/main.go:154: Failed to list *api.Pod: Get https://100.64.0.1:443/api/v1/pods?resourceVersion=0: dial tcp 100.64.0.1:443: getsockopt: connection refused
E1005 08:46:07.371308 1 reflector.go:214] /home/awh/workspace/weave-npc/cmd/weave-npc/main.go:155: Failed to list *extensions.NetworkPolicy: Get https://100.64.0.1:443/apis/extensions/v1beta1/networkpolicies?resourceVersion=0: dial tcp 100.64.0.1:443: getsockopt: connection refused
E1005 08:46:07.474991 1 reflector.go:214] /home/awh/workspace/weave-npc/cmd/weave-npc/main.go:153: Failed to list *api.Namespace: Get https://100.64.0.1:443/api/v1/namespaces?resourceVersion=0: dial tcp 100.64.0.1:443: getsockopt: connection refused
</code></pre>
<p>I lack the experience with kubernetes and container networking to troubleshoot these issues further, so some hints are very much appreciated.
Observation: All pods/nodes report their IP as 10.0.2.15, which is the local Vagrant NAT address, not the actual IP address of the VMs.</p>
| <p>Here is the recipe that worked for me (as of March 19th 2017, using Vagrant and VirtualBox). The cluster is made of 3 machines: 1 master and 2 worker nodes.</p>
<p><strong>1)</strong> Make sure you explicitly set the IP of your master node on init</p>
<pre><code>kubeadm init --api-advertise-addresses=10.30.3.41
</code></pre>
<p><strong>2)</strong> Manually or during provisioning, add to each node's <code>/etc/hosts</code> the exact IP that you are configuring it to have. Here is a line you can add in your Vagrantfile (node naming convention I use: k8node-$i):</p>
<pre><code>config.vm.provision :shell, inline: "sed 's/127\.0\.0\.1.*k8node.*/10.30.3.4#{i} k8node-#{i}/' -i /etc/hosts"
</code></pre>
<p>Example:</p>
<pre><code>vagrant@k8node-1:~$ cat /etc/hosts
10.30.3.41 k8node-1
127.0.0.1 localhost
</code></pre>
<p><strong>3)</strong> Finally, all Nodes will try to use the public IP of the cluster to connect to the master (not sure why this is happening ...). Here is the fix for that.</p>
<p>First, find that IP by running the following on the master:</p>
<pre><code>kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 1h
</code></pre>
<p>On each node, make sure that any traffic to 10.96.0.1 (in my case) is routed to the master, which is at 10.30.3.41.</p>
<p>So on each node (you can skip the master) use <code>route</code> to set up the redirect.</p>
<pre><code>route add 10.96.0.1 gw 10.30.3.41
</code></pre>
<p>After that, everything should work ok:</p>
<pre><code>vagrant@k8node-1:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dummy-2088944543-rnl2f 1/1 Running 0 1h
kube-system etcd-k8node-1 1/1 Running 0 1h
kube-system kube-apiserver-k8node-1 1/1 Running 0 1h
kube-system kube-controller-manager-k8node-1 1/1 Running 0 1h
kube-system kube-discovery-1769846148-g8g85 1/1 Running 0 1h
kube-system kube-dns-2924299975-7wwm6 4/4 Running 0 1h
kube-system kube-proxy-9dxsb 1/1 Running 0 46m
kube-system kube-proxy-nx63x 1/1 Running 0 1h
kube-system kube-proxy-q0466 1/1 Running 0 1h
kube-system kube-scheduler-k8node-1 1/1 Running 0 1h
kube-system weave-net-2nc8d 2/2 Running 0 46m
kube-system weave-net-2tphv 2/2 Running 0 1h
kube-system weave-net-mp6s0 2/2 Running 0 1h
vagrant@k8node-1:~$ kubectl get nodes
NAME STATUS AGE
k8node-1 Ready,master 1h
k8node-2 Ready 1h
k8node-3 Ready 48m
</code></pre>
|
<p>Say, the pod has one container which hosts a webserver serving static page on port 80.</p>
<p>When creating the pod, how to set a subdomain <code>x.example.com</code> to a pod? Is a service necessary here?</p>
<p>What role does kube-dns play here?</p>
<p>I don't want to do a use nodePort binding. The pod should be accessible to the public via <code>x.example.com</code>. Is it possible to access it at <code>example.com</code> with query param as CIDR?</p>
| <p>Assuming you aren't deploying to a cloud environment, you would use an <a href="https://kubernetes.io/docs/user-guide/ingress/#ingress-controllers" rel="noreferrer">Ingress Controller</a>.
Deploy the ingress controller as a standard pod, with a Service that uses NodePort or HostPort.</p>
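<p>As a rough sketch (the controller name, labels and node ports below are placeholders, not something your cluster already has), the Service in front of the ingress controller could look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: NodePort
  selector:
    app: nginx-ingress-controller   # must match the labels on your controller pod
  ports:
  - name: http
    port: 80
    nodePort: 30080
  - name: https
    port: 443
    nodePort: 30443
</code></pre>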
<p>Once you've deployed your ingress controller, you can add an Ingress resource.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: x.example.com
http:
paths:
- path: /
backend:
serviceName: web-app-service
servicePort: 80
</code></pre>
<p>Point DNS to the host your ingress controller pod was scheduled on, and you can access the pod on x.example.com</p>
<p>If you're deploying to GKE or AWS etc, you can use a <a href="https://kubernetes.io/docs/user-guide/load-balancer/" rel="noreferrer">LoadBalancer resource</a>.</p>
|
<p>I'm trying to create a redis cluster using kubernetes on centos. I have my kubernetes master running on one host and kubernetes slaves on 2 different hosts.</p>
<blockquote>
<p>etcdctl get /kube-centos/network/config</p>
</blockquote>
<pre><code>{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
</code></pre>
<p>Here is my replication controller</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: redis-master
labels:
app: redis
role: master
tier: backend
spec:
replicas: 6
template:
metadata:
labels:
app: redis
role: master
tier: backend
spec:
containers:
- name: master
image: redis
command:
- "redis-server"
args:
- "/redis-master/redis.conf"
ports:
- containerPort: 6379
volumeMounts:
- mountPath: /redis-master
name: config
- mountPath: /redis-master-data
name: data
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: redis-config
items:
- key: redis-config
path: redis.conf
</code></pre>
<blockquote>
<p>kubectl create -f rc.yaml</p>
</blockquote>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
redis-master-149tt 1/1 Running 0 8s 172.30.96.4 centos-minion-1
redis-master-14j0k 1/1 Running 0 8s 172.30.79.3 centos-minion-2
redis-master-3wgdt 1/1 Running 0 8s 172.30.96.3 centos-minion-1
redis-master-84jtv 1/1 Running 0 8s 172.30.96.2 centos-minion-1
redis-master-fw3rs 1/1 Running 0 8s 172.30.79.4 centos-minion-2
redis-master-llg9n 1/1 Running 0 8s 172.30.79.2 centos-minion-2
</code></pre>
<p>Redis-config file used</p>
<pre><code>appendonly yes
cluster-enabled yes
cluster-config-file /redis-master/nodes.conf
cluster-node-timeout 5000
dir /redis-master
port 6379
</code></pre>
<p>I used the following command to create the kubernetes service.</p>
<blockquote>
<p>kubectl expose rc redis-master --name=redis-service --port=6379 --target-port=6379 --type=NodePort</p>
</blockquote>
<pre><code>Name: redis-service
Namespace: default
Labels: app=redis
role=master
tier=backend
Selector: app=redis,role=master,tier=backend
Type: NodePort
IP: 10.254.229.114
Port: <unset> 6379/TCP
NodePort: <unset> 30894/TCP
Endpoints: 172.30.79.2:6379,172.30.79.3:6379,172.30.79.4:6379 + 3 more...
Session Affinity: None
No events.
</code></pre>
<p>Now I have all the pods and the service up and running. I'm using a redis-trib pod to create the redis cluster.</p>
<blockquote>
<p>kubectl exec -it redis-trib bash</p>
<p>./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379</p>
</blockquote>
<p>Redis Cluster created as expected with the below message.</p>
<pre><code>[OK] All 16384 slots covered.
</code></pre>
<p>Now I should be able to access my redis-cluster on the kubernetes node IP (192.168.240.116) and nodePort (30894) from any host within my network. Everything works as expected when I execute the below command from one of the kubernetes nodes.</p>
<blockquote>
<p>redis-cli -p 30894 -h 192.168.240.116 -c</p>
</blockquote>
<pre><code>192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
OK
172.30.79.4:6379>
</code></pre>
<p>When I run the same command from a different (non-kubernetes) node within the same network, I see a connection timed out error.</p>
<blockquote>
<p>redis-cli -c -p 30894 -h 192.168.240.116</p>
</blockquote>
<pre><code>192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
Could not connect to Redis at 172.30.79.4:6379: Connection timed out
</code></pre>
<p>Is it not possible to access the redis-cluster outside the kubernetes cluster network when exposed using <strong>NodePort</strong> service type?</p>
<p>Also I cannot use <strong>LoadBalancer</strong> service type as I'm not hosting it on cloud.</p>
<p>I have been stuck on this issue for quite a while. Can someone suggest what approach I should use to access my redis-cluster outside my network?</p>
<p>Thanks</p>
| <p>Running <code>./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379</code> doesn't make sense with this setup.</p>
<p>The port <code>6379</code> is only accessible through the service which you brought up, but never directly as you try. That's why you run into issues when you try to use your setup.</p>
<p>What you can do is expose each POD with its own service and have one additional cluster service to load-balance external requests, as shown in the <a href="https://github.com/kelseyhightower/kubernetes-redis-cluster" rel="nofollow noreferrer">example repository from Kelsey Hightower</a>. This way the PODs can communicate through the internally exposed ports and (external) clients can use the load-balanced cluster port. The implication is also that each POD then requires its own ReplicaSet (or Deployment). There's a long talk available on YouTube from Kelsey explaining the setup - <a href="https://www.youtube.com/watch?v=0hlv9iIncik" rel="nofollow noreferrer">YouTube</a> / <a href="https://de.slideshare.net/RedisLabs/managing-redis-with-kubernetes-kelsey-hightower" rel="nofollow noreferrer">Slideshare</a>.</p>
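<p>A minimal sketch of that per-pod service idea (the names and the <code>instance</code> label are made up for illustration; each Deployment/ReplicaSet would carry its own unique label that the matching service selects on):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis-node-1
spec:
  selector:
    app: redis
    instance: node-1   # unique label, set per Deployment/ReplicaSet
  ports:
  - port: 6379
    targetPort: 6379
</code></pre>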
<p>An alternative would be to use a single redis master as shown in other examples.</p>
|
<p>I am fairly new to the Google Cloud platform and Docker. I set up a cluster of nodes and made a Dockerfile that copies a repo and runs a Clojure REPL on a public port. I can connect to it from my IDE and play around with my code, awesome!</p>
<p>That REPL should however probably be tunneled through SSH, but here is where my problem starts. <strong>I can't find a suitable place to SSH into</strong> for making changes to the repo that Docker runs the REPL on:</p>
<ul>
<li>The exposed IP just exposes the REPL service (correct kubernetes term?) and does not allow me to SSH in. </li>
<li>Neither does the cluster master endpoint, it gives me a public key error even though I've followed the <code>Adding or removing SSH keys for all of the instances in your project</code> part <a href="https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys" rel="noreferrer">here</a>.</li>
</ul>
<p>I would like to edit the source files via SSH but I would need to access the docker code repo. I don't know how to proceed.</p>
<p>I understand this isn't exactly a typical way to deploy applications so I am not even sure it's possible to have multiple nodes work with a modified docker codebase (do the nodes share the JVM somehow?).</p>
<p>Concretely my question is how do I SSH into the docker container to access the codebase?</p>
| <p>For more recent Kubernetes versions the shell command should be separated by the <code>--</code>:</p>
<pre><code>kubectl exec -it <POD NAME> -c <CONTAINER NAME> -- bash
</code></pre>
<p>Please note that <code>bash</code> needs to be available for execution inside of the container. For different OS flavours you might need to use <code>/bin/sh</code>, <code>/bin/bash</code> (or others) instead.</p>
<p>The command format for Kubernetes 1.5.0:</p>
<pre><code>kubectl exec -it <POD NAME> -c <CONTAINER NAME> bash
</code></pre>
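<p>To find the pod name to pass in, you can list the pods first (the pod name below is just an example):</p>
<pre><code>$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
myapp-2523920875-p053p   1/1       Running   0          5m

$ kubectl exec -it myapp-2523920875-p053p -- /bin/bash
</code></pre>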
|
<p>I run Jenkins and my app is dockerized, i.e. when I run the container it exposes port 3000 and I can point my browser there. On every Github PR I would like to deploy that git commit to a running container somewhere and have Jenkins post back to the PR the link where it can be accessed. On any PR updates it gets auto re-deployed and on PR close/resolve it gets torn down. </p>
<p>I have looked at kubernetes and a little rancher, but what's the easiest way to get this going assuming I can only deploy to one box?</p>
| <p>There is a Jenkins plugin, <a href="https://plugins.jenkins.io/github-pullrequest" rel="nofollow noreferrer">github-pullrequest</a>, that can resolve your problem.</p>
<p><strong>Prerequisites:</strong></p>
<ol>
<li>You have a Jenkins server that can be accessed from the internet if you want to trigger your build by a webhook.</li>
<li>You have a GitHub API token to access/administer your git repository; you can generate one yourself in <a href="https://github.com/settings/tokens" rel="nofollow noreferrer">settings</a>.</li>
</ol>
<p>Please follow the <a href="https://github.com/KostyaSha/github-integration-plugin/blob/master/docs/Configuration.adoc" rel="nofollow noreferrer">configuration</a> guide to set up your Jenkins integration with GitHub.</p>
<p>After configuration:</p>
<ul>
<li>you can trigger your build on PR events: <strong>opened</strong>/<strong>commit changed</strong>/<strong>closed</strong>, or on a <strong>comment with a specific pattern</strong>.</li>
<li>you can get the PR status via the environment variable <strong>${GITHUB_PR_STATE}</strong>, so you can start or stop a container depending on its value.</li>
<li>you can publish a comment to the PR announcing the address of your web service after you start the docker container.</li>
</ul>
<p>As for exposing the container port with multiple PRs, you can just run the container with <code>-p 3000</code>; Docker will automatically map it to a free port in a range on the docker host, and <code>docker port <container></code> will show the specific port number, so for example:</p>
<ul>
<li><code>container1</code> with address <code><host>:32667</code> for <code>PR1</code></li>
<li><code>container2</code> with address <code><host>:35989</code> for <code>PR2</code></li>
</ul>
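<p>As a quick sketch of how that mapping looks in practice (the image tag and container ID below are just placeholders):</p>
<pre><code>$ docker run -d -p 3000 myapp:pr-42
f3a9c1d2e4b5...
$ docker port f3a9c1d2e4b5 3000
0.0.0.0:32768
</code></pre>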
|
<p>I have a load balancer service for a deployment having 3 pods. When I do a rolling update (changing the image) with the following command:</p>
<blockquote>
<p>kubectl set image deployment/< deployment name > contname=< image-name ></p>
</blockquote>
<p>and hit the service continuously, it gives a few connection refused errors in between. I want to check which pods these are related to. In other words, is it possible to see which request is served by which pod (without going inside the pods and checking the logs in them)? Also, is this because of a race condition, as in a pod might have got a request and been terminated just before serving it (almost simultaneously - resulting in no response)?</p>
| <p>Have you <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">configured liveness and readiness probes</a> for your Pods? The service will not serve traffic to a Pod unless it thinks it is healthy, but without health checks it won't know for certain if it is ready.</p>
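<p>A minimal sketch of what that could look like under the container spec (the <code>/healthz</code> path, port and timings are placeholders for whatever endpoint your app actually exposes):</p>
<pre><code>containers:
- name: myapp
  image: myapp:latest
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
</code></pre>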
|
<p>I'm deploying my test Kubernetes cluster on AWS using Rancher 1.5.1.</p>
<p>I first deploy Rancher and ensure that there are no errors after host registration.
I then install Kubernetes using the "Kubernetes" entry in Rancher's catalog.
That succeeds seemingly without any errors and I get access to my 1.5.0 Kubernetes:</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-08T02:50:34Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-115+611cbb22703182", GitCommit:"611cbb22703182611863beda17bf9f3e90afa148", GitTreeState:"clean", BuildDate:"2017-01-13T18:03:00Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>As I understand, the Heapster with InfluxDB and Grafana dashboard is part of the default Kubernetes installation now. All Heapster, InfluxDB and Grafana related pods show no errors nether in logs nor in status and seem to run successfully:</p>
<pre><code>kubectl -n kube-system describe rs heapster-3467702493
Name: heapster-3467702493
Namespace: kube-system
Image(s): gcr.io/google_containers/heapster:v1.2.0
Selector: k8s-app=heapster,pod-template-hash=3467702493,version=v6
Labels: k8s-app=heapster
pod-template-hash=3467702493
version=v6
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
No events.
kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-w9td0 1/1 Running 0 1d
heapster-3467702493-b28jm 1/1 Running 0 1d
influxdb-grafana-876329878-5qpsd 2/2 Running 0 1d
kube-dns-1208858260-zb7d9 4/4 Running 0 1d
kubernetes-dashboard-2492700511-8g3bj 1/1 Running 0 1d
kubectl -n kube-system describe pod influxdb-grafana-876329878-5qpsd
Name: influxdb-grafana-876329878-5qpsd
Namespace: kube-system
Node: euir1a-dclus11.qiotec-internal.com/10.11.4.172
Start Time: Tue, 14 Mar 2017 14:48:05 +0100
Labels: name=influx-grafana
pod-template-hash=876329878
Status: Running
IP: 10.42.35.83
Controllers: ReplicaSet/influxdb-grafana-876329878
Containers:
influxdb:
Container ID: docker://49ad7e2033d9116cc98d1e7c8cd6e20c305179d68804b762bb19592fefa59b3e
Image: docker.io/kubernetes/heapster_influxdb:v0.5
Image ID: docker-pullable://kubernetes/heapster_influxdb@sha256:24de37030e0da01c39b8863231b70f359e1fe6d4449505da03e2e7543bb068cb
Port:
State: Running
Started: Tue, 14 Mar 2017 14:48:29 +0100
Ready: True
Restart Count: 0
Volume Mounts:
/data from influxdb-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from io-rancher-system-token-zgrrs (ro)
Environment Variables: <none>
grafana:
Container ID: docker://bdb4e381f0cd05df0a2d1c7dffb52b3e6e724a27999e039c5399fef391fd6d32
Image: gcr.io/google_containers/heapster_grafana:v2.6.0-2
Image ID: docker-pullable://gcr.io/google_containers/heapster_grafana@sha256:208c98b77d4e18ad7759c0958bf87d467a3243bf75b76f1240a577002e9de277
Port:
State: Running
Started: Tue, 14 Mar 2017 14:48:41 +0100
Ready: True
Restart Count: 0
Volume Mounts:
/var from grafana-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from io-rancher-system-token-zgrrs (ro)
Environment Variables:
INFLUXDB_SERVICE_URL: http://monitoring-influxdb.kube-system.svc.cluster.local:8086
GF_AUTH_BASIC_ENABLED: false
GF_AUTH_ANONYMOUS_ENABLED: true
GF_AUTH_ANONYMOUS_ORG_ROLE: Admin
GF_SERVER_ROOT_URL: /
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
influxdb-storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
grafana-storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
io-rancher-system-token-zgrrs:
Type: Secret (a volume populated by a Secret)
SecretName: io-rancher-system-token-zgrrs
QoS Class: BestEffort
Tolerations: <none>
No events.
</code></pre>
<p>The only strange thing I can see here is that the "Port:" entries are empty (instead of having some value, as I'd expected).</p>
<p>Here is part of the influxdb-grafana-876329878-5qpsd log. It seems that it starts successfully on port 3000:</p>
<pre><code>....
2017/03/14 13:48:42 [I] Migrator: exec migration id: create index UQE_dashboard_snapshot_delete_key - v5
2017/03/14 13:48:42 [I] Migrator: exec migration id: create index IDX_dashboard_snapshot_user_id - v5
2017/03/14 13:48:42 [I] Migrator: exec migration id: alter dashboard_snapshot to mediumtext v2
2017/03/14 13:48:42 [I] Migrator: exec migration id: create quota table v1
2017/03/14 13:48:42 [I] Migrator: exec migration id: create index UQE_quota_org_id_user_id_target - v1
2017/03/14 13:48:42 [I] Created default admin user: admin
2017/03/14 13:48:42 [I] Listen: http://0.0.0.0:3000
.Grafana is up and running.
Creating default influxdb datasource...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 272 100 37 100 235 2972 18877 --:--:-- --:--:-- --:--:-- 19583
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Set-Cookie: grafana_sess=3063bf504a9ec00a; Path=/; HttpOnly
Date: Tue, 14 Mar 2017 13:48:43 GMT
Content-Length: 37
{"id":1,"message":"Datasource added"}
Importing default dashboards...
...
</code></pre>
<p>The services are also up:</p>
<pre><code>kubectl -n kube-system get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.43.159.75 <none> 80/TCP 1d
heapster 10.43.77.135 <none> 80/TCP 1d
kube-dns 10.43.0.10 <none> 53/UDP,53/TCP 1d
kubernetes-dashboard 10.43.202.63 <none> 9090/TCP 1d
monitoring-grafana 10.43.186.82 <none> 80/TCP 1d
monitoring-influxdb 10.43.61.17 <none> 8086/TCP,8083/TCP 1d
</code></pre>
<p>In the end, I cannot connect to the Grafana dashboard either via the Rancher load balancer mapping to the monitoring-grafana port 80 (I get a 503 "Service not found" error) or via Kubernetes port forwarding (the connection times out).</p>
<p>My cluster-info output:</p>
<pre><code>kubectl cluster-info
Kubernetes master is running at https://my-rancher-host:8080/r/projects/1a5/kubernetes
KubeDNS is running at https://my-rancher-host:8080/r/projects/1a5/kubernetes/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://my-rancher-host:8080/r/projects/1a5/kubernetes/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
</code></pre>
<p>is different from the one demonstrated in this Heapster video:
<a href="https://www.youtube.com/watch?v=xSMNR2fcoLs" rel="nofollow noreferrer">https://www.youtube.com/watch?v=xSMNR2fcoLs</a></p>
<p>Can anybody give a hint about what can be wrong and how can I connect to the Heapster's Grafana dashboard?</p>
| <p>It appears I had to add a correct ingress to the system:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: monitoring-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: monitoring.kube-system.mykubernetes.host
http:
paths:
- path:
backend:
serviceName: monitoring-grafana
servicePort: 80
</code></pre>
|
<p>I'm trying to setup a continuous integration job that will deploy to kubernetes / google container engine from a Jenkins job. The jenkins server is relatively tightly controlled, so I'm not able to install plugins.</p>
<p>I have a JSON key file for a server account from Google Cloud IAM.</p>
<p>I'm currently trying to download the google cloud sdk and auth from there, but am not having any luck (this if from a Jenkinsfile):</p>
<pre><code>sh 'export KUBECONFIG=$(pwd)/.kubeconfig'
sh 'export GOOGLE_APPLICATION_CREDENTIALS=$JSON'
sh 'google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=$JSON'
sh 'google-cloud-sdk/bin/gcloud config set core/project proj-1'
sh 'google-cloud-sdk/bin/gcloud container clusters list'
sh 'google-cloud-sdk/bin/gcloud container clusters get-credentials clust-1 --zone us-east1-c'
sh 'kubectl get pods'
</code></pre>
<p>I'm getting the error message:</p>
<pre><code>error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
</code></pre>
<p>I will also need to be able to do a <code>gcloud docker push</code>, so using gcloud is ok.</p>
| <p>There's a cryptic github issue around this behavior:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/30617" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/30617</a></p>
<p>According to that issue, everything I've been doing should work:</p>
<blockquote>
<p>Previously, gcloud would have configured kubectl to use the cluster's static client certificate to authenticate. Now, gcloud is configuring kubectl to use the service account's credentials.</p>
<p>Kubectl is just using the Application Default Credentials library, and it looks like this is part of the ADC flow for using a JSON-key service account. I'll see if there is a way that we could make this nicer.</p>
<p>If you don't want to add the export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keyfile.json" to your flow, you can reenable the old way (use the client cert) by setting the cloudsdk container/use_client_certificate property to true. Either:</p>
<p>gcloud config set container/use_client_certificate True</p>
<p>or</p>
<p>export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True</p>
</blockquote>
<p>I was using the GOOGLE_APPLICATION_CREDENTIALS variable, but alas, it was not working. I tried the "gcloud config set" option above, but that also didn't work. Finally, I used the env variable CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE and that did work.</p>
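<p>One thing to watch out for in the Jenkinsfile from the question: each <code>sh</code> step runs in its own shell, so an <code>export</code> in one step does not carry over to the next. A hedged sketch of setting the variable for a group of steps (reusing the commands from the question) could be:</p>
<pre><code>withEnv(['CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True']) {
    sh 'google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=$JSON'
    sh 'google-cloud-sdk/bin/gcloud container clusters get-credentials clust-1 --zone us-east1-c'
    sh 'kubectl get pods'
}
</code></pre>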
|
<p>I'm using a cluster setup with multiple apiservers with a loadbalancer in front of them for external access, with an installation on bare metal.</p>
<p>Like mentioned in the <a href="http://kubernetes.io/v1.0/docs/admin/high-availability.html" rel="noreferrer">High Availability Kubernetes Clusters</a> docs, I would like to use internal loadbalancing utilizing the <code>kubernetes</code> service within my cluster. This works fine so far, but I'm not sure what is the best way to set up the <code>kube-proxy</code>. It obviously cannot use the service IP, since it does the proxying to this one based on the data from the apiserver (<code>master</code>). I could use the IP of any one of the apiservers, but this would cause losing the high availability. So, the only viable option I currently see is to utilize my external loadbalancer, but this seems somehow wrong.</p>
<p>Anybody any ideas or best practices?</p>
| <p>This is quite an old question, but as the problem persists... here it goes.</p>
<p>There is a bug in the Kubernetes restclient which does not allow using more than one IP/URL, as it always picks the first IP/URL in the list. This affects kube-proxy and also kubelet, leaving a single point of failure in those tools if you don't use a load balancer (as you did) in a multi-master setup. The load balancer is probably not the most elegant solution ever, but currently (I think) it is the easiest one.</p>
<p>Another solution (which I prefer, but it may not work for everyone and it does not solve all the problems) is to create a DNS entry that will round-robin your API servers; but as pointed out in one of the links below, that only solves the load balancing, not the HA.</p>
<p>You can see the progress of this story in the following links:</p>
<p>The kube-proxy/kubelet issue: <a href="https://github.com/kubernetes/kubernetes/issues/18174" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/18174</a><br>
The restclient PR: <a href="https://github.com/kubernetes/kubernetes/pull/30588" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/30588</a><br>
The "official" solution: <a href="https://github.com/kubernetes/kubernetes/issues/18174#issuecomment-199381822" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/18174#issuecomment-199381822</a></p>
|
<p>Is there a way to specify <code>int</code> values in ConfigMap? When I try to specify a port number I get the following error.</p>
<pre><code>error: error validating "my-deployment.yml": error validating data: expected type int, for field spec.template.spec.containers[0].ports[0].containerPort, got string; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>Here is the sample config file I am using:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-service
spec:
replicas: 2
template:
metadata:
labels:
app: poweramp-email-service
spec:
containers:
- name: poweramp-email-service
env:
- name: SERVICE_PORT_NUM
valueFrom:
configMapKeyRef:
name: stg-config
key: poweramp-email-service.port
image: nginx
ports:
- containerPort: my-service.port
</code></pre>
<p>And here is the simple <code>stg.properties</code> file I am using to generate the ConfigMap using the command: <code>kubectl create configmap stg-config --from-file stg.properties</code></p>
<pre><code>my-service.port=9513
</code></pre>
<p>You can't use ConfigMap values in <code>spec.template.spec.containers[0].ports[0].containerPort</code> I'm afraid; a ConfigMap provides configuration values consumed by your containers, not values for fields of the pod spec itself.</p>
<p>The options you can use it for (specified in <a href="https://kubernetes.io/docs/user-guide/configmap/" rel="nofollow noreferrer">this guide</a>) are</p>
<ul>
<li>As an environment variable</li>
<li>As a command line config flag (using environment variables)</li>
<li>Using the volume plugin.</li>
</ul>
<p>If you want to make the port configurable for your deployment/pod, you might consider using <a href="http://helm.sh/" rel="nofollow noreferrer">Helm</a>. Helm allows you to use Go templates in your manifests/definitions, and then you can override them upon invocation. </p>
<p>Take this <a href="https://github.com/kubernetes/charts/blob/master/stable/mysql/templates/deployment.yaml" rel="nofollow noreferrer">MySQL Chart template as an example</a>; you could set the port here as a config option like so:</p>
<pre><code>ports:
- name: mysql
containerPort: {{ default 3306 .Values.mysqlPort }}
</code></pre>
<p>and then from there, set this value in your <a href="https://github.com/kubernetes/charts/blob/master/stable/mysql/values.yaml" rel="nofollow noreferrer">values.yaml</a></p>
|
| <p>I created a <code>k8s</code> cluster with <code>kops</code> on <code>aws</code>; the node auto scaling group configuration looks like:</p>
<pre><code>metadata:
creationTimestamp: "2017-03-21T03:53:26Z"
name: nodes
spec:
associatePublicIp: true
image: kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21
machineType: t2.medium
maxSize: 5
minSize: 2
role: Node
zones:
- us-west-1a
- us-west-1c
</code></pre>
<p>the <code>aws</code> console shows the current asg as: </p>
<pre><code>desired:2; min:2; max:5
</code></pre>
<p>then I installed the <code>cluster-autoscaler</code> add-on by following the <a href="https://github.com/kubernetes/contrib/blob/master/cluster-autoscaler/cloudprovider/aws/README.md" rel="nofollow noreferrer">official doc</a>. Then I deployed a pod whose resource requests the current cluster can't satisfy, but <code>cluster-autoscaler</code> doesn't add a node to the cluster; the logs are below:</p>
<pre><code>admin@ip-10-0-52-252:/var/log$kubectl logs -f cluster-autoscaler-1581749901-fqbzk -n kube-system
I0322 07:08:43.407683 1 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"new-mq-test-951523717-trd2s", UID:"9437ac54-0ecd-11e7-8779-0257a5d4c012", APIVersion:"v1", ResourceVersion:"189546", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added)
I0322 07:08:43.407910 1 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"new-mq-test-951523717-n986l", UID:"9437a3db-0ecd-11e7-8779-0257a5d4c012", APIVersion:"v1", ResourceVersion:"189543", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added)
</code></pre>
<p>So why doesn't the <code>cluster-autoscaler</code> scale up the cluster by adding ec2 nodes? Any answers are very much appreciated.</p>
| <p>Finally I found the answer: my default <code>kops</code> configuration for the <code>nodes</code> asg is <code>t2.medium</code>, while the pod I deploy requests 5000M of memory. As we all know, a <code>t2.medium</code> has 4GB of memory, which can't fit the request, so the <code>cluster-autoscaler</code> can't scale up!</p>
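<p>A quick way to verify this kind of mismatch (a sketch; the pod name is illustrative) is to compare the node's allocatable memory with the pod's requests:</p>
<pre><code>kubectl describe nodes | grep -A 5 Allocatable
kubectl describe pod <your-pod> | grep -A 3 Requests
</code></pre>
<p>If a pod requests more memory than the configured node type can allocate, the autoscaler reports exactly the "it wouldn't fit if a new node is added" message shown in the logs above.</p>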
|
<p>Is it possible to set up a config which would include:</p>
<ol>
<li>GitLab project #1 java-container </li>
<li>GitLab project #2 java-container </li>
<li>Nginx container</li>
<li>Redis container</li>
<li>Cassandra container</li>
<li>Nginx exporter (Prometheus)</li>
<li>Redis exporter (Prometheus)</li>
<li>JMX exporter (Prometheus) x2</li>
</ol>
<p>It's important to have all this in one multi-container pod on kubernetes (GKE), communicating via a shared volume and localhost.</p>
<p>I've already done all this in kubernetes with init containers (to pull the code and compile it), and now I'm looking for a way to make this work with CI/CD.</p>
<p>So, if this could be done with GitLab CI, could you, please, point me to the right documentation or manual pages, as I'm a newbie in GitLab CI and stuff, and have already lost myself in dozens of articles from the internet.</p>
<p>Thanks in advance. </p>
| <p>The first thing is to join all the projects that should be built with maven and/or docker into one common project on GitLab.</p>
<p>Next, add Dockerfiles and all the files needed for the docker build into the sub-projects' folders.</p>
<p>Next, in the root of the common project we should place the .gitlab-ci.yml and deployment.yml files.</p>
<p>deployment.yml should be common for all the sub-projects.</p>
<p>.gitlab-ci.yml should contain all the stages to build every sub-project. As we don't need to build everything every time we make changes to some files, we should use tags in git to make GitLab CI understand in which case it should run one or another stage. This can be implemented with the <code>only</code> <a href="http://docs.gitlab.com/ce/ci/yaml/README.html#only-and-except" rel="nofollow noreferrer">parameter</a>:</p>
<pre><code>docker-build-akka:
stage: package
only:
- /^akka-.*$/
script:
- export DOCKER_HOST="tcp://localhost:2375"
...
</code></pre>
<p>And so on for every stage. So, if you make changes to a Dockerfile or the java code, you commit and push to GitLab with a tag like <code>akka-0.1.4</code>, and the GitLab CI runner will run only the appropriate stages.</p>
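<p>A minimal sketch of triggering such a build (the tag name is just an example):</p>
<pre><code>git commit -am "update akka worker"
git tag akka-0.1.4
git push origin akka-0.1.4
</code></pre>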
<p>Also, if you only change the README.md file or make any other changes that don't require building the project, no build of it will be triggered.</p>
<p>Lots of useful stuff you can find <a href="https://about.gitlab.com/2016/12/14/continuous-delivery-of-a-spring-boot-application-with-gitlab-ci-and-kubernetes/" rel="nofollow noreferrer">here</a> and <a href="http://docs.gitlab.com/ce/ci/yaml/README.html" rel="nofollow noreferrer">here</a>.</p>
<p>Also, look at the <a href="https://stackoverflow.com/a/42903164/3906452">problem</a> that I faced running the docker build stage in kubernetes. It might be helpful.</p>
|
<p>I have a Kubernetes cluster that takes jobs for processing. These jobs are defined as follows:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: process-item-014
labels:
jobgroup: JOB_XXX
spec:
template:
metadata:
name: JOB_XXX
labels:
jobgroup: JOB_XXX
spec:
restartPolicy: OnFailure
containers:
- name: worker
image: gcr.io/.../worker
volumeMounts:
- mountPath: /workspace
name: workspace
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 512Mi
volumes:
- name: workspace
hostPath:
path: /tmp/client-workspace
</code></pre>
<p>Note that I'm trying to mount a folder on the host into the container (<code>workspace</code>). Note also the memory limits defined.
In my container, I download a number of files into <code>workspace</code>, some of them pretty large (they are downloaded with gsutil from GCS, but I don't think that's too important).</p>
<p>When the files I download exceed the memory limits, my code breaks with a "device out of space" error. This doesn't completely make sense, because I'm storing the files in a mount that is backed by the host's storage, which is more than enough. It's also mentioned in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-ram-container/#assigning-cpu-and-ram-resources-to-a-container" rel="nofollow noreferrer">docs</a> that <code>memory</code> limits the amount of RAM available to the container, not storage. Still, when I set the limit to <code>XGi</code>, it breaks after downloading <code>XGi</code> pretty consistently. </p>
<p>My container is based on ubuntu:14.04, running a shell script with a line like this:</p>
<pre><code>gsutil -m cp -r gs://some/cloud/location/* /workspace/files
</code></pre>
<p>What am I doing wrong? I will definitely need to have some limits for my containers, so I can't just drop the limits.</p>
| <p>The <code>/tmp</code> filesystem is often backed by <a href="https://en.wikipedia.org/wiki/Tmpfs" rel="nofollow noreferrer">tmpfs</a>, which stores files in memory rather than on disk. My guess is that this is the case on your nodes, and the memory is being correctly charged to the container. Can you use an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptydir</a> volume instead?</p>
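<p>A minimal sketch of that change in the Job spec from the question (swapping the <code>hostPath</code> volume for an <code>emptyDir</code>; everything else stays the same):</p>
<pre><code>volumes:
- name: workspace
  emptyDir: {}
</code></pre>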
|
<p>I have a kubernetes cluster running on google container engine that defines a Pod running a NFS-Server, which I want to access in other Pods via various <code>PersistentVolume</code>s.</p>
<p>What is the best way to configure the NFS Service, if it is in the same cluster?</p>
<p>According to various documentation I've found, it's not possible to rely on kube-dns for this, because the node starting the kubernetes pod is not configured to use it as its DNS.</p>
<p>So this is out of the question (and really does not work - I've tested it with various different hostnames/FQDNs...)</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: xxx-persistent-storage
labels:
app: xxx
spec:
capacity:
storage: 10Gi
nfs:
path: "/exports/xxx"
server: nfs-service.default.svc.cluster.local # <-- does not work
</code></pre>
<p>I can start the NFS Server and check its ClusterIP via <code>kubectl describe svc nfs-service</code> and then hardcode its Endpoint-IP for the PV (this works):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: xxx-persistent-storage
labels:
app: xxx
spec:
capacity:
storage: 10Gi
nfs:
path: "/exports/xxx"
server: 10.2.1.7 # <-- does work
</code></pre>
<p>But this feels wrong - as soon as I need to recreate the NFS-Service I'll get a new IP and I have to reconfigure all the PVs based on it.</p>
<ul>
<li><p>What is the best practice here? I'm surprised I did not find any example for it, because I supposed that's quite a normal thing to do - isn't it?</p></li>
<li><p>Is it possible to set a kind of static IP for a service, so that I can rely on always having the same IP for the NFS service?</p></li>
</ul>
| <p>You are on the right track. To make sure that your Service is using a static IP just add <code>clusterIP: 1.2.3.3</code> under the <code>spec:</code> section of the Service.</p>
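<p>A minimal sketch (assuming the NFS server pod carries the label <code>app: nfs-server</code>, and that the chosen IP lies inside your cluster's service CIDR):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nfs-service
spec:
  clusterIP: 10.3.240.20
  selector:
    app: nfs-server
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
</code></pre>
<p>You can then point the <code>server:</code> field of your PersistentVolumes at that fixed IP, and it survives re-creation of the service.</p>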
<p>From the canonical <a href="https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/nfs/README.md" rel="nofollow noreferrer">example</a>:</p>
<blockquote>
<p>In the future, we'll be able to tie these together using the service names, but for now, you have to hardcode the IP.</p>
</blockquote>
|
<p>I am trying to set up an Ingress in GCE Kubernetes. But when I visit the IP address and path combination defined in the Ingress, I keep getting the following 502 error:</p>
<p><a href="https://i.stack.imgur.com/HVlD1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/HVlD1.png" alt="Ingress 502 error"></a></p>
<hr>
<p>Here is what I get when I run: <code>kubectl describe ing --namespace dpl-staging</code></p>
<pre><code>Name: dpl-identity
Namespace: dpl-staging
Address: 35.186.221.153
Default backend: default-http-backend:80 (10.0.8.5:8080)
TLS:
dpl-identity terminates
Rules:
Host Path Backends
---- ---- --------
*
/api/identity/* dpl-identity:4000 (<none>)
Annotations:
https-forwarding-rule: k8s-fws-dpl-staging-dpl-identity--5fc40252fadea594
https-target-proxy: k8s-tps-dpl-staging-dpl-identity--5fc40252fadea594
url-map: k8s-um-dpl-staging-dpl-identity--5fc40252fadea594
backends: {"k8s-be-31962--5fc40252fadea594":"HEALTHY","k8s-be-32396--5fc40252fadea594":"UNHEALTHY"}
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
15m 15m 1 {loadbalancer-controller } Normal ADD dpl-staging/dpl-identity
15m 15m 1 {loadbalancer-controller } Normal CREATE ip: 35.186.221.153
15m 6m 4 {loadbalancer-controller } Normal Service no user specified default backend, using system default
</code></pre>
<p>I think the problem is <code>dpl-identity:4000 (<none>)</code>. Shouldn't I see the IP address of the <code>dpl-identity</code> service instead of <code><none></code>?</p>
<p>Here is my service description: <code>kubectl describe svc --namespace dpl-staging</code></p>
<pre><code>Name: dpl-identity
Namespace: dpl-staging
Labels: app=dpl-identity
Selector: app=dpl-identity
Type: NodePort
IP: 10.3.254.194
Port: http 4000/TCP
NodePort: http 32396/TCP
Endpoints: 10.0.2.29:8000,10.0.2.30:8000
Session Affinity: None
No events.
</code></pre>
<p>Also, here is the result of executing: <code>kubectl describe ep -n dpl-staging dpl-identity</code></p>
<pre><code>Name: dpl-identity
Namespace: dpl-staging
Labels: app=dpl-identity
Subsets:
Addresses: 10.0.2.29,10.0.2.30
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
http 8000 TCP
No events.
</code></pre>
<hr>
<p>Here is my deployment.yaml:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
namespace: dpl-staging
name: dpl-identity
type: Opaque
data:
tls.key: <base64 key>
tls.crt: <base64 crt>
---
apiVersion: v1
kind: Service
metadata:
namespace: dpl-staging
name: dpl-identity
labels:
app: dpl-identity
spec:
type: NodePort
ports:
- port: 4000
targetPort: 8000
protocol: TCP
name: http
selector:
app: dpl-identity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: dpl-staging
name: dpl-identity
labels:
app: dpl-identity
annotations:
kubernetes.io/ingress.allow-http: "false"
spec:
tls:
- secretName: dpl-identity
rules:
- http:
paths:
- path: /api/identity/*
backend:
serviceName: dpl-identity
servicePort: 4000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: dpl-staging
name: dpl-identity
labels:
app: dpl-identity
spec:
replicas: 2
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: dpl-identity
spec:
containers:
- image: gcr.io/munpat-container-engine/dpl/identity:0.4.9
name: dpl-identity
ports:
- containerPort: 8000
name: http
volumeMounts:
- name: dpl-identity
mountPath: /data
volumes:
- name: dpl-identity
secret:
secretName: dpl-identity
</code></pre>
| <p>Your backend <code>k8s-be-32396--5fc40252fadea594</code> is showing as <code>"UNHEALTHY"</code>.</p>
<p>Ingress will not forward traffic if the backend is UNHEALTHY; this results in the 502 error you are seeing.</p>
<p>It will be marked as UNHEALTHY because it is not passing its health check. You can check the health check settings for k8s-be-32396--5fc40252fadea594 to see if they are appropriate for your pod; it may be polling a URI or port that is not returning a 200 response. You can find these settings under Compute Engine > Health Checks.</p>
<p>If they are correct, then there are many steps between your browser and the container that could be passing traffic incorrectly. You could try <code>kubectl exec -it PODID -- bash</code> (or ash if you are using Alpine) and then try curl-ing localhost to see if the container is responding as expected. If it is, and the health checks are also configured correctly, then this narrows the issue down to likely being with your service; you could then try changing the service from a NodePort type to a LoadBalancer and see if hitting the service IP directly from your browser works.</p>
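<p>For example (a sketch, assuming <code>curl</code> is available in the image, and using the namespace and container port from the question):</p>
<pre><code>kubectl exec -it PODID --namespace=dpl-staging -- curl -i http://localhost:8000/
</code></pre>
<p>If that returns a 200 but the GCE health check polls a different path or port, adjust the health check (or the app) so that the probed path returns 200.</p>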
|
<p>Now we're using Kubernetes to run users' tasks. We need the features of Kubernetes Jobs to restart the tasks when failure occurs.</p>
<p>But our users may submit problematic applications which always exit with a non-zero code. Kubernetes will restart such a task over and over again.</p>
<p>Is it possible to configure the number of restarts for this? </p>
| <p>You can use the standard pod restart policy: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy</a></p>
<p>Unfortunately, only a policy of "Never" or "OnFailure" is allowed, so if you need to restart X times then fail, that's not possible.</p>
<p>Example:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
template:
metadata:
name: pi
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
</code></pre>
|
<p>I have Kubernetes installed on Container Linux by CoreOS alpha (1353.1.0)
with hyperkube v1.5.5_coreos.0, using my fork of the coreos-kubernetes install scripts at <a href="https://github.com/kfirufk/coreos-kubernetes" rel="noreferrer">https://github.com/kfirufk/coreos-kubernetes</a>.</p>
<p>I have two ContainerOS machines.</p>
<ul>
<li>coreos-2.tux-in.com resolved as 192.168.1.2 as controller</li>
<li>coreos-3.tux-in.com resolved as 192.168.1.3 as worker</li>
</ul>
<p><code>kubectl get pods --all-namespaces</code> returns</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
ceph ceph-mds-2743106415-rkww4 0/1 Pending 0 1d
ceph ceph-mon-check-3856521781-bd6k5 1/1 Running 0 1d
kube-lego kube-lego-3323932148-g2tf4 1/1 Running 0 1d
kube-system calico-node-xq6j7 2/2 Running 0 1d
kube-system calico-node-xzpp2 2/2 Running 4560 1d
kube-system calico-policy-controller-610849172-b7xjr 1/1 Running 0 1d
kube-system heapster-v1.3.0-beta.0-2754576759-v1f50 2/2 Running 0 1d
kube-system kube-apiserver-192.168.1.2 1/1 Running 0 1d
kube-system kube-controller-manager-192.168.1.2 1/1 Running 1 1d
kube-system kube-dns-3675956729-r7hhf 3/4 Running 3924 1d
kube-system kube-dns-autoscaler-505723555-l2pph 1/1 Running 0 1d
kube-system kube-proxy-192.168.1.2 1/1 Running 0 1d
kube-system kube-proxy-192.168.1.3 1/1 Running 0 1d
kube-system kube-scheduler-192.168.1.2 1/1 Running 1 1d
kube-system kubernetes-dashboard-3697905830-vdz23 1/1 Running 1246 1d
kube-system monitoring-grafana-4013973156-m2r2v 1/1 Running 0 1d
kube-system monitoring-influxdb-651061958-2mdtf 1/1 Running 0 1d
nginx-ingress default-http-backend-150165654-s4z04 1/1 Running 2 1d
</code></pre>
<p>so I can see that <code>kube-dns-3675956729-r7hhf</code> keeps restarting.</p>
<p><code>kubectl describe pod kube-dns-3675956729-r7hhf --namespace=kube-system</code> returns:</p>
<pre><code>Name: kube-dns-3675956729-r7hhf
Namespace: kube-system
Node: 192.168.1.2/192.168.1.2
Start Time: Sat, 11 Mar 2017 17:54:14 +0000
Labels: k8s-app=kube-dns
pod-template-hash=3675956729
Status: Running
IP: 10.2.67.243
Controllers: ReplicaSet/kube-dns-3675956729
Containers:
kubedns:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:kubedns
Image: gcr.io/google_containers/kubedns-amd64:1.9
Image ID: rkt://sha512-c7b7c9c4393bea5f9dc5bcbe1acf1036c2aca36ac14b5e17fd3c675a396c4219
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-map=kube-dns
--v=2
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: False
Restart Count: 981
Liveness: http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables:
PROMETHEUS_PORT: 10055
dnsmasq:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:dnsmasq
Image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
Image ID: rkt://sha512-8c5f8b40f6813bb676ce04cd545c55add0dc8af5a3be642320244b74ea03f872
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--no-resolv
--server=127.0.0.1#10053
--log-facility=-
Requests:
cpu: 150m
memory: 10Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Liveness: http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
dnsmasq-metrics:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:dnsmasq-metrics
Image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1
Image ID: rkt://sha512-ceb3b6af1cd67389358be14af36b5e8fb6925e78ca137b28b93e0d8af134585b
Port: 10054/TCP
Args:
--v=2
--logtostderr
Requests:
memory: 10Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
healthz:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:healthz
Image: gcr.io/google_containers/exechealthz-amd64:v1.2.0
Image ID: rkt://sha512-3a85b0533dfba81b5083a93c7e091377123dac0942f46883a4c10c25cf0ad177
Port: 8080/TCP
Args:
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
--url=/healthz-dnsmasq
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
--url=/healthz-kubedns
--port=8080
--quiet
Limits:
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-zqbdp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zqbdp
QoS Class: Burstable
Tolerations: CriticalAddonsOnly=:Exists
No events.
</code></pre>
<p>which shows that <code>kubedns-amd64:1.9</code> is in <code>Ready: false</code></p>
<p>this is my <code>kude-dns-de.yaml</code> file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
spec:
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.9
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthz-kubedns
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain=cluster.local.
- --dns-port=10053
- --config-map=kube-dns
# This should be set to v=2 only after the new image (cut from 1.5) has
# been released, otherwise we will flood the logs.
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
livenessProbe:
httpGet:
path: /healthz-dnsmasq
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
- --log-facility=-
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
# see: https://github.com/kubernetes/kubernetes/issues/29055 for details
resources:
requests:
cpu: 150m
memory: 10Mi
- name: dnsmasq-metrics
image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 10Mi
- name: healthz
image: gcr.io/google_containers/exechealthz-amd64:v1.2.0
resources:
limits:
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
args:
- --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- --url=/healthz-dnsmasq
- --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
- --url=/healthz-kubedns
- --port=8080
- --quiet
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default
</code></pre>
<p>and this is my <code>kube-dns-svc.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.3.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
</code></pre>
<p>any information regarding the issue would be greatly appreciated!</p>
<h1>update</h1>
<p><code>rkt list --full 2> /dev/null | grep kubedns</code> shows:</p>
<pre><code>744a4579-0849-4fae-b1f5-cb05d40f3734 kubedns gcr.io/google_containers/kubedns-amd64:1.9 sha512-c7b7c9c4393b running 2017-03-22 22:14:55.801 +0000 UTC 2017-03-22 22:14:56.814 +0000 UTC
</code></pre>
<p><code>journalctl -m _MACHINE_ID=744a45790849b1f5cb05d40f3734</code> provides:</p>
<pre><code>Mar 22 22:17:58 kube-dns-3675956729-sthcv kubedns[8]: E0322 22:17:58.619254 8 reflector.go:199] pkg/dns/dns.go:145: Failed to list *api.Endpoints: Get https://10.3.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.3.0.1:443: connect: network is unreachable
</code></pre>
<p>I tried to add <code> - --proxy-mode=userspace</code> to <code>/etc/kubernetes/manifests/kube-proxy.yaml</code> but the results are the same.</p>
<p><code>kubectl get svc --all-namespaces</code> provides:</p>
<pre><code>NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ceph ceph-mon None <none> 6789/TCP 1h
default kubernetes 10.3.0.1 <none> 443/TCP 1h
kube-system heapster 10.3.0.2 <none> 80/TCP 1h
kube-system kube-dns 10.3.0.10 <none> 53/UDP,53/TCP 1h
kube-system kubernetes-dashboard 10.3.0.116 <none> 80/TCP 1h
kube-system monitoring-grafana 10.3.0.187 <none> 80/TCP 1h
kube-system monitoring-influxdb 10.3.0.214 <none> 8086/TCP 1h
nginx-ingress default-http-backend 10.3.0.233 <none> 80/TCP 1h
</code></pre>
<p><code>kubectl get cs</code> provides:</p>
<pre><code>NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
</code></pre>
<p>my <code>kube-proxy.yaml</code> has the following content:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
annotations:
rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: quay.io/coreos/hyperkube:v1.5.5_coreos.0
command:
- /hyperkube
- proxy
- --cluster-cidr=10.2.0.0/16
- --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: "ssl-certs"
- mountPath: /etc/kubernetes/controller-kubeconfig.yaml
name: "kubeconfig"
readOnly: true
- mountPath: /etc/kubernetes/ssl
name: "etc-kube-ssl"
readOnly: true
- mountPath: /var/run/dbus
name: dbus
readOnly: false
volumes:
- hostPath:
path: "/usr/share/ca-certificates"
name: "ssl-certs"
- hostPath:
path: "/etc/kubernetes/controller-kubeconfig.yaml"
name: "kubeconfig"
- hostPath:
path: "/etc/kubernetes/ssl"
name: "etc-kube-ssl"
- hostPath:
path: /var/run/dbus
name: dbus
</code></pre>
<p>this is all the valuable information I could find. any ideas? :)</p>
<h1>update 2</h1>
<p>output of <code>iptables-save</code> on the controller ContainerOS at <a href="http://pastebin.com/2GApCj0n" rel="noreferrer">http://pastebin.com/2GApCj0n</a></p>
<h1>update 3</h1>
<p>I ran curl on the controller node</p>
<pre><code># curl https://10.3.0.1 --insecure
Unauthorized
</code></pre>
<p>This means it can access it properly; I just didn't add enough parameters for the request to be authorized, right?</p>
<h1>update 4</h1>
<p>Thanks to @jaxxstorm, I removed the calico manifests, updated their quay/cni and quay/node versions, and reinstalled them.</p>
<p>Now kubedns keeps restarting, but I think calico works now, because for the first time it tries to install kubedns on the worker node and not on the controller node. Also, when I <code>rkt enter</code> the kubedns pod and try to <code>wget https://10.3.0.1</code> I get:</p>
<pre><code># wget https://10.3.0.1
Connecting to 10.3.0.1 (10.3.0.1:443)
wget: can't execute 'ssl_helper': No such file or directory
wget: error getting response: Connection reset by peer
</code></pre>
<p>which clearly shows that there is some kind of response, which is good, right?</p>
<p>Now <code>kubectl get pods --all-namespaces</code> shows:</p>
<pre><code>kube-system kube-dns-3675956729-ljz2w 4/4 Running 88 42m
</code></pre>
<p>So... 4/4 ready, but it keeps restarting.</p>
<p><code>kubectl describe pod kube-dns-3675956729-ljz2w --namespace=kube-system</code> output at <a href="http://pastebin.com/Z70U331G" rel="noreferrer">http://pastebin.com/Z70U331G</a></p>
<p>So it can't connect to <a href="http://10.2.47.19:8081/readiness" rel="noreferrer">http://10.2.47.19:8081/readiness</a>; I'm guessing this is the IP of kubedns since it uses port 8081. I don't know how to investigate this issue further.</p>
<p>Thanks for everything!</p>
| <p>Lots of great debugging info here, thanks!</p>
<p>This is the clincher:</p>
<pre><code># curl https://10.3.0.1 --insecure
Unauthorized
</code></pre>
<p>You got an unauthorized response because you didn't pass a client cert, but that's fine, it's not what we're after. This proves that kube-proxy is working as expected and is accessible. Your rkt logs:</p>
<pre><code>Mar 22 22:17:58 kube-dns-3675956729-sthcv kubedns[8]: E0322 22:17:58.619254 8 reflector.go:199] pkg/dns/dns.go:145: Failed to list *api.Endpoints: Get https://10.3.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.3.0.1:443: connect: network is unreachable
</code></pre>
<p>are indicating that there are network connectivity issues <em>inside</em> the containers, which suggests to me that you haven't configured container networking/CNI.</p>
<p>Please have a read through this document: <a href="https://coreos.com/rkt/docs/latest/networking/overview.html" rel="nofollow noreferrer">https://coreos.com/rkt/docs/latest/networking/overview.html</a></p>
<p>You may also have to reconfigure calico, there's some more information that here: <a href="http://docs.projectcalico.org/master/getting-started/rkt/" rel="nofollow noreferrer">http://docs.projectcalico.org/master/getting-started/rkt/</a></p>
|
<p>Can anyone suggest how can I setup NODE_ENV variable in dockerfile when running a node app.</p>
| <p>There are two ways: while building the image or when running the container.</p>
<p><strong>For builds:</strong></p>
<p>Add to your Dockerfile</p>
<pre><code>ENV NODE_ENV=whatEver
</code></pre>
<p>Or use build arguments if you don't want the value to stick around at runtime: declare an <code>ARG</code> in the Dockerfile and pass it with <code>docker build --build-arg NODE_ENV=whatEver .</code></p>
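<p>A minimal sketch of the build-argument approach (the default value, image tag and build command below are just illustrative):</p>
<pre><code># In the Dockerfile: declare the build argument and, if you also want it
# available at runtime, promote it to an ENV
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV

# At build time (shell):
# docker build --build-arg NODE_ENV=staging -t myapp .
</code></pre>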
<p><strong>When running:</strong></p>
<p>Run your container with "-e"</p>
<pre><code>docker run -e NODE_ENV=whatever mycontainer
</code></pre>
|
<p>I was trying to install newest Bluemix container service with Kubenetes plug-in; however, I got below message. Followed by <a href="https://console.ng.bluemix.net/docs/containers/cs_cli_install.html#cs_cli_install" rel="nofollow noreferrer">DOC - container service</a></p>
<pre><code>ocmbpro:~ ochen$ bx plugin install container-service -r Bluemix
Looking up 'container-service' from repository 'Bluemix'...
FAILED
'Bluemix' does not exist as an available plug-in repo. Check the name and try again.
</code></pre>
<p><a href="https://i.stack.imgur.com/5LK4o.png" rel="nofollow noreferrer">Error_msg</a></p>
<p>Has anyone met this issue ?</p>
| <p>You can download the plugin binary directly here:</p>
<p><a href="https://clis.ng.bluemix.net/ui/repository.html#bluemix-plugins" rel="nofollow noreferrer">https://clis.ng.bluemix.net/ui/repository.html#bluemix-plugins</a></p>
<p>Then run <code>bx plugin install ~/Downloads/my-plugin</code>.</p>
<hr>
<p>You could run this command to add <code>Bluemix</code> to your list of available repos:</p>
<pre><code>bx plugin repo-add Bluemix https://plugins.ng.bluemix.net
</code></pre>
<p>After that, when you list your repos, it should be there:</p>
<pre><code>$ bx plugin repos
Listing added plug-in repositories...
Repo Name URL
Bluemix https://plugins.ng.bluemix.net
</code></pre>
<p>Then you can list the plugins:</p>
<pre><code>$ bx plugin repo-plugins
Getting plug-ins from all repositories...
Repository: Bluemix
Name Description Versions
active-deploy Bluemix CLI plugin for Active Deploy to help you update applications and containers with no downtime. Works for Cloud Foundry apps and for IBM Containers. 0.1.97, 0.1.105
auto-scaling Bluemix CLI plug-in for Auto-Scaling service 0.2.1, 0.2.2
vpn Bluemix CLI plug-in for IBM VPN service 1.5.0, 1.5.1, 1.5.2
private-network-peering pnp-plugin-description 0.1.1
IBM-Containers Plugin to operate IBM Containers service 1.0.0
container-registry Plugin for IBM Bluemix Container Registry. 0.1.104, 0.1.107
container-service IBM Bluemix Container Service for management of Kubernetes clusters 0.1.217
sdk-gen Plugin to generate SDKs from Open API specifications 0.1.1
dev IBM Bluemix CLI Developer Plug-in 0.0.5
</code></pre>
<p>Then you can install one:</p>
<pre><code>bluemix plugin install plugin_name -r Bluemix
</code></pre>
|
<p>Is probe frequency customizable in liveness/readiness probe?</p>
<p>Also, how many times readiness probe fails before it removes the pod from service load-balancer? Is it customizable?</p>
| <p>You can easily customise the probe's failure threshold and frequency; all parameters are defined <a href="https://kubernetes.io/docs/api-reference/v1/definitions/#_v1_probe" rel="nofollow noreferrer">here</a>.
For example:</p>
<pre><code> livenessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 9081
scheme: HTTP
initialDelaySeconds: 180
timeoutSeconds: 10
periodSeconds: 10
successThreshold: 1
</code></pre>
<p>That probe will run for the first time after 3 minutes; it will then run every 10 seconds, and the pod will be restarted after 3 consecutive failures.</p>
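<p>For the readiness side of your question, the same parameters apply. A minimal sketch (the path and port here are assumptions), where the pod is removed from the service load balancer after 3 consecutive failed checks:</p>
<pre><code> readinessProbe:
   httpGet:
     path: /ready          # hypothetical endpoint
     port: 9081
   periodSeconds: 5        # probe frequency
   failureThreshold: 3     # removed from the service endpoints after 3 consecutive failures
   successThreshold: 1     # added back after 1 successful check
</code></pre>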
|
<p>Exposing a service with --type="LoadBalancer" in AWS currently creates a TCP-level AWS ELB with the default timeout of 60 sec. Is there a way to change that timeout other than manually looking up the load balancer and reconfiguring it using AWS tools? (I.e. the laborious kubectl describe service xyz | grep "LoadBalancer Ingress" -> use AWS API to lookup the load balancer with this URL and set its timeout) Or are the good alternatives to using this automatically created ELB?</p>
<p>The problem with the current situation is that (1) 1 min is too short for some of our services and (2) due to load-balancing on the TCP (and not HTTP) level, the client does not get an informative error when the timeout is reached (in the case of curl: "curl: (52) Empty reply from server")</p>
<p>Thank you!</p>
| <p>It's possible to set connection idle timeout for ELB in the recent Kubernetes versions (1.4 or later?) using an annotation on the service. For example:</p>
<pre><code>kubectl annotate service my-service service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout=1200
</code></pre>
<p>Also, you can change the load balancing protocol to HTTP with the below annotation.</p>
<pre><code>service.beta.kubernetes.io/aws-load-balancer-backend-protocol
</code></pre>
<p>See <a href="https://github.com/kubernetes/cloud-provider-aws/blob/54f2b5cd571a6d26f79c22f8b8b977ad5d9c8df7/pkg/providers/v1/aws.go#L144" rel="noreferrer">the AWS provider source</a> for more annotations for AWS ELB.</p>
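<p>For reference, a minimal sketch of both annotations set directly in the Service manifest (the service name, selector and ports are assumptions):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>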
|
<p>Is it possible to create a node-pool that the scheduler will ignore by default but that can be targeted by node-selector?</p>
| <p>If your node-pool has a static size or at least it's not auto-scaling then this is easy to accomplish.</p>
<p>First, <a href="https://kubernetes.io/docs/user-guide/kubectl/kubectl_taint/" rel="nofollow noreferrer">taint</a> the nodes in that pool:</p>
<pre><code>kubectl taint node \
`kubectl get node -l cloud.google.com/gke-nodepool=my-pool -o name` \
dedicated=my-pool:NoSchedule
</code></pre>
<h2>Kubernetes version >= 1.6</h2>
<p>Then add <code>affinity</code> and <code>tolerations</code> values under <code>spec:</code> in your Pod(templates) that need to be able to run on these nodes:</p>
<pre><code>spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: dedicated
operator: In
values: ["my-pool"]
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "my-pool"
    effect: "NoSchedule"
</code></pre>
<h2>Pre 1.6</h2>
<p>Then add these annotations to your Pod(templates) that need to be able to run on these nodes:</p>
<pre><code>annotations:
scheduler.alpha.kubernetes.io/tolerations: >
[{"key":"dedicated", "value":"my-pool"}]
scheduler.alpha.kubernetes.io/affinity: >
{
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "dedicated",
"operator": "In",
"values": ["my-pool"]
}
]
}
]
}
}
}
</code></pre>
<p>See the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/taint-toleration-dedicated.md" rel="nofollow noreferrer">design doc</a> for more information.</p>
<h2>Autoscaling group of nodes</h2>
<p>You need to add the <code>--register-with-taints</code> parameter to <a href="http://kubernetes.io/docs/admin/kubelet" rel="nofollow noreferrer"><code>kubelet</code></a>:</p>
<blockquote>
<p>Register the node with the given list of taints (comma separated <code><key>=<value>:<effect></code>). No-op if register-node is false.</p>
</blockquote>
<p>In another <a href="https://stackoverflow.com/questions/43032406/gke-cant-disable-transparent-huge-pages-permission-denied/43081893#43081893">answer</a> I gave some examples on how to persist that setting. GKE now also has specific support for <a href="https://cloud.google.com/container-engine/docs/node-taints" rel="nofollow noreferrer">tainting node pools</a></p>
|
<p>I have a k8s cluster, among other things running an nginx.
When I do <code>curl -v <url></code> I get: </p>
<pre><code> HTTP/1.1 200 OK
< Content-Type: text/html
< Date: Fri, 24 Mar 2017 15:25:27 GMT
< Server: nginx
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host <url> left intact
</code></pre>
<p>however when I do <code>curl -v <url> -H 'Accept: application/json'</code> I get</p>
<pre><code>< HTTP/1.1 200 OK
< Content-Type: text/html
< Date: Fri, 24 Mar 2017 15:26:10 GMT
< Server: nginx
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host <url> left intact
* Could not resolve host: application
* Closing connection 1
curl: (6) Could not resolve host: application
</code></pre>
<p>My task is to get the request to return JSON, not HTML.
To my understanding I have to create an ingress controller and modify the nginx.conf somehow; I've been trying for a few days now but can't get it right. Any kind of help would be much appreciated.</p>
<p>The following are of the yaml files I've been using:</p>
<p>configmap:</p>
<pre><code>apiVersion: v1
data:
server-tokens: "false"
proxy-body-size: "4110m"
server-name-hash-bucket-size: "128"
kind: ConfigMap
metadata:
name: nginx-load-balancer-conf
labels:
app: nginx-ingress-lb
</code></pre>
<p>daemonset:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ingress-lb
labels:
app: nginx-ingress-lb
spec:
template:
metadata:
labels:
name: nginx-ingress-lb
app: nginx-ingress-lb
spec:
terminationGracePeriodSeconds: 60
nodeSelector:
NodeType: worker
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.1
name: nginx-ingress-lb
imagePullPolicy: Always
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
# use downward API
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
</code></pre>
<p>deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: default-http-backend
labels:
app: default-http-backend
spec:
replicas: 2
template:
metadata:
labels:
app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissable as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.2
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 100m
memory: 20Mi
requests:
cpu: 100m
memory: 20Mi
</code></pre>
<p>service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: default-http-backend
labels:
app: default-http-backend
spec:
selector:
app: default-http-backend
ports:
- port: 80
targetPort: 8080
</code></pre>
| <p>There are two things:</p>
<ul>
<li>Exposing your app</li>
<li>Making your app return json</li>
</ul>
<p>The ingress is only relevant for exposing your app. And that is not the only option; you can use a service (type LoadBalancer, for example) to achieve that too on most cloud providers. So, I'd keep it simple and not use an ingress for now, until you solve the second problem.</p>
<p>As it has been explained, your curl has a syntax problem and that's why it shows <code>curl: (6) Could not resolve host: application</code>.</p>
<p>The other thing is that fixing that won't make your app return JSON. This is because with that header you are only saying that you accept JSON. If you want your app to return JSON, then you need to implement it in your app; nginx can't guess how you want to map your HTML to JSON. There isn't really any other way than writing it, at least as far as I know :-/</p>
|
<p>I have a v6.10.0 Node server on my macOS that is automatically started from the <code>CMD</code> in the <code>Dockerfile</code>. Normally in my local development un-containerized environment I will use CTRL+C to kill the server. Not being able to (or not knowing how to) do this in the container, I resort to <code>ps aux | grep node</code> to try to manually kill the processes. So, I get something like this:</p>
<pre><code>myapp [master] :> kubectl exec -it web-3127363242-xb50k bash
root@web-3127363242-xb50k:/usr/src/app# ps aux | grep node
root 15 0.4 0.9 883000 35804 ? Sl 05:49 0:00 node /usr/src/app/node_modules/.bin/concurrent --kill-others npm run start-prod npm run start-prod-api
root 43 0.1 0.6 743636 25240 ? Sl 05:49 0:00 node /usr/src/app/node_modules/.bin/better-npm-run start-prod
root 44 0.1 0.6 743636 25140 ? Sl 05:49 0:00 node /usr/src/app/node_modules/.bin/better-npm-run start-prod-api
root 55 0.0 0.0 4356 740 ? S 05:49 0:00 sh -c node ./bin/server.js
root 56 0.0 0.0 4356 820 ? S 05:49 0:00 sh -c node ./bin/api.js
root 57 18.6 4.9 1018088 189416 ? Sl 05:49 0:08 node ./bin/server.js
root 58 13.9 5.2 1343296 197576 ? Sl 05:49 0:06 node ./bin/api.js
root 77 0.0 0.0 11128 1024 ? S+ 05:50 0:00 grep node
</code></pre>
<p>When I try to kill one of them by</p>
<pre><code>kill -9 15
</code></pre>
<p>I am taken out of my container's shell and back to my computer's shell. When I enter the container again, I see that the process is still there with the same process id. This example uses a Kubernetes pod but I believe I have the same result with entering a Docker container using the <code>docker exec</code> command.</p>
| <p>Every docker container has an <strong>ENTRYPOINT</strong> that will either be set in the <em>dockerfile</em>, using <code>ENTRYPOINT</code> or <code>CMD</code> declarations, or specified in the run command <code>docker run myimage:tag "entrypoint_command"</code>. When the <strong>ENTRYPOINT</strong> process is killed, I think the container gets killed as well. The <code>docker exec</code>, as I understand it, is kind of like "attaching" a command to a container. But if the <strong>ENTRYPOINT</strong> is down, there is no container to attach to.</p>
<p>Kubernetes will restart a container after failure, as far as I understand it, which might be the reason you see the process back up. I haven't really worked with Kubernetes much, but I'd try playing around with the way the replicas are <a href="https://kubernetes.io/docs/user-guide/deployments/#scaling-a-deployment" rel="noreferrer">scaled</a> to terminate your process, as shown below.</p>
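<p>A minimal sketch of that approach, assuming the Deployment behind your pods is called <code>web</code>: scaling to zero terminates the pods (and their node processes) cleanly instead of having them recreated behind your back:</p>
<pre><code>kubectl scale deployment web --replicas=0   # stop all pods of the deployment
kubectl scale deployment web --replicas=1   # start a fresh one again
</code></pre>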
|
<p>I have a k8s cluster, among other things running an nginx.
When I do <code>curl -v <url></code> I get: </p>
<pre><code> HTTP/1.1 200 OK
< Content-Type: text/html
< Date: Fri, 24 Mar 2017 15:25:27 GMT
< Server: nginx
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host <url> left intact
</code></pre>
<p>however when I do <code>curl -v <url> -H 'Accept: application/json'</code> I get</p>
<pre><code>< HTTP/1.1 200 OK
< Content-Type: text/html
< Date: Fri, 24 Mar 2017 15:26:10 GMT
< Server: nginx
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host <url> left intact
* Could not resolve host: application
* Closing connection 1
curl: (6) Could not resolve host: application
</code></pre>
<p>My task is to get the request to return JSON, not HTML.
To my understanding I have to create an ingress controller and modify the nginx.conf somehow; I've been trying for a few days now but can't get it right. Any kind of help would be much appreciated.</p>
<p>The following are of the yaml files I've been using:</p>
<p>configmap:</p>
<pre><code>apiVersion: v1
data:
server-tokens: "false"
proxy-body-size: "4110m"
server-name-hash-bucket-size: "128"
kind: ConfigMap
metadata:
name: nginx-load-balancer-conf
labels:
app: nginx-ingress-lb
</code></pre>
<p>daemonset:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ingress-lb
labels:
app: nginx-ingress-lb
spec:
template:
metadata:
labels:
name: nginx-ingress-lb
app: nginx-ingress-lb
spec:
terminationGracePeriodSeconds: 60
nodeSelector:
NodeType: worker
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.1
name: nginx-ingress-lb
imagePullPolicy: Always
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
# use downward API
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
</code></pre>
<p>deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: default-http-backend
labels:
app: default-http-backend
spec:
replicas: 2
template:
metadata:
labels:
app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissable as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.2
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 100m
memory: 20Mi
requests:
cpu: 100m
memory: 20Mi
</code></pre>
<p>service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: default-http-backend
labels:
app: default-http-backend
spec:
selector:
app: default-http-backend
ports:
- port: 80
targetPort: 8080
</code></pre>
| <p>Remove the space after colon in <code>curl -v <url> -H 'Accept: application/json'</code></p>
<p>The error message <code>Could not resolve host: application</code> means that it's taking <code>application/json</code> as the URL, instead of a header.</p>
|
<p>I want to set up Kubernetes for CI runners on GitLab. I want to use Kubernetes for continuous integration and build running, not deployment. Has anybody tried that, and how can I do it?</p>
| <p>Yes, use the <a href="https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/executors/kubernetes.md" rel="nofollow noreferrer">kubernetes executor</a> for your gitlab runner</p>
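<p>For illustration, a minimal sketch of registering such a runner non-interactively (the URL and token are placeholders, and the exact flags depend on your runner version):</p>
<pre><code>gitlab-ci-multi-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/ci" \
  --registration-token "YOUR_TOKEN" \
  --executor kubernetes \
  --description "kubernetes runner"
</code></pre>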
|
<p>I'm using the <a href="https://github.com/kubernetes/kubernetes/blob/7ef585be224dae4ec5deae0f135653116e21a6e0/cluster/addons/registry/README.md#expose-the-registry-on-each-node" rel="nofollow noreferrer">private Docker registry addon</a> in my Kubernetes cluster, and I would like to expose port 5000 on each node so images can be pulled from localhost:5000 easily. So I placed a pod manifest file <code>/etc/kubernetes/manifests/kube-registry-proxy.manifest</code> on every node to start a local proxy for port 5000. It worked when I manually deployed Kubernetes on bare-metal Ubuntu a few months ago, but it fails when I try kargo: port 5000 is not listening. </p>
<p>I'm using kargo with calico network plugin, the docker registry's configurations are:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: kube-system-kube-registry-pv
labels:
kubernetes.io/cluster-service: "true"
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
hostPath:
path: /registry
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: kube-registry-pvc
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-registry-v0
namespace: kube-system
labels:
k8s-app: kube-registry
version: v0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-registry
version: v0
template:
metadata:
labels:
k8s-app: kube-registry
version: v0
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: registry
image: registry:2.5.1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
env:
- name: REGISTRY_HTTP_ADDR
value: :5000
- name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
value: /var/lib/registry
volumeMounts:
- name: image-store
mountPath: /var/lib/registry
ports:
- containerPort: 5000
name: registry
protocol: TCP
volumes:
- name: image-store
persistentVolumeClaim:
claimName: kube-registry-pvc
---
apiVersion: v1
kind: Service
metadata:
name: kube-registry
namespace: kube-system
labels:
k8s-app: kube-registry
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeRegistry"
spec:
selector:
k8s-app: kube-registry
ports:
- name: registry
port: 5000
protocol: TCP
</code></pre>
<p>I have created a pod manifest file <code>/etc/kubernetes/manifests/kube-registry-proxy.manifest</code> before running kargo:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-registry-proxy
namespace: kube-system
spec:
containers:
- name: kube-registry-proxy
image: gcr.io/google_containers/kube-registry-proxy:0.3
resources:
limits:
cpu: 100m
memory: 50Mi
env:
- name: REGISTRY_HOST
value: kube-registry.kube-system.svc.cluster.local
- name: REGISTRY_PORT
value: "5000"
- name: FORWARD_PORT
value: "5000"
ports:
- name: registry
containerPort: 5000
hostPort: 5000
</code></pre>
<p>kube-registry-proxy is running on all nodes, but nothing listens on port 5000. Some output:</p>
<pre><code>ubuntu@k8s15m1:~$ kubectl get all --all-namespaces | grep registry-proxy
kube-system po/kube-registry-proxy-k8s15m1 1/1 Running 1 1h
kube-system po/kube-registry-proxy-k8s15m2 1/1 Running 0 1h
kube-system po/kube-registry-proxy-k8s15s1 1/1 Running 0 1h
ubuntu@k8s15m1:~$ docker ps | grep registry
756fcf674288 gcr.io/google_containers/kube-registry-proxy:0.3 "/usr/bin/run_proxy" 19 minutes ago Up 19 minutes k8s_kube-registry-proxy.bebf6da1_kube-registry-proxy-k8s15m1_kube-system_a818b22dc7210ecd31414e328ae28e43_7221833c
ubuntu@k8s15m1:~$ docker logs 756fcf674288 | tail
waiting for kube-registry.kube-system.svc.cluster.local to come online
starting proxy
ubuntu@k8s15m1:~$ netstat -ltnp | grep 5000
ubuntu@k8s15m1:~$ curl -v localhost:5000/v1/
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 5000 failed: Connection refused
* Failed to connect to localhost port 5000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 5000: Connection refused
ubuntu@k8s15m1:~$ kubectl get po kube-registry-proxy-k8s15m1 --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-registry-proxy-k8s15m1 1/1 Running 3 1h 10.233.69.64 k8s15m1
ubuntu@k8s15m1:~$ curl -v 10.233.69.64:5000/v1/
* Trying 10.233.69.64...
* Connected to 10.233.69.64 (10.233.69.64) port 5000 (#0)
> GET /v1/ HTTP/1.1
> Host: 10.233.69.64:5000
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Tue, 14 Mar 2017 16:41:56 GMT
< Content-Length: 19
<
404 page not found
* Connection #0 to host 10.233.69.64 left intact
</code></pre>
| <p>I think there are a couple of things going on here.</p>
<p>Foremost, be aware that Kubernetes <code>Services</code> come in 3 flavors: <code>ClusterIP</code> (which is the default), <code>NodePort</code> (which sounds very much like what you were expecting to happen), and <code>LoadBalancer</code> (which I won't mention further, but the docs do).</p>
<p>I would expect that if you updated your <code>Service</code> to explicitly request <code>type: NodePort</code>, you'll get closer to what you had in mind (but be aware that unless you changed it, <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="nofollow noreferrer">NodePort ports are limited to 30000-32767</a>.</p>
<p>Thus:</p>
<pre><code>
apiVersion: v1
kind: Service
metadata:
name: kube-registry
namespace: kube-system
labels:
k8s-app: kube-registry
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeRegistry"
spec:
type: NodePort # <--- updated line
selector:
k8s-app: kube-registry
ports:
- name: registry
port: 5000
nodePort: 30500 # <-- you can specify, or omit this
protocol: TCP
</code></pre>
<p>If you have an opinion about the port you want the Service to listen on, feel free to specify it, or just leave it off and Kubernetes will pick one from the available space.</p>
<p>I'm going to mention this next thing for completeness, but what I'm about to say is a bad practice, so please don't do it. You can also have the <code>Pods</code> listen directly on the actual TCP/IP stack of the <code>Node</code>, by <a href="https://kubernetes.io/docs/api-reference/v1.5/#containerport-v1" rel="nofollow noreferrer">specifying <code>hostPort</code></a>; so in your case, it would be <code>hostPort: 5000</code> right below <code>containerPort: 5000</code>, causing the <code>Pod</code> to behave like a normal <code>docker -p 5000:5000</code> command would. But doing that makes scheduling <code>Pods</code> a nightmare, so please don't.</p>
<p>Secondly, about your 404 from <code>curl</code>:</p>
<p>I'm going to assume, based on the output of your curl command that <code>10.233.69.x</code> is your Service CIDR, which explains why port 5000 responded with anything. The request was in the right spirit, but <code>/v1/</code> was an incorrect URI to attempt. The <a href="https://docs.docker.com/registry/spec/api/" rel="nofollow noreferrer">Docker Registry API docs</a> contains a section about <a href="https://docs.docker.com/registry/spec/api/#api-version-check" rel="nofollow noreferrer">checking it is a V2 API instance</a>. My favorite <code>curl</code> of a registry is <code>https://registry.example.com/v2/_catalog</code> because it will return the name of every image therein, ensuring that my credentials are correct, that the registry server is operating correctly, and so on.</p>
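<p>Putting the two together, a quick sanity check once the Service is of type NodePort (the node IP is a placeholder, and 30500 is the example port from above) might look like:</p>
<pre><code>curl http://<node-ip>:30500/v2/_catalog
</code></pre>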
<p>I know it's a lot to take in, so if you feel I glossed over something, let me know and I'll try to address it. Good luck!</p>
|
<p>I found that there is an object PodTemplate registered in apiserver which we can use as other kube objects.</p>
<p>From the struct definition, PodTemplate contains PodTemplateSpec just as other workload objects, such as rc. But I do not find any usage of PodTemplate object from either doc or code. And rc/ps/ss does not have any reference to PodTemplate actually.</p>
<p>So I want to understand:</p>
<ul>
<li>What's the intention to expose PodTemplate object from apiserver?</li>
<li>Is this object a legacy one which will be retired in the future?</li>
</ul>
| <ul>
<li><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/templates.md" rel="nofollow noreferrer">design proposal</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/170" rel="nofollow noreferrer">Separate the pod template from replicationController #170</a> (170! Being discussed for 3 years now!)</li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/42789" rel="nofollow noreferrer">should PodTemplatesGetter be part of public API #42789</a></li>
</ul>
<p>The official statement:</p>
<blockquote>
<p>The objects exist and can be used. We don't really have a position on it today.</p>
</blockquote>
<p>It is hoped that eventually StatefulSets and DaemonSets will have history, like Deployments, and that will make use of this object.</p>
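<p>For completeness, a minimal sketch of what such an object looks like (the names and image are arbitrary); you can list them with <code>kubectl get podtemplates</code>:</p>
<pre><code>apiVersion: v1
kind: PodTemplate
metadata:
  name: example-template
template:
  metadata:
    labels:
      app: example
  spec:
    containers:
    - name: example
      image: nginx
</code></pre>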
|
<p>I am trying to evaluate <a href="https://gravitational.com/teleport/" rel="nofollow noreferrer">Gravitational Teleport</a> to be for maintaining our own architecture as we will expand soon looking into building clusters. It seems a containerized cluster using Kubernetes is where the bleeding edge is happening and SSH'ing into instances seems more like something you would want to do if you want to manage a lot of VPS yourselves rather than having an immutable infrastructure.</p>
<p>So is Teleport a parallell technology to Container / Kubernetes / Immutable Infrastructure or is it orthogonal, as in, can be used in addition?</p>
| <p>Teleport can be used to augment a Kubernetes cluster. Kubernetes gives you access into your pods/container, but does not handle the OS level access and control.</p>
<p>Teleport enterprise has extra features for Kubernetes via TeleKube- <a href="http://gravitational.com/teleport/docs/2.0/enterprise/" rel="nofollow noreferrer">http://gravitational.com/teleport/docs/2.0/enterprise/</a></p>
|
<p>We have a kubernetes service running on three machines. Clients both inside and outside of our cluster talk to this service over http with the keep-alive option enabled. During a deploy of the service, the exiting pods have a readiness check that starts to fail when shutdown starts, and are removed from the service endpoints list appropriately, however they still receive traffic and some requests fail as the container will abruptly exit. We believe this is because of the keep-alive which allows the the client to re-use these connections that were established when the host was Ready. Is there a series of steps one should follow to make sure we don't run into these issues? We'd like to allow keep-alive connections if at all possible.</p>
| <p>The issue happens if the proxying/load balancing happens in layer 4 instead of layer 7. For the internal services (Kubernetes service of type ClusterIP), since the Kube-proxy does the proxying using layer 4 proxying, the clients will keep the connection even after the pod isn't ready to serve anymore. Similarly, for the services of type LoadBalancer, if the backend type is set to TCP (which is by default with AWS ELB), the same issue happens. Please see <a href="https://github.com/kubernetes/kubernetes/issues/38456" rel="noreferrer">this</a> issue for more details.</p>
<p>The solution to this problem as of now is:</p>
<ul>
<li>If you are using a cloud LoadBalancer, go ahead and set the backend to HTTP. For example, you can add the <code>service.beta.kubernetes.io/aws-load-balancer-backend-protocol</code> annotation to the Kubernetes service and set it to HTTP so that the ELB uses HTTP proxying instead of TCP (see the sketch after this list).</li>
<li>Use a layer 7 proxy/ingress controller within the cluster to route the traffic instead of sending it via <code>kube-proxy</code></li>
</ul>
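<p>A minimal sketch of the first option, assuming your Service is named <code>my-service</code>:</p>
<pre><code>kubectl annotate service my-service \
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol=http
</code></pre>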
|
<p>I am using the ELK stack (elasticsearch, logsash, kibana) for log processing and analysis in a Kubernetes (minikube) environment. To capture logs I am using filebeat. Logs are propagated successfully from filebeat through to elasticsearch and are viewable in Kibana. </p>
<p>My problem is that I am unable to get the pod name of the actual pod issuing log records. Rather I only get the filebeat podname which is gathering log files and not name of the pod that is originating log records.</p>
<p>The information I can get from filebeat are (as viewed in Kibana)</p>
<ul>
<li>beat.hostname: the value of this field is the filebeat pod name</li>
<li>beat.name: value is the filebeat pod name</li>
<li>host: value is the filebeat pod name</li>
</ul>
<p>I can also see/discern container information in Kibana which flow through from filebeat / logstash / elasticsearch:</p>
<ul>
<li>app: value is {log-container-id}-json.log</li>
<li>source: value is /hostfs/var/lib/docker/containers/{log-container-id}-json.log</li>
</ul>
<p>As shown above, I seem to be able to get the container Id but not the pod name.</p>
<p>To mitigate the situation, I could probably embed the pod-name in the actual log message and parse it from there, but I am hoping there is a solution in which I can configure filebeat to emit actual pod names.</p>
<p>Does anyone now how to configure filebeat (or other components) to capture kubernetes (minikube) pod names in their logs?</p>
<p>My current filebeat configuration is listed below:</p>
<p>ConfigMap is shown below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat
namespace: logging
labels:
component: filebeat
data:
filebeat.yml: |
filebeat.prospectors:
- input_type: log
tags:
- host
paths:
- "/hostfs/var/log"
- "/hostfs/var/log/*"
- "/hostfs/var/log/*/*"
exclude_files:
- '\.[0-9]$'
- '\.[0-9]\.gz$'
- input_type: log
tags:
- docker
paths:
- /hostfs/var/lib/docker/containers/*/*-json.log
json:
keys_under_root: false
message_key: log
add_error_key: true
multiline:
pattern: '^[[:space:]]+|^Caused by:'
negate: false
match: after
output.logstash:
hosts: ["logstash:5044"]
logging.level: info
</code></pre>
<p>DamemonSet is shown below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: logging
spec:
template:
metadata:
labels:
component: filebeat
spec:
containers:
- name: filebeat
image: giantswarm/filebeat:5.2.2
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 100m
requests:
cpu: 100m
volumeMounts:
- name: config
mountPath: /etc/filebeat
readOnly: true
- name: hostfs-var-lib-docker-containers
mountPath: /hostfs/var/lib/docker/containers
readOnly: true
- name: hostfs-var-log
mountPath: /hostfs/var/log
readOnly: true
volumes:
- name: config
configMap:
name: filebeat
- name: hostfs-var-log
hostPath:
path: /var/log
- name: hostfs-var-lib-docker-containers
hostPath:
path: /var/lib/docker/containers
</code></pre>
| <p>disclaimer: I'm a beats developer</p>
<p>What you want to do is not yet supported by filebeat, but definitely, it's something we want to put some effort on, so you can expect future releases supporting this kind of mapping.</p>
<p>In the meantime, I think your approach is correct. You can append the info you need to your logs so you have it in elasticsearch</p>
|
<p>I am struggling to find any answer to this in the Kubernetes documentation. The scenario is the following:</p>
<ul>
<li>Kubernetes version 1.4 over AWS</li>
<li>8 pods running a NodeJS API (Express) deployed as a Kubernetes Deployment</li>
<li>One of the pods gets restarted for no apparent reason late at night (no traffic, no CPU spikes, no memory pressure, no alerts...). Number of restarts is increased as a result of this.</li>
<li>Logs don't show anything abnormal (ran <code>kubectl -p</code> to see previous logs, no errors at all in there)</li>
<li>Resource consumption is normal, cannot see any events about Kubernetes rescheduling the pod into another node or similar</li>
<li>Describing the pod gives back <code>TERMINATED</code> state, giving back <code>COMPLETED</code> reason and exit code 0. I don't have the exact output from <code>kubectl</code> as this pod has been replaced multiple times now.</li>
</ul>
<p>The pods are NodeJS server instances; they cannot <em>complete</em>, as they are always running, waiting for requests.</p>
<p>Would this be internal Kubernetes rearranging of pods? Is there any way to know when this happens? Shouldn't be an event somewhere saying why it happened?</p>
<p><strong>Update</strong></p>
<p>This just happened in our prod environment. The result of describing the offending pod is:</p>
<pre><code>
api:
Container ID: docker://7a117ed92fe36a3d2f904a882eb72c79d7ce66efa1162774ab9f0bcd39558f31
Image: 1.0.5-RC1
Image ID: docker://sha256:XXXX
Ports: 9080/TCP, 9443/TCP
State: Running
Started: Mon, 27 Mar 2017 12:30:05 +0100
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 24 Mar 2017 13:32:14 +0000
Finished: Mon, 27 Mar 2017 12:29:58 +0100
Ready: True
Restart Count: 1
</code></pre>
<p><strong>Update 2</strong></p>
<p>Here it is the <code>deployment.yaml</code> file used:</p>
<pre><code>apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
namespace: "${ENV}"
name: "${APP}${CANARY}"
labels:
component: "${APP}${CANARY}"
spec:
replicas: ${PODS}
minReadySeconds: 30
revisionHistoryLimit: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
component: "${APP}${CANARY}"
spec:
serviceAccount: "${APP}"
${IMAGE_PULL_SECRETS}
containers:
- name: "${APP}${CANARY}"
securityContext:
capabilities:
add:
- IPC_LOCK
image: "134078050561.dkr.ecr.eu-west-1.amazonaws.com/${APP}:${TAG}"
env:
- name: "KUBERNETES_CA_CERTIFICATE_FILE"
value: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
- name: "NAMESPACE"
valueFrom:
fieldRef:
fieldPath: "metadata.namespace"
- name: "ENV"
value: "${ENV}"
- name: "PORT"
value: "${INTERNAL_PORT}"
- name: "CACHE_POLICY"
value: "all"
- name: "SERVICE_ORIGIN"
value: "${SERVICE_ORIGIN}"
- name: "DEBUG"
value: "http,controllers:recommend"
- name: "APPDYNAMICS"
value: "true"
- name: "VERSION"
value: "${TAG}"
ports:
- name: "http"
containerPort: ${HTTP_INTERNAL_PORT}
protocol: "TCP"
- name: "https"
containerPort: ${HTTPS_INTERNAL_PORT}
protocol: "TCP"
</code></pre>
<p>The Dockerfile of the image referenced in the above Deployment manifest:</p>
<pre><code>FROM ubuntu:14.04
ENV NVM_VERSION v0.31.1
ENV NODE_VERSION v6.2.0
ENV NVM_DIR /home/app/nvm
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/v$NODE_VERSION/bin:$PATH
ENV APP_HOME /home/app
RUN useradd -c "App User" -d $APP_HOME -m app
RUN apt-get update; apt-get install -y curl
USER app
# Install nvm with node and npm
RUN touch $HOME/.bashrc; curl https://raw.githubusercontent.com/creationix/nvm/${NVM_VERSION}/install.sh | bash \
&& /bin/bash -c 'source $NVM_DIR/nvm.sh; nvm install $NODE_VERSION'
ENV NODE_PATH $NVM_DIR/versions/node/$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/$NODE_VERSION/bin:$PATH
# Create app directory
WORKDIR /home/app
COPY . /home/app
# Install app dependencies
RUN npm install
EXPOSE 9080 9443
CMD [ "npm", "start" ]
</code></pre>
<p><code>npm start</code> is an alias for a regular <code>node app.js</code> command that starts a NodeJS server on port 9080.</p>
| <p>Check the version of docker you run, and whether the docker daemon was restarted during that time.</p>
<p>If the docker daemon was restarted, all the containers would be terminated (unless you use the new "live restore" feature in 1.12). In <em>some</em> docker versions, docker may incorrectly report "exit code 0" for all containers terminated in this situation. See <a href="https://github.com/docker/docker/issues/31262" rel="nofollow noreferrer">https://github.com/docker/docker/issues/31262</a> for more details. </p>
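<p>A few ways to check this, assuming a systemd-based host; the timestamps should tell you whether the daemon restarted around the time of the pod restart:</p>
<pre><code>docker version --format '{{.Server.Version}}'             # engine version in use
systemctl show docker --property=ActiveEnterTimestamp     # when the daemon last (re)started
journalctl -u docker --since "2017-03-24" | grep -i start # restart events around the incident
</code></pre>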
|
<p>I hope someone can enlighten me on Kubernetes network setup. I want to use flannel as the Kubernetes network layer.</p>
<p>I'm on Kubernetes: 1.5.5 btw.</p>
<p>The problem is, there are so many places you can (should?) define a CIDR that I can't see the wood for the trees anymore.</p>
<p>To give you some info about my setup:</p>
<ul>
<li>my hosts are deployed in the 10.9.0.0/24 range.</li>
<li>I want to use 10.254.0.0/16 as the flannel range</li>
<li>at the moment, docker on the worker nodes uses the 172.17.0.0/16 range</li>
</ul>
<p>kube-apiserver has the following options for cidr's:</p>
<pre><code>--service-cluster-ip-range
</code></pre>
<p>kube-controller-manager has the following options for cidr's:</p>
<pre><code>--cluster-cidr
--service-cluster-ip-range
</code></pre>
<p>what's the difference between those two?</p>
<p>kube-proxy has this one:</p>
<pre><code>--cluster-cidr={{ kubernetes_cluster_cidr }}
</code></pre>
<p>what ip range goes where exactly?</p>
| <p>Actually, you have 2 different network layers there:</p>
<ul>
<li>Cluster-cidr: the pods layer (where you want to use 10.254.0.0/16):
Once you have your flannel network up and running, you will have to configure Docker to use it (via systemd drop-in or with something like:
<code>echo "DOCKER_OPTS=\"--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\"" >> /etc/default/docker</code>). This way all the "docker0" interfaces in your cluster will be connected within the flannel network.</li>
<li>Service-cluster-ip-range: the <a href="https://kubernetes.io/docs/user-guide/services/" rel="nofollow noreferrer">service</a> layer. Services are used to abstract a logical set of pods. As long as you don't know where a pod is going to be located, or the IP that is going to be assigned to it... You need some kind of abstraction to reach a pod/set of pods, wherever they are.</li>
</ul>
<blockquote>
<p>A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector).
As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling.</p>
</blockquote>
<p>NOTE: The service layer must not overlap with your cluster pods network (flannel) or any other existing network infrastructure.</p>
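<p>As a sketch of how the ranges from the question map onto the flags (the 10.3.0.0/24 service range below is an arbitrary example; pick anything that does not overlap 10.9.0.0/24 or 10.254.0.0/16):</p>
<pre><code># pod / flannel layer
kube-controller-manager --cluster-cidr=10.254.0.0/16 --service-cluster-ip-range=10.3.0.0/24
kube-proxy              --cluster-cidr=10.254.0.0/16
# service layer
kube-apiserver          --service-cluster-ip-range=10.3.0.0/24
</code></pre>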
|
<p>I'm running a three node cluster on GCE. I want to drain one node and delete the underlying VM.</p>
<p>Documentation for kubectl <code>drain</code> command says:</p>
<pre><code>Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node)
</code></p>
<p>I execute the following commands:</p>
<ol>
<li><p>Get the nodes</p>
<pre><code>$ kl get nodes
NAME STATUS AGE
gke-jcluster-default-pool-9cc4e660-6q21 Ready 43m
gke-jcluster-default-pool-9cc4e660-rx9p Ready 6m
gke-jcluster-default-pool-9cc4e660-xr4z Ready 23h
</code></pre></li>
<li><p>Drain node <code>rx9p</code>.</p>
<pre><code>$ kl drain gke-jcluster-default-pool-9cc4e660-rx9p --force
node "gke-jcluster-default-pool-9cc4e660-rx9p" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: fluentd-cloud-logging-gke-jcluster-default-pool-9cc4e660-rx9p, kube-proxy-gke-jcluster-default-pool-9cc4e660-rx9p
node "gke-jcluster-default-pool-9cc4e660-rx9p" drained
</code></pre></li>
<li><p>Delete gcloud VM.</p>
<pre><code> $ gcloud compute instances delete gke-jcluster-default-pool-9cc4e660-rx9p
</code></pre></li>
<li><p>List VMs.</p>
<pre><code> $ gcloud compute instances list
</code></pre>
<p>In the result, I'm seeing the VM I deleted above - <code>rx9p</code>. If I do <code>kubectl get nodes</code>, I'm seeing the rx9p node too.</p></li>
</ol>
<p>What's going on? Something is restarting the VM I'm deleting? Do I have to wait for some timeout between the commands?</p>
| <p>You are on the right track with draining the node first.</p>
<p>The nodes (compute instances) are part of a <a href="https://cloud.google.com/compute/docs/instance-groups/#managed_instance_groups" rel="noreferrer">managed instance group</a>. If you delete just them with the <code>gcloud compute instances delete</code> command the managed instance group will recreate them.</p>
<p>To delete one properly use this command (after you have drained it!):</p>
<pre><code>gcloud compute instance-groups managed delete-instances \
gke-jcluster-default-pool-9cc4e660-grp \
--instances=gke-jcluster-default-pool-9cc4e660-rx9p \
--zone=...
</code></pre>
|
<p>I am running a kubernetes cluster in my centos machine.
I do not want to create a pod for MySQL. MySQL is installed on another machine in the same network (the machine is not in the Kubernetes private network).</p>
<p>How can I access the MySQL service from the pods running in the Kubernetes cluster?</p>
<p>I have tried with a service and an endpoint with the below configuration, but no luck.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: database
spec:
ports:
- port: 13080
targetPort: 13080
protocol: TCP
---
kind: Deployment
apiVersion: v1
metadata:
name: database
subsets:
- addresses:
- ip: XX.XX.XX.XX
ports:
- port: 13080
---
kind: ReplicationController
metadata:
name: test
spec:
replicas: 1
selector:
app: test
template:
metadata:
name: test
labels:
app: test
spec:
containers:
- name: my_pods
image: my_pods
env:
- name: DATABASE_HOST
value: database
- name: DATABASE_PORT
value: "13080"
- name: DATABASE_USER
value: "SAAS"
- name: DATABASE_PASSWORD
value: "SAAS"
- name: DATABASE_NAME
value: "SAASDB"
ports:
- containerPort: 8080
imagePullSecrets:
- name: my-secret
---
apiVersion: v1
kind: Service
metadata:
name: test-service
labels:
name: test-service
spec:
type: NodePort
ports:
- port: 11544
targetPort: 8080
nodePort: 30600
selector:
name: test
</code></pre>
| <p>You don't need a service for things outside the cluster. Depending on the networking model you're using, the docker container (i.e. Kubernetes pod) should be able to connect to the MySQL server normally via the bridge that Docker sets up. Check that the host has connectivity on port 3306, and if it does, simply put in the DNS name (your kube-dns pod should forward any non-Kubernetes requests on to the resolv.conf of the host it was scheduled on).</p>
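<p>A quick connectivity check from inside the cluster might look like this (the pod name and MySQL host IP are placeholders, and it assumes the image contains <code>nc</code>):</p>
<pre><code>kubectl exec -it test-pod -- nc -vz <mysql-host-ip> 3306
</code></pre>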
|
<p>I am trying to build an application which in essence consists of two parts. </p>
<ol>
<li>Django based api</li>
<li>SQLite database.</li>
</ol>
<p>Api interacts with the SQLite database, but has only read only access. However, the SQLite database needs to be updated every N minutes. So my plan was to make two docker container. First one, for the api. The second one, for the script which is executed every N minutes using CRON (Ubuntu) to update the database. </p>
<p>I am using Kubernetes to serve my applications. So I was wondering if there is a way to achieve what I want here? </p>
<p>I've researched about Persistent Volumes in Kubernetes, but still do not see how I can make it work.</p>
<p>EDIT:</p>
<p>So I have figured that I can use one pod two share two containers on Kubernetes and this way make use of the emptyDir. My question is then, how do I define the path to this directory in my python files?</p>
<p>Thanks,
Lukas </p>
| <p>Take into account that <code>emptyDir</code> is erased every time the pod is stopped/killed (you do a deployment, a node crash, etc.). See the doc for more details: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a></p>
<p>Taking that into account, if that solves your problem, then you just need to put the <code>mountPath</code> to the directory you want, as in the link above shows the example.</p>
<p>Also note that the whole directory will be empty, so if you have other things there they won't be visible if you set up an <code>emptyDir</code> (just typical unix mount semantics, nothing k8s specific here)</p>
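<p>A minimal sketch of one pod sharing an <code>emptyDir</code> between two containers (the names, images and mount path are assumptions); in the Python code of either container you would simply open the file under the <code>mountPath</code>, e.g. <code>/data/db.sqlite3</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: api-with-updater
spec:
  volumes:
  - name: sqlite-data
    emptyDir: {}
  containers:
  - name: api
    image: my-django-api          # hypothetical image
    volumeMounts:
    - name: sqlite-data
      mountPath: /data            # read /data/db.sqlite3 from Django
  - name: updater
    image: my-cron-updater        # hypothetical image
    volumeMounts:
    - name: sqlite-data
      mountPath: /data            # the cron script rewrites /data/db.sqlite3
</code></pre>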
|
<p>My current development environment allows for automatic code reload whenever changing a file (i.e <code>nodemon</code> / <code>webpack</code>). However I am setting up a <code>kubernetes</code> (<code>minikube</code>) environment so that I can quickly open 3-4 related services at once. </p>
<p>Everything is working fine, but it is not currently doing the automatic code reload. I tried mounting the volume but there is some conflict with the way <code>docker</code> and <code>virtualbox</code> handles files such that the conflict leads to <a href="https://github.com/docker/docker/issues/15793" rel="noreferrer">file changes from the host not reflected in docker container</a>. (That's not the first link I have that appears related to this problem, it's just the first I found while googling it on another day)...</p>
<p>Anyway, long story short, people have trouble getting live reload done in development. I've found the problem littered throughout the interwebs with very few solutions. The best solution I have found so far is <a href="https://github.com/kubernetes/kubernetes/issues/13776#issuecomment-196325337" rel="noreferrer">this person using tar from the host</a> to sync folders. </p>
<p>However I would like a solution from the container. The reason is that I want to run the script from the container so that the developer doesn't have to run some script on his host computer every time he starts developing in a particular repo.</p>
<p>In order to do this, however, I need to run rsync from the container to the host machine, and I'm having a surprising amount of trouble figuring out how to write the syntax for that. </p>
<p>Let's pretend my app exists in my container and host respectively as: </p>
<pre><code>/workspace/app # location in container
/Users/terence/workspace/app # location in host computer
</code></pre>
<p>How do I rsync from the container to the host? I've tried using the <code>172.17.0.17</code> and <code>127.0.0.1</code> to no avail. Not entirely sure if there is a way to do it?</p>
<p>examples I tried:</p>
<pre><code>rsync -av 172.17.0.17:Users/terence/workspace/app /workspace/app
rsync -av 127.0.0.1:Users/terence/workspace/app /workspace/app
</code></pre>
| <p>If you're running the rsync from the host (not inside the container), you could use <code>docker cp</code> instead:</p>
<p>e.g., <code>docker cp containerName:/workspace/app Users/terence/workspace/app</code></p>
<p>Could you clarify:<br>
1. are you running the rsync from the host or from inside the container?<br>
If it's from inside the container it'll depend a lot on the <code>--network</code> the container is attached to (i.e., bridged or host) and also on the mounted volumes (i.e., when you started up the container, did you use the -v flag?)</p>
<p><strong>Update</strong>: For rsync to work from within the container you need to expose the host's dir to the container.</p>
<p>As you think of a solution, keep this in mind: <a href="https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume" rel="noreferrer">host dir as a data volume</a></p>
<blockquote>
<p><strong>Note</strong>: The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile, the VOLUME instruction does not support passing a host-dir, because built images should be portable. A host directory wouldn’t be available on all potential hosts.</p>
</blockquote>
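<p>A sketch of that idea with plain Docker (the image name is hypothetical, and with minikube the host directory must already be shared into the VM): mount the host path when the container starts, and the "sync" then becomes a local rsync inside the container:</p>
<pre><code>docker run -v /Users/terence/workspace/app:/host-app my-image
# inside the container:
rsync -av /host-app/ /workspace/app/
</code></pre>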
|
<p>I have created two docker-registry secrets using these steps:</p>
<pre><code>$ kubectl create secret docker-registry my-secret --docker-username=burak --docker-password=deniz --docker-email=burak@...
$ kubectl create secret docker-registry secret --docker-username=burak --docker-password=deniz --docker-email=burak@...
</code></pre>
<p>After these steps I tried to get the deployments using:</p>
<pre><code>$ kubectl get deployment
</code></pre>
<p>and I faced this error:</p>
<blockquote>
<p>Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")</p>
</blockquote>
<p>And when I tried to delete or get these docker-registry secrets, I faced the same error.</p>
<p>How can I delete these docker-registry secrets? Can you help me?</p>
<p>Thank you for your answers.</p>
| <p>I recommend rebuilding your cluster and taking care to not disrupt the TLS tokens that you get for <code>kubectl</code> to function correctly.</p>
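<p>Since the error mentions <code>minikubeCA</code>, a sketch of rebuilding a minikube cluster from scratch (this destroys all cluster state, including the stale certificates):</p>
<pre><code>minikube delete
rm -rf ~/.minikube        # clears the old certificates and keys
minikube start
kubectl get deployment    # should work again with the fresh credentials
</code></pre>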
|
<p>I have a 6-minion cluster and would like to know how many of these minions are actually hosting pods at any given time. Is there a specific command for that? Right now I'm using a very generic command.</p>
<pre><code>kubectl get po | grep Running > RUNNING.txt
for i in `cat RUNNING.txt`; do kubectl describe po $i; done | grep "Started container with docker"
</code></pre>
<p>Any direct command to fetch the info I want ? </p>
| <p>Just add <code>-o wide</code>:</p>
<pre><code>kubectl get pod -o wide
</code></pre>
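<p>If you want the actual count of distinct nodes that currently host at least one pod, one possible one-liner (it counts nodes with pods in any phase) is:</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.nodeName}' \
  | tr ' ' '\n' | grep -v '^$' | sort -u | wc -l
</code></pre>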
|
<p>Is there a way to automatically remove completed Jobs besides making a CronJob to clean up completed Jobs?</p>
<p>The <a href="http://kubernetes.io/docs/user-guide/jobs/" rel="noreferrer">K8s Job Documentation</a> states that the intended behavior of completed Jobs is for them to remain in a completed state until manually deleted. Because I am running thousands of Jobs a day via CronJobs and I don't want to keep completed Jobs around.</p>
| <p>You can now set history limits, or disable history altogether, so that failed or successful CronJobs are not kept around indefinitely. See my answer <a href="https://stackoverflow.com/a/43115763/379037">here</a>. Documentation is <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer">here</a>.</p>
<p>To set the <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="noreferrer">history limits</a>:</p>
<blockquote>
<p>The <code>.spec.successfulJobsHistoryLimit</code> and <code>.spec.failedJobsHistoryLimit</code> fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to <code>0</code> corresponds to keeping none of the corresponding kind of jobs after they finish.</p>
</blockquote>
<p>The config with 0 limits would look like:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
|
<p>When a ConfigMap is updated, how can the application be triggered to automatically reload its parameters? The application uses POSIX signals for that.</p>
| <p>Depending on how you are consuming the configmap values, there could be two ways in which you can reload the configmap updates into a running pod.</p>
<ul>
<li><p>If you are consuming the configs as environment variables, you can write a controller, which watches for the updates in configs and restarts your pods with new config whenever the config changes.</p></li>
<li><p>If you are consuming the configmap via volumes, you can watch for file changes, notify your process in the container, and handle the update in the application. Please see <a href="https://github.com/jimmidyson/configmap-reload" rel="nofollow noreferrer">https://github.com/jimmidyson/configmap-reload</a> for an example; a minimal sketch follows this list.</p></li>
</ul>
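<p>Since the application already reacts to POSIX signals, a very rough watcher sketch for the volume case could look like this; it assumes the ConfigMap is mounted at <code>/etc/config</code> and that you can reach the application's PID (e.g. via a shared process namespace), both of which are assumptions rather than details from the question:</p>
<pre><code># checksum the mounted ConfigMap and send SIGHUP to the app when it changes
old_sum=""
while true; do
  new_sum=$(find /etc/config -type f -exec md5sum {} + | sort | md5sum)
  if [ -n "$old_sum" ] && [ "$new_sum" != "$old_sum" ]; then
    kill -HUP "$APP_PID"   # APP_PID is a placeholder for however you locate the process
  fi
  old_sum=$new_sum
  sleep 10
done
</code></pre>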
|
<p>I have started recently getting familiar with Kubernetes, however while I do get the concept I have some questions I am unable to answer clearly through Kubernete's <a href="https://kubernetes.io/docs/concepts/" rel="noreferrer">Concept</a> and <a href="https://kubernetes.io/docs/home/" rel="noreferrer">Documentation</a>, and some understandings that I'd wish to confirm.</p>
<ul>
<li><p>A <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-intro/" rel="noreferrer">Deployment</a> is a group of one or more container images (Docker, etc.) that is deployed within a <a href="https://kubernetes.io/docs/user-guide/pods/" rel="noreferrer">Pod</a>, and through the Kubernetes Deployment Controller such deployments are monitored and created, updated, or deleted.</p></li>
<li><p>A <a href="https://kubernetes.io/docs/user-guide/pods/" rel="noreferrer">Pod</a> is a group of one or more containers, are those containers from the same Deployment, or can they be from multiple deployments?</p></li>
<li><p>"A pod models contains one or more application containers which are relatively tightly coupled". Is there any clear criteria on when to deploy containers within the same pod, rather than separate pods?</p></li>
<li><p>"Pods are the smallest deployable units of computing that can be created and managed in Kubernetes" - <a href="https://kubernetes.io/docs/user-guide/pods/" rel="noreferrer">Pods, Kuberenets Documentation</a>. Is that to mean that Kubernetes API is unable to monitor, and manage containers (at least directly)?</p></li>
</ul>
<p>Appreciate your input.</p>
| <p>Your question is actually too broad for Stack Overflow, but I'll quickly answer before this one is closed.</p>
<p>Maybe it gets clearer when you look at the API documentation, which you could read like this:</p>
<p><em>A <a href="https://kubernetes.io/docs/api-reference/v1.6/#deployment-v1beta1-extensions" rel="nofollow noreferrer">Deployment</a> describes a specification of the desired behavior for the contained objects.
This is done within the <code>spec</code> field which is of type <a href="https://kubernetes.io/docs/api-reference/v1.6/#deploymentspec-v1beta1-extensions" rel="nofollow noreferrer">DeploymentSpec</a>.</em></p>
<p><em>A <a href="https://kubernetes.io/docs/api-reference/v1.6/#deploymentspec-v1beta1-extensions" rel="nofollow noreferrer">DeploymentSpec</a> defines how the related Pods should look like with a <code>template</code>through the <a href="https://kubernetes.io/docs/api-reference/v1.6/#podtemplatespec-v1-core" rel="nofollow noreferrer">PodTemplateSpec</a></em></p>
<p><em>The <a href="https://kubernetes.io/docs/api-reference/v1.6/#podtemplatespec-v1-core" rel="nofollow noreferrer">PodTemplateSpec</a> then holds the <a href="https://kubernetes.io/docs/api-reference/v1.6/#podspec-v1-core" rel="nofollow noreferrer">PodSpec</a> with all the required parameters, which in turn defines what the containers within this Pod should look like through a <a href="https://kubernetes.io/docs/api-reference/v1.6/#container-v1-core" rel="nofollow noreferrer">Container</a> definition.</em></p>
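<p>To make the nesting concrete, here is a minimal, purely illustrative Deployment manifest mapped onto those types (names and image are placeholders, and the <code>apiVersion</code> may differ depending on your cluster version):</p>
<pre><code>apiVersion: apps/v1        # may be extensions/v1beta1 on older clusters
kind: Deployment
metadata:
  name: example
spec:                      # DeploymentSpec
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:                # PodTemplateSpec
    metadata:
      labels:
        app: example
    spec:                  # PodSpec
      containers:          # Container definitions
      - name: example
        image: nginx
</code></pre>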
<p>This is not a punchy one-line statement, but maybe it makes it easier to see how things relate to each other.</p>
<p>Regarding the criteria for what's a good size and what's too big for a Pod or a Container: this is very much a matter of opinion, and the best way to figure it out is to read through the opinions on the size of <a href="https://www.martinfowler.com/microservices/#how" rel="nofollow noreferrer">Microservices</a>.</p>
<p>To cover your last point - Kubernetes is able to monitor and manage containers, but the "user" is not able to schedule single containers. They have to be embedded in a Pod definition. You can of course access container status and details per container, e.g. through <code>kubectl logs <pod> -c <container></code> (<a href="https://kubernetes.io/docs/user-guide/kubectl/v1.6/#logs" rel="nofollow noreferrer">details</a>) or through the <a href="http://blog.kubernetes.io/2015/05/resource-usage-monitoring-kubernetes.html" rel="nofollow noreferrer">metrics</a> API.</p>
<p>I hope this helps a bit and doesn't add to the confusion.</p>
|
<p>I've been trying to enabled token auth for HTTP REST API Server access from a remote client.</p>
<p>I installed my CoreOS/K8S cluster controller using this script: <a href="https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh" rel="noreferrer">https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh</a></p>
<p>My cluster works fine. This is a TLS installation so I need to configure any kubectl clients with the client certs to access the cluster.</p>
<p>I then tried to enable token auth via running:</p>
<pre><code> echo `dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null`
</code></pre>
<p>This gives me a token. I then added it to a token file on my controller, containing the token and a default user:</p>
<pre><code>$> cat /etc/kubernetes/token
3XQ8W6IAourkXOLH2yfpbGFXftbH0vn,default,default
</code></pre>
<p>I then modified the /etc/kubernetes/manifests/kube-apiserver.yaml to add in:</p>
<pre><code> - --token-auth-file=/etc/kubernetes/token
</code></pre>
<p>to the startup param list</p>
<p>I then reboot (not sure of the best way to restart the API Server by itself?).</p>
<p>At this point, kubectl from a remote server quits working (won't connect). I then look at <code>docker ps</code> on the controller and see the API server. I run <code>docker logs container_id</code> and get no output. If I look at other docker containers I see output like:</p>
<pre><code> E0327 20:05:46.657679 1 reflector.go:188]
pkg/proxy/config/api.go:33: Failed to list *api.Endpoints:
Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0:
dial tcp 127.0.0.1:8080: getsockopt: connection refused
</code></pre>
<p>So it appears that my api-server.yaml config is preventing the API Server from starting properly...</p>
<p>Any suggestions on the proper way to configure API Server for bearer token REST auth?</p>
<p>It is possible to have both TLS configuration and Bearer Token Auth configured, right?</p>
<p>Thanks!</p>
| <p>I think your kube-apiserver dies because it can't find <code>/etc/kubernetes/token</code>. That's because on your deployment the apiserver is a static pod, therefore running in a container, which in turn means it has a different root filesystem than that of the host.</p>
<p>Look into <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> and add a <code>volume</code> and a <code>volumeMount</code> like this (I have omitted the lines that do not need changing and don't help in locating the correct section):</p>
<pre><code>kind: Pod
metadata:
name: kube-apiserver
spec:
containers:
- name: kube-apiserver
command:
- ...
- --token-auth-file=/etc/kubernetes/token
volumeMounts:
- mountPath: /etc/kubernetes/token
name: token-kubernetes
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/token
name: token-kubernetes
</code></pre>
<p>One more note: the file you quoted as <code>token</code> should <strong>not</strong> end in <code>.</code> (dot) - maybe that was only a copy-paste mistake but check it anyway. The format is documented under <a href="https://kubernetes.io/docs/admin/authentication/#static-token-file" rel="noreferrer">static token file</a>:</p>
<blockquote>
<p>token,user,uid,"group1,group2,group3"</p>
</blockquote>
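<p>And yes, serving TLS and accepting bearer tokens can be configured at the same time. Once the apiserver is back up, you can quickly verify the token from a remote machine with something along these lines (the master address is a placeholder, and <code>--insecure</code> merely skips server-certificate verification for this one-off test):</p>
<pre><code>TOKEN=3XQ8W6IAourkXOLH2yfpbGFXftbH0vn   # first field of the line in /etc/kubernetes/token
curl --insecure -H "Authorization: Bearer $TOKEN" https://YOUR_MASTER_IP:443/api/v1/namespaces
</code></pre>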
<p>If your problem persists, execute the command below and post the output:</p>
<pre><code>journalctl -u kubelet | grep kube-apiserver
</code></pre>
|
<p>I use the Jenkins Kubernetes plugin to build a job, but when I start a build I have to wait about 15s before the slave is online. Why does this happen?</p>
| <p>You need to be aware that when you use the Kubernetes Plugin, your Jenkins slave is created on demand when you build a job. Supposing you are using jnlp-slave as your Jenkins slave image, 15s is the time needed for k8s to schedule the pod and to start up the jnlp slave jar.</p>
<p>What you can do to optimize the time is use the option <code>Time in minutes to retain slave when idle</code> in the Kubernetes Plugin configuration, which keeps the pod running for a set amount of time so that the next builds can reuse that slave.</p>
|
<p>I would like to get shell access to the Persistent Volume I created on the Google Cloud Platform. </p>
<p>I tried using Google Cloud Shell for this. But to be able to do that I need to attach the Persistent Volume through the gcloud commands, and the command requires the instance name. But I don't see the instance name of the Google Cloud Shell when I list the instance names (in gcloud). </p>
<p>Is it possible to get shell access over Google Cloud Shell to the persistent disks? If not how can I get access to the Persistent Volume that I created?</p>
| <p>Yes, all disks need to be attached to an instance to allow access to them - you will need to create a compute instance and attach the persistent disk to it with <code>gcloud compute instances attach-disk [INSTANCE_NAME] --disk [DISK_NAME]</code>.</p>
<p>Once you create the new instance the instance name will become visible to you for usage by running <code>gcloud compute instances list</code></p>
<p>You will then be able to access the disk by ssh'ing into the instance and mounting it.</p>
<p>The following will help with mounting:</p>
<p><a href="https://cloud.google.com/compute/docs/disks/add-persistent-disk" rel="noreferrer">https://cloud.google.com/compute/docs/disks/add-persistent-disk</a></p>
|