<p>This describes how one would install the agent on a regular gce instance:
<a href="https://cloud.google.com/monitoring/agent/install-agent" rel="nofollow noreferrer">https://cloud.google.com/monitoring/agent/install-agent</a></p>
<p>Previously the cluster ran on Debian OS nodes and we'd have the agent running to monitor CPU, disk space etc. Now that it's upgraded to Kubernetes 1.4 and running on Container-Optimized OS (<a href="https://cloud.google.com/container-optimized-os/docs/" rel="nofollow noreferrer">https://cloud.google.com/container-optimized-os/docs/</a>), the agent can't be installed manually. </p>
<p>I realise pods are monitored automatically, but that's only part of the picture.</p>
<p>I feel like I'm missing something here, as it'd be a big backward step for this not to be possible.</p>
| <p>I've run into the same thing several times. You have to switch back to the <code>container-vm</code> image type in order to install the Stackdriver agent.</p>
<p><code>gcloud container clusters upgrade --image-type=container_vm [CLUSTER_NAME]</code> </p>
<p>That should flip it back. You can install the agent once the images flip. We're running 1.4.7 on the <code>container-vm</code> image and haven't seen any issues. It seems like extra overhead, but it's not an actual step back, if that helps.</p>
|
<p>I'm new to OpenShift/k8s. The Docker image I'm running in OpenShift uses USER blabla, but when I exec into the pod, it uses a different user than the one in the Dockerfile.</p>
<p>I'm wondering why, and how can I work around this?</p>
<p>Thanks</p>
| <p>For security, cluster administrators have the option to force containers to run with cluster assigned uids. By default, most containers run using a uid from a range assigned to the project. </p>
<p>This is controlled by the configured <a href="https://docs.openshift.org/latest/admin_guide/manage_scc.html" rel="nofollow noreferrer">SecurityContextConstraints</a>. </p>
<p>To allow containers to run as the user declared in their Dockerfile (even though this can expose the cluster, security-wise), allow the pod's service account access to the anyuid SecurityContextConstraint (<code>oadm policy add-scc-to-user anyuid system:serviceaccount:<your ns>:<your service account></code>).</p>
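<p>For example, assuming the pod runs under the default service account in a project named <code>myproject</code> (both names here are placeholders for your own), the command would look like:</p>
<pre><code>oadm policy add-scc-to-user anyuid system:serviceaccount:myproject:default
</code></pre>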
|
<p>Some info:</p>
<ul>
<li>Kubernetes (1.5.1)</li>
<li>AWS</li>
<li>1 master and 1 node (both ubuntu 16.04)</li>
<li>k8s installed via kubeadm</li>
<li>Terraform made by me</li>
</ul>
<p>Please don't reply with "use kube-up, kops or similar". This is about understanding how k8s works under the hood. There is by far too much unexplained magic in the system and I want to understand it.</p>
<p><strong>Question:</strong></p>
<p>When creating a Service of type load balancer on k8s[aws] (for example):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-addon: kubernetes-dashboard.addons.k8s.io
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    facing: external
spec:
  type: LoadBalancer
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
</code></pre>
<p>I successfully create an internal or external facing ELB but none of the machines are added to the ELB (I can taint the master too but nothing changes). My problem is basically this:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/29298#issuecomment-260659722" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/29298#issuecomment-260659722</a></p>
<p>The subnets and nodes (but not the VPC) are all tagged with "KubernetesCluster" (and again, the ELBs are created in the right place). However, no nodes are added.</p>
<p>In the logs </p>
<pre><code>kubectl logs kube-controller-manager-ip-x-x-x-x -n kube-system
</code></pre>
<p>after:</p>
<pre><code>aws_loadbalancer.go:63] Creating load balancer for
kube-system/kubernetes-dashboard with name:
acd8acca0c7a111e69ca306f22de69ae
</code></pre>
<p>There is no other output (it should print the nodes added or removed). I tried to understand the code at: </p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws_loadbalancer.go" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws_loadbalancer.go</a>
but whatever the reason is, this function does not add the nodes.</p>
<p>The documentation doesn't go to any length to explain the "process" behind k8s decisions. To try to understand k8s I tried/used kops, kube-up, kubeadm, the kubernetes-the-hard-way repo and reading the damn code, but I am still unable to understand how k8s on AWS SELECTS the nodes to add to the ELB.</p>
<p>As a consequence, no security group is changed anywhere either.</p>
<p>Is it a tag on the EC2 instance?
A kubelet setting?
Anything else?</p>
<p>Any help is greatly appreciated.</p>
<p>Thanks,
F.</p>
| <p>I think Steve is on the right track. Make sure your kubelets, apiserver, and controller-manager components all include <code>--cloud-provider=aws</code> in their arguments lists.</p>
<p>You mention your subnets and instances all have matching <code>KubernetesCluster</code> tags. Do your controller & worker security groups? K8s will modify the worker SG in particular to allow traffic to/from the service ELBs it creates. I tag my VPC as well, though I guess it's not required and may prohibit another cluster from living in the same VPC.</p>
<p>I also tag my private subnets with <code>kubernetes.io/role/internal-elb=true</code> and public ones with <code>kubernetes.io/role/elb=true</code> to identify where internal and public ELBs can be created.</p>
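<p>As a rough illustration (the subnet ID and cluster name below are placeholders for your own), tagging a private subnet for internal ELBs with the AWS CLI might look like:</p>
<pre><code>aws ec2 create-tags --resources subnet-0123abcd \
  --tags Key=KubernetesCluster,Value=mycluster Key=kubernetes.io/role/internal-elb,Value=true
</code></pre>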
<p>The full list (AFAIK) of tags and annotations lives in <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws.go</a></p>
|
<p>So I built my Kubernetes cluster on AWS using <a href="https://github.com/kubernetes/kops/blob/master/docs/aws.md" rel="nofollow noreferrer">KOPS</a></p>
<p>I then deployed SocketCluster on my K8s cluster using <a href="https://docs.baasil.io/deploying_your_app_to_your_kubernetes_cluster.html" rel="nofollow noreferrer">Baasil</a> which deploys 7 <a href="https://github.com/SocketCluster/socketcluster/tree/master/kubernetes" rel="nofollow noreferrer">YAML files</a></p>
<p>My problem is: the <a href="https://github.com/SocketCluster/socketcluster/blob/master/kubernetes/scc-ingress.yaml" rel="nofollow noreferrer">scc-ingress</a> isn't getting any IP or endpoint as I have not deployed any <a href="http://kubernetes.io/docs/user-guide/ingress/#ingress-controllers" rel="nofollow noreferrer">ingress controller</a>.</p>
<p>According to <a href="http://kubernetes.io/docs/user-guide/ingress/#ingress-controllers" rel="nofollow noreferrer">ingress controller</a> docs, I am recommended to deploy an <a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/README.md" rel="nofollow noreferrer">nginx ingress controller</a></p>
<p>I need easy and explained steps to deploy the nginx ingress controller for my specific cluster. </p>
<p>To view the current status of my cluster in a nice GUI, see the screenshots below:</p>
<p><strong>Deployments</strong></p>
<p><a href="https://i.stack.imgur.com/JP6Uc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JP6Uc.png" alt="Deployments"></a></p>
<p><strong>Ingress</strong></p>
<p><a href="https://i.stack.imgur.com/f7Jin.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f7Jin.png" alt="Ingress"></a></p>
<p><strong>Pods</strong></p>
<p><a href="https://i.stack.imgur.com/j9W3t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j9W3t.png" alt="Pods"></a></p>
<p><strong>Replica Sets</strong></p>
<p><a href="https://i.stack.imgur.com/EcM8I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EcM8I.png" alt="Replica Sets"></a></p>
<p><strong>Services</strong></p>
<p><a href="https://i.stack.imgur.com/5OY7V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5OY7V.png" alt="Services"></a></p>
| <p>The answer is here <a href="https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx</a></p>
<p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.4.0.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.4.0.yaml</a></p>
<p>But obviously the scc-ingress file needed to be changed to have a host such as foo.bar.com</p>
<p>Also, need to generate a self-signed SSL using OpenSSL as per this link <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls</a></p>
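<p>As a minimal sketch (the hostname and secret name below are placeholders), generating the self-signed certificate and storing it as a Kubernetes TLS secret could look like:</p>
<pre><code>openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"
kubectl create secret tls foo-bar-tls --key tls.key --cert tls.crt
</code></pre>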
<p>Finally, I had to add a CNAME on Route53 from foo.bar.com to the DNS name of the ELB that was created.</p>
|
<p>I am trying to learn docker and kubernetes and one of the things I am trying to do is setup Redis with Sentinel and expose redis to things outside the container. </p>
<p>Getting redis and sentinel setup was pretty easy following
<a href="https://github.com/kubernetes/kubernetes/tree/master/examples/storage/redis" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/storage/redis</a></p>
<p>But now my next desire is to be able to access redis from outside the container, and I can't figure out how to expose the sentinel and the master pod.</p>
| <p>The redis sentinel service file from your link (<a href="https://github.com/kubernetes/kubernetes/blob/master/examples/storage/redis/redis-sentinel-service.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/examples/storage/redis/redis-sentinel-service.yaml</a>) will expose the pods within the cluster. For external access (from outside your cluster) you can use a <strong>NodePort</strong>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    name: sentinel
    role: service
  name: redis-sentinel
spec:
  type: NodePort
  ports:
    - port: 26379
      targetPort: 26379
      nodePort: 30369
  selector:
    redis-sentinel: "true"
</code></pre>
<p>This would expose the port 30369 on all your hosts from the outside world to the redis sentinel service.</p>
<p>Several remarks on this:</p>
<ul>
<li>Firewall: security in Redis is limited, so prevent unwanted access before opening the port.</li>
<li>The <strong>nodePort</strong> range that can be assigned is 30000 to 32767, so be creative within this limitation.</li>
</ul>
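<p>Once applied, the sentinel should be reachable from outside the cluster at any node's IP on that port, e.g. (the node IP is a placeholder):</p>
<pre><code>redis-cli -h <node-ip> -p 30369 ping
</code></pre>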
|
<p>Very simple question: where is the <code>emptyDir</code> located in my minikube VM? Since the <code>emptyDir</code> volume is pod dependent, it should exist somewhere on the VM, otherwise it would die when a container exits. When I do <code>minikube ssh</code> I cannot locate the volume. I need to inspect it and see if my containers are behaving how I want them to, copying some files to the volume mounted on them. Trying <code>find / -type d -name cached</code> results in many <code>permission denied</code>s and the volume is not in the rest of the dirs. My <code>YAML</code> has the following part:</p>
<pre><code>...
volumes:
  - name: cached
    emptyDir: {}
</code></pre>
<p>and also commands in a container where the container copies some files to the volume:</p>
<pre><code>containers:
  - name: plum
    image: plumsempy/plum
    command: ["/bin/sh", "-c"]
    args: ["mkdir /plum/cached"]
    volumeMounts:
      - mountPath: /plum/cached
        name: cahced
    command: ["bin/sh/", "-c"]
    args: ["cp /plum/prune/cert.crt /plume/cached/"]
</code></pre>
<p>The container naturally exits after doing its job.</p>
| <p>A better way to see if your containers are behaving is to log into the container using <code>kubectl exec</code>. </p>
<p>That said: the location of <strong>emptyDir</strong> should be <strong>/var/lib/kubelet/pods/{podid}/volumes/kubernetes.io~empty-dir/</strong> on the given node where your pod is running.</p>
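<p>For instance (the pod name is a placeholder, and the mount path is the one from your YAML), a quick way to check the volume contents could be:</p>
<pre><code># look inside the running container
kubectl exec -it <pod-name> -c plum -- ls /plum/cached

# find the pod UID, then inspect the volume on the node via minikube ssh
kubectl get pod <pod-name> -o jsonpath='{.metadata.uid}'
ls /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/cached
</code></pre>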
|
<p>I'd like to deploy kubernetes on a large physical server (24 cores) and I'm uncertain as to a number of things.</p>
<p>What are the pros and cons of creating virtual machines for the k8s cluster rather than running on bare metal?</p>
<p>I have the following considerations:</p>
<ul>
<li>Creating vms will allow for work load isolation. New vms for experiments can be created and assigned to devs.</li>
<li>On the other hand, with k8s running on bare metal a new NAMESPACE can be created for each developer for experimentation and they can run their code in it. After all their code should be running in docker containers.</li>
</ul>
<p><strong>Security:</strong></p>
<ul>
<li>Having vms would limit the amount of access given to future maintainers, limiting the amount of damage that could be done. While on the other hand the primary task for any future maintainers would be adding/deleting nodes and they would require bare metal access to do that.</li>
</ul>
<p><strong>Authentication:</strong></p>
<ul>
<li>At the moment devs would only touch the server when their code runs through the CI pipeline and their running deployments are deployed. But what about viewing logs? Could we setup tiered kubectl authentication to allow devs to only access whatever namespaces have been assigned to them (I believe this should be possible with the k8s namespace authorization plugin).</li>
</ul>
<p>A number of vms already exist on the server. Would this be an issue?</p>
| <p>I would separate dev and prod in the form of different VMs. I once had a webapp inside Docker which used too many threads, so the Docker daemon on the host crashed. Luckily it was limited to one host. You can protect against this by setting limits, but it's a risk: one mistake in dev could bring down prod as well. </p>
|
<p>On my local I run a <code>mysql</code> container and then ping it from another container on the same network:</p>
<pre><code>$ docker run -d tutum/mysql
$ docker run -it plumsempy/plum bash
# ping MYSQL_CONTAINER_ID
PING 67e35427d638 (198.105.244.24): 56 data bytes
64 bytes from 198.105.244.24: icmp_seq=0 ttl=37 time=0.243 ms
...
</code></pre>
<p>That is good. Then, using Kubernetes(minikube) locally, I deploy tutum/mysql using the following <code>YAML</code>:</p>
<pre><code>...
- name: mysql
image: tutum/mysql
...
</code></pre>
<p>There is nothing else for the <code>mysql</code> container. Then I deploy it, ssh into the minikube VM, spin up a random container and try pinging the mysql container inside minikube this time:</p>
<pre><code>$ kubectl create -f k8s-deployment.yml
$ minikube ssh
$ docker ps
$ docker run -it plumsempy/plum bash
# ping MYSQL_CONTAINER_ID_INSIDE_MINIKUBE
PING mysql (198.105.244.24): 56 data bytes
^C--- mysql ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
# traceroute MYSQL_CONTAINER_ID_INSIDE_MINIKUBE
traceroute to aa7f7ed7af01 (198.105.244.24), 30 hops max, 60 byte packets
1 172.17.0.1 (172.17.0.1) 0.031 ms 0.009 ms 0.007 ms
2 10.0.2.2 (10.0.2.2) 0.156 ms 0.086 ms 0.050 ms
3 * * *
4 * * *
5 dtr02gldlca-tge-0-2-0-1.gldl.ca.charter.com (96.34.102.201) 16.153 ms 16.107 ms 16.077 ms
6 crr01lnbhca-bue-200.lnbh.ca.charter.com (96.34.98.188) 18.753 ms 18.011 ms 30.642 ms
7 crr01mtpkca-bue-201.mtpk.ca.charter.com (96.34.96.63) 30.779 ms 30.523 ms 30.428 ms
8 bbr01mtpkca-bue-2.mtpk.ca.charter.com (96.34.2.24) 24.089 ms 23.900 ms 23.814 ms
9 bbr01ashbva-tge-0-1-0-1.ashb.va.charter.com (96.34.3.139) 26.061 ms 25.949 ms 36.002 ms
10 10ge9-10.core1.lax1.he.net (65.19.189.177) 34.027 ms 34.436 ms 33.857 ms
11 100ge12-1.core1.ash1.he.net (184.105.80.201) 107.873 ms 107.750 ms 104.078 ms
12 100ge3-1.core1.nyc4.he.net (184.105.223.166) 100.554 ms 100.478 ms 100.393 ms
13 xerocole-inc.10gigabitethernet12-4.core1.nyc4.he.net (216.66.41.242) 109.184 ms 111.122 ms 111.018 ms
14 * * *
15 * * *
...(til it ends)
</code></pre>
<p>The <code>plumsempy/plum</code> container can be any container; since they are both on the same network and in the same pod, the pinging should go through. The question is <strong>why can I not reach <code>mysql</code> on minikube, and how could I fix that?</strong></p>
| <p>From <a href="http://kubernetes.io/docs/user-guide/pods/multi-container/" rel="nofollow noreferrer">k8s multi-container pod docs</a>:</p>
<blockquote>
<p>Pods share fate, and share some resources, such as storage volumes and IP addresses.</p>
</blockquote>
<p>Hence the <code>mysql</code> container is reachable from the <code>plum</code> container at the IP address <code>127.0.0.1</code>.</p>
<p>Also, since <code>mysql</code> runs on port 3306 by default, you probably want <code>telnet 127.0.0.1 3306</code> to check if it's reachable (<code>ping</code> uses ICMP which doesn't have the concept of ports).</p>
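<p>For context, a minimal sketch of a pod where both containers live together (the image names reuse the ones from the question; everything else is illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: plum-with-mysql
spec:
  containers:
  - name: mysql
    image: tutum/mysql
  - name: plum
    image: plumsempy/plum
    # from this container, mysql is reachable at 127.0.0.1:3306
</code></pre>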
|
<p>I have a simple cluster set up with just a single master for the time being running CoreOS. I have the kubelet running using the kubelet-wrapper script from CoreOS and I'm running weave for the pod network.</p>
<p>The API server, controller manager, and scheduler are all running using systemd units properly with host networking.</p>
<p>My problem is that the pods can't communicate with each other, with service IPs, or with internet IPs. There appears to be a network interface, a route and a default gateway, but I always get "no route to host".</p>
<pre><code>$ kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1" --overrides='{"spec": {"template": {"metadata": {"annotations": {"scheduler.alpha.kubernetes.io/tolerations": "[{\"key\":\"dedicated\",\"value\":\"master\",\"effect\":\"NoSchedule\"}]"}}}}}'
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.32.0.1 0.0.0.0 UG 0 0 0 eth0
10.32.0.0 0.0.0.0 255.240.0.0 U 0 0 0 eth0
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1410 qdisc noqueue
link/ether 62:d0:49:a6:f9:59 brd ff:ff:ff:ff:ff:ff
inet 10.32.0.7/12 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::60d0:49ff:fea6:f959/64 scope link
valid_lft forever preferred_lft forever
/ # ping 10.32.0.6 -c 5
PING 10.32.0.6 (10.32.0.6): 56 data bytes
--- 10.32.0.6 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
/ # wget http://10.32.0.6:80/
Connecting to 10.32.0.6:80 (10.32.0.6:80)
wget: can't connect to remote host (10.32.0.6): No route to host
</code></pre>
<p>Internet IPs also don't work:</p>
<pre><code>/ # ping -c 5 172.217.24.132
PING 172.217.24.132 (172.217.24.132): 56 data bytes
--- 172.217.24.132 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
/ # wget http://172.217.24.132/
Connecting to 172.217.24.132 (172.217.24.132:80)
wget: can't connect to remote host (172.217.24.132): No route to host
</code></pre>
<p>My kubelet unit is as follows:</p>
<pre><code>[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --hostname-override=192.168.86.50"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.3.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_EXTRA_ARGS=--v=4"
Environment=KUBELET_VERSION=v1.4.6_coreos.0
Environment="RKT_OPTS=--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
--volume cni-conf,kind=host,source=/etc/cni \
--mount volume=cni-conf,target=/etc/cni \
--volume cni-bin,kind=host,source=/opt/cni \
--mount volume=cni-bin,target=/opt/cni"
ExecStart=/usr/lib/coreos/kubelet-wrapper \
$KUBELET_KUBECONFIG_ARGS \
$KUBELET_SYSTEM_PODS_ARGS \
$KUBELET_NETWORK_ARGS \
$KUBELET_DNS_ARGS \
$KUBELET_EXTRA_ARGS
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
</code></pre>
<p>I have the weave DaemonSet running in the cluster.</p>
<pre><code>$ kubectl --kubeconfig=ansible/roles/kubernetes-master/admin-user/files/kubeconfig -n kube-system get daemonset
NAME DESIRED CURRENT NODE-SELECTOR AGE
kube-proxy-amd64 1 1 <none> 22h
weave-net 1 1 <none> 22h
</code></pre>
<p>Weave logs look like this:</p>
<pre><code>$ kubectl -n kube-system logs weave-net-me1lz weave
INFO: 2016/12/19 02:19:56.125264 Command line options: map[docker-api: http-addr:127.0.0.1:6784 ipalloc-init:consensus=1 nickname:ia-master1 status-addr:0.0.0.0:6782 datapath:datapath ipalloc-range:10.32.0.0/12 name:52:b1:20:55:0c:fc no-dns:true port:6783]
INFO: 2016/12/19 02:19:56.213194 Communication between peers is unencrypted.
INFO: 2016/12/19 02:19:56.237440 Our name is 52:b1:20:55:0c:fc(ia-master1)
INFO: 2016/12/19 02:19:56.238232 Launch detected - using supplied peer list: [192.168.86.50]
INFO: 2016/12/19 02:19:56.258050 [allocator 52:b1:20:55:0c:fc] Initialising with persisted data
INFO: 2016/12/19 02:19:56.258412 Sniffing traffic on datapath (via ODP)
INFO: 2016/12/19 02:19:56.293898 ->[192.168.86.50:6783] attempting connection
INFO: 2016/12/19 02:19:56.311408 Discovered local MAC 52:b1:20:55:0c:fc
INFO: 2016/12/19 02:19:56.314972 ->[192.168.86.50:47921] connection accepted
INFO: 2016/12/19 02:19:56.370597 ->[192.168.86.50:47921|52:b1:20:55:0c:fc(ia-master1)]: connection shutting down due to error: cannot connect to ourself
INFO: 2016/12/19 02:19:56.381759 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2016/12/19 02:19:56.391405 ->[192.168.86.50:6783|52:b1:20:55:0c:fc(ia-master1)]: connection shutting down due to error: cannot connect to ourself
INFO: 2016/12/19 02:19:56.423633 Listening for metrics requests on 0.0.0.0:6782
INFO: 2016/12/19 02:19:56.990760 Error checking version: Get https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&flag_docker-version=none&flag_kernel-version=4.7.3-coreos-r3&os=linux&signature=1Pty%2FGagYcrEs2TwKnz6IVegmP23z5ifqrP1D9vCzyM%3D&version=1.8.2: x509: failed to load system roots and no roots provided
10.32.0.1
INFO: 2016/12/19 02:19:57.490053 Discovered local MAC 3a:5c:04:54:80:7c
INFO: 2016/12/19 02:19:57.591131 Discovered local MAC c6:1c:f5:43:f0:91
INFO: 2016/12/19 02:34:56.242774 Expired MAC c6:1c:f5:43:f0:91 at 52:b1:20:55:0c:fc(ia-master1)
INFO: 2016/12/19 03:46:29.865157 ->[192.168.86.200:49276] connection accepted
INFO: 2016/12/19 03:46:29.866767 ->[192.168.86.200:49276] connection shutting down due to error during handshake: remote protocol header not recognised: [71 69 84 32 47]
INFO: 2016/12/19 03:46:34.704116 ->[192.168.86.200:49278] connection accepted
INFO: 2016/12/19 03:46:34.754782 ->[192.168.86.200:49278] connection shutting down due to error during handshake: remote protocol header not recognised: [22 3 1 0 242]
</code></pre>
<p>The weave CNI plugin binaries seem to be created properly.</p>
<pre><code>core@ia-master1 ~ $ ls /opt/cni/bin/
bridge cnitool dhcp flannel host-local ipvlan loopback macvlan ptp tuning weave-ipam weave-net weave-plugin-1.8.2
core@ia-master1 ~ $ ls /etc/cni/net.d/
10-weave.conf
core@ia-master1 ~ $ cat /etc/cni/net.d/10-weave.conf
{
"name": "weave",
"type": "weave-net"
}
</code></pre>
<p>Iptables looks like this:</p>
<pre><code>core@ia-master1 ~ $ sudo iptables-save
# Generated by iptables-save v1.4.21 on Mon Dec 19 04:15:14 2016
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [2:120]
:POSTROUTING ACCEPT [2:120]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-AN54BNMS4EGIFEJM - [0:0]
:KUBE-SEP-BQM5WFNH2M6QPJV6 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-NWV5X2332I4OT4T3 - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-AN54BNMS4EGIFEJM -s 192.168.86.50/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-AN54BNMS4EGIFEJM -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-AN54BNMS4EGIFEJM --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.86.50:443
-A KUBE-SEP-BQM5WFNH2M6QPJV6 -s 10.32.0.6/32 -m comment --comment "default/hostnames:" -j KUBE-MARK-MASQ
-A KUBE-SEP-BQM5WFNH2M6QPJV6 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.32.0.6:9376
-A KUBE-SERVICES -d 10.3.0.137/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SERVICES -d 10.3.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.3.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.3.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 180 --reap --name KUBE-SEP-AN54BNMS4EGIFEJM --mask 255.255.255.255 --rsource -j KUBE-SEP-AN54BNMS4EGIFEJM
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-AN54BNMS4EGIFEJM
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-BQM5WFNH2M6QPJV6
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Mon Dec 19 04:15:14 2016
# Generated by iptables-save v1.4.21 on Mon Dec 19 04:15:14 2016
*filter
:INPUT ACCEPT [73:57513]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [72:61109]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -d 172.17.0.1/32 -i docker0 -p tcp -m tcp --dport 6783 -j DROP
-A INPUT -d 172.17.0.1/32 -i docker0 -p udp -m udp --dport 6783 -j DROP
-A INPUT -d 172.17.0.1/32 -i docker0 -p udp -m udp --dport 6784 -j DROP
-A INPUT -i docker0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i docker0 -p tcp -m tcp --dport 53 -j ACCEPT
-A FORWARD -i docker0 -o weave -j DROP
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o weave -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-SERVICES -d 10.3.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.3.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]<@p dst -j ACCEPT
COMMIT
# Completed on Mon Dec 19 04:15:14 2016
</code></pre>
<p>What am I doing wrong?</p>
<p>Maybe relevant info:</p>
<ul>
<li>192.168.86.50 is the master node's IP</li>
<li>Pod network CIDR: 10.32.0.0/12 (at least this is what it is for the master node)</li>
<li>Service CIDR: 10.3.0.0/24</li>
<li>API server cluster IP: 10.3.0.1</li>
</ul>
| <p>I've run into this myself, and it's more of a CoreOS bug I think.</p>
<p>CoreOS uses a network daemon for managing network interfaces, but in some cases (like here) you want to manage some interfaces yourself. And CNI is broken here on CoreOS, because networkd tries to manage the CNI interface and the Weave interfaces.</p>
<p>Related issues/conversations:</p>
<ul>
<li><code>cni0</code> should be unmanaged: <a href="https://github.com/coreos/bugs/issues/1678" rel="nofollow noreferrer">https://github.com/coreos/bugs/issues/1678</a></li>
<li>Issue on the Weave side: <a href="https://github.com/weaveworks/weave/issues/2601" rel="nofollow noreferrer">https://github.com/weaveworks/weave/issues/2601</a></li>
<li>Kontena has run into this also, and is listing some workarounds in this thread: <a href="https://github.com/kontena/kontena/issues/1264" rel="nofollow noreferrer">https://github.com/kontena/kontena/issues/1264</a></li>
<li>CoreOS fixed the issue for Docker (but yet not for k8s with CNI): <a href="https://github.com/coreos/coreos-overlay/pull/2300" rel="nofollow noreferrer">https://github.com/coreos/coreos-overlay/pull/2300</a>, <a href="https://github.com/coreos/systemd/pull/73" rel="nofollow noreferrer">https://github.com/coreos/systemd/pull/73</a></li>
</ul>
<p>I would try something like this in <code>/etc/systemd/network/50-weave.network</code> (using CoreOS alpha):</p>
<pre><code>[Match]
Name=weave datapath vxlan-* dummy*
[Network]
# I'm not sure if DHCP or IPv6AcceptRA are required here...
DHCP=no
IPv6AcceptRA=false
Unmanaged=yes
</code></pre>
<p>And this for <code>/etc/systemd/network/50-cni.network</code>:</p>
<pre><code>[Match]
Name=cni*
[Network]
Unmanaged=yes
</code></pre>
<p>Then reboot and see if it works! (or you might want to try CoreOS stable, assuming you're on alpha now)</p>
|
<p>I'm setting up a sample Kubernetes cluster on my laptop using VirtualBox VMs, using flannel as the overlay network. I have successfully created a master and a node. When I spin up a pod on the node to deploy a MongoDB container, the pod and container are deployed successfully with endpoints. The service is also created successfully.</p>
<p>On the master</p>
<pre><code>[osboxes@kubemaster pods]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
busybox 1/1 Running 0 3m 172.17.60.3 192.168.6.103
mongodb 1/1 Running 0 21m 172.17.60.2 192.168.6.103
[osboxes@kubemaster pods]$ kubectl get services -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes 10.254.0.1 <none> 443/TCP 17d <none>
mongoservice 10.254.244.175 <none> 27017/TCP 47m name=mongodb
[osboxes@kubemaster pods]$ kubectl describe svc mongoservice
Name: mongoservice
Namespace: default
Labels: name=mongoservice
Selector: name=mongodb
Type: ClusterIP
IP: 10.254.244.175
Port: <unset> 27017/TCP
Endpoints: 172.17.60.2:27017
Session Affinity: None
</code></pre>
<p>On the node with docker ps</p>
<pre><code>8707a465771b busybox "sh" 9 minutes ago Up 9minutes k8s_busybox.53e510d6_busybox_default_c4892314-cde3-11e6-8a53-08002700df07_492b1a89
bea9de4e05cf registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 9 minutes ago Up 9 minutes k8s_POD.ae8ee9ac_busybox_default_c4892314-cde3-11e6-8a53-08002700df07_2bffae46
eaff8dc1a360 mongo "/entrypoint.sh mongo" 28 minutes ago Up 28 minutes k8s_mongodb.d1eca71a_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_ef5a8bbe
6a90b06cd434 registry.access.redhat.com/rhel7/pod-infrastructure:latest "/pod" 28 minutes ago Up 28 minutes k8s_POD.7ce0ec0_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_11074b20
</code></pre>
<p>The only way I can connect to the mongodb service is running <a href="http://172.17.60.2:27017/" rel="nofollow noreferrer">http://172.17.60.2:27017/</a> on the node which displays <em>"It looks like you are trying to access MongoDB over HTTP on the native driver port."</em></p>
<p>The issue is that I am not able to access the mongodb endpoint from the master or any of the other nodes, not even by hitting the same URL as above. I have another Java webapp container that will run as a pod on another node, and I need to make it interact with mongodb, but that is the next step. I plan to use env variables for inter-pod communication, and I can also see that the environment variables are created properly on the container in the node.</p>
<p>I went through <a href="http://kubernetes.io/docs/user-guide/debugging-services/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/debugging-services/</a> and followed the process, but was not able to get any of the steps working.</p>
<pre><code>[osboxes@kubemaster pods]$ sudo kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1"
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: dial tcp 192.168.6.103:10250: getsockopt: connection refused
Error from server: Get https://192.168.6.103:10250/containerLogs/default/busybox/busybox: dial tcp 192.168.6.103:10250: getsockopt: connection refused
[osboxes@kubemaster pods]$ curl 172.17.60.2:27017
curl: (7) Failed connect to 172.17.60.2:27017; No route to host
[osboxes@kubemaster pods]$ curl 10.254.244.175:27017
curl: (7) Failed connect to 10.254.244.175:27017; Connection timed out
[osboxes@kubemaster pods]$ kubectl exec -ti mongodb -c k8s_mongodb.d1eca71a_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_ef5a8bbe sh
Error from server: container k8s_mongodb.d1eca71a_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_ef5a8bbe is not valid for pod mongodb
[osboxes@kubemaster pods]$ kubectl exec -ti mongodb -c k8s_POD.7ce0ec0_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_11074b20 sh
Error from server: container k8s_POD.7ce0ec0_mongodb_default_240cd411-cde1-11e6-8a53-08002700df07_11074b20 is not valid for pod mongodb
[osboxes@kubemaster pods]$ kubectl exec -ti mongodb sh
Error from server: dial tcp 192.168.6.103:10250: getsockopt: connection refused
</code></pre>
<p>I suspect that there is some underlying network issue in spite of the pods and services being created, but I'm not a networking person, so I can't figure out what exactly the issue is. Kindly help.</p>
| <p><strong>TL;DR:</strong> you cannot connect from the master to the service as it is; you have to create a <code>NodePort</code> service and then use the given <code>NodePort</code> to connect.</p>
<p>Look at the service description:</p>
<pre><code>[osboxes@kubemaster pods]$ kubectl describe svc mongoservice
Name: mongoservice
Namespace: default
Labels: name=mongoservice
Selector: name=mongodb
Type: ClusterIP
IP: 10.254.244.175
Port: <unset> 27017/TCP
Endpoints: 172.17.60.2:27017
Session Affinity: None
</code></pre>
<p>The service endpoint is: <code>Endpoints: 172.17.60.2:27017</code> which is, as you can see, a flannel IP: the IP of the container that runs Mongo. This IP is part of your overlay network. It's not accessible from outside the virtual network (flannel network).</p>
<p>That's why when you do this:</p>
<pre><code> [osboxes@kubemaster pods]$ curl 172.17.60.2:27017
</code></pre>
<p>You get this error, which is right:</p>
<pre><code> curl: (7) Failed connect to 172.17.60.2:27017; No route to host
</code></pre>
<p>Because you're trying to access the container IP from the master node.</p>
<p>You have 2 options: you either expose your mongoservice as a NodePort service (which binds the container port to a node port, the way you do in Docker when using <code>-p 8000:8080</code>), or you jump inside the cluster and try to connect to the service from a pod (that's what you've tried and where you failed).</p>
<p>To expose the service as a NodePort (the equivalent of LoadBalancer when you do not run in the cloud):</p>
<pre><code>kubectl expose service mongoservice --port=27017 --type=NodePort
</code></pre>
<p>You might have to delete the mongoservice if it already exists. Let's check the service now: </p>
<pre><code>Name: mongo-4025718836
Namespace: default
Labels: pod-template-hash=4025718836
run=mongo
Selector: pod-template-hash=4025718836,run=mongo
Type: NodePort
IP: 10.0.0.11
Port: <unset> 27017/TCP
NodePort: <unset> 32076/TCP
Endpoints: 172.17.0.9:27017
Session Affinity: None
</code></pre>
<p>Note that you now have the same endpoint (pod IP and mongo port: 172.17.0.9:27017), but the NodePort now has a value, which is the port on the NODE where mongodb is exposed.</p>
<pre><code>curl NODE:32076
It looks like you are trying to access MongoDB over HTTP on the native driver port.
</code></pre>
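<p>To actually talk to Mongo rather than just seeing that HTTP warning, something like this should work from outside the cluster (NODE is a placeholder for a node address, as above):</p>
<pre><code>mongo --host NODE --port 32076
</code></pre>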
|
<p>Does Kubernetes have the ability/need to hook into a cloud provider (AWS, Rackspace) to spin up new nodes? If so, how does it then provision the node - does it run Ansible etc? Or will Kubernetes need to have all the nodes available to it manually?</p>
| <p>The short answer is no.</p>
<p>The longer answer is explained in the following blog posting that describes the new <a href="http://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">kubeadm</a> command:</p>
<p><a href="http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html" rel="nofollow noreferrer">http://blog.kubernetes.io/2016/09/how-we-made-kubernetes-easy-to-install.html</a></p>
<blockquote>
<p>There are three stages in setting up a Kubernetes cluster, and we
decided to focus on the second two (to begin with):</p>
<ol>
<li>Provisioning: getting some machines</li>
<li>Bootstrapping: installing Kubernetes on them and configuring certificates</li>
<li>Add-ons: installing necessary cluster add-ons like DNS and monitoring services, a pod network, etc</li>
</ol>
<p>We realized early on that there's enormous variety in the way that
users want to provision their machines.</p>
<p>They use lots of different cloud providers, private clouds, bare
metal, or even Raspberry Pi's, and almost always have their own
preferred tools for automating provisioning machines: Terraform or
CloudFormation, Chef, Puppet or Ansible, or even PXE booting bare
metal. <strong>So we made an important decision: kubeadm would not provision
machines.</strong> Instead, the only assumption it makes is that the user has
some computers running Linux.</p>
</blockquote>
<hr>
<p>Update</p>
<ul>
<li><a href="http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html" rel="nofollow noreferrer">http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html</a></li>
</ul>
|
<p>The new ability to specify a HEALTHCHECK in a Dockerfile seems redundant with the Kubernetes probe directives. Any advice on what to use when?</p>
| <p>If you use Kubernetes, I'd suggest using only the Kubernetes liveness/readiness checks, because the Docker healthcheck <a href="https://github.com/kubernetes/kubernetes/issues/25829" rel="noreferrer">has not been integrated into Kubernetes</a> as of now (Docker release 1.12). This means that Kubernetes does not expose the check status in its API server, and the internal system components cannot consume this information. Also, Kubernetes distinguishes <a href="http://kubernetes.io/docs/user-guide/pod-states/#container-probes" rel="noreferrer">liveness from readiness checks</a>, so that other components can react differently (e.g., restarting the container vs. removing the pod from the list of endpoints for a service), which the Docker HEALTHCHECK currently does not provide.</p>
<p>Update: Since Kubernetes 1.8, the Docker HEALTHCHECK has been <a href="https://github.com/kubernetes/kubernetes/pull/50796" rel="noreferrer">disabled explicitly</a> in Kubernetes.</p>
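<p>For reference, a minimal sketch of the Kubernetes-side checks on a container spec (the path, port and timings below are purely illustrative):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
</code></pre>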
|
<p>I have installed a local Kubernetes cluster using minikube following the instructions <a href="http://kubernetes.io/docs/getting-started-guides/minikube/" rel="noreferrer">here</a>.</p>
<p>I'm behind a corporate proxy, therefore I have set the http_proxy and https_proxy env vars. Once the cluster is started with the <code>minikube start</code> command, I also added the value of <code>minikube ip</code> to the no_proxy env var. However, kubectl still cannot connect to the cluster.</p>
<pre><code>ubuntu@ros-2:~$ kubectl -v=7 get pods
I0105 10:31:47.773801 17607 loader.go:354] Config loaded from file /home/ubuntu/.kube/config
I0105 10:31:47.775151 17607 round_trippers.go:296] GET https://192.168.42.22:8443/api
I0105 10:31:47.778533 17607 round_trippers.go:303] Request Headers:
I0105 10:31:47.778606 17607 round_trippers.go:306] Accept: application/json, */*
I0105 10:31:47.778676 17607 round_trippers.go:306] User-Agent: kubectl/v1.5.1 (linux/amd64) kubernetes/82450d0
I0105 10:31:47.783069 17607 round_trippers.go:321] Response Status: in 4 milliseconds
I0105 10:31:47.783166 17607 helpers.go:221] Connection error: Get https://192.168.42.22:8443/api: Forbidden port
F0105 10:31:47.783181 17607 helpers.go:116] Unable to connect to the server: Forbidden port
</code></pre>
<p>I'm assuming this is because of kubectl not being aware of the no_proxy settings. A simple curl to the cluster goes through fine.</p>
<pre><code>ubuntu@ros-2:~$ curl -v -k https://192.168.42.22:8443/api
* Hostname was NOT found in DNS cache
* Trying 192.168.42.22...
* Connected to 192.168.42.22 (192.168.42.22) port 8443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Request CERT (13):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
* subject: CN=minikube
* start date: 2017-01-04 16:04:47 GMT
* expire date: 2018-01-04 16:04:47 GMT
* issuer: CN=minikubeCA
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> GET /api HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 192.168.42.22:8443
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Thu, 05 Jan 2017 10:33:45 GMT
< Content-Length: 13
<
Unauthorized
* Connection #0 to host 192.168.42.22 left intact
</code></pre>
<p>Any ideas on how to fix this?</p>
| <p>Fixed this. The fix was to have the no_proxy details in NO_PROXY as well.</p>
<pre><code>export NO_PROXY=$no_proxy,$(minikube ip)
</code></pre>
<p><a href="https://github.com/kubernetes/minikube/issues/530" rel="noreferrer">Relevant thread</a>. Hope this will be useful to someone.</p>
|
<p>I'm using Kops 1.4.4 to launch my Kubernetes cluster on AWS. My Elasticsearch pods require me to set the kernel parameter <code>vm.max_map_count</code> to at least 262144. Kubernetes 1.5.1 has a sysctl feature, but it requires Docker >= 1.12. Kops currently builds my nodes with an older Docker version, so I'm stuck trying to figure out how to automate setting the kernel parameter. If I attempt to set it in my Dockerfile using <code>RUN sysctl -w vm.max_map_count=262144</code>, I get the error message: 'sysctl: setting key "vm.max_map_count": Read-only file system'.</p>
<p>Are there any workarounds for this?</p>
| <p>Apparently this can be done using <a href="http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization" rel="nofollow noreferrer">Kubernetes init containers</a>. Following the Kubernetes deployment config posted <a href="https://github.com/giantswarm/kubernetes-elastic-stack/blob/c8cda9e8efc1e33eb3cb314023edd0e91d39194a/manifests-all.yaml" rel="nofollow noreferrer">here</a> this can be done by applying the following annotation to your deployment. Under spec > template > metadata > annotations add:</p>
<pre><code>pod.beta.kubernetes.io/init-containers: '[
{
"name": "sysctl",
"image": "busybox",
"command": ["sysctl", "-w", "vm.max_map_count=262144"],
"securityContext": {
"privileged": true
}
}
]'
</code></pre>
|
<p>I'm experimenting with Cassandra and Redis on Kubernetes, using <a href="https://github.com/kubernetes/kubernetes/tree/v1.5.1/examples/storage/cassandra#step-2-use-a-statefulset-to-create-cassandra-ring" rel="nofollow noreferrer">the examples for v1.5.1</a>.</p>
<ul>
<li>With a Cassandra StatefulSet, if I shutdown a node without draining or deleting it via <code>kubectl</code>, that node's Pod stays around forever (at least over a week, anyway), without being moved to another node.</li>
<li>With Redis, even though the pod sticks around like with Cassandra, <a href="https://github.com/kubernetes/kubernetes/tree/v1.5.1/examples/storage/redis#conclusion" rel="nofollow noreferrer">the sentinel service</a> starts a new pod, so the number of functional pods is always maintained.</li>
</ul>
<p>Is there a way to automatically move the Cassandra pod to another node, if a node goes down? Or do I have to drain or delete the node manually?</p>
| <p>Please refer to the documentation <a href="http://kubernetes.io/docs/tasks/manage-stateful-set/delete-pods/#deleting-pods" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>Kubernetes (versions 1.5 or newer) will not delete Pods just because a
Node is unreachable. The Pods running on an unreachable Node enter the
‘Terminating’ or ‘Unknown’ state after a timeout. Pods may also enter
these states when the user attempts graceful deletion of a Pod on an
unreachable Node. The only ways in which a Pod in such a state can be
removed from the apiserver are as follows: </p>
<ul>
<li>The Node object is deleted (either by you, or by the Node Controller).</li>
<li>The kubelet on the unresponsive Node starts responding,
kills the Pod and removes the entry from the apiserver. </li>
<li>Force deletion of the Pod by the user.</li>
</ul>
</blockquote>
<p>This was a behavioral change introduced in kubernetes 1.5, which allows StatefulSet to prioritize safety. </p>
<p>There is no way to differentiate between the following cases:</p>
<ol>
<li>The instance being shut down without the Node object being deleted.</li>
<li>A network partition is introduced between the Node in question and the kubernetes-master. </li>
</ol>
<p>Both these cases are seen as the kubelet on a Node being unresponsive by the Kubernetes master. If in the second case, we were to quickly create a replacement pod on a different Node, we may violate the at-most-one semantics guaranteed by StatefulSet, and have multiple pods with the same identity running on different nodes. At worst, this could even lead to split brain and data loss when running Stateful applications. </p>
<p>On most cloud providers, when an instance is deleted, Kubernetes can figure out that the Node is also deleted, and hence let the StatefulSet pod be recreated elsewhere. </p>
<p><strong>However, if you're running on-prem, this may not happen. It is recommended that you delete the Node object from Kubernetes as you power it down, or have a reconciliation loop keeping the Kubernetes idea of Nodes in sync with the actual nodes available.</strong></p>
<p>Some more context is in the <a href="https://github.com/kubernetes/kubernetes/issues/35145" rel="nofollow noreferrer">github issue</a>.</p>
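<p>As an illustration of the manual remedies listed above (the node and pod names are placeholders):</p>
<pre><code># delete the Node object so the StatefulSet pod can be rescheduled elsewhere
kubectl delete node <node-name>

# or force-delete the stuck pod (use with care, given the safety caveats above)
kubectl delete pod <pod-name> --grace-period=0 --force
</code></pre>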
|
<p>I am getting an error message in /var/log/messages when I tried to set up the cluster with the command "kubeadm init":</p>
<pre><code>e4dad33)": pods "kube-scheduler-master" already exists
Jan 3 21:28:45 master kubelet: I0103 21:28:45.777830 8726 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Jan 3 21:28:46 master kubelet: I0103 21:28:46.829714 8726 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Jan 3 21:28:47 master kubelet: I0103 21:28:47.015478 8726 kubelet_node_status.go:74] Attempting to register node master
Jan 3 21:28:47 master kubelet: I0103 21:28:47.027349 8726 kubelet_node_status.go:77] Successfully registered node master
Jan 3 21:28:52 master kubelet: E0103 21:28:52.761903 8726 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
Jan 3 21:29:02 master kubelet: E0103 21:29:02.762461 8726 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
</code></pre>
<p>My Linux version is:</p>
<pre><code>[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
</code></pre>
<p>Docker version is:</p>
<pre><code>[root@master ~]# docker -v
Docker version 1.12.5, build 7392c3b
</code></pre>
<p>Kubernetes version:</p>
<pre><code>[root@master ~]# kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>The docker containers:</p>
<pre><code>[root@master ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f9d197b32eeb gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1 "kube-controller-mana" 8 minutes ago Up 8 minutes k8s_kube-controller-manager.c989015b_kube-controller-manager-master_kube-system_403e1523940e3f352d70e32c97d29be5_812fd5f5
cc196346d2fa gcr.io/google_containers/kube-scheduler-amd64:v1.5.1 "kube-scheduler --add" 8 minutes ago Up 8 minutes k8s_kube-scheduler.acb91962_kube-scheduler-master_kube-system_3bfbd36dfb8c8f71984a0d812e4dad33_7b6cc90e
5340aebc6aa4 gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" 8 minutes ago Up 8 minutes k8s_kube-apiserver.7fe53ba_kube-apiserver-master_kube-system_d74382f649787a7b1081e1a2b36071dd_a8b18f5f
6b56cda441d6 gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm "etcd --listen-client" 8 minutes ago Up 8 minutes k8s_etcd.c323986f_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_80669ce9
6fe1004d404d gcr.io/google_containers/pause-amd64:3.0 "/pause" 8 minutes ago Up 8 minutes k8s_POD.d8dbe16c_kube-controller-manager-master_kube-system_403e1523940e3f352d70e32c97d29be5_a65251b2
434d49024d1f gcr.io/google_containers/pause-amd64:3.0 "/pause" 8 minutes ago Up 8 minutes k8s_POD.d8dbe16c_kube-scheduler-master_kube-system_3bfbd36dfb8c8f71984a0d812e4dad33_f8d4ad55
e5da18222b52 gcr.io/google_containers/pause-amd64:3.0 "/pause" 8 minutes ago Up 8 minutes k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_d74382f649787a7b1081e1a2b36071dd_187a58df
66de3a3ad7e9 gcr.io/google_containers/pause-amd64:3.0 "/pause" 8 minutes ago Up 8 minutes k8s_POD.d8dbe16c_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_d58fa3b8
</code></pre>
<p>And the CNI was already installed:</p>
<pre><code>[root@master ~]# yum list |grep kubernetes-cni.x86_64
kubernetes-cni.x86_64 0.3.0.1-0.07a8a2 @kubernetes
</code></pre>
<p>Has anyone faced a similar issue?</p>
| <p>You just have to install a (3rd party) pod network of some kind, as stated in the guide.</p>
<p>Personally, I'm using Weave or Flannel. You can for example install Weave this way:</p>
<pre><code>kubectl apply -f https://git.io/weave-kube
</code></pre>
<p>That said, there are many other Pod Network providers as well, see <a href="http://kubernetes.io/docs/admin/addons/" rel="nofollow noreferrer">http://kubernetes.io/docs/admin/addons/</a>.</p>
<p>Kubernetes' focus is not networking per se; instead, it exposes an interface for third-party solutions like Weave and Flannel.</p>
|
<p>I am new to Google Container Engine (GKE). When run on <code>localhost</code> it works fine, but when I deploy to production with GKE I get a websocket error.</p>
<p>My Node app is developed with <code>Hapi.js</code> and <code>Socket.io</code>, and my structure is shown in the image below.</p>
<p><a href="https://i.stack.imgur.com/4cUsG.png" rel="noreferrer">Application Architecture</a></p>
<p>I'm using Glue to compose the Hapi server. Below is my <code>manifest.json</code>:</p>
<pre><code>{
...
"connections": [
{
"host": "app",
"address": "0.0.0.0",
"port": 8000,
"labels": ["api"],
"routes": {
"cors": false,
"security": {
"hsts": false,
"xframe": true,
"xss": true,
"noOpen": true,
"noSniff": true
}
},
"router": {
"stripTrailingSlash": true
},
"load": {
"maxHeapUsedBytes": 1073741824,
"maxRssBytes": 1610612736,
"maxEventLoopDelay": 5000
}
},
{
"host": "app",
"address": "0.0.0.0",
"port": 8099,
"labels": ["web"],
"routes": {
"cors": true,
"security": {
"hsts": false,
"xframe": true,
"xss": true,
"noOpen": true,
"noSniff": true
}
},
"router": {
"stripTrailingSlash": true
},
"load": {
"maxHeapUsedBytes": 1073741824,
"maxRssBytes": 1610612736,
"maxEventLoopDelay": 5000
}
},
{
"host": "app",
"address": "0.0.0.0",
"port": 8999,
"labels": ["admin"],
"routes": {
"cors": true,
"security": {
"hsts": false,
"xframe": true,
"xss": true,
"noOpen": true,
"noSniff": true
}
},
"router": {
"stripTrailingSlash": true
},
"load": {
"maxHeapUsedBytes": 1073741824,
"maxRssBytes": 1610612736,
"maxEventLoopDelay": 5000
},
"state": {
"ttl": null,
"isSecure": false,
"isHttpOnly": true,
"path": null,
"domain": null,
"encoding": "none",
"clearInvalid": false,
"strictHeader": true
}
}
],
...
}
</code></pre>
<p>And my <code>nginx.conf</code></p>
<pre><code>worker_processes 5; ## Default: 1
worker_rlimit_nofile 8192;
error_log /dev/stdout info;
events {
worker_connections 4096; ## Default: 1024
}
http {
access_log /dev/stdout;
server {
listen 80 default_server;
listen [::]:80 default_server;
# Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
return 301 https://$host$request_uri;
}
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name _;
# Configure ssl
ssl_certificate /etc/secret/ssl/myapp.com.csr;
ssl_certificate_key /etc/secret/ssl/myapp.com.key;
include /etc/nginx/ssl-params.conf;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name api.myapp.com;
location / {
proxy_pass http://api_app/;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Handle Web Socket connections
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name myapp.com;
location / {
proxy_pass http://web_app/;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Handle Web Socket connections
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name admin.myapp.com;
location / {
proxy_pass http://admin_app/;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Handle Web Socket connections
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# Define your "upstream" servers - the
# servers request will be sent to
upstream api_app {
server localhost:8000;
}
upstream web_app {
server localhost:8099;
}
upstream admin_app {
server localhost:8999;
}
}
</code></pre>
<p>Kubernetes service <code>app-service.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-nginx
labels:
app: app-nginx
spec:
type: LoadBalancer
ports:
# The port that this service should serve on.
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
# Label keys and values that must match in order to receive traffic for this service.
selector:
app: app-nginx
</code></pre>
<p>Kubernetes Deployment <code>app-deployment.yaml</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-nginx
spec:
replicas: 3
template:
metadata:
labels:
app: app-nginx
spec:
containers:
- name: nginx
image: us.gcr.io/myproject/nginx
ports:
- containerPort: 80
name: http
- containerPort: 443
name: https
volumeMounts:
# This name must match the volumes.name below.
- name: ssl-secret
readOnly: true
mountPath: /etc/secret/ssl
- name: app
image: us.gcr.io/myproject/bts-server
ports:
- containerPort: 8000
name: api
- containerPort: 8099
name: web
- containerPort: 8999
name: admin
volumeMounts:
# This name must match the volumes.name below.
- name: client-secret
readOnly: true
mountPath: /etc/secret/client
- name: admin-secret
readOnly: true
mountPath: /etc/secret/admin
volumes:
- name: ssl-secret
secret:
secretName: ssl-key-secret
- name: client-secret
secret:
secretName: client-key-secret
- name: admin-secret
secret:
secretName: admin-key-secret
</code></pre>
<p>And I'm using <code>Cloudflare SSL full strict</code>.</p>
<p>Error get from Browser console:</p>
<pre><code>WebSocket connection to 'wss://api.myapp.com/socket.io/?EIO=3&transport=websocket&sid=4Ky-y9K7J0XotrBFAAAQ' failed: WebSocket is closed before the connection is established.
https://api.myapp.com/socket.io/?EIO=3&transport=polling&t=LYByND2&sid=4Ky-y9K7J0XotrBFAAAQ Failed to load resource: the server responded with a status of 400 ()
VM50:35 WebSocket connection to 'wss://api.myapp.com/socket.io/?EIO=3&transport=websocket&sid=FsCGx-UE7ohrsSSqAAAT' failed: Error during WebSocket handshake: Unexpected response code: 502WrappedWebSocket @ VM50:35WS.doOpen @ socket.io.js:6605Transport.open @ socket.io.js:4695Socket.probe @ socket.io.js:3465Socket.onOpen @ socket.io.js:3486Socket.onHandshake @ socket.io.js:3546Socket.onPacket @ socket.io.js:3508(anonymous function) @ socket.io.js:3341Emitter.emit @ socket.io.js:6102Transport.onPacket @ socket.io.js:4760callback @ socket.io.js:4510(anonymous function) @ socket.io.js:5385exports.decodePayloadAsBinary @ socket.io.js:5384exports.decodePayload @ socket.io.js:5152Polling.onData @ socket.io.js:4514(anonymous function) @ socket.io.js:4070Emitter.emit @ socket.io.js:6102Request.onData @ socket.io.js:4231Request.onLoad @ socket.io.js:4312xhr.onreadystatechange @ socket.io.js:4184
socket.io.js:4196 GET https://api.myapp.com/socket.io/?EIO=3&transport=polling&t=LYByNpy&sid=FsCGx-UE7ohrsSSqAAAT 400 ()
</code></pre>
<p>And here is Nginx's logs:</p>
<pre><code>[22/Nov/2016:12:10:19 +0000] "GET /socket.io/?EIO=3&transport=websocket&sid=MGc--oncQbQI6NOZAAAX HTTP/1.1" 101 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
10.8.0.1 - - [22/Nov/2016:12:10:19 +0000] "POST /socket.io/?EIO=3&transport=polling&t=LYByQBw&sid=MGc--oncQbQI6NOZAAAX HTTP/1.1" 200 2 "https://myapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
10.128.0.2 - - [22/Nov/2016:12:10:20 +0000] "GET /socket.io/?EIO=3&transport=polling&t=LYByQKp HTTP/1.1" 200 101 "https://myapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
10.8.0.1 - - [22/Nov/2016:12:10:21 +0000] "GET /socket.io/?EIO=3&transport=polling&t=LYByQWo&sid=c5nkusT9fEPRsu2rAAAY HTTP/1.1" 200 24 "https://myapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
2016/11/22 12:10:21 [error] 6#6: *157 connect() failed (111: Connection refused) while connecting to upstream, client: 10.8.0.1, server: api.myapp.com, request: "GET /socket.io/?EIO=3&transport=polling&t=LYByQaN&sid=c5nkusT9fEPRsu2rAAAY HTTP/1.1", upstream: "http://[::1]:8000/socket.io/?EIO=3&transport=polling&t=LYByQaN&sid=c5nkusT9fEPRsu2rAAAY", host: "api.myapp.com", referrer: "https://myapp.com/"
2016/11/22 12:10:21 [warn] 6#6: *157 upstream server temporarily disabled while connecting to upstream, client: 10.8.0.1, server: api.myapp.com, request: "GET /socket.io/?EIO=3&transport=polling&t=LYByQaN&sid=c5nkusT9fEPRsu2rAAAY HTTP/1.1", upstream: "http://[::1]:8000/socket.io/?EIO=3&transport=polling&t=LYByQaN&sid=c5nkusT9fEPRsu2rAAAY", host: "api.myapp.com", referrer: "https://myapp.com/"
10.8.0.1 - - [22/Nov/2016:12:10:22 +0000] "GET /socket.io/?EIO=3&transport=polling&t=LYByQaN&sid=c5nkusT9fEPRsu2rAAAY HTTP/1.1" 200 4 "https://myapp.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
</code></pre>
<p><strong>UPDATE</strong></p>
<p>When I change <code>replicas</code> to <code>1</code> in <code>app-deployment.yaml</code> it works, but I don't think that is a good solution. I need <strong>3 replicas</strong>.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-nginx
spec:
replicas: 1
template:
metadata:
labels:
app: app-nginx
</code></pre>
<p><strong>How to make it work with 3 replicas?</strong></p>
| <p>After updating the Kubernetes service template to use <code>sessionAffinity: ClientIP</code>, it works now. I still get an error on the first <code>Ctrl + F5</code> refresh, but on the second it works fine.</p>
<pre><code>Error during WebSocket handshake: Unexpected response code: 400
</code></pre>
<p><strong>However, I still get data from the server, so I think it's okay.</strong></p>
<p><strong>Updated Service template</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-nginx
labels:
app: app-nginx
spec:
sessionAffinity: ClientIP
type: LoadBalancer
ports:
# The port that this service should serve on.
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
# Label keys and values that must match in order
# to receive traffic for this service.
selector:
app: app-nginx
</code></pre>
|
<p>I'm playing with Google container engine on gcloud. So after successfully finished <code>gloud init</code> I followed instructions and did:</p>
<pre><code>gcloud container clusters get-credentials cluster-1 --zone europe-west1-c --project whatever
</code></pre>
<p>And then:</p>
<pre><code>kubectl proxy
</code></pre>
<p>But I got the following error message:</p>
<pre><code>error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
</code></pre>
<p>I do see stuff in <code>~/.kube/config</code> file so I'm not sure what went wrong. I have <code>minikube</code> also installed on the machine but I don't think that's a problem.</p>
| <p>Use</p>
<pre><code>gcloud auth application-default login
</code></pre>
<p>to log in for application default credentials (<a href="https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login" rel="noreferrer">docs</a>). The behavior for application default credentials has <a href="https://cloud.google.com/sdk/release_notes#12800_20160928" rel="noreferrer">changed</a> in <code>gcloud</code> since version 128.</p>
<p>Note that changing credentials via <code>gcloud auth login</code>, <code>gcloud init</code>, or <code>gcloud config set account MY_ACCOUNT</code> will NOT affect application default credentials; they are managed separately from gcloud credentials.</p>
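<p>For completeness, a minimal sketch of the full sequence, reusing the cluster name and zone from your question:</p>
<pre><code>gcloud auth application-default login
gcloud container clusters get-credentials cluster-1 --zone europe-west1-c --project whatever
kubectl proxy
</code></pre>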
|
<p><strong>Question 1</strong> - I'm reading the documentation and I'm slightly confused with the wording. It says:</p>
<blockquote>
<p><strong>ClusterIP</strong>: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType</p>
<p><strong>NodePort</strong>: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <code><NodeIP>:<NodePort></code>.</p>
<p><strong>LoadBalancer</strong>: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.</p>
</blockquote>
<p>Does the NodePort service type still use the <code>ClusterIP</code> but just at a different port, which is open to external clients? So in this case is <code><NodeIP>:<NodePort></code> the same as <code><ClusterIP>:<NodePort></code>?</p>
<p>Or is the <code>NodeIP</code> actually the IP found when you run <code>kubectl get nodes</code> and not the virtual IP used for the ClusterIP service type?</p>
<p><strong>Question 2</strong> - Also in the diagram from the link below:</p>
<p><a href="https://i.stack.imgur.com/P36yL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/P36yL.png" alt="enter image description here" /></a></p>
<p>Is there any particular reason why the <code>Client</code> is inside the <code>Node</code>? I assumed it would need to be inside a <code>Cluster</code>in the case of a ClusterIP service type?</p>
<p>If the same diagram was drawn for NodePort, would it be valid to draw the client completely outside both the <code>Node</code> and <code>Cluster</code>, or am I completely missing the point?</p>
| <p>A ClusterIP exposes the following:</p>
<ul>
<li><code>spec.clusterIp:spec.ports[*].port</code></li>
</ul>
<p>You can only access this service while inside the cluster. It is accessible from its <code>spec.clusterIp</code> port. If a <code>spec.ports[*].targetPort</code> is set it will route from the port to the targetPort. The CLUSTER-IP you get when calling <code>kubectl get services</code> is the IP assigned to this service within the cluster internally.</p>
<p>A NodePort exposes the following:</p>
<ul>
<li><code><NodeIP>:spec.ports[*].nodePort</code></li>
<li><code>spec.clusterIp:spec.ports[*].port</code></li>
</ul>
<p>If you access this service on a nodePort from the node's external IP, it will route the request to <code>spec.clusterIp:spec.ports[*].port</code>, which will in turn route it to your <code>spec.ports[*].targetPort</code>, if set. This service can also be accessed in the same way as ClusterIP.</p>
<p>Your NodeIPs are the external IP addresses of the nodes. You cannot access your service from <code>spec.clusterIp:spec.ports[*].nodePort</code>.</p>
<p>A LoadBalancer exposes the following:</p>
<ul>
<li><code>spec.loadBalancerIp:spec.ports[*].port</code></li>
<li><code><NodeIP>:spec.ports[*].nodePort</code></li>
<li><code>spec.clusterIp:spec.ports[*].port</code></li>
</ul>
<p>You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes the request to the clusterIP port. You can access this service as you would a NodePort or a ClusterIP service as well.</p>
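<p>To make the three port fields concrete, here is a minimal NodePort Service sketch (names and port numbers are just placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # reachable as <ClusterIP>:80 inside the cluster
    targetPort: 8080  # the container port traffic is forwarded to
    nodePort: 30080   # reachable as <NodeIP>:30080 from outside the cluster
</code></pre>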
|
<p>just going through this guide on gitlab and k8s <a href="https://about.gitlab.com/2016/12/14/continuous-delivery-of-a-spring-boot-application-with-gitlab-ci-and-kubernetes/" rel="nofollow noreferrer">gitlab-k8s-cd</a>, but my build keeps failing on this part:</p>
<pre><code>- kubectl delete secret registry.gitlab.com
- kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=<my_username> --docker-password=$REGISTRY_PASSWD --docker-email=<my_email>
</code></pre>
<p>Although I am not entirely sure what password is needed for --docker-password, I have created an API token in gitlab for my user and I am using that in the secure variables. </p>
<p>This is the error:</p>
<pre><code>$ gcloud container clusters get-credentials deployment
Fetching cluster endpoint and auth data.
kubeconfig entry generated for deployment.
$ kubectl delete secret registry.gitlab.com
Error from server: secrets "registry.gitlab.com" not found
ERROR: Build failed: exit code 1
</code></pre>
<p>Any help would be much appreciated thanks.</p>
<p><strong>EDIT</strong></p>
<p>Since the initial post, removing the initial <code>kubectl delete secret</code> and re-building worked, so it was failing on the delete when there was no previous secret.</p>
<p><strong>Second Edit</strong></p>
<p>Having problems with my deployment.yml for K8s, could anyone shed any light on why I am getting this error:</p>
<pre><code>error validating "deployment.yml": error validating data: field spec.template.spec.containers[0].ports[0]: expected object of type map[string]interface{},
</code></pre>
<p>With this yml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: <app>
spec:
replicas: 2
template:
metadata:
labels:
app: <app>
spec:
containers:
- name: deployment
image: registry.gitlab.com/<username>/<app>
imagePullPolicy: Always
ports:
- "80:8080"
env:
- name: PORT
value: "8080"
imagePullSecrets:
- name: registry.gitlab.com
</code></pre>
<p>And this error:</p>
<pre><code>error validating "deployment.yml": error validating data: found invalid field imagePullSecrets for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>With this yml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: <app>
spec:
replicas: 2
template:
metadata:
labels:
app: <app>
spec:
containers:
- name: <app>
image: registry.gitlab.com/<project>/<app>
imagePullPolicy: Always
ports:
- "80:8080"
env:
- name: PORT
value: "8080"
imagePullSecrets:
- name: registry.gitlab.com
</code></pre>
<p><strong>Latest YAML</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: <app_name>
labels:
app: <app_name>
spec:
ports:
- port: 80
selector:
app: <app_name>
tier: frontend
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: <app_name>
labels:
app: <app_name>
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: <app_name>
tier: frontend
spec:
containers:
- image: registry.gitlab.com/<project>/<app>
imagePullPolicy: Always
name: <app_name>
env:
- name: PORT
value: "8080"
imagePullSecrets:
- name: registry.gitlab.com
ports:
- containerPort: 8080
hostPort: 80
</code></pre>
| <p><strong>Regarding your first error:</strong></p>
<p>Ports are defined differently in Kubernetes than in Docker or Docker Compose. This is what the port specification should look like:</p>
<pre><code>ports:
- containerPort: 8080
hostPort: 80
</code></pre>
<p>See the <a href="http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_containerport" rel="nofollow noreferrer">reference</a> for more information.</p>
<p><strong>Regarding your second error:</strong></p>
<p>According to the <a href="http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podspec" rel="nofollow noreferrer">reference</a> on PodSpecs, the <code>imagePullSecrets</code> property is correctly placed in your example. However, from reading the error message, it seems that you actually included the <code>imagePullSecrets</code> property into the ContainerSpec, not the PodSpec.</p>
<p>The YAML in your question seems to be correct in this case. Make sure that your actual manifest matches the example from your question and that you did not accidentally indent the <code>imagePullSecrets</code> property more than necessary.</p>
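<p>Putting both points together, a rough sketch of the relevant part of the PodSpec (keeping the placeholders from your question) would be:</p>
<pre><code>spec:
  containers:
  - name: <app>
    image: registry.gitlab.com/<project>/<app>
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
      hostPort: 80
    env:
    - name: PORT
      value: "8080"
  imagePullSecrets:           # sibling of "containers", i.e. part of the PodSpec
  - name: registry.gitlab.com
</code></pre>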
|
<p>I want to find out which version of the Kubernetes code I am reading. It takes too much time to build the source code.</p>
| <p>I don't entirely follow your question. If you're building from source, it's not necessary that you're on a given version at all. You're a lot more likely to be "between" versions.</p>
<p>Your best bet is to look at</p>
<pre><code>git describe --exact-match --abbrev=0
</code></pre>
<p>It will give you the latest tag before the commit you're on. Tags are likely to indicate versions.</p>
<p>Look at the output of</p>
<pre><code>git tag --list
</code></pre>
<p>to see all available tags.</p>
<p>For a more interactive view, you should be able to look through</p>
<pre><code>git log --decorate
</code></pre>
|
<p>This may sound like a n00b question, and maybe it is, but some things with the Azure Container Services puzzle me a little. I have managed to get a Kubernetes Cluster up and running on Azure inside a Resource Group, so for starters I am set and done.</p>
<p>Now my questions are the following:</p>
<ul>
<li>Who will take care of patching and upgrading the Master and Agent VMs?</li>
<li>Who will take care of patching and updating the Kubernetes Components?</li>
<li>Will I need to take care of backing up the <code>etcd</code> database myself?</li>
<li>Do I get an SLA with the Kubernetes Cluster, or is everything on top of the VM SLAs all up to me (i.e. making sure Kubernetes behaves)?</li>
</ul>
<p>I have the feeling the answers to these questions are "me", "me", "yes" and "no", which would make me ask myself whether the ACS is just a set of Resource Manager Templates, or where's the added value? Am I right on my assumptions, or where am I wrong?</p>
| <p>I'm on the Azure Container Service team, and your statement:</p>
<p>"ACS is just a set of Resource Manager Templates"</p>
<p>is more or less correct at this moment in time (Jan. 2017).</p>
<p>Over the course of the next few months, we will be improving support for all of the scenarios you mentioned:</p>
<ul>
<li>Backup</li>
<li>Upgrade</li>
<li>Health maintenance and repair</li>
</ul>
<p>And this applies not just to Kubernetes but also to DC/OS and Docker Swarm which are also supported in ACS.</p>
<p>Please let me know if I can help with more information.</p>
|
<p>I have successfully set up a Kubernetes cluster on my VMware host using Rancher. I have the kubernetes-dashboard running and can execute commands to the cluster using kubectl. </p>
<p>Now, I want to deploy my application to the cluster using a SaaS build tool (Distelli). This build tool should connect to my host using a HTTPS client certificate, client key, and cluster certificate.</p>
<p>However, my kubernetes API is not public yet.</p>
<p>This is my current kubernetes service configuration:</p>
<pre><code>$kubectl describe services kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Selector: <none>
Type: ClusterIP
IP: 10.43.0.1
Port: https 443/TCP
Endpoints: 10.42.173.175:6443
Session Affinity: ClientIP
</code></pre>
<p>How do I make this service available on the external IP address? I have tried to use an ingress loadbalancer to the server, but it only returns an 503 Service not available.</p>
<p>Any ideas?</p>
| <p>You need to have a route from the public internet to the API server. This can be accomplished by assigning a public IP direct to the machine running the api server, or you could have a load balancer direct traffic in as well. You mentioned you were on VMWare so there could be a couple different paths depending on your network setup. </p>
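<p>Once the apiserver is reachable on a public address, the build tool will typically need a kubeconfig along these lines; everything here (names, paths, the address) is a placeholder sketch, not your actual configuration:</p>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<public-ip-or-dns>:6443
    certificate-authority: ca.pem
users:
- name: build-tool
  user:
    client-certificate: client.pem
    client-key: client-key.pem
contexts:
- name: build-tool@my-cluster
  context:
    cluster: my-cluster
    user: build-tool
current-context: build-tool@my-cluster
</code></pre>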
|
<p>I have a 3x node kubernetes cluster: node1 (master), node2, and node3. I have a pod that's currently scheduled on node3 that I'd like to be exposed externally to the cluster. So I have a service of type nodePort with the nodePort set to 30080. I can successfully do <code>curl localhost:30080</code> locally on each node: node1, node2, and node3. But externally, <code>curl nodeX:30080</code> only works against node3. The other two timeout. tcpdump confirms node1 and node2 are receiving the request but not responding.</p>
<p>How can I make this work for all three nodes so I don't have to keep track of which node the pod is currently scheduled on? My best guess is that this is an iptables issue where I'm missing an iptables rule to DNAT traffic if the source IP isn't localhost. That being said, I have no idea how to troubleshoot to confirm this is the issue and then how to fix it. It seems like that rule should automatically be there. </p>
<p>Here's some info on my setup:<br>
kube-ravi196: 10.163.148.196<br>
kube-ravi197: 10.163.148.197<br>
kube-ravi198: 10.163.148.198<br>
CNI: Canal (flannel + calico)<br>
Host OS: Ubuntu 16.04<br>
Cluster set up through kubeadm</p>
<pre><code>$ kubectl get pods --namespace=kube-system -l "k8s-app=kube-registry" -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-registry-v0-1mthd 1/1 Running 0 39m 192.168.75.13 ravi-kube198
$ kubectl get service --namespace=kube-system -l "k8s-app=kube-registry"
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-registry 10.100.57.109 <nodes> 5000:30080/TCP 5h
$ kubectl get pods --namespace=kube-system -l "k8s-app=kube-proxy" -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-proxy-1rzz8 1/1 Running 0 40m 10.163.148.198 ravi-kube198
kube-proxy-fz20x 1/1 Running 0 40m 10.163.148.197 ravi-kube197
kube-proxy-lm7nm 1/1 Running 0 40m 10.163.148.196 ravi-kube196
</code></pre>
<p>Note that curl localhost from node ravi-kube196 is successful (a 404 is good).</p>
<pre><code>deploy@ravi-kube196:~$ curl localhost:30080/test
404 page not found
</code></pre>
<p>But trying to curl the IP from a machine outside the cluster fails:</p>
<pre><code>ravi@rmac2015:~$ curl 10.163.148.196:30080/test
(hangs)
</code></pre>
<p>Then trying to curl the node IP that the pod is scheduled on works.:</p>
<pre><code>ravi@rmac2015:~$ curl 10.163.148.198:30080/test
404 page not found
</code></pre>
<p>Here are my iptables rules for that service/pod on the 196 node:</p>
<pre><code>deploy@ravi-kube196:~$ sudo iptables-save | grep registry
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp --dport 30080 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp --dport 30080 -j KUBE-SVC-JV2WR75K33AEZUK7
-A KUBE-SEP-7BIJVD3LRB57ZVM2 -s 192.168.75.13/32 -m comment --comment "kube-system/kube-registry:registry" -j KUBE-MARK-MASQ
-A KUBE-SEP-7BIJVD3LRB57ZVM2 -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp -j DNAT --to-destination 192.168.75.13:5000
-A KUBE-SEP-7QBKTOBWZOW2ADYZ -s 10.163.148.196/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-7QBKTOBWZOW2ADYZ -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.196:1
-A KUBE-SEP-DARQFIU6CIZ6DHSZ -s 10.163.148.198/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-DARQFIU6CIZ6DHSZ -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.198:1
-A KUBE-SEP-KXX2UKHAML22525B -s 10.163.148.197/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-KXX2UKHAML22525B -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.197:1
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.106.192.243/32 -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP" -m tcp --dport 1 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.106.192.243/32 -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP" -m tcp --dport 1 -j KUBE-SVC-E66MHSUH4AYEXSQE
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.100.57.109/32 -p tcp -m comment --comment "kube-system/kube-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.100.57.109/32 -p tcp -m comment --comment "kube-system/kube-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-SVC-JV2WR75K33AEZUK7
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-7QBKTOBWZOW2ADYZ
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-KXX2UKHAML22525B
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-SEP-DARQFIU6CIZ6DHSZ
-A KUBE-SVC-JV2WR75K33AEZUK7 -m comment --comment "kube-system/kube-registry:registry" -j KUBE-SEP-7BIJVD3LRB57ZVM2
</code></pre>
<p>kube-proxy logs from 196 node:</p>
<pre><code>deploy@ravi-kube196:~$ kubectl logs --namespace=kube-system kube-proxy-lm7nm
I0105 06:47:09.813787 1 server.go:215] Using iptables Proxier.
I0105 06:47:09.815584 1 server.go:227] Tearing down userspace rules.
I0105 06:47:09.832436 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0105 06:47:09.836004 1 conntrack.go:66] Setting conntrack hashsize to 32768
I0105 06:47:09.836232 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0105 06:47:09.836260 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
</code></pre>
| <p>I found the cause of why the service couldn't be reached externally. It was because iptables FORWARD chain was dropping the packets. I raised an issue with kubernetes at <a href="https://github.com/kubernetes/kubernetes/issues/39658" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/39658</a> with a bunch more detail there. A (poor) workaround is to change the default FORWARD policy to ACCEPT.</p>
<p><strong>Update 1/10</strong></p>
<p>I raised an issue with Canal, <a href="https://github.com/projectcalico/canal/issues/31" rel="nofollow noreferrer">https://github.com/projectcalico/canal/issues/31</a>, as it appears to be a Canal specific issue. Traffic getting forwarded to flannel.1 interface is getting dropped. A better fix than changing default FORWARD policy to ACCEPT is to just add a rule for flannel.1 interface. <code>sudo iptables -A FORWARD -o flannel.1 -j ACCEPT</code>.</p>
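<p>For reference, the (poor) workaround mentioned above boils down to something like this on each node:</p>
<pre><code>sudo iptables -P FORWARD ACCEPT     # change the default FORWARD policy
sudo iptables -S FORWARD            # verify the policy and existing rules
</code></pre>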
|
<p>I've got a setup described below - so a simple replication controller, service and an https ingress deployed with kubernetes on google cloud.</p>
<p>I need to kill my app for a bit so that I can test how the rest of my stack reacts - what's a good way to do it?</p>
<p>I've tried deleting the service, but when I recreated it, it wouldn't pick up the backend service (the replication controller and pod got created and I could access them internally, but not via the ingress - the service didn't see it).</p>
<pre><code>echo "
apiVersion: v1
kind: Service
metadata:
name: nodeapp-https
labels:
app: nodeapp-https
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: nodeapp-https
---
apiVersion: v1
kind: ReplicationController
metadata:
name: nodeapp-https
spec:
replicas: 1
template:
metadata:
labels:
app: nodeapp-https
spec:
containers:
- name: nodeapp-https
image: gcr.io/my-project/node-app:v84
ports:
- containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nodeapp-httpss
spec:
tls:
- secretName: node-app-secret
backend:
serviceName: nodeapp-https
servicePort: 80
" | kubectl create -f -
</code></pre>
| <p>You could set the replica count to 0 for the duration of the test. When you're finished testing, you would reset the replica count to the desired number to bring your app back up.</p>
<p>The command to do this would be</p>
<pre><code>$ kubectl scale rc nodeapp-https --replicas=0
... do your test
$ kubectl scale rc nodeapp-https --replicas=1
</code></pre>
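<p>If you were using a Deployment instead of a replication controller, the equivalent (with a placeholder name) would be <code>kubectl scale deployment</code>:</p>
<pre><code>$ kubectl scale deployment my-app --replicas=0
... do your test
$ kubectl scale deployment my-app --replicas=1
</code></pre>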
|
<p>I'm using the PersistentVolume functionality for sharing VM directories with Pods. For example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: psql-data-disk
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: /data/psqldata
</code></pre>
<p>But I can't figure out how to delete the directory from the host VM so that I can reset the data. Minikube persists the <code>/data/</code> directory across VM reboots, but doesn't document where it's storing it.</p>
<p>If you <code>kubectl delete PersistentVolume psql-data-disk</code> it doesn't delete any of the contents in the directory itself, it just deletes the K8s resource.</p>
<p>I'm using the <code>docker-machine-driver-xhyve</code> driver installed via brew on OSX Sierra.</p>
| <p>Ugh, I hadn't known about the <code>minikube ssh</code> command to get into the VM. So I just went in there and deleted the directory.</p>
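<p>In other words, something like this (using the <code>hostPath</code> from the question):</p>
<pre><code>minikube ssh
sudo rm -rf /data/psqldata
</code></pre>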
|
<p>I have kube service running with 2 named ports like this:</p>
<pre><code>$ kubectl get service elasticsearch --output json
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
... stuff that really has nothing to do with my question ...
},
"spec": {
"clusterIP": "10.0.0.174",
"ports": [
{
"name": "http",
"nodePort": 31041,
"port": 9200,
"protocol": "TCP",
"targetPort": 9200
},
{
"name": "transport",
"nodePort": 31987,
"port": 9300,
"protocol": "TCP",
"targetPort": 9300
}
],
"selector": {
"component": "elasticsearch"
},
"sessionAffinity": "None",
"type": "NodePort"
},
"status": {
"loadBalancer": {}
}
}
</code></pre>
<p>I'm trying to get output containing just the 'http' port:</p>
<pre><code>$ kubectl get service elasticsearch --output jsonpath={.spec.ports[*].nodePort}
31041 31987
</code></pre>
<p>Except when I add the test expression as hinted in the cheatsheet here <a href="http://kubernetes.io/docs/user-guide/kubectl-cheatsheet/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/kubectl-cheatsheet/</a> for the name I get an error</p>
<pre><code>$ kubectl get service elasticsearch --output jsonpath={.spec.ports[?(@.name=="http")].nodePort}
-bash: syntax error near unexpected token `('
</code></pre>
| <p><code>(</code> and <code>)</code> mean something in bash (see <a href="http://www.gnu.org/software/bash/manual/html_node/Command-Grouping.html" rel="noreferrer">subshell</a>), so your shell interpreter is doing that first and getting confused. Wrap the argument to <code>jsonpath</code> in single quotes, that will fix it:</p>
<pre><code>$ kubectl get service elasticsearch --output jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
</code></pre>
<p>For example:</p>
<pre><code># This won't work:
$ kubectl get service kubernetes --output jsonpath={.spec.ports[?(@.name=="https")].targetPort}
-bash: syntax error near unexpected token `('
# ... but this will:
$ kubectl get service kubernetes --output jsonpath='{.spec.ports[?(@.name=="https")].targetPort}'
443
</code></pre>
|
<p>I'm pretty new to Kubernetes and clusters so this might be very simple.</p>
<p>I set up a Kubernetes cluster with 5 nodes using <code>kubeadm</code> following <a href="http://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="noreferrer">this guide</a>. I got some issues but it all worked in the end. So now I want to install the <a href="http://kubernetes.io/docs/user-guide/ui/" rel="noreferrer">Web UI (Dashboard)</a>. To do so I need to set up authentication:</p>
<blockquote>
<p>Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with the some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.</p>
</blockquote>
<p>So I got to read <a href="http://kubernetes.io/docs/admin/authentication/" rel="noreferrer">authentication page</a> of the documentation. And I decided I want to add authentication via a <a href="http://kubernetes.io/docs/admin/authentication/#static-password-file" rel="noreferrer">Static Password File</a>. To do so I have to append the option <code>--basic-auth-file=SOMEFILE</code> to the Api server. </p>
<p>When I do <code>ps -aux | grep kube-apiserver</code> this is the result, so it is already running. (which makes sense because I use it when calling <code>kubectl</code>)</p>
<pre><code>kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
</code></pre>
<p>Couple of questions I have:</p>
<ul>
<li>So where are all these options set? </li>
<li>Can i just kill this process and restart it with the option I need? </li>
<li>Will it be started when I reboot the system?</li>
</ul>
| <p>In <code>/etc/kubernetes/manifests</code> there is a file called <code>kube-apiserver.json</code>. This is a JSON file that contains all the options you can set. I appended the <code>--basic-auth-file=SOMEFILE</code> option and rebooted the system (right after the change to the file, <code>kubectl</code> wasn't working anymore and the API was shut down).</p>
<p>After a reboot the whole system was working again.</p>
<h2>Update</h2>
<p>I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the keys from the master node (/etc/kubernetes/admin.conf) to my laptop, and run <code>kubectl proxy</code> to proxy the traffic of the dashboard to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.</p>
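<p>Roughly, the steps were something like this (host and user are placeholders; adjust to your setup):</p>
<pre><code>scp root@<master-ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy
# then open http://127.0.0.1:8001/ui in a browser
</code></pre>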
|
<p>I'm setting up a kubernetes cluster with many different components for our application stack and I'm trying to balance storage requirements while minimizing the number of components.</p>
<p>We have a web <strong>scraper</strong> that downloads tens of thousands of HTML files (and maybe PDFs) every day and I want to store these somewhere (along with some JSON metadata). I want the files stored in a redundant scalable way but having millions of small files seems like a bad fit with e.g. GlusterFS.</p>
<p>At the same time we have some very large binary files used by our system (several gigabytes large) and also probably many smaller binary files (10's of MBs). These do not seem like a good fit for any distributed NoSQL DB like MongoDB.</p>
<p>So I'm considering using MongoDB + GlusterFS to separately address these two needs but I would rather reduce the number of moving pieces and just use one system. I have also read various warnings about using GlusterFS without e.g. Redhat support (which we definitely will not have).</p>
<p>Can anyone recommend an alternative? I am looking for something that is a distributed binary object store which is easy to setup/maintain and supports both small and large files. One advantage of our setup is that files will rarely ever be updated or deleted (just written and then read) and we don't even need indexing (that will be handled separately by elasticsearch) or high speed access for reads.</p>
| <p>Are you in a cloud? If you're in AWS, S3 would be a good spot; object storage sounds like what you might want, though I'm not sure of all your requirements.</p>
<p>If not in a cloud, you could run Minio (<a href="https://www.minio.io/" rel="nofollow noreferrer">https://www.minio.io/</a>) which would give you the same type of object storage that s3 would give you. </p>
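<p>For a first look at Minio, a single-node sketch is as simple as something like this (one container, one data directory; not a redundant setup):</p>
<pre><code>docker run -d -p 9000:9000 -v /mnt/data:/data minio/minio server /data
</code></pre>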
<p>I do something similar now where I store binary documents in MongoDB and we back the nodes with EBS volumes. </p>
|
<p>I have a kubernetes + flannel setup. Flannel config is <code>{"Network": "10.200.0.0/16", "SubnetLen":24, "Backend": {"Type": "vxlan"}}</code>.</p>
<p>I started the apiserver with <code>--service-cluster-ip-range=10.32.0.0/24</code>. As I understand, pods addresses are managed by flannel and service-cluster-ip-range is managed by iptables. I ran kubedns and tried executing dig from the kubernetes worker node for a deployment that I am running.</p>
<pre><code>$ dig phonebook.default.svc.cluster.local @10.32.0.10 +short
10.32.0.7
</code></pre>
<p>However, when I run the same command from one of the containers running in the pod, I get:</p>
<pre><code>$ dig phonebook.default.svc.cluster.local
;; reply from unexpected source: 10.200.16.10#53, expected 10.32.0.10#53
;; reply from unexpected source: 10.200.16.10#53, expected 10.32.0.10#53
;; reply from unexpected source: 10.200.16.10#53, expected 10.32.0.10#53
; <<>> DiG 9.9.5-9+deb8u8-Debian <<>> phonebook.default.svc.cluster.local
;; global options: +cmd
;; connection timed out; no servers could be reached
</code></pre>
<p>Any idea what might be wrong here?</p>
| <p>Adding the <code>--masquerade-all</code> flag to kube-proxy solved this for me. It seems that iptables does not masquerade the requests without this flag, which causes the DNS lookup to fail.</p>
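<p>Where exactly the flag goes depends on how kube-proxy is launched in your setup; with a plain command line it is roughly:</p>
<pre><code>kube-proxy --master=http://<apiserver-ip>:8080 --masquerade-all=true
</code></pre>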
|
<p>Considering other orchestration tools like <a href="https://github.com/dokku/dokku" rel="noreferrer">dokku</a>, dcos, deis, <a href="https://github.com/flynn/flynn" rel="noreferrer">flynn</a>, docker swarm, etc., Kubernetes is nowhere near them in terms of lines of code; on average those tools are around 100k-200k lines of code.</p>
<p>Intuitively it feels strange that managing containers (checking health, scaling containers up and down, killing them, restarting them, etc.) should require <em>2.4M+ lines of code</em> (which is the scale of an entire operating system code base). I feel like there is something more to it.</p>
<p><strong>What is different in Kubernetes compared to other orchestration solutions that makes it so big?</strong></p>
<p>I don't have any experience maintaining more than 5-6 servers. Please explain why it is so big and <em>what functionalities play a big part in it</em>.</p>
| <p><strong>First and foremost</strong>: don't be misled by the number of lines in the code, most of it are dependencies in the <code>vendor</code> folder that does not account for the core logic (<em>utilities, client libraries, gRPC, etcd,</em> etc.).</p>
<h2>Raw LoC Analysis with cloc</h2>
<p><em>To put things into perspective</em>, for <strong>Kubernetes</strong>:</p>
<pre><code>$ cloc kubernetes --exclude-dir=vendor,_vendor,build,examples,docs,Godeps,translations
7072 text files.
6728 unique files.
1710 files ignored.
github.com/AlDanial/cloc v 1.70 T=38.72 s (138.7 files/s, 39904.3 lines/s)
--------------------------------------------------------------------------------
Language files blank comment code
--------------------------------------------------------------------------------
Go 4485 115492 139041 1043546
JSON 94 5 0 118729
HTML 7 509 1 29358
Bourne Shell 322 5887 10884 27492
YAML 244 374 508 10434
JavaScript 17 1550 2271 9910
Markdown 75 1468 0 5111
Protocol Buffers 43 2715 8933 4346
CSS 3 0 5 1402
make 45 346 868 976
Python 11 202 305 958
Bourne Again Shell 13 127 213 655
sed 6 5 41 152
XML 3 0 0 88
Groovy 1 2 0 16
--------------------------------------------------------------------------------
SUM: 5369 128682 163070 1253173
--------------------------------------------------------------------------------
</code></pre>
<p>For <strong>Docker</strong> (and not Swarm or Swarm mode as this includes more features like volumes, networking, and plugins that are not included in these repositories). We do not include projects like <em>Machine</em>, <em>Compose</em>, <em>libnetwork</em>, so in reality the whole docker platform might include much more LoC:</p>
<pre><code>$ cloc docker --exclude-dir=vendor,_vendor,build,docs
2165 text files.
2144 unique files.
255 files ignored.
github.com/AlDanial/cloc v 1.70 T=8.96 s (213.8 files/s, 30254.0 lines/s)
-----------------------------------------------------------------------------------
Language files blank comment code
-----------------------------------------------------------------------------------
Go 1618 33538 21691 178383
Markdown 148 3167 0 11265
YAML 6 216 117 7851
Bourne Again Shell 66 838 611 5702
Bourne Shell 46 768 612 3795
JSON 10 24 0 1347
PowerShell 2 87 120 292
make 4 60 22 183
C 8 27 12 179
Windows Resource File 3 10 3 32
Windows Message File 1 7 0 32
vim script 2 9 5 18
Assembly 1 0 0 7
-----------------------------------------------------------------------------------
SUM: 1915 38751 23193 209086
-----------------------------------------------------------------------------------
</code></pre>
<blockquote>
<p>Please note that these are very raw estimations, using <strong>cloc</strong>. This might be worth a deeper analysis.</p>
</blockquote>
<p>Roughly, it seems like the project accounts for half of the LoC (<strong>~1250K LoC</strong>) mentioned in the question (whether you value dependencies or not, which is subjective).</p>
<h2>What is included in Kubernetes that makes it so big?</h2>
<p>Most of the <em>bloat</em> comes from libraries supporting various Cloud providers to ease the bootstrapping on their platform or to support specific features (volumes, etc.) through plugins. It also has a <strong>Lot</strong> of <a href="https://github.com/kubernetes/kubernetes/tree/master/examples" rel="noreferrer">Examples</a> to dismiss from the line count. A fair LoC estimation needs to exclude a lot of unnecessary documentation and example directories.</p>
<p>It is also <strong>much more feature rich</strong> compared to <em>Docker Swarm</em>, <em>Nomad</em> or <em>Dokku</em> to cite a few. It supports advanced networking scenarios, has load balancing built-in, includes <a href="https://kubernetes.io/docs/user-guide/petset/" rel="noreferrer">PetSets</a>, <a href="https://github.com/kubernetes/kubernetes/tree/master/federation" rel="noreferrer">Cluster Federation</a>, volume plugins or other features that other projects do not support yet.</p>
<p>It supports <strong>multiple container engines</strong>, so it is not exclusively running docker containers but could possibly run other engines (such as <strong>rkt</strong>).</p>
<p>A lot of the core logic involves interaction with other components: Key-Value stores, client libraries, plugins, etc. which extends far beyond simple scenarios.</p>
<p><em>Distributed Systems are notoriously hard</em>, and <em>Kubernetes</em> seems to support a majority of the tooling from key players in the container industry without compromise (where other solutions are making such compromise). As a result, the project can look artificially <strong>bloated</strong> and too big for its core mission (deploying containers at scale). In reality, these statistics are not that surprising.</p>
<h2>Key idea</h2>
<p>Comparing <em>Kubernetes</em> to <em>Docker</em> or <em>Dokku</em> is not really appropriate. The scope of the project is far bigger and it includes much more features as it is not limited to the Docker family of tooling.</p>
<p>While Docker has a lot of its features scattered across multiple libraries, Kubernetes tends to have everything under its core repository (which inflates the line count substantially but also explains the popularity of the project).</p>
<p>Considering this, the LoC statistic is not that surprising.</p>
|
<p>I have just installed a basic kubernetes cluster the manual way, to better understand the components, and to later automate this installation. I followed this guide: <a href="https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/</a></p>
<p>The cluster is completely empty without addons after this. I've already deployed kubernetes-dashboard successfully; however, when trying to deploy kube-dns, it fails with the log:</p>
<pre><code>2017-01-11T15:09:35.982973000Z F0111 15:09:35.978104 1 server.go:55]
Failed to create a kubernetes client:
invalid configuration: no configuration has been provided
</code></pre>
<p>I used the following yaml template for kube-dns without modification, only filling in the cluster IP:
<a href="https://coreos.com/kubernetes/docs/latest/deploy-addons.html" rel="noreferrer">https://coreos.com/kubernetes/docs/latest/deploy-addons.html</a></p>
<p>What did I do wrong?</p>
| <p>Experimenting with kubedns arguments, I added <code>--kube-master-url=http://mykubemaster.mydomain:8080</code> to the yaml file, and suddenly it reported in green. </p>
<p>How did this solve it? Was the container not aware of the master for some reason?</p>
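<p>For reference, the flag ends up in the kubedns container's <code>args</code> in the deployment yaml, roughly like this sketch (the other args and the image come from whatever template you are using):</p>
<pre><code>containers:
- name: kubedns
  # ... image and other settings from the template ...
  args:
  - --domain=cluster.local.
  - --dns-port=10053
  - --kube-master-url=http://mykubemaster.mydomain:8080
</code></pre>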
|
<p>I'm new to openshift and k8s. I'm not sure what's the difference between these two terms, openshift route vs k8s ingress ?</p>
| <p>Ultimately they are intended to achieve the same end. Originally Kubernetes had no such concept, so in OpenShift the concept of a <code>Route</code> was developed, along with the bits for providing a load-balancing proxy, etc. In time it was seen as useful to have something like this in Kubernetes, so using <code>Route</code> from OpenShift as a starting point for what could be done, <code>Ingress</code> was developed for Kubernetes. In the <code>Ingress</code> version they went for a more generic, rules-based system, so how you specify them looks different, but the intent is to effectively be able to do the same thing.</p>
|
<p>I've used Kubeadm to bootstrap a cluster, and added weave-net with</p>
<p><code>kubectl apply -f https://git.io/weave-kube</code></p>
<p>and I have everything running, but I can't "see" any of the assigned IP's within the cluster. </p>
<p>So:</p>
<p><code>[centos@atomic01 ~]$ kubectl get pods --all-namespaces -o wide<br>
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default hello-2533203682-b5usp 1/1 Running 0 13m 10.42.0.0 atomic03
default test-701078429-ely8s 1/1 Running 1 3h 10.40.0.1 atomic02
kube-system dummy-2088944543-6i81l 1/1 Running 0 5h 192.168.150.150 atomic01
kube-system etcd-atomic01 1/1 Running 0 5h 192.168.150.150 atomic01
kube-system kube-apiserver-atomic01 1/1 Running 0 5h 192.168.150.150 atomic01
kube-system kube-controller-manager-atomic01 1/1 Running 0 5h 192.168.150.150 atomic01
kube-system kube-discovery-982812725-c1kkw 1/1 Running 0 5h 192.168.150.150 atomic01
kube-system kube-dns-2247936740-nrszw 3/3 Running 2 5h 10.32.0.2 atomic01
kube-system kube-proxy-amd64-0y8ik 1/1 Running 1 5h 192.168.150.152 atomic03
kube-system kube-proxy-amd64-57y4o 1/1 Running 0 5h 192.168.150.150 atomic01
kube-system kube-proxy-amd64-mjpik 1/1 Running 1 5h 192.168.150.151 atomic02
kube-system kube-proxy-amd64-sh3ej 1/1 Running 1 5h 192.168.150.153 atomic04
kube-system kube-scheduler-atomic01 1/1 Running 0 5h 192.168.150.150 atomic01
kube-system kubernetes-dashboard-3095304083-xwuw8 1/1 Running 1 2h 10.38.0.0 atomic04
kube-system weave-net-edur9 2/2 Running 0 1m 192.168.150.151 atomic02
kube-system weave-net-l9xp3 2/2 Running 0 1m 192.168.150.150 atomic01
kube-system weave-net-sjpui 2/2 Running 0 1m 192.168.150.153 atomic04
kube-system weave-net-xu7j5 2/2 Running 0 1m 192.168.150.152 atomic03
</code></p>
<p>I <em>should</em> be able to ping the other nodes, but</p>
<p><code>[centos@atomic01 ~]$ kubectl exec test-701078429-ely8s -- ping 10.42.0.0
PING 10.42.0.0 (10.42.0.0) 56(84) bytes of data.
From 10.40.0.1 icmp_seq=1 Destination Host Unreachable
From 10.40.0.1 icmp_seq=2 Destination Host Unreachable
From 10.40.0.1 icmp_seq=3 Destination Host Unreachable
</code></p>
<p>Of course, this works: </p>
<p><code>[centos@atomic01 ~]$ kubectl exec test-701078429-ely8s -- ping 192.168.150.150
PING 192.168.150.150 (192.168.150.150) 56(84) bytes of data.
64 bytes from 192.168.150.150: icmp_seq=1 ttl=63 time=0.484 ms
64 bytes from 192.168.150.150: icmp_seq=2 ttl=63 time=0.448 ms</code></p>
<p>I've run out of ideas, any clues on things to test or look out for would be much appreciated. [Running on Centos 7 Atomic VM's]</p>
| <p>Hmm, does any of the test or hello Pods expose any ports? Can you access a port on the hello pod from the test pod?</p>
<p>Normally, I don't think ping should work (unless you have a Pod that processes icmp requests), so I actually don't think there's anything wrong here.</p>
<p>Just run a Pod (preferably from a Deployment) that exposes a port, maybe 80 for simplicity; set <code>containerPort: 80</code> and you should be able to curl that Pod IP successfully. Also consider making a Service with a stable IP that load balances requests to matching Pods, as Pods might come and go.</p>
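<p>A quick sketch of that test, assuming the images involved ship nginx and wget (the deployment name and the pod IP below are placeholders):</p>
<pre><code>kubectl run hello-web --image=nginx --port=80
kubectl get pods -o wide                                  # note the nginx pod's IP, e.g. 10.x.y.z
kubectl exec test-701078429-ely8s -- wget -qO- http://10.x.y.z:80
</code></pre>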
<p>Hope it helps!</p>
|
<p>I want to watch the change of namespaces in kubernetes cluster, with code like:</p>
<pre><code> log.Infoln("====== 1 ======= ")
namespaces, err := clientset.Namespaces().List(api.ListOptions{Watch: true})
if err != nil {
log.Errorln("Get namespaces from kubernetes cluster error:%v", err)
}
log.Infoln("====== 2 ======= ")
for _, namespace := range namespaces.Items {
log.Println("=======>> namespaces: ", namespace)
}
</code></pre>
<p>This code blocks at <code>namespaces, err := clientset.Namespaces().List(api.ListOptions{Watch: true})</code>, but there is no response when I create a new namespace or delete namespaces.</p>
<p>The <code>client-go</code> version is <code>k8s.io/client-go/1.5/</code>.</p>
<p>Can anyone give me example code for this? Thanks.</p>
| <p>I found the answer:</p>
<pre><code>// Use Watch() instead of List(); the returned watch.Interface streams change events.
// (the variable is named "watcher" to avoid shadowing the watch package)
watcher, err := clientset.Namespaces().Watch(api.ListOptions{Watch: true})
if err != nil {
    log.Errorf("Watch namespaces from kubernetes cluster error: %v", err)
    return
}
// Each event carries a Type (ADDED, MODIFIED, DELETED) and the affected namespace object.
for event := range watcher.ResultChan() {
    log.Infoln(event)
}
</code></pre>
|
<p>Many applications require configuration via some combination of config files, command line arguments, and environment variables. These configuration artifacts should be decoupled from image content in order to keep containerized applications portable. The ConfigMap API resource provides mechanisms to inject containers with configuration data while keeping containers agnostic of Kubernetes. ConfigMap can be used to store fine-grained information like individual properties or coarse-grained information like entire config files or JSON blobs.</p>
<p>I am unable to find where configmaps are saved. I know they are created however I can only read them via the minikube dashboard.</p>
| <p><a href="https://kubernetes.io/docs/user-guide/configmap/" rel="nofollow noreferrer">ConfigMaps</a> in Kubernetes can be consumed in many different ways and mounting it as a volume is one of those ways.</p>
<p>You can choose where you would like to mount the ConfigMap on your Pod. Example from K8s documentation:</p>
<p>ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
</code></pre>
<p>Pod</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
restartPolicy: Never
</code></pre>
<p>Note the volumes definition and the corresponding volumeMounts.</p>
<p>Other ways include:</p>
<ul>
<li>Consumption via environment variables</li>
<li>Consumption via command-line arguments</li>
</ul>
<p>Refer to the <a href="https://kubernetes.io/docs/user-guide/configmap/" rel="nofollow noreferrer">documentation</a> for full examples.</p>
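<p>For example, consumption via environment variables looks roughly like this inside the container spec, reusing the <code>special-config</code> ConfigMap from above:</p>
<pre><code>env:
- name: SPECIAL_LEVEL_KEY
  valueFrom:
    configMapKeyRef:
      name: special-config
      key: special.how
</code></pre>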
|
<p>I have a simple ingress for virtual hosting:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-access
spec:
rules:
- host: service1.domain.com
http:
paths:
- backend:
serviceName: service1
servicePort: 80
- host: service2.domain.com
http:
paths:
- backend:
serviceName: service2
servicePort: 80
</code></pre>
<p>Services service1 and service2 have definitions like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service1
labels:
chart: "chart"
spec:
type: ClusterIP # LoadBalancer
ports:
- port: 80
targetPort: 80
protocol: TCP
name: web
selector:
app: service1
</code></pre>
<p>If I deploy services with type <code>ClusterIP</code> they don't work, ingress responds with error: </p>
<pre><code>Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
</code></pre>
<p>Then I change the type of one of the services to <code>LoadBalancer</code>; it gets an external IP and I can access it using this external IP, and I can also access it through the ingress (using the host name <code>service1.domain.com</code>).</p>
<p>If I try to access service2 (service2.domain.com - which still has type <code>ClusterIP</code>) ingress responds with:</p>
<pre><code>default backend - 404
</code></pre>
<p>If I change service2's type to <code>LoadBalancer</code> it starts to work through the ingress.</p>
<p>I think that the ingress should work with <code>ClusterIP</code> services, because the LoadBalancer service type assigns an external IP, which is not needed at all, and if I understand it correctly the GCE ingress resource should use its own load balancer by default.</p>
<p>So what is wrong with the setup?</p>
| <p><a href="https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md</a></p>
<p>It seems that the correct answer is that the GCE Ingress controller requires NodePort services to work with, so services should be of type <code>NodePort</code> in this case.</p>
|
<p>How do I get a pod's name from its IP address? What's the magic incantation of <code>kubectl</code> + <code>sed</code>/<code>awk</code>/<code>grep</code>/etc regardless of where <code>kubectl</code> is invoked?</p>
| <h3>Example:</h3>
<pre><code>kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
alpine-3835730047-ggn2v 1/1 Running 0 5d 10.22.19.69 ip-10-35-80-221.ec2.internal
</code></pre>
<h3>get pod name by IP</h3>
<pre><code>kubectl get --all-namespaces --output json pods | jq '.items[] | select(.status.podIP=="10.22.19.69")' | jq .metadata.name
"alpine-3835730047-ggn2v"
</code></pre>
<h3>get container name by IP</h3>
<pre><code>kubectl get --all-namespaces --output json pods | jq '.items[] | select(.status.podIP=="10.22.19.69")' | jq .spec.containers[].name
"alpine"
</code></pre>
|
<p>I set up a Kubernetes cluster. Apiserver is started on host 192.168.0.2, and I use self-signed certificate and static token as authentication. The other 2 nodes' ip are 192.168.0.3 and 192.168.0.4</p>
<p>Then I created a prometheus deployment, the config is <a href="https://raw.githubusercontent.com/prometheus/prometheus/master/documentation/examples/prometheus-kubernetes.yml" rel="nofollow noreferrer">this</a>. In prometheus dashboard, the two nodes and prometheus self are showed as "UP". However the apiserver is showed "DOWN", and the reason is "context deadline exceeded". </p>
<pre><code>kubernetes-apiservers
Endpoint State Labels Last Scrape Error
https://192.168.0.2:443/metrics
DOWN instance="192.168.0.2:443" 55.979s ago context deadline exceeded
</code></pre>
<p>I tried to curl the address (<a href="https://192.168.0.2:443/metrics" rel="nofollow noreferrer">https://192.168.0.2:443/metrics</a>) from node server with ca certificate and token. The result is ok. </p>
<p>By the way, I setup the apiserver by <code>hyperkube apiserver</code> command line instead of <code>kubelet</code>. Is this the problem? And where should I look into?</p>
<p>In prometheus log, I only found this related one.</p>
<pre><code>time="2017-01-13T10:51:28Z" level=debug msg="endpoints update" kubernetes_sd=endpoint source="endpoints.go:77" tg="&config.TargetGroup{Targets:[]model.LabelSet{model.LabelSet{\"__meta_kubernetes_endpoint_port_protocol\":\"TCP\", \"__meta_kubernetes_endpoint_ready\":\"true\", \"__address__\":\"192.168.0.2:443\", \"__meta_kubernetes_endpoint_port_name\":\"https\"}}, Labels:model.LabelSet{\"__meta_kubernetes_service_label_provider\":\"kubernetes\", \"__meta_kubernetes_namespace\":\"default\", \"__meta_kubernetes_endpoints_name\":\"kubernetes\", \"__meta_kubernetes_service_name\":\"kubernetes\", \"__meta_kubernetes_service_label_component\":\"apiserver\"}, Source:\"endpoints/default/kubernetes\"}"
</code></pre>
<p>Update:</p>
<p>The reason is that I didn't set up calico on master node. It works now.</p>
| <p>That sounds like a network issue as the request is timing out. Can you hit that endpoint from inside the Prometheus container?</p>
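<p>One way to check is to exec into the Prometheus pod (or any pod on that network) and repeat the scrape by hand, using the service account credentials Kubernetes mounts into the pod. A hedged sketch, assuming curl is available in the image you exec into and the default service account token is mounted:</p>
<pre><code># the token and CA certificate are normally mounted at these standard paths
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://192.168.0.2:443/metrics
</code></pre>
<p>If that hangs from inside the pod but works from the node (as in the question), the problem is pod networking rather than authentication.</p>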
|
<p>I am new to K8s and this is my first time trying to get to grips with it. I am trying to set up a basic Nodejs Express API using this deployment.yml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: api
spec:
replicas: 1
template:
metadata:
labels:
app: api
spec:
containers:
- image: registry.gitlab.com/<project>/<app>:<TAG>
imagePullPolicy: Always
name: api
env:
- name: PORT
value: "8080"
ports:
- containerPort: 8080
hostPort: 80
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 30
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 30
timeoutSeconds: 1
imagePullSecrets:
- name: registry.gitlab.com
</code></pre>
<p>Which is being deployed via gitlab-ci. This is working and I have set up a service to expose it:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-svc
labels:
app: api-svc
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: api
type: LoadBalancer
</code></pre>
<p>But I have been looking into ingress to have a single point of entry for possibly multiple services. I have been reading through Kubernetes guides and I read through this <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">Kubernetes Ingress Example</a> and this is the ingress.yml I created:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
spec:
backend:
serviceName: api-svc
servicePort: 80
</code></pre>
<p>But this did not work, when I visited the external IP address that was generated from the ingress and I just 502 error pages. </p>
<p>Could anyone point me in the right direction, what am I doing wrong or what am I missing? I see that in the example link above that there is an nginx-rc.yml which I deployed exactly like in the example and that was created but still got nothing from the endpoint. The API was accessible from the Service external IP though..</p>
<p>Many Thanks</p>
| <p>I have looked into it again and think I figured it out.</p>
<p>In order for Ingress to work on GCE you need to define your backend service as a <code>NodePort</code>, not as ClusterIP or LoadBalancer.</p>
<p>Also you need to make sure the HTTP health check to <code>/</code> works (you'll see the Google L7 load balancer hitting your service quite a lot on that URL); after that the backend becomes available.</p>
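<p>If you want to try it quickly on the existing service, something like the following should switch it in place (a sketch; the service name is the one from the question):</p>
<pre><code>kubectl patch svc api-svc -p '{"spec": {"type": "NodePort"}}'
</code></pre>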
|
<p>DaemonSet is a Kubernetes beta resource that can ensure that exactly one pod is scheduled to a group of nodes. The group of nodes is all nodes by default, but can be limited to a subset using nodeSelector or the Alpha feature of node affinity/anti-affinity.</p>
<p>It seems that DaemonSet functionality can be achieved with replication controllers/replica sets with proper node affinity and anti-affinity.</p>
<p>Am I missing something? If that's correct should DaemonSet be deprecated before it even leaves Beta?</p>
| <p>As you said, DaemonSet guarantees one pod per node for a subset of the nodes in the cluster. If you use ReplicaSet instead, you need to</p>
<ol>
<li>use the node affinity/anti-affinity and/or node selector to control the set of nodes to run on (similar to how DaemonSet does it).</li>
<li>use <a href="https://kubernetes.io/docs/user-guide/node-selection/" rel="nofollow noreferrer">inter-pod anti-affinity</a> to spread the pods across the nodes. </li>
<li>make sure the number of pods > number of node in the set, so that every node has one pod scheduled.</li>
</ol>
<p>However, ensuring (3) is a chore as the set of nodes can change over time. With DaemonSet, you don't have to worry about that, nor would you need to create extra, unschedulable pods. On top of that, DaemonSet does not rely on the scheduler to assign its pods, which makes it useful for cluster bootstrap (see <a href="https://kubernetes.io/docs/admin/daemons/" rel="nofollow noreferrer">How Daemon Pods are scheduled</a>).</p>
<p>See the "Alternative to DaemonSet" section in the <a href="https://kubernetes.io/docs/admin/daemons/" rel="nofollow noreferrer">DaemonSet doc</a> for more comparisons. DaemonSet is still the easiest way to run a per-node daemon without external tools.</p>
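<p>For comparison, a minimal DaemonSet sketch limited to a labeled subset of nodes (names, labels and the image here are purely illustrative):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        role: logging            # optional: restrict to a subset of nodes
      containers:
      - name: agent
        image: busybox
        command: ["sh", "-c", "while true; do sleep 3600; done"]
</code></pre>
<p>No replica count is needed: the controller schedules exactly one pod per matching node and adjusts automatically as nodes join or leave.</p>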
|
<p>I am following this <a href="https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="nofollow noreferrer">tutorial</a> with 2 vms running CentOS7. Everything looks fine (no errors during installation/setup) but I can't see my nodes.</p>
<p>NOTE: </p>
<ul>
<li>I am running this on VMWare VMs</li>
<li>kub1 is my master and kub2 my worker node</li>
</ul>
<p><code>kubectl get nodes</code> output:</p>
<pre><code>[root@kub1 ~]# kubectl cluster-info
Kubernetes master is running at http://kub1:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kub2 ~]# kubectl cluster-info
Kubernetes master is running at http://kub1:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
<p>nodes:</p>
<pre><code>[root@kub1 ~]# kubectl get nodes
[root@kub1 ~]# kubectl get nodes -a
[root@kub1 ~]#
[root@kub2 ~]# kubectl get nodes -a
[root@kub2 ~]# kubectl get no
[root@kub2 ~]#
</code></pre>
<p>cluster events:</p>
<pre><code>[root@kub1 ~]# kubectl get events -a
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
1h 1h 1 kub2.local Node Normal Starting {kube-proxy kub2.local} Starting kube-proxy.
1h 1h 1 kub2.local Node Normal Starting {kube-proxy kub2.local} Starting kube-proxy.
1h 1h 1 kub2.local Node Normal Starting {kubelet kub2.local} Starting kubelet.
1h 1h 1 node-kub2 Node Normal Starting {kubelet node-kub2} Starting kubelet.
1h 1h 1 node-kub2 Node Normal Starting {kubelet node-kub2} Starting kubelet.
</code></pre>
<p>/var/log/messages:</p>
<pre><code>kubelet.go:1194] Unable to construct api.Node object for kubelet: can't get ip address of node node-kub2: lookup node-kub2: no such host
</code></pre>
<p>QUESTION: any idea why my nodes are not shown using "kubectl get nodes"?</p>
| <p>My issue was that the <code>KUBELET_HOSTNAME</code> value in <code>/etc/kubernetes/kubelet</code> didn't match the node's hostname.</p>
<p>I commented out that line, then restarted the services, and I could see my worker node after that.</p>
<p>hope that helps</p>
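<p>For reference, a hedged sketch of the change on the layout used by that CentOS guide (file name and variable per the tutorial; adjust the hostname to your node):</p>
<pre><code># /etc/kubernetes/kubelet on the worker node
# KUBELET_HOSTNAME="--hostname-override=node-kub2"   # comment this out, or make it match `hostname`

systemctl restart kubelet kube-proxy
kubectl get nodes
</code></pre>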
|
<h2>Problem</h2>
<p>Hi I recently deployed csanchez's jenkins-kubernetes build on a local kubernetes build (<a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow">https://github.com/jenkinsci/kubernetes-plugin</a>). This also means I used the provided jenkins-local.yml and service-local.yml. The build deployed well and everything is set up. However, when I try to run multiple jobs at once the jobs wait in queue and only one executor is spawned. Each of the jobs executes a shell script which prints "hello x friend" and then calls "sleep 1m or 30s".</p>
<p>Is there a certain criteria in which the plugin will spawn multiple containers? Is it supposed to spawn a container (as long as it doesn't surpass the container cap) for each job in queue?</p>
<h2>Build info</h2>
<p>Jenkins build: 1.642.2 <br>
Kubernetes plugin: 0.6 <br>
Kubernetes: 1.2 <br></p>
<p>The kubernetes plugin points to the internal jenkins master at containter0ip:8080<br>
The container cap is at 5 <br>
The docker image deployed is jenkin/jnlp-slave</p>
<p><em>Edit</em></p>
<p>When there are multiple jobs in queue, sometimes more than one executor becomes live. After reading the logs of the containers who die, all of them die because they cannot connect to containerip:8080/tcpSlaveAgentListener/.</p>
| <p>There is additional configuration needed for Jenkins to spawn agents ASAP, see <a href="https://github.com/jenkinsci/kubernetes-plugin#over-provisioning-flags" rel="nofollow noreferrer">https://github.com/jenkinsci/kubernetes-plugin#over-provisioning-flags</a></p>
<p>In short add Jenkins startup parameters</p>
<pre><code>-Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
</code></pre>
<p>and restart Jenkins server</p>
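<p>If your Jenkins master itself runs as a container, one hedged way to pass those flags is via <code>JAVA_OPTS</code> in its pod spec (the official jenkins image picks that variable up; adjust to however you start your master):</p>
<pre><code>env:
- name: JAVA_OPTS
  value: "-Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
</code></pre>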
|
<p>I am new to K8s and this is my first time trying to get to grips with it. I am trying to set up a basic Nodejs Express API using this deployment.yml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: api
spec:
replicas: 1
template:
metadata:
labels:
app: api
spec:
containers:
- image: registry.gitlab.com/<project>/<app>:<TAG>
imagePullPolicy: Always
name: api
env:
- name: PORT
value: "8080"
ports:
- containerPort: 8080
hostPort: 80
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 30
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 30
timeoutSeconds: 1
imagePullSecrets:
- name: registry.gitlab.com
</code></pre>
<p>Which is being deployed via gitlab-ci. This is working and I have set up a service to expose it:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-svc
labels:
app: api-svc
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: api
type: LoadBalancer
</code></pre>
<p>But I have been looking into ingress to have a single point of entry for possibly multiple services. I have been reading through Kubernetes guides and I read through this <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">Kubernetes Ingress Example</a> and this is the ingress.yml I created:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
spec:
backend:
serviceName: api-svc
servicePort: 80
</code></pre>
<p>But this did not work, when I visited the external IP address that was generated from the ingress and I just 502 error pages. </p>
<p>Could anyone point me in the right direction, what am I doing wrong or what am I missing? I see that in the example link above that there is an nginx-rc.yml which I deployed exactly like in the example and that was created but still got nothing from the endpoint. The API was accessible from the Service external IP though..</p>
<p>Many Thanks</p>
| <p><strong>Thought I would post my working deployment/service/ingress</strong></p>
<p>So after much effort in getting this working, here is what I used to get it working:</p>
<p><strong>Deployment</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: backend-api-v2
spec:
replicas: 2
template:
metadata:
labels:
app: backend-api-v2
spec:
containers:
- image: registry.gitlab.com/<project>/<app>:<TAG>
imagePullPolicy: Always
name: backend-api-v2
env:
- name: PORT
value: "8080"
ports:
- containerPort: 8080
livenessProbe:
httpGet:
# Path to probe; should be cheap, but representative of typical behavior
path: /healthz
port: 8080
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 30
timeoutSeconds: 5
imagePullSecrets:
- name: registry.gitlab.com
</code></pre>
<p><strong>Service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-svc-v2
labels:
app: api-svc-v2
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
nodePort: 31810
protocol: TCP
name: http
selector:
app: backend-api-v2
</code></pre>
<p><strong>Ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app-ingress
spec:
rules:
- host: api.foo.com
http:
paths:
- path: /v1/*
backend:
serviceName: api-svc
servicePort: 80
- path: /v2/*
backend:
serviceName: api-svc-v2
servicePort: 80
</code></pre>
<p>The important bits to notice, as @Tigraine pointed out, are that the service uses <code>type: NodePort</code> and not <code>LoadBalancer</code>. I have also defined a nodePort explicitly, but I believe one will be allocated automatically if you leave it out. </p>
<p>It will use the <code>default-http-backend</code> for any routes that don't match the rules this is a default container that GKE runs in the <code>kube-system</code> namespace. So if I visited <code>http://api.foo.com/bob</code> I get the default response of <code>default backend - 404</code>.</p>
<p>Hope this helps </p>
|
<p>I built a Kubernetes cluster with kubeadm and created several services. These services can be accessed externally through the node IP and port, but when I try to access a service through its cluster IP, it fails with the error message <code>curl: (7) Failed connect to 10.99.237.89:2379; Connection timed out</code>. How can I solve this problem?</p>
<p>Cluster-related information</p>
<p><img src="https://i.stack.imgur.com/9KCBS.png" alt="enter image description here">
<img src="https://i.stack.imgur.com/5jGRI.png" alt="enter image description here"></p>
<pre><code>[root@********** ~]# kubectl describe svc etcd-torus-internal --namespace=default
Name: etcd-torus-internal
Namespace: default
Labels: name=etcd-torus-internal
Selector: name=etcd-torus
Type: ClusterIP
IP: 10.99.237.89
Port: etcd-client 2379/TCP
Endpoints: 10.244.1.10:2379
Session Affinity: None
No events.
[root@********** ~]# curl 10.99.237.89:2379
curl: (7) Failed connect to 10.99.237.89:2379; Connection timed out
[root@********** ~]#
</code></pre>
| <p>Cluster IPs are not accessible from outside the cluster; they are internal only.</p>
<p>See <a href="https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types" rel="nofollow noreferrer">here</a> for more details</p>
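<p>A quick way to verify the ClusterIP is to test it from inside the cluster, for example from a temporary busybox pod (the IP and port are the ones from the question):</p>
<pre><code>kubectl run -it --rm test --image=busybox --restart=Never -- sh
# then, inside the pod:
wget -qO- http://10.99.237.89:2379/version
</code></pre>
<p>To reach the service from outside the cluster you would expose it as <code>NodePort</code> or <code>LoadBalancer</code> instead of <code>ClusterIP</code>.</p>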
|
<p>I can't find the syntax on how to do it. How can I do TCP health checks on containers inside a pod?</p>
| <p>You can do it with the tcpSocket definition:</p>
<pre><code>livenessProbe:
tcpSocket:
port: 9000
initialDelaySeconds: 120
timeoutSeconds: 5
readinessProbe:
tcpSocket:
port: 9000
initialDelaySeconds: 120
timeoutSeconds: 5
</code></pre>
|
<p>I went through both <a href="https://stackoverflow.com/questions/36407449/daemonset-doesnt-create-any-pods">daemonset doesn't create any pods</a> and <a href="https://stackoverflow.com/questions/34818198/daemonset-doesnt-create-any-pods-v1-1-2">DaemonSet doesn't create any pods: v1.1.2</a> before asking this question. Here is my problem.</p>
<p>Kubernetes cluster is running on CoreOS </p>
<pre><code>NAME=CoreOS
ID=coreos
VERSION=1185.3.0
VERSION_ID=1185.3.0
BUILD_ID=2016-11-01-0605
PRETTY_NAME="CoreOS 1185.3.0 (MoreOS)"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"
</code></pre>
<p>I refer to <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="nofollow noreferrer">https://coreos.com/kubernetes/docs/latest/getting-started.html</a> guide and created 3 etcd, 2 masters and 42 nodes. All applications running in the cluster without issue. </p>
<p>I got a requirement to set up logging with fluentd-elasticsearch, so I downloaded the yaml files from <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch</a> and deployed the fluentd daemonset.</p>
<pre><code>kubectl create -f fluentd-es-ds.yaml
</code></pre>
<p>I could see it got created but none of pod created.</p>
<pre><code>kubectl --namespace=kube-system get ds -o wide
NAME DESIRED CURRENT NODE-SELECTOR AGE CONTAINER(S) IMAGE(S) SELECTOR
fluentd-es-v1.22 0 0 alpha.kubernetes.io/fluentd-ds-ready=true 4h fluentd-es gcr.io/google_containers/fluentd-elasticsearch:1.22 k8s-app=fluentd-es,kubernetes.io/cluster-service=true,version=v1.22
kubectl --namespace=kube-system describe ds fluentd-es-v1.22
Name: fluentd-es-v1.22
Image(s): gcr.io/google_containers/fluentd-elasticsearch:1.22
Selector: k8s-app=fluentd-es,kubernetes.io/cluster-service=true,version=v1.22
Node-Selector: alpha.kubernetes.io/fluentd-ds-ready=true
Labels: k8s-app=fluentd-es
kubernetes.io/cluster-service=true
version=v1.22
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Misscheduled: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
</code></pre>
<p>I verified below details according to the comments in above SO questions.</p>
<pre><code>kubectl api-versions
apps/v1alpha1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
batch/v2alpha1
certificates.k8s.io/v1alpha1
extensions/v1beta1
policy/v1alpha1
rbac.authorization.k8s.io/v1alpha1
storage.k8s.io/v1beta1
v1
</code></pre>
<p>I could see below logs in one kube-controller-manager after restart. </p>
<pre><code>I0116 20:48:25.367335 1 controllermanager.go:326] Starting extensions/v1beta1 apis
I0116 20:48:25.367368 1 controllermanager.go:328] Starting horizontal pod controller.
I0116 20:48:25.367795 1 controllermanager.go:343] Starting daemon set controller
I0116 20:48:25.367969 1 horizontal.go:127] Starting HPA Controller
I0116 20:48:25.369795 1 controllermanager.go:350] Starting job controller
I0116 20:48:25.370106 1 daemoncontroller.go:236] Starting Daemon Sets controller manager
I0116 20:48:25.371637 1 controllermanager.go:357] Starting deployment controller
I0116 20:48:25.374243 1 controllermanager.go:364] Starting ReplicaSet controller
</code></pre>
<p>The other one has below log.</p>
<pre><code>I0116 23:16:23.033707 1 leaderelection.go:295] lock is held by {master.host.name} and has not yet expired
</code></pre>
<p>Am I missing something? Appreciate your help on figure out the issue.</p>
| <p>I found the solution after studying <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml</a> </p>
<p>There is a <code>nodeSelector</code> set to <code>alpha.kubernetes.io/fluentd-ds-ready: "true"</code>.</p>
<p>But the nodes don't have that label. What I did was add the label, as below, to one node to check whether it works.</p>
<pre><code>kubectl label nodes {node_name} alpha.kubernetes.io/fluentd-ds-ready="true"
</code></pre>
<p>After that, I could see fluentd pod started to run</p>
<pre><code>kubectl --namespace=kube-system get pods
NAME READY STATUS RESTARTS AGE
fluentd-es-v1.22-x1rid 1/1 Running 0 6m
</code></pre>
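<p>If you want the DaemonSet to run on every node, labelling them all at once should also work:</p>
<pre><code>kubectl label nodes --all alpha.kubernetes.io/fluentd-ds-ready="true"
</code></pre>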
<p>Thanks.</p>
|
<p>I am trying to create a Kubernetes cluster with Vagrant using an Ansible playbook that works perfectly on real (linux) servers. I am having a problem with the <code>kubeadm join</code> with vagrant.</p>
<p>I am using the following command to join a node to the cluster.</p>
<pre><code>kubeadm join --token={{ kube_token.stdout }} {{ hostvars[groups['kubemaster'][0]].ansible_default_ipv4.address }}
</code></pre>
<p>The problem with vagrant is that it interprets:</p>
<p><code>hostvars[groups['kubemaster'][0]].ansible_default_ipv4.address</code> </p>
<p>as the <code>enp0s3</code> address which seems to always be <code>10.0.2.15</code> on all machines in my cluster.</p>
<p>I have tried explicitly setting the ip of my machines using:</p>
<pre><code>machine.vm.network :private_network, ip: < ip >, auto_config: false
</code></pre>
<p>but this sets the <code>enp0s8</code> address so it still doesn't work.</p>
<p>How do I make the <code>hostvars[groups['kubemaster'][0]].ansible_default_ipv4.address</code> different on all the machines in my vagrant setup?</p>
| <p>You can use <code>hostvars[groups['kubemaster'][0]].ansible_eth1.ipv4.address</code> (in your setup the interface is <code>enp0s8</code>, so <code>ansible_enp0s8.ipv4.address</code>), assuming that interface carries the IP you actually want. As noted by Frederic, <code>ansible_default_ipv4</code> picks up the default interface, which here is the first one. You can run <code>ip addr</code> to list the interfaces on a machine.</p>
|
<p>I want to run the k8s-visualizer for Kubernetes in the Google Cloud Platform. I have only found out how to run it locally. </p>
<p>How to run it in the Google Cloud Platform?</p>
<p><a href="https://i.stack.imgur.com/idLaH.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/idLaH.gif" alt="enter image description here"></a></p>
| <p>The <a href="https://github.com/saturnism/gcp-live-k8s-visualizer.git" rel="nofollow noreferrer">k8s-visualizer</a> is written in a way that it depends on the <code>kubectl proxy</code> and runs all of its Ajax calls against <code>/api/...</code>. It isn't ready to run on the cluster.</p>
<p>If you want to have it on your cluster, you'd have to fork the existing code and adjust all API calls slightly to hit the apiserver.</p>
<p>Once this is done, wrap everything into a container and deploy it into a Pod along with a service.</p>
<p>A good starting point are the <a href="https://github.com/saturnism/gcp-live-k8s-visualizer/pull/3" rel="nofollow noreferrer">open pull requests</a></p>
<p>Cheers</p>
|
<p>As in title. I want to clone (create a copy of existing cluster).</p>
<p>If it's not possible to copy/clone Google Container Engine cluster, then how to clone Kubernetes cluster?</p>
<p>If that's not possible, is there a way to dump the whole cluster config?</p>
<h3>Note:</h3>
<p>I try to modify the cluster's configs by calling:</p>
<pre><code>kubectl apply -f some-resource.yaml
</code></pre>
<p>But nothing stops me/other employee modifying the cluster by running:</p>
<pre><code>kubectl edit service/resource
</code></pre>
<p>Or setting properties from command line <code>kubectl</code> calls.</p>
| <p>I'm using a bash script from the CoreOS team, with small adjustments, that works pretty well. By default it excludes the kube-system namespace, but you can adjust this if you need to. You can also add or remove the resources you want to copy.</p>
<pre><code>for ns in $(kubectl get ns --no-headers | cut -d " " -f1); do
if { [ "$ns" != "kube-system" ]; }; then
kubectl --namespace="${ns}" get --export -o=json svc,rc,rs,deployments,cm,secrets,ds,statefulsets,ing | \
jq '.items[] |
select(.type!="kubernetes.io/service-account-token") |
del(
.spec.clusterIP,
.metadata.uid,
.metadata.selfLink,
.metadata.resourceVersion,
.metadata.creationTimestamp,
.metadata.generation,
.status,
.spec.template.spec.securityContext,
.spec.template.spec.dnsPolicy,
.spec.template.spec.terminationGracePeriodSeconds,
.spec.template.spec.restartPolicy
)' >> "./my-cluster.json"
fi
done
</code></pre>
<p>To restore it on another cluster, you have to execute <code>kubectl create -f ./my-cluster.json</code></p>
|
<p>When trying to run <code>build/run.sh</code> on ubuntu 16.04 it fails with the below error message, any ideas?</p>
<pre><code>~/dev/kubernetes$ build/run.sh
+++ [0117 15:45:56] Verifying Prerequisites....
+++ [0117 15:45:56] Building Docker image kube-build:build-226b89c8c0-4-v1.7.4-1
+++ [0117 15:46:57] Keeping container amazing_ptolemy
+++ [0117 15:46:57] Keeping container amazing_ptolemy
+++ [0117 15:46:57] Keeping container amazing_ptolemy
+++ [0117 15:46:57] Keeping image kube-build:build-226b89c8c0-4-v1.7.4-1
+++ [0117 15:46:57] Creating data container kube-build-data-226b89c8c0-4-v1.7.4-1
+++ [0117 15:47:05] Syncing sources to container
+++ [0117 15:47:05] Stopping any currently running rsyncd container
+++ [0117 15:47:05] Starting rsyncd container
+++ [0117 15:47:07] Running rsync
+++ [0117 15:47:20] Stopping any currently running rsyncd container
+++ [0117 15:47:20] Running build command...
Invalid input - please specify a command to run.
!!! Error in build/../build/common.sh:546
'return 4' exited with status 4
Call stack:
1: build/../build/common.sh:546 kube::build::run_build_command(...)
2: build/run.sh:30 main(...)
Exiting with status 1
$ docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 15
Server Version: 1.12.6
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 31
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null overlay bridge host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-59-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 2.886 GiB
Name: tomerb-VirtualBox
ID: 32UH:TVHM:FTSL:ZPZV:PHJH:FA2Y:LPSG:YZZW:TLUU:J6TQ:W5JF:SBKW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
</code></pre>
| <p>The error is triggered within the script when checking the available arguments. As the error <code>Invalid input - please specify a command to run.</code> states you should use a subcommand / argument for the script as described in the <a href="https://github.com/kubernetes/kubernetes/tree/master/build" rel="nofollow noreferrer">README</a></p>
<ul>
<li><code>build/run.sh make</code>: Build just linux binaries in the container. Pass options and packages as necessary.</li>
<li><code>build/run.sh make cross</code>: Build all binaries for all platforms</li>
<li><code>build/run.sh make test</code>: Run all unit tests</li>
<li><code>build/run.sh make test-integration</code>: Run integration test</li>
<li><code>build/run.sh make test-cmd</code>: Run CLI tests</li>
</ul>
|
<p>I have strange behaviour from kubelet where, soon after the cluster is bootstrapped, kubelet does not register with the API server.</p>
<p>The funny thing is that if I restart the kubelet daemon it registers correctly and everything works as expected, which leads me to believe this is a synchronization issue. (I am using CoreOS with cloud-config, and kubelet is configured as a systemd unit.)</p>
<p>Soon after the Kubernetes node gets deployed, the kubelet logs only show the entries below and nothing more:</p>
<pre><code>-- Logs begin at Wed 2017-01-11 10:59:51 UTC, end at Wed 2017-01-11 11:58:35 UTC. --
Jan 11 11:00:47 worker0 systemd[1]: Started Kubernetes Kubelet.
Jan 11 11:00:47 worker0 kubelet[1712]: Flag --api-servers has been deprecated, Use --kubeconfig instead. Will be removed in a future version.
Jan 11 11:00:47 worker0 kubelet[1712]: I0111 11:00:47.793484 1712 docker.go:375] Connecting to docker on unix:///var/run/docker.sock
Jan 11 11:00:47 worker0 kubelet[1712]: I0111 11:00:47.793603 1712 docker.go:395] Start docker client with request timeout=2m0s
Jan 11 11:00:47 worker0 kubelet[1712]: E0111 11:00:47.793740 1712 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
Jan 11 11:00:47 worker0 kubelet[1712]: I0111 11:00:47.804434 1712 manager.go:140] cAdvisor running in container: "/system.slice/kubelet.service"
</code></pre>
<p>If I restart kubelet, I see the expected logs and it registers with the API server as expected. Below are the kubelet logs after the restart:</p>
<pre><code>-- Logs begin at Wed 2017-01-11 10:59:51 UTC, end at Wed 2017-01-11 11:58:35 UTC. --
Jan 11 11:00:47 worker0 systemd[1]: Started Kubernetes Kubelet.
Jan 11 11:00:47 worker0 kubelet[1712]: Flag --api-servers has been deprecated, Use --kubeconfig instead. Will be removed in a future version.
Jan 11 11:00:47 worker0 kubelet[1712]: I0111 11:00:47.793484 1712 docker.go:375] Connecting to docker on unix:///var/run/docker.sock
Jan 11 11:00:47 worker0 kubelet[1712]: I0111 11:00:47.793603 1712 docker.go:395] Start docker client with request timeout=2m0s
Jan 11 11:00:47 worker0 kubelet[1712]: E0111 11:00:47.793740 1712 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
Jan 11 11:00:47 worker0 kubelet[1712]: I0111 11:00:47.804434 1712 manager.go:140] cAdvisor running in container: "/system.slice/kubelet.service"
Jan 11 11:58:26 worker0 systemd[1]: Stopping Kubernetes Kubelet...
Jan 11 11:58:26 worker0 systemd[1]: Stopped Kubernetes Kubelet.
Jan 11 11:58:26 worker0 systemd[1]: Started Kubernetes Kubelet.
Jan 11 11:58:26 worker0 kubelet[5180]: Flag --api-servers has been deprecated, Use --kubeconfig instead. Will be removed in a future version.
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.501190 5180 docker.go:375] Connecting to docker on unix:///var/run/docker.sock
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.501525 5180 docker.go:395] Start docker client with request timeout=2m0s
Jan 11 11:58:26 worker0 kubelet[5180]: E0111 11:58:26.501775 5180 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.521821 5180 manager.go:140] cAdvisor running in container: "/system.slice/kubelet.service"
Jan 11 11:58:26 worker0 kubelet[5180]: W0111 11:58:26.554844 5180 manager.go:148] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: ge
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.562578 5180 fs.go:116] Filesystem partitions: map[/dev/sda3:{mountpoint:/usr major:8 minor:3 fsType:ext4 blockSize:0} /dev/sda6:{mou
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.567504 5180 manager.go:195] Machine: {NumCores:2 CpuFrequency:2299998 MemoryCapacity:1045340160 MachineID:bed23c2c06d642f1904ebbe67a
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.572042 5180 manager.go:201] Version: {KernelVersion:4.7.3-coreos-r3 ContainerOsVersion:CoreOS 1185.5.0 (MoreOS) DockerVersion:1.11.2
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.574264 5180 kubelet.go:255] Adding manifest file: /opt/kubernetes/manifests
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.574340 5180 kubelet.go:265] Watching apiserver
Jan 11 11:58:26 worker0 kubelet[5180]: W0111 11:58:26.633161 5180 kubelet_network.go:71] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-vet
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.633682 5180 kubelet.go:516] Hairpin mode set to "hairpin-veth"
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.641810 5180 docker_manager.go:242] Setting dockerRoot to /var/lib/docker
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.642560 5180 kubelet_network.go:306] Setting Pod CIDR: -> 172.20.31.1/24
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.644117 5180 server.go:714] Started kubelet v1.4.0
Jan 11 11:58:26 worker0 kubelet[5180]: E0111 11:58:26.647154 5180 kubelet.go:1094] Image garbage collection failed: unable to find data for container /
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.650196 5180 kubelet_node_status.go:194] Setting node annotation to enable volume controller attach/detach
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.651955 5180 server.go:118] Starting to listen on 0.0.0.0:10250
Jan 11 11:58:26 worker0 kubelet[5180]: E0111 11:58:26.668376 5180 kubelet.go:2127] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable
Jan 11 11:58:26 worker0 kubelet[5180]: E0111 11:58:26.668432 5180 kubelet.go:2135] Failed to check if disk space is available on the root partition: failed to get fs info for "root": una
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.674021 5180 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.674110 5180 status_manager.go:129] Starting to sync pod status with apiserver
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.674141 5180 kubelet.go:2229] Starting kubelet main sync loop.
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.674208 5180 kubelet.go:2240] skipping pod synchronization - [network state unknown container runtime is down]
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.675339 5180 volume_manager.go:234] Starting Kubelet Volume Manager
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.713597 5180 factory.go:295] Registering Docker factory
Jan 11 11:58:26 worker0 kubelet[5180]: W0111 11:58:26.717164 5180 manager.go:244] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: canno
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.717777 5180 factory.go:54] Registering systemd factory
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.719843 5180 factory.go:86] Registering Raw factory
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.723229 5180 manager.go:1082] Started watching for new ooms in manager
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.725579 5180 oomparser.go:185] oomparser using systemd
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.728010 5180 manager.go:285] Starting recovery of all containers
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.837552 5180 kubelet_node_status.go:194] Setting node annotation to enable volume controller attach/detach
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.878400 5180 kubelet_node_status.go:64] Attempting to register node worker0
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.919196 5180 kubelet_node_status.go:67] Successfully registered node worker0
Jan 11 11:58:26 worker0 kubelet[5180]: I0111 11:58:26.924483 5180 kubelet_network.go:306] Setting Pod CIDR: 172.20.31.1/24 ->
Jan 11 11:58:27 worker0 kubelet[5180]: I0111 11:58:27.104781 5180 manager.go:290] Recovery completed
</code></pre>
<p>Any idea how to troubleshoot this kind of problem?</p>
<p>Thanks,
Davide</p>
| <p>Sounds like there's a delay in waiting for docker to start or the interfaces to initialize correctly. I found the following issue which sounds exactly like your problem: <a href="https://github.com/kubernetes/kubernetes/issues/33789#issuecomment-251251196" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/33789#issuecomment-251251196</a></p>
<blockquote>
<p>The fix could be adding a condition that "if configure-cbr=true AND
network-plugin=none or noop", then do not check /etc/default/docker to
decide whether to restart docker.</p>
</blockquote>
|
<p>I am having a bit of a problem. I deleted a pod of a replication controller and now I want to recreate it. I tried:</p>
<pre><code>kubectl create -f kube-controller-manager.yaml
Error from server: error when creating "kube-controller-manager.yaml": deployments.extensions "kube-controller-manager" already exists
</code></pre>
<p>So figured to:</p>
<pre><code>kubectl delete deployment kube-controller-manager --namespace=kube-system -v=8
</code></pre>
<p>Which circles for a while giving this response:</p>
<pre><code>GET https://k8s-k8s.westfield.io:443/apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-controller-manager
I0112 17:33:53.334288 44607 round_trippers.go:303] Request Headers:
I0112 17:33:53.334301 44607 round_trippers.go:306] Accept: application/json, */*
I0112 17:33:53.334310 44607 round_trippers.go:306] User-Agent: kubectl/v1.4.7 (darwin/amd64) kubernetes/92b4f97
I0112 17:33:53.369422 44607 round_trippers.go:321] Response Status: 200 OK in 35 milliseconds
I0112 17:33:53.369445 44607 round_trippers.go:324] Response Headers:
I0112 17:33:53.369450 44607 round_trippers.go:327] Content-Type: application/json
I0112 17:33:53.369454 44607 round_trippers.go:327] Date: Fri, 13 Jan 2017 01:33:53 GMT
I0112 17:33:53.369457 44607 round_trippers.go:327] Content-Length: 1688
I0112 17:33:53.369518 44607 request.go:908] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"kube-controller-manager","namespace":"kube-system","selfLink":"/apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-controller-manager","uid":"830c83d0-d860-11e6-80d5-066fd61aec22","resourceVersion":"197967","generation":5,"creationTimestamp":"2017-01-12T00:46:10Z","labels":{"k8s-app":"kube-controller-manager"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":0,"selector":{"matchLabels":{"k8s-app":"kube-controller-manager"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"k8s-app":"kube-controller-manager"}},"spec":{"volumes":[{"name":"secrets","secret":{"secretName":"kube-controller-manager","defaultMode":420}},{"name":"ssl-host","hostPath":{"path":"/usr/share/ca-certificates"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/coreos/hyperkube:v1.4.7_coreos.0","command":["./hyperkube","controller-manager","--root-ca-file=/etc/kubernetes/secrets/ca.crt","--service-account-private-key-file=/etc/kubernetes/secrets/service-account.key","--leader-elect=true","--cloud-provider=aws","--configure-cloud-routes=false"],"resources":{},"volumeMounts":[{"name":"secrets","readOnly":true,"mountPath":"/etc/kubernetes/secrets"},{"name":"ssl-host","readOnly":true,"mountPath":"/etc/ssl/certs"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"Default","securityContext":{}}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1,"maxSurge":1}},"revisionHistoryLimit":0,"paused":true},"status":{"observedGeneration":3}}
I0112 17:33:54.335302 44607 round_trippers.go:296] GET https://k8s-k8s.westfield.io:443/apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-controller-manager
</code></pre>
<p>And then it times out, saying that it timed out waiting for an API response.</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7", GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0", GitTreeState:"clean", BuildDate:"2016-12-10T04:49:33Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7+coreos.0", GitCommit:"0581d1a5c618b404bd4766544bec479aedef763e", GitTreeState:"clean", BuildDate:"2016-12-12T19:04:11Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I originally had client version 1.5.2 and downgraded to see if that would help. It didn't.</p>
| <p>A replication controller defines what a pod looks like and how many replicas should exist in your cluster. The controller-manager's job is to make sure enough replicas are healthy and running; if not, it asks the scheduler to place the pods onto hosts. </p>
<p>If you delete a pod, then a new one should get spun up automatically. You would just have to run: <code>kubectl delete po <podname></code></p>
<p>It's interesting that you are trying to delete the controller manager. Typically, after creating it, you shouldn't have to touch it. </p>
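<p>Note that the deployment you dumped shows <code>"replicas":0</code> and <code>"paused":true</code>, so instead of deleting and recreating it you could simply scale it back up and resume the rollout; a hedged sketch:</p>
<pre><code>kubectl --namespace=kube-system scale --replicas=1 deployment/kube-controller-manager
kubectl --namespace=kube-system rollout resume deployment/kube-controller-manager
</code></pre>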
|
<p>This is what I keep getting:</p>
<pre><code>[root@centos-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-h6nw8 1/1 Running 0 1h
nfs-web-07rxz 0/1 CrashLoopBackOff 8 16m
nfs-web-fdr9h 0/1 CrashLoopBackOff 8 16m
</code></pre>
<p>Below is output from <code>describe pods</code>
<a href="https://i.stack.imgur.com/qUtPV.png" rel="noreferrer">kubectl describe pods</a></p>
<pre><code>Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
16m 16m 1 {default-scheduler } Normal Scheduled Successfully assigned nfs-web-fdr9h to centos-minion-2
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id 495fcbb06836
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id 495fcbb06836
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id d56f34ae4e8f
16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id d56f34ae4e8f
16m 16m 2 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "web" with CrashLoopBackOff: "Back-off 10s restarting failed container=web pod=nfs-web-fdr9h_default(461c937d-d870-11e6-98de-005056040cc2)"
</code></pre>
<p>I have two pods, <code>nfs-web-07rxz</code> and <code>nfs-web-fdr9h</code>, but if I run <code>kubectl logs nfs-web-07rxz</code>, with or without the <code>-p</code> option, I don't see any logs for either pod.</p>
<pre><code>[root@centos-master ~]# kubectl logs nfs-web-07rxz -p
[root@centos-master ~]# kubectl logs nfs-web-07rxz
</code></pre>
<p>This is my replicationController yaml file:
<a href="https://i.stack.imgur.com/kSbnv.png" rel="noreferrer">replicationController yaml file</a></p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0
        ports:
        - name: web
          containerPort: 80
        securityContext:
          privileged: true
</code></pre>
<p>My Docker image was made from this simple docker file:</p>
<pre><code>FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y nfs-common
</code></pre>
<p>I am running my kubernetes cluster on CentOs-1611, kube version:</p>
<pre><code>[root@centos-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>If I run the docker image with <code>docker run</code> I am able to run it without any issue; it is only through Kubernetes that I get the crash.</p>
<p>Can someone help me out? How can I debug this without seeing any logs?</p>
| <p>As @Sukumar commented, you need to have your Dockerfile have a <a href="https://docs.docker.com/engine/reference/builder/#/cmd" rel="noreferrer">Command</a> to run or have your ReplicationController specify a command. </p>
<p>The pod is crashing because it starts up then immediately exits, thus Kubernetes restarts and the cycle continues. </p>
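<p>In this case the image only installs nginx but never starts it, so the container has nothing keeping it alive. One hedged fix is to add a foreground command to the Dockerfile (or the equivalent <code>command:</code> in the pod spec):</p>
<pre><code>FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx nfs-common
# keep a foreground process running so the container does not exit immediately
CMD ["nginx", "-g", "daemon off;"]
</code></pre>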
|
<p><strong>Cannot pull image from local docker insecured registry repository inside Minikube.</strong><br><br>
I'm running Docker-toolbox v1.12.2 using Linux VM (Upstart) installed on Oracle VirtualBox 5.1.6 under Windows 7.<br></p>
<p>I've created a docker image and pushed it (tag and then push) into a local insecure docker-registry v2 running at 192.168.99.100:5000/image/name.<br>
<code>docker run -d -p 5000:5000 --restart=always --name registry registry:2</code><br>
and inside the VM, in /var/lib/boot2docker/profile, I've added the flag <br><code>--insecure-registry 192.168.99.100:5000</code> to EXTRA_ARGS.</p>
<p><code>docker push</code> & <code>docker pull</code> from <code>localhost:5000/image/name</code> are working fine within the Docker VM.<br> <br>
The <code>_catalog</code> is reachable from Postman: <code>GET http://192.168.99.100:5000/v2/_catalog</code>, and I'm able to see the images inside the registry.</p>
<p>I'm starting my Minikube v0.15.0 VM with the command:<br></p>
<p><code>minikube start --insecure-registry=192.168.99.100:5000</code></p>
<p><strong>I'm under company PROXY</strong> so I've added the proxy in the command line (CMD):<br>
<code>set HTTP/HTTPS_PROXY=my.company.proxy:8080</code> and <code>set NO_PROXY={minikube ip}</code>.<br>
Then Kubernetes dashboard started to work for me.</p>
<p>Now for the real problem, when running the command:<br>
<code>kubectl run image-name --image=192.168.99.100:5000/image/name --port=9999</code>
to pull the image from my local docker registry into Kubernetes, it says:</p>
<blockquote>
<p>deployment "image-name" created</p>
</blockquote>
<p>But inside Kubernetes > Deployments I'm getting the following error:</p>
<blockquote>
<p>Failed to pull image "192.168.99.109:5000/image/name": image pull failed for 192.168.99.100:5000/image/name:latest, this may be because there are no credentials on this request. details: (Error response from daemon: Get <a href="https://192.168.99.100:5000/v1/_ping" rel="nofollow noreferrer">https://192.168.99.100:5000/v1/_ping</a>: Tunnel or SSL Forbidden)</p>
</blockquote>
<p>Can anyone help here with that <strong>Tunnel or SSL Forbidden error</strong>? It's driving me crazy, and I've tried so many ways of configuring <code>--insecure-registry</code> inside docker, inside Kubernetes, and when running the docker registry.</p>
<p>BTW, why is it referring to v1/_ping? I'm using docker registry v2.</p>
| <p>It seems like minikube cannot see the network that your registry is running on. Can you try running <code>minikube ssh</code> and then run your curl for the catalog from inside the VM?</p>
<p>Also, as an alternative, you could run <code>eval $(minikube docker-env)</code>, which will set your local docker client to use the docker daemon inside minikube. </p>
<p>So for example if you built an image tagged with <code>myimage/foo</code> it would build and put that image on the minikube docker host, so when you deployed the image, it wouldn't need to be pulled. </p>
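<p>A hedged sketch of that flow, reusing the names from the question:</p>
<pre><code># point the local docker client at minikube's docker daemon
eval $(minikube docker-env)

# build (or re-tag) the image directly on the minikube host, so no registry pull is needed
docker build -t 192.168.99.100:5000/image/name:v1 .

# with a non-:latest tag the default imagePullPolicy is IfNotPresent, so the local image is used
kubectl run image-name --image=192.168.99.100:5000/image/name:v1 --port=9999
</code></pre>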
|
<p>I'm new to k8s, and recently, I read the cinder volume plugin source code:
<a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/volume/cinder" rel="nofollow noreferrer">cinder volume plugin</a>.</p>
<p>I don't know how the plugin works, and how it communicates with cinder?
And I don't find the request and response in code.</p>
<p>Does the cinder volume plugin call cinder API or other ways?</p>
| <p>cinder volume is a <a href="https://kubernetes.io/docs/user-guide/persistent-volumes/" rel="nofollow noreferrer">persistent volume</a>, more precisely one of the persistent volume <a href="https://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses" rel="nofollow noreferrer">storage classes</a>.</p>
<blockquote>
<p>Each StorageClass contains the fields provisioner and parameters, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.</p>
</blockquote>
<p>Cinder specifically is an <a href="http://docs.openstack.org/admin-guide/dashboard-manage-volumes.html" rel="nofollow noreferrer">Openstack volume type</a>.<br>
It is an <a href="https://wiki.openstack.org/wiki/Cinder" rel="nofollow noreferrer">OpenStack Block Storage Cinder</a>, which:</p>
<blockquote>
<ul>
<li>implements services and libraries to provide on demand, self-service access to Block Storage resources. </li>
<li>Provides Software Defined Block Storage via abstraction and automation on top of various traditional backend block storage devices.</li>
</ul>
</blockquote>
<p>You can see how Kubernetes uses cinder in <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder_test.go" rel="nofollow noreferrer"><code>pkg/volume/cinder/cinder_test.go</code></a>.<br>
However, as mentioned in "<a href="https://stackoverflow.com/a/40841681/6309">Kubernetes Cinder volumes do not mount with cloud-provider=openstack</a>":</p>
<blockquote>
<p>the Cinder provisioner is not implemented yet, given the following statement in the docs (<a href="http://kubernetes.io/docs/user-guide/persistent-volumes/#provisioner" rel="nofollow noreferrer">StorageClasses Provisioner</a>):</p>
<blockquote>
<p>During beta, the available provisioner types are <code>kubernetes.io/aws-ebs</code> and <code>kubernetes.io/gce-pd</code></p>
</blockquote>
</blockquote>
<p>So no "<code>kubernetes.io/cinder</code>" yet.<br>
Yet, <a href="https://stackoverflow.com/users/4457564/ewa">Ewa</a> mentions <a href="https://stackoverflow.com/questions/41658969/how-the-kubernetes-cinder-volume-plugin-works/41659037?noredirect=1#comment72464885_41659037">in the comments</a> making it work: see "<a href="https://stackoverflow.com/a/42670021/6309">Kubernetes Cinder volumes do not mount with <code>cloud-provider=openstack</code></a>" as an example.</p>
|
<p>Is there anyway to propagate all kubernetes events to google cloud log?<br><br> For instance, a pod creation/deletion or liveness probing failed, I knew I can use kubectl get events in a console.<br>However, I would like to preserve those events in a log file in the cloud log with other pod level logs.<br> It is quite helpful information.</p>
| <p>It seems that OP found the logs, but I wasn't able to on GKE (1.4.7) with Stackdriver. It was a little tricky to figure out, so I thought I'd share for others. I was able to get them by creating an eventer deployment with the gcl sink.</p>
<p>For example:</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-app: eventer
name: eventer
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
k8s-app: eventer
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
k8s-app: eventer
spec:
containers:
- name: eventer
command:
- /eventer
- --source=kubernetes:''
- --sink=gcl
image: gcr.io/google_containers/heapster:v1.2.0
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
terminationMessagePath: /dev/termination-log
restartPolicy: Always
terminationGracePeriodSeconds: 30
</code></pre>
<p>Then, search for logs with an advanced filter (substitute your GCE project name):</p>
<pre><code>resource.type="global"
logName="projects/project-name/logs/kubernetes.io%2Fevents"
</code></pre>
|
<p>I'm very new to Kubernetes and am using k8s v1.4, Minikube v0.15.0 and the Spotify Maven Docker plugin.
<br>The build process of my project creates a Docker image and pushes it directly into the Docker engine of Minikube.</p>
<p>The pods are created by the Deployment I've created (using replica set), and the strategy was set to <code>type: RollingUpdate</code>.</p>
<p>I saw this in the documentation:</p>
<blockquote>
<p><strong>Note</strong>: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed.</p>
</blockquote>
<p><br>I'm searching for an easy way/workaround to automate the flow:
Build triggered > a new Docker image is pushed (without changing the version) > the Deployment updates the pod > the service exposes the new pod.
<br></p>
| <p>When not changing the container image name or tag, you would just scale your application to 0 and back to the original size with something like:</p>
<pre><code>kubectl scale --replicas=0 deployment application
kubectl scale --replicas=1 deployment application
</code></pre>
<p>As mentioned in the comments already <code>ImagePullPolicy: Always</code> is then required in your configuration.</p>
<p>When changing the image, I found this to be the most straightforward way to update the deployment:</p>
<pre><code>kubectl set image deployment/application app-container=$IMAGE
</code></pre>
<p>Not changing the image has the downside that you'll have nothing to fall back to in case of problems. Therefore I'd not suggest using this outside of a development environment.</p>
<hr>
<p>Edit: small bonus - keeping the scale in sync before and after could look something like this:</p>
<pre><code>replica_spec=$(kubectl get deployment/application -o jsonpath='{.spec.replicas}')
kubectl scale --replicas=0 deployment application
kubectl scale --replicas=$replica_spec deployment application
</code></pre>
<hr>
<p>Cheers</p>
|
<p>I am trying to scrape pod-level info using Prometheus with Kubernetes. Here is the config I am using:</p>
<pre><code> - job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- api_servers:
- 'https://kubernetes.default'
role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: (.+):(?:\d+);(\d+)
replacement: ${1}:${2}
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_pod_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
</code></pre>
<p>But I don't see any info in Grafana. Do I need to make any changes in my apps?
<a href="https://i.stack.imgur.com/cqjHS.png" rel="noreferrer">snapshot</a></p>
| <p>With that configuration the first action asks that the pod be annotated with <code>prometheus.io/scrape=true</code>. Have you set that annotation on the pods in question?</p>
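<p>A hedged sketch of what that looks like in a deployment's pod template (the port and path values are assumptions; set them to wherever your app serves its metrics):</p>
<pre><code>spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"      # assumption: the port exposing /metrics
        prometheus.io/path: "/metrics"  # only needed if the path differs from the default
</code></pre>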
|
<p>I am trying to understand <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">Stateful Sets</a>. How does their use differ from the use of "stateless" Pods with Persistent Volumes? That is, assuming that a "normal" Pod may lay claim to persistent storage, what obvious thing am I missing that requires this new construct (with ordered start/stop and so on)?</p>
| <p>Yes, a regular pod can use a persistent volume. However, sometimes you have multiple pods that logically form a "group". Examples of this would be database replicas, ZooKeeper hosts, Kafka nodes, etc. In all of these cases there's a bunch of servers and they work together and talk to each other. What's special about them is that each individual in the group has an identity. For example, for a database cluster one is the master and two are followers and each of the followers communicates with the master letting it know what it has and has not synced. So the followers know that "db-x-0" is the master and the master knows that "db-x-2" is a follower and has all the data up to a certain point but still needs data beyond that.</p>
<p>In such situations you need a few things you can't easily get from a regular pod:</p>
<ol>
<li>A predictable name: you want to start your pods telling them where to find each other so they can form a cluster, elect a leader, etc. but you need to know their names in advance to do that. Normal pod names are random so you can't know them in advance.</li>
<li>A stable address/DNS name: you want whatever names were available in step (1) to stay the same. If a normal pod restarts (you redeploy, the host where it was running dies, etc.) on another host it'll get a new name and a new IP address. </li>
<li>A persistent <strong>link</strong> between an individual in the group and their persistent volume: if the host where one of your database master was running dies it'll get moved to a new host but should connect to the <strong>same</strong> persistent volume as there's one and only 1 volume that contains the right data for that "individual". So, for example, if you redeploy your group of 3 database hosts you want the same individual (by DNS name and IP address) to get the same persistent volume so the master is still the master and still has the same data, replica1 gets it's data, etc.</li>
</ol>
<p>StatefulSets solve these issues because they provide (quoting from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/</a>):</p>
<ol>
<li>Stable, unique network identifiers.</li>
<li>Stable, persistent storage.</li>
<li>Ordered, graceful deployment and scaling.</li>
<li>Ordered, graceful deletion and termination.</li>
</ol>
<p>I didn't really talk about (3) and (4) but that can also help with clusters as you can tell the first one to deploy to become the master and the next one find the first and treat it as master, etc.</p>
<p>As some have noted, you can indeed get <strong>some</strong> of the same benefits by using regular pods and services, but it's much more work. For example, if you wanted 3 database instances you could manually create 3 deployments and 3 services. Note that you must manually create <strong>3 deployments</strong> as you can't have a service point to a single pod in a deployment. Then, to scale up you'd manually create another deployment and another service. This does work and was somewhat common practice before PetSet/StatefulSet came along. Note that it is missing some of the benefits listed above (persistent volume mapping & fixed start order, for example).</p>
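<p>To make the pieces above concrete, a minimal hedged sketch of a StatefulSet plus its headless service (API version as of Kubernetes 1.5; all names and the image are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None            # headless: gives stable DNS names db-0.db, db-1.db, ...
  selector:
    app: db
  ports:
  - port: 3306
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:5.6
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD   # illustrative only; do not do this in production
          value: "1"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:      # each replica gets, and keeps, its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
</code></pre>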
|
<p>I am no kind of networking genius, and I am a Kubernetes rookie. (What could possibly go wrong?)</p>
<p>At work, I am often behind a VPN. I have found that <code>minikube</code> operations hang attempting to connect to my <code>minikube</code>-installed Kubernetes cluster (I'm using VirtualBox on a Mac). When I disconnect from the VPN, everything works fine.</p>
<p>I've tried invoking <code>minikube</code> using something like <code>env http_proxy=foo.bar.com https_proxy=foo.bar.com minikube whatever</code> while on the VPN, but this merely reports that the network is unreachable (hey, at least it's not a hang).</p>
<p>This exhausts my staggering expertise in these two areas. :-)</p>
<p>Since it is merely an inconvenience, I find myself often disconnecting from and reconnecting to the VPN throughout the day, but I hate magic. Why am I encountering this behavior, and what can I do to fix it?</p>
| <p>It is the docker daemon inside minikube that can't connect to the internet. </p>
<p>If your VPN enforces a proxy then you need to start it with some docker environment variables. This is how I do it. It is dependent on the environment in my shell but you'll get the idea.</p>
<pre><code>minikube start --docker-env HTTP_PROXY=$http_proxy --docker-env HTTPS_PROXY=$https_proxy
</code></pre>
<p>To access my minikube with kubectl I also have to add its IP to NO_PROXY:</p>
<pre><code>export NO_PROXY=$NO_PROXY,$(minikube ip)
export no_proxy=$no_proxy,$(minikube ip)
</code></pre>
|
<p>I have a simple MYSQL pod sitting behind a MYSQL service. </p>
<p>Additionally I have another pod that is running a python process that is trying to connect to the MYSQL pod. </p>
<p>If I try connecting to the IP address of the MYSQL pod manually from the python pod, everything is A-OK. However if I try connecting to the MYSQL <em>service</em> then I get an error that I can't connect to MYSQL. </p>
<pre><code>grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe pod mysqlpod
Name: mysqlpod
Namespace: default
Node: minikube/192.168.99.100
Start Time: Fri, 20 Jan 2017 11:10:50 -0600
Labels: <none>
Status: Running
IP: 172.17.0.4
Controllers: <none>
grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe service mysqlservice
Name: mysqlservice
Namespace: default
Labels: <none>
Selector: db=mysqllike
Type: ClusterIP
IP: None
Port: <unset> 3306/TCP
Endpoints: 172.17.0.5:3306
Session Affinity: None
No events.
grant@grant-Latitude-E7450:~/k8s/objects$ kubectl describe pod basic-python-model
Name: basic-python-model
Namespace: default
Node: minikube/192.168.99.100
Start Time: Fri, 20 Jan 2017 12:01:50 -0600
Labels: db=mysqllike
Status: Running
IP: 172.17.0.5
Controllers: <none>
</code></pre>
<p>If I attach to my python container and do an nslookup of the mysqlservice, then I'm actually getting the <strong>wrong</strong> IP. As you saw above the IP of the mysqlpod is 172.17.0.4 while nslookup mysqlservice is resolving to 172.17.0.5. </p>
<pre><code>grant@grant-Latitude-E7450:~/k8s/objects$ k8s exec -it basic-python-model bash
[root@basic-python-model /]# nslookup mysqlservice
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: mysqlservice.default.svc.cluster.local
Address: 172.17.0.5
</code></pre>
<p>I'm fairly new to kubernetes, but I've been banging my head on this issue for a few hours and I can't seem to understand what I'm doing wrong.</p>
| <p>So this was the <em>exact correct</em> behavior but I just misconfigured my pods.</p>
<p>For future people who are stuck:</p>
<p>The selector defined in a Kubernetes service must match the labels of the pod(s) you wish to serve. For example, in my MySqlService.yaml file I use the selector <code>name: mysqlpod</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysqlservice
spec:
clusterIP: None
ports:
- port: 3306
targetPort: 3306
selector:
name: mysqlpod
</code></pre>
<p>Thus in my MySqlPod.yaml file I need an exactly matching label. </p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: mysqlpod
labels:
name: mysqlpod
spec:
...
</code></pre>
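<p>A quick way to confirm that the selector and labels line up is to check whether the service has actually picked up the pod as an endpoint (commands below use the names from this example):</p>
<pre><code>kubectl describe service mysqlservice   # the Endpoints line should list the pod's IP
kubectl get endpoints mysqlservice      # same information in short form
kubectl get pods -l name=mysqlpod       # lists the pods the selector really matches
</code></pre>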
|
<p>I'm setting up a Flask/uwsgi web server. I'm still wondering about the microservice architecture:</p>
<p>Should I put both nginx and Flask with uwsgi in one container or shall I put them in two different containers and link them?</p>
<p>I intend to run these services in a Kubernetes cluster.</p>
<p>Thanks</p>
| <p><strong>Short answer:</strong></p>
<p>I would deploy the nginx and uwsgi/flask application as separate containers. This gives you a more flexible architecture that allows you to link more microservices containers to the nginx instance, as your requirements for more services grow.</p>
<p><strong>Explanation:</strong></p>
<p>With docker the usual strategy is to split the nginx service and the uwsgi/flask service into two separate containers. You can then link both of them using links. This is a common architecture philosophy in the docker world. Tools like docker-compose simplify the management of running multiple containers and forming links between them. The following docker-compose configuration file shows an example of this:</p>
<pre><code>version: '2'
services:
app:
image: flask_app:latest
volumes:
- /etc/app.cfg:/etc/app.cfg:ro
expose:
- "8090"
http_proxy:
image: "nginx:stable"
expose:
- "80"
ports:
- "80:8090"
volumes:
- /etc/app_nginx/conf.d/:/etc/nginx/conf.d/:ro
</code></pre>
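<p>For this to work, the mounted <code>/etc/app_nginx/conf.d/</code> directory has to contain a server block that proxies to the app container. A minimal sketch of such a file (assumed contents, not part of the original setup; it presumes uwsgi is serving plain HTTP on 8090, otherwise you'd use <code>uwsgi_pass</code>):</p>
<pre><code># /etc/app_nginx/conf.d/app.conf
server {
    listen 80;

    location / {
        # "app" resolves to the flask/uwsgi container on the compose network
        proxy_pass http://app:8090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
</code></pre>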
<p>This means that if you want to add more application containers, you can attach them to the same nginx proxy easily by linking them. Furthermore, if you want to upgrade one part of your infrastructure, say upgrade nginx, or shift from apache to nginx, you only rebuild the relevant container and leave all the rest in place.</p>
<p>If you were to add both services to a single container (e.g. by launching a supervisord process from the Dockerfile ENTRYPOINT), that would allow you to opt more easily for communication between the nginx and uwsgi processes over a unix socket file rather than by IP, but I don't think this in itself is a strong enough reason to put both in the same container.</p>
<p>Also, consider that if you eventually end up running twenty microservices, each running its own nginx instance, you now have twenty sets of nginx (access.log/error.log) logs to track across twenty containers.</p>
<p>If you are employing a "microservices" architecture, that implies that with time you will be adding more and more containers. In such an ecosystem, running nginx as a separate docker process and linking the microservices to it makes it easier to grow in line with your expanding requirements.</p>
<p>Furthermore, certain tasks only need to be done once. For example, SSL termination can be done at the nginx container, with it configured with the appropriate SSL certificates once, irrespective of how many microservices are being attached to it, if the nginx service is running on its own container.</p>
<p><strong>A note about service discovery</strong> </p>
<p>If the containers are running in the same host then linking all the containers is easy. If the containers are running over multiple hosts, using Kubernetes or Docker swarm, then things may become a bit more complicated, as you (or your cluster framework) needs to be able to link your DNS address to your nginx instance, and the docker containers need to be able to 'find' each other - this adds a bit of conceptual overhead. Kubernetes helps you achieve this by grouping containers into pods, defining services, etc.</p>
|
<p>I have configured a Kubernetes cluster with 2 masters, 3 etcds and 20 nodes, with the masters fronted by a load balancer. I followed <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="noreferrer">https://coreos.com/kubernetes/docs/latest/getting-started.html</a> when creating the cluster. Everything works as expected: I can deploy pods without issue and the pods run fine. But when I try to tail the logs, kubectl suddenly returns unexpected EOF and stops tailing, and I have to execute the kubectl logs command again to continue. This is very annoying as it doesn't even stay connected for one minute.</p>
<p>The command which I execute is,
<code>kubectl logs -f --tail=100 <pod_name></code> or <code>kubectl logs -f <pod_name></code></p>
<p>After less than one minute it returns <code>error: unexpected EOF</code>.</p>
<p>Appreciate your input to sort out the issue.</p>
| <p>AWS ELB has a default "IdleTimeout" of 60 seconds, which can be increased up to 3600 seconds. I can confirm that increasing this value solves this issue.</p>
<p>Here you can find more information about it:
<a href="http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html" rel="nofollow noreferrer">http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html</a></p>
|
<p>I have set up one persistent volume with 100Gi capacity and two different persistent volume claims with 10Gi capacity each in Kubernetes. The first PVC is bound to the PV but the second one is in Pending state. And "kubectl get pvc" shows the capacity of the first PVC as 100Gi. </p>
<p>Is it possible to bind multiple PVCs to the same PV? And why is the capacity shown by "kubectl get pvc" the capacity of the PV rather than the capacity of the PVC?</p>
<p>thanks,
Jerry</p>
| <p>No, that's not possible: a claim binds an entire volume, so the first claim took the whole 100Gi PV and the second will stay Pending until another volume is available (see the sketch below). That is also why "kubectl get pvc" reports the capacity of the bound PV rather than the 10Gi you requested.
Cheers </p>
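<p>A minimal sketch of the usual fix (name, size, and backing store are illustrative; the access modes just have to satisfy what the claim asks for): create a second PersistentVolume so the pending claim has something to bind to.</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0002
spec:
  capacity:
    storage: 10Gi        # at least as large as the pending claim's request
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/pv0002" # placeholder backend; use whatever storage you actually have
</code></pre>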
|
<p>I've been doing a lot of digging on Kubernetes, and I'm liking what I see a lot! One thing I've been unable to get a clear idea about is what the exact distinctions are between the Deployment and StatefulSet resources and in which scenarios would you use each (or is one generally preferred over the other).</p>
| <p>Deployments and ReplicationControllers are meant for stateless usage and are rather lightweight. <a href="https://kubernetes.io/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes/" rel="noreferrer">StatefulSets</a> are used when state has to be persisted. Therefore the latter use <code>volumeClaimTemplates</code> / claims on persistent volumes to ensure they can keep the state across component restarts.</p>
<p>So if your application is stateful or if you want to deploy stateful storage on top of Kubernetes use a StatefulSet.</p>
<p>If your application is stateless or if state can be built up from backend-systems during the start then use Deployments.</p>
<p>Further details about running stateful applications can be found in the <a href="https://kubernetes.io/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes/" rel="noreferrer">2016 Kubernetes blog entry about stateful applications</a>.</p>
|
<p>I want to change the number of replications (pods) for a <strong>Deployment</strong> using the Kubernetes API (v1beta1).</p>
<p>For now I'm able to increase the replicas from CLI using the command: </p>
<pre><code>kubectl scale --replicas=3 deployment my-deployment
</code></pre>
<p>In the <a href="https://kubernetes.io/docs/api-reference/v1.5/#replace-scale" rel="noreferrer">Kubernetes API documentation</a> it's mentioned that there is a PUT request to do the same:</p>
<pre><code>PUT /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/scale
</code></pre>
<p>but there is no example of how to do it.</p>
<p>I'm not sure what to send in the request body in order to perform the update.</p>
| <p>The easiest way is to retrieve the actual data first with:</p>
<pre><code>GET /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/scale
</code></pre>
<p>This will give you a YAML or JSON object which you can modify and send back with the <code>PUT</code> request.</p>
<hr>
<p>With curl the roundtrip looks like this:</p>
<pre><code>API_URL="http://kubernetes:8080/apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/scale"
curl -H 'Accept: application/json' $API_URL > scale.json
# edit scale.json
curl -X PUT -d @scale.json -H 'Content-Type: application/json' $API_URL
</code></pre>
<hr>
<p>Alternatively you could just use a <code>PATCH</code> call:</p>
<pre><code>PAYLOAD='[{"op":"replace","path":"/spec/replicas","value":"3"}]'
curl -X PATCH -d$PAYLOAD -H 'Content-Type: application/json-patch+json' $API_URL
</code></pre>
|
<p>Is there a way to configure Portainer's dashboard to show Minikube's docker?</p>
<p><strong>Portainer</strong></p>
<p>Installed in the local docker (toolbox), on VM under windows 7;
the dashboard connection to the local (inside) docker is working fine.</p>
<p><code>docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer</code></p>
<p><strong>Minikube</strong></p>
<p>Installed in another VM on the same machine with a different port.</p>
<ul>
<li>I've created a new Portainer Endpoint using the portainer UI</li>
<li>Set the Endpoint URL (minikubeIp:2375)</li>
<li>Selected TLS and point to the path of the cert files</li>
</ul>
<p><code>c:/users/<myusername>/.minikube/certs</code></p>
<p>but keep getting an error on the dashboard tab:</p>
<blockquote>
<p>Failed to load resource: the server responded with a status of 502 (Bad Gateway)</p>
</blockquote>
<p>I'm getting the same error also when configuring the Endpoint <em>without</em> TLS.</p>
<p>Is it <em>possible</em> to configure Portainer to work with Minikube's Docker?</p>
| <p>Are you sure that the Docker API is exposed in the Minikube configuration?</p>
<blockquote>
<p>Failed to load resource: the server responded with a status of 502 (Bad Gateway)</p>
</blockquote>
<p>This error is generally raised when Portainer cannot proxy requests to the Docker API.</p>
<p>A simple way to verify that would be to use the Docker CLI and check if Minikube's Docker API is exposed:</p>
<p><code>docker -H minikubeIp:2375 info</code></p>
<p>If this is returning a connection error, that means that the Docker API is not exposed and thus, Portainer will not be able to connect to it.</p>
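<p>Note that minikube normally exposes its Docker daemon on port 2376 with TLS, not on 2375. <code>minikube docker-env</code> prints the exact host, port and certificate path to use (output below is illustrative):</p>
<pre><code>$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/<myusername>/.minikube/certs"

$ eval $(minikube docker-env)   # in a POSIX shell; afterwards `docker info` talks to minikube
</code></pre>
<p>Those are also the values (endpoint URL, port and certificates) that the Portainer endpoint needs to match.</p>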
|
<p>I have deployed GitLab to my Azure Kubernetes cluster with persistent storage defined the following way:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: gitlab-data
namespace: gitlab
spec:
capacity:
storage: 8Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/tmp/gitlab-data"
</code></pre>
<p>That worked fine for some days. Suddenly all my data stored in GitLab is gone and I don't know why. I was assuming that the <code>hostPath</code>-defined PersistentVolume is really persistent, because it is saved on a node and somehow replicated to all existing nodes. But my data is now lost and I cannot figure out why. I looked up the uptime of each node and there was no restart. I logged in to the nodes and checked the path, and as far as I can see the data is gone.</p>
<p>So how do PersistentVolume mounts work in Kubernetes? Is the data really persisted on the nodes? How do multiple nodes share the data if a deployment is split across multiple nodes? Is <code>hostPath</code> reliable persistent storage?</p>
| <p><code>hostPath</code> doesn't share or replicate data between nodes, and once your pod starts on another node the data will be lost. You should consider using some external shared storage instead. </p>
<p>Here's the related quote from the official docs:</p>
<blockquote>
<p>HostPath (single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)</p>
</blockquote>
<p>from <a href="https://kubernetes.io/docs/user-guide/persistent-volumes/" rel="noreferrer">kubernetes.io/docs/user-guide/persistent-volumes/</a> </p>
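<p>On Azure, one common replacement is an <code>azureFile</code>-backed volume (an Azure Files share can be mounted ReadWriteMany). A rough sketch, assuming a storage-account secret named <code>azure-secret</code> (containing <code>azurestorageaccountname</code>/<code>azurestorageaccountkey</code>) and a file share named <code>gitlab-data</code> already exist; all names are illustrative:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: gitlab-data
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: gitlab-data
    readOnly: false
</code></pre>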
|
<p><strong>Scenario:</strong></p>
<p>I'm trying to run a basic <code>ls</code> command using <code>kubernetes</code> package via <code>cli.connect_post_namespaced_pod_exec()</code> however I get a stacktrace that I do not know how to debug. Yes I have tried searching around but I'm not really sure what the problem is as I'm using documentation example from <a href="https://github.com/kubernetes-incubator/client-python/blob/master/kubernetes/docs/CoreV1Api.md#connect_get_namespaced_pod_exec" rel="nofollow noreferrer">here</a></p>
<p><strong>OS:</strong></p>
<p>macOS Sierra 10.12.2</p>
<p><strong>Code:</strong></p>
<pre><code>#!/usr/local/bin/python2.7
import logging
from pprint import pprint
from kubernetes import client, config
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)s() ] %(message)s"
level = logging.DEBUG
logging.basicConfig(format=FORMAT, level=level)
def main():
path_to_config = "/Users/acabrer/.kube/config"
config.load_kube_config(config_file=path_to_config)
ns = "default"
pod = "nginx"
cmd = "ls"
    cli = client.CoreV1Api()
response = cli.connect_post_namespaced_pod_exec(pod, ns, stderr=True, stdin=True, stdout=True, command=cmd)
pprint(response)
if __name__ == '__main__':
main()
</code></pre>
<p><strong>Stack Trace:</strong></p>
<pre><code>Traceback (most recent call last):
File "/Users/acabrer/kube.py", line 16, in <module>
main()
File "/Users/acabrer/kube.py", line 12, in main
response = cli.connect_post_namespaced_pod_exec(pod, ns, stderr=True, stdin=True, stdout=True)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 3605, in connect_post_namespaced_pod_exec
(data) = self.connect_post_namespaced_pod_exec_with_http_info(name, namespace, **kwargs)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 3715, in connect_post_namespaced_pod_exec_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 328, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 152, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 373, in request
body=body)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 257, in POST
body=body)
File "/usr/local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 213, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Sat, 21 Jan 2017 00:55:28 GMT', 'Content-Length': '139', 'Content-Type': 'application/json'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Upgrade request required","reason":"BadRequest","code":400}
</code></pre>
<p>Any input would be much appreaciated.</p>
<p><strong>Edit 1:</strong></p>
<pre><code>abrahams-mbp:.kube acabrer$ curl --help |grep TLSv
-1, --tlsv1 Use >= TLSv1 (SSL)
--tlsv1.0 Use TLSv1.0 (SSL)
--tlsv1.1 Use TLSv1.1 (SSL)
--tlsv1.2 Use TLSv1.2 (SSL)
abrahams-mbp:.kube acabrer$ python2.7 -c "import ssl; print ssl.OPENSSL_VERSION_INFO"
(1, 0, 2, 10, 15)
</code></pre>
<p><strong>Edit 2:</strong></p>
<pre><code>abrahams-mbp:.kube acabrer$ curl --tlsv1.2 https://x.x.x.x -k
Unauthorized
abrahams-mbp:.kube acabrer$ curl --tlsv1.1 https://x.x.x.x -k
curl: (35) Unknown SSL protocol error in connection to x.x.x.x:-9836
</code></pre>
<p><strong>Edit 3:</strong>
I put some print statements to see the full request information in <a href="https://github.com/kubernetes-incubator/client-python/blob/master/kubernetes/client/api_client.py#L341" rel="nofollow noreferrer">api_client.py</a> and this is what I see.</p>
<p><strong>Note:</strong> I removed the ip-address to my endpoint for security.</p>
<pre><code>bash-3.2# vim /usr/local/lib/python2.7/site-packages/kubernetes/client/api_client.py
bash-3.2# /Users/acabrer/kube.py
################
POST
https://x.x.x.x/api/v1/namespaces/default/pods/nginx/exec
[('stdin', True), ('command', 'ls'), ('stderr', True), ('stdout', True)]
{'Content-Type': 'application/json', 'Accept': '*/*', 'User-Agent': 'Swagger-Codegen/1.0.0-alpha/python'}
[]
None
################
</code></pre>
<p>Thanks,</p>
<p>-Abe.</p>
| <p>To answer my own question here, this is a bug in actual python binding referenced here: <a href="https://github.com/kubernetes-incubator/client-python/issues/58" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/client-python/issues/58</a></p>
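<p>For readers hitting this later: newer releases of the Python client route exec calls through a WebSocket helper instead of a plain HTTP POST. A sketch of what that looks like (this assumes a recent <code>kubernetes</code> package, not the version used above):</p>
<pre><code>from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()
api = client.CoreV1Api()

# command is a list; stream() performs the WebSocket upgrade the API server demands
output = stream(api.connect_get_namespaced_pod_exec,
                "nginx", "default",
                command=["ls", "-l"],
                stderr=True, stdin=False,
                stdout=True, tty=False)
print(output)
</code></pre>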
|
<p>I want to enable Stackdriver logging with my Kubernetes cluster on GKE.</p>
<p>It's stated here: <a href="https://kubernetes.io/docs/user-guide/logging/stackdriver/" rel="nofollow noreferrer">https://kubernetes.io/docs/user-guide/logging/stackdriver/</a></p>
<blockquote>
<p>This article assumes that you have created a Kubernetes cluster with cluster-level logging support for sending logs to Stackdriver Logging. You can do this either by selecting the Enable Stackdriver Logging checkbox in the create cluster dialogue in GKE, or by setting the <code>KUBE_LOGGING_DESTINATION</code> flag to gcp when manually starting a cluster using <code>kube-up.sh</code>.</p>
</blockquote>
<p>But my cluster was created without this option enabled.</p>
<p>How do I change the environment variable while my cluster is running?</p>
| <p>Unfortunately, logging isn't a setting that can be enabled/disabled on a cluster once it is running. This is something that we hope to change in the near future, but in the meantime your best bet is to delete and recreate your cluster with logging enabled (sorry!). </p>
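<p>When recreating, logging can be requested up front; at the time of writing the relevant <code>gcloud</code> flags looked like this (cluster name and zone are placeholders):</p>
<pre><code>gcloud container clusters create my-cluster --zone us-central1-a \
  --enable-cloud-logging --enable-cloud-monitoring
</code></pre>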
|
<p>I am trying to get the output of my pod logs into Stackdriver, but I am running into an issue where they are not being sent to Stackdriver. </p>
<p>If I look at the GKE cluster details, it is showing this:</p>
<pre><code>Stackdriver Logging - Disabled
Stackdriver Monitoring - Enabled
</code></pre>
<p>I cannot find any information on how to enable Stackdriver on a running cluster. </p>
<p>There is a running heapster pod, and I have run this command as this wasn't set:</p>
<pre><code>gcloud container clusters update <cluster> --monitoring-service=monitoring.googleapis.com
</code></pre>
<p>That is now showing the correct service, but this doesn't solve the logging issue. Is anyone able to shed any light on how to enable the logging?</p>
<p>Thanks</p>
| <p>Currently there is no support for enabling logging in a GKE cluster after it's been created. We are aware of the problem and we're going to introduce such a possibility.</p>
<p>In the meantime you can try the following workarounds:</p>
<ul>
<li>Create a fluentd DaemonSet on your own using <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-gcp/fluentd-gcp-ds.yaml" rel="noreferrer">fluentd-gcp-ds.yaml</a>. You need to change the namespace there to avoid interaction with the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager" rel="noreferrer">addon-manager</a>. The disadvantage of this approach is that GKE won't manage/upgrade your fluentd DaemonSet.</li>
<li>Migrate to a new cluster with logging enabled, if that works for you.</li>
</ul>
<p>Please let me know if you have more questions. Apologies for the inconvenience.</p>
|
<p>To preface this I’m working on the GCE, and Kuberenetes. My goal is simply to expose all microservices on my cluster over SSL. Ideally it would work the same as when you expose a deployment via type=‘LoadBalancer’ and get a single external IP. That is my goal but SSL is not available with those basic load balancers. </p>
<p>From my research the best current solution would be to set up an nginx ingress controller, use ingress resources and services to expose my micro services. Below is a diagram I drew up with my understanding of this process. </p>
<p><a href="https://i.stack.imgur.com/bGt6B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bGt6B.png" alt="enter image description here"></a></p>
<p>I’ve got this all to successfully work over HTTP. I deployed the default nginx controller from here: <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx</a> . As well as the default backend and service for the default backend. The ingress for my own micro service has rules set as my domain name and path: /. </p>
<p>This was successful but there were two things that were confusing me a bit. </p>
<ol>
<li><p>When exposing the service resource for my backend (microservice) one guide I followed used type=‘NodePort’ and the other just put a port to reach the service. Both set the target port to the backend app port. I tried this both ways and they both seemed to work. Guide one is from the link above. Guide 2: <a href="http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html" rel="nofollow noreferrer">http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html</a>. What is the difference here?</p></li>
<li><p>Another point of confusion is that my ingress always gets two IPs. My initial thought process was that there should only be one external ip and that would hit my ingress which is then directed by nginx for the routing. Or is the ip directly to the nginx? Anyway the first IP address created seemed to give me the expected results where as visiting the second IP fails.</p></li>
</ol>
<p>Despite my confusion things seemed to work fine over HTTP. Over HTTPS not so much. At first when I made a web request over https things would just hang. I opened 443 on my firewall rules which seemed to work however I would hit my default backend rather than my microservice.</p>
<p>Reading led me to this from Kubernetes docs: Currently the Ingress resource only supports http rules.
This may explain why I am hitting the default backend because my rules are only for HTTP. But if so how am I supposed to use this approach for SSL?</p>
<p>Another thing I noticed is that if I write an ingress resource with no rules and give it my desired backend I still get directed to my original default backend. This is even more odd because kubectl describe ing updated and states that my default backend is my desired backend...</p>
<p>Any help or guidance would be much appreciated. Thanks!</p>
| <p>So, for #2, you've probably ended up provisioning a Google HTTP(S) LoadBalancer, probably because you're missing the <code>kubernetes.io/ingress.class: "nginx"</code> annotation as described here: <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers" rel="noreferrer">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers</a>. </p>
<p>GKE has its own ingress controller, which you override by putting that annotation on your Ingress resources so that only the nginx controller picks them up. <a href="https://beroux.com/english/articles/kubernetes/?part=3" rel="noreferrer">This article</a> has a good explanation of that.</p>
<p>The <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="noreferrer">kubernetes docs</a> have a pretty good description of what <code>NodePort</code> means - basically, the service will allocate a port from a high range on each Node in your cluster, and Nodes will forward traffic from that port to your service. It's one way of setting up load balancers in different environments, but for your approach it's not necessary. You can just omit the <code>type</code> field of your microservice's Service and it will be assigned the default type, which is <code>ClusterIP</code>.</p>
<p>As for SSL, it could be a few different things. I would make sure you've got the Secret set up just as they describe in the <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#https" rel="noreferrer">nginx controller docs</a>, e.g. with a <code>tls.crt</code> and a <code>tls.key</code> field.</p>
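<p>For reference, the usual shape of the TLS wiring (secret name, hostname and service name below are illustrative) is a TLS secret plus an Ingress that carries both the nginx class annotation and a <code>tls</code> section referencing that secret:</p>
<pre><code># kubectl create secret tls my-tls --cert=server.crt --key=server.key
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"   # keeps GKE's own controller from claiming it
spec:
  tls:
  - hosts:
    - example.mydomain.com
    secretName: my-tls
  rules:
  - host: example.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-microservice
          servicePort: 80
</code></pre>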
<p>I'd also check the logs of the nginx controller - find out which pod it's running as with <code>kubectl get pods</code>, and then tail its logs: <code>kubectl logs nginx-pod-<some-random-hash> -f</code>. This will be helpful to find out if you've misconfigured anything, like if a service does not have any endpoints configured. Most of the time I've messed up the ingress stuff, it's been due to some pretty basic misconfiguration of Services/Deployments. </p>
<p>You'll also need to set up a DNS record for your hostname pointed at the LoadBalancer's static IP, or else ping your service with cURL's <code>-H</code> <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#http" rel="noreferrer">flag as they do in the docs</a>, otherwise you might end up getting routed to the default back end 404.</p>
|
<p>I am trying to run a Postgresql database using minikube with a persistent volume claim. These are the yaml specifications:</p>
<p><strong>minikube-persistent-volume.yaml:</strong></p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: pv0001
labels:
type: hostpath
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/Users/jonathan/data"
</code></pre>
<p><strong>postgres-persistent-volume-claim.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-postgres
spec:
accessModes: [ "ReadWriteMany" ]
resources:
requests:
storage: 2Gi
</code></pre>
<p><strong>postgres-deployment.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: postgres:9.5
name: postgres
ports:
- containerPort: 5432
name: postgres
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-disk
env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
- name: POSTGRES_USER
value: keycloak
- name: POSTGRES_DATABASE
value: keycloak
- name: POSTGRES_PASSWORD
value: key
- name: POSTGRES_ROOT_PASSWORD
value: masterkey
volumes:
- name: postgres-disk
persistentVolumeClaim:
claimName: pv-postgres
</code></pre>
<p>when I start this I get the following in the logs from the deployment:</p>
<pre><code>[...]
fixing permissions on existing directory
/var/lib/postgresql/data/pgdata ... ok
initdb: could not create directory "/var/lib/postgresql/data/pgdata/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data/pgdata"
</code></pre>
<p>Why do I get this Permission denied error and what can I do about it?</p>
| <p>Maybe you're having a write permission issue with Virtualbox mounting those host folders.
Instead, use <code>/data/postgres</code> as a path and things will work.</p>
<p>Minikube automatically persists the following directories so they will be preserved even if your VM is rebooted/recreated:</p>
<ul>
<li><code>/data</code></li>
<li><code>/var/lib/localkube</code></li>
<li><code>/var/lib/docker</code></li>
</ul>
<p>Read these sections for more details:</p>
<ol>
<li><a href="https://github.com/kubernetes/minikube#persistent-volumes" rel="noreferrer">https://github.com/kubernetes/minikube#persistent-volumes</a></li>
<li><a href="https://github.com/kubernetes/minikube#mounted-host-folders" rel="noreferrer">https://github.com/kubernetes/minikube#mounted-host-folders</a></li>
</ol>
|
<p>I have installed Docker v1.13 and Kubernetes with kubeadm v1.6, and then installed the Web UI (Dashboard). I can access it, but it's missing the CPU/Memory usage graphs... Why could this happen? </p>
| <p>For me the usage graphs worked once I installed <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer">heapster</a> as an addon. Heapster requires an influxdb as data sink for the metric storage. Luckily you can deploy all those easily in k8s with the following definitions in the <code>kube-system</code> namespace (tested it with k8s <a href="https://github.com/kubernetes/kubernetes/releases/tag/v1.4.6" rel="nofollow noreferrer">1.4.6</a>): </p>
<p><strong>heapster-service.yml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
task: monitoring
# For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
# If you are NOT using this as an addon, you should comment out this line.
kubernetes.io/cluster-service: 'true'
kubernetes.io/name: Heapster
name: heapster
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 8082
selector:
k8s-app: heapster
</code></pre>
<p><strong>heapster-deployment.yml:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: heapster
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
task: monitoring
k8s-app: heapster
version: v6
spec:
containers:
- name: heapster
image: kubernetes/heapster:canary
imagePullPolicy: Always
command:
- /heapster
- --source=kubernetes:https://kubernetes.default
- --sink=influxdb:http://monitoring-influxdb:8086
</code></pre>
<p><strong>influxdb-service.yml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
task: monitoring
# For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
# If you are NOT using this as an addon, you should comment out this line.
kubernetes.io/cluster-service: 'true'
kubernetes.io/name: monitoring-influxdb
name: monitoring-influxdb
namespace: kube-system
spec:
# type: NodePort
ports:
- name: api
port: 8086
targetPort: 8086
selector:
k8s-app: influxdb
</code></pre>
<p><strong>influxdb-deployment.yml:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: monitoring-influxdb
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
task: monitoring
k8s-app: influxdb
spec:
volumes:
- name: influxdb-storage
emptyDir: {}
containers:
- name: influxdb
image: kubernetes/heapster_influxdb:v0.6
resources:
requests:
memory: "256M"
cpu: "0.1"
limits:
memory: "1G"
cpu: "1.0"
volumeMounts:
- mountPath: /data
name: influxdb-storage
</code></pre>
|
<p><strong>2 VPC's:</strong></p>
<p><strong>Primary VPC:</strong> 10.111.0.0/22</p>
<p>Primary VPC Subnet contains 4 subnets:</p>
<pre><code>10.111.0.0/25
10.111.0.128/25
10.111.1.0/25
10.111.1.128/25
</code></pre>
<p><strong>Secondary VPC:</strong> Kubernetes Minion VPC (172.16.0.0/20)</p>
<p>Additional notes:
Primary VPC && Secondary VPC uses VPC peering to enable communication between the 2 VPC's</p>
<p><strong>The issue:</strong> Is it possible to separate minion instances / nodes / pods etc out in their own VPC in order to save network space in the primary VPC. If possible I'd love to have the master and service endpoints running under the primary vpc so they are directly routable without going over public internet, and having the nodes / pods etc in their own space not cluttering up the already small ip space we have.</p>
<p>PS: The primary VPC address space is only a /22 due to ip overlap restrictions with the main corporate network.</p>
| <p>As soon as you define a service endpoint that is reachable from outside your k8s cluster (no matter if you use the <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> or <a href="https://kubernetes.io/docs/user-guide/services/#type-loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a> option), k8s opens a service port on every node in your cluster (also on the master nodes).
Every node in your cluster runs a kube-proxy, which takes care that any request on a service port gets routed to a running Pod, even if that Pod is running in an entirely different node in another VPC (given that the node is reachable via peering of course).
Further, Pods run in virtual networks that have nothing to do with your physical network of your nodes - so the <strong>Pods do not exhaust your network's IP space</strong>, but the number of nodes in your VPC/network does.
So, I think you should just limit the number of nodes in your VPC which has limited IP space (you could just place the master nodes there the way you wanted) and put the worker nodes in the other VPC. </p>
<p><em>About Node affinity of Pods</em>: You could assign Pods to specific worker nodes (see <a href="https://kubernetes.io/docs/user-guide/node-selection/" rel="nofollow noreferrer">here</a>).
For instance you could assign all your Pods to worker nodes in a single VPC, and route any public traffic to nodes in the other VPC, which will then proxy the traffic to a running Pod, but this does not tackle your IP space problem at all.</p>
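<p>For completeness, pinning Pods to a group of nodes is just a node label plus a <code>nodeSelector</code> (node names and the label key/value below are made up):</p>
<pre><code># first label the nodes you want to target:
#   kubectl label nodes worker-1 worker-2 vpc=secondary
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    vpc: secondary        # scheduler only considers nodes carrying this label
  containers:
  - name: app
    image: nginx
</code></pre>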
<p><strong>Update:</strong></p>
<p>Concerning the service endpoints: When you configure a service which is reachable from outside your k8s cluster the master node first allocates a port, which is from then on reserved for that service. That port is then opened on every node (master and worker nodes) in your cluster. The port is operated by the kube-proxy, which of course also resides on every node. The kube-proxy then takes care of the rest and proxies the incoming traffic from that port to a running pod for the corresponding service, even if that pod is running on a completely different node (internally k8s achieves this with some iptables magic). This means you can send your request to that port (lets call it now <code><service-port></code>) on any node in your cluster. Your service endpoint is basically <code><proto>://<any-worker-or-master-node-ip>:<service-port></code>. With this you could also easily setup an ELB and add all your nodes as instances, so you have a public internet-facing endpoint. All this is explained in more detail <a href="https://kubernetes.io/docs/user-guide/services/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">here</a>.</p>
|
<p>I stored an RSA key as a secret for my pod, which is used as an environment variable, but my pod keeps reporting an error because the format of my RSA key is incorrect.</p>
<p>However, the RSA key works just fine when used locally, so I believe that the RSA key is correct.</p>
<p>How can I set an RSA key as an environment variable on my pod correctly?</p>
| <p>Looks like the issue is the <strong>\n</strong> newline characters. Just removing them from my RSA key was enough to solve my issue.</p>
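<p>An alternative that sidesteps the newline problem entirely is to mount the secret as a file instead of an environment variable, so the key keeps its original formatting. A sketch (secret and key names are illustrative; this is only an excerpt of a pod spec):</p>
<pre><code>spec:
  containers:
  - name: app
    image: my-app            # placeholder image
    volumeMounts:
    - name: rsa-key
      mountPath: /etc/keys
      readOnly: true
  volumes:
  - name: rsa-key
    secret:
      secretName: my-rsa-key # the app then reads e.g. /etc/keys/id_rsa
</code></pre>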
|
<p>What's the easiest way to run a configurable number of identical jobs on Kubernetes but give each of them different parameters (like job number)?</p>
| <ol>
<li>You could either just have a template job and use bash expansions to create multiple job specifications based off that initial template.</li>
</ol>
<p>As shown in the official <a href="https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/" rel="nofollow noreferrer">Parallel Processing using Expansions</a> user guide:</p>
<pre><code>mkdir ./jobs
for i in apple banana cherry
do
cat job.yaml.txt | sed "s/\$ITEM/$i/" > ./jobs/job-$i.yaml
done
kubectl create -f ./jobs
</code></pre>
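<p>The <code>job.yaml.txt</code> template referenced above isn't shown here; a minimal version could look like the following (image and command are placeholders, <code>$ITEM</code> is what the <code>sed</code> call substitutes):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-$ITEM
spec:
  template:
    metadata:
      name: process-item-$ITEM
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing item $ITEM"]
      restartPolicy: Never
</code></pre>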
<hr />
<ol start="2">
<li>Or you could create a queue and have a specified number of parallel workers/jobs to empty the queue. The contents of the queue would then be the input for each worker and Kubernetes could spawn parallel jobs. That's best described in the <a href="https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/" rel="nofollow noreferrer">Coarse Parallel Processing using a Work Queue</a> user guide.</li>
</ol>
<hr />
<ul>
<li>The first approach is simple and straight forward but lacks flexibility</li>
<li>The second requires a message queue as "overhead" but you'll gain flexibility</li>
</ul>
|
<p>I have installed Kubernetes with the kubeadm tool and then followed the <a href="https://kubernetes.io/docs/user-guide/ui/" rel="nofollow noreferrer">documentation</a> to install the Web UI (Dashboard). Kubernetes is installed and running on one node instance, which is a tainted master node. </p>
<p>However, I'm not able to access the Web UI at <code>https://<kubernetes-master>/ui</code>. Instead I can access it on <code>https://<kubernetes-master>:6443/ui</code>.</p>
<p>How could I fix this?</p>
| <p>The URL you are using to access the dashboard is an endpoint on the API Server. By default, <code>kubeadm</code> deploys the API server on port <code>6443</code>, and not on <code>443</code>, which is what you would need to access the dashboard through <code>https</code> without specifying a port in the URL (i.e. <code>https://<kubernetes-master>/ui</code>)</p>
<p>There are various ways you can expose and access the dashboard. These are ordered by increasing complexity:</p>
<ul>
<li>If this is a dev/test cluster, you could try making <code>kubeadm</code> deploy the API server on port <code>443</code> by using the <code>--api-port</code> flag <a href="https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/cmd/init.go#L83" rel="noreferrer">exposed</a> by <code>kubeadm</code>.</li>
<li>Expose the dashboard using a <a href="https://kubernetes.io/docs/user-guide/services/" rel="noreferrer">service</a> of type <code>NodePort</code>.</li>
<li>Deploy an <a href="https://kubernetes.io/docs/user-guide/ingress/" rel="noreferrer">ingress</a> controller and define an ingress point for the dashboard.</li>
</ul>
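<p>For quick local access there is also <code>kubectl proxy</code>, which tunnels to the API server using the credentials in your kubeconfig:</p>
<pre><code>kubectl proxy
# then open http://localhost:8001/ui in a browser
</code></pre>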
|
<p>I'm trying to set up a dev environment with Kubernetes via Minikube. I successfully mounted the same volume to the same data dir on the same image with Docker for Mac, but I'm having trouble with Minikube.</p>
<p>Relevant files and logs:
<code>db-pod.yml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
name: msyql
name: db
namespace: default
spec:
containers:
- name: mysqldev
image: mysql/mysql-server:5.6.32
ports:
- containerPort: 3306
hostPort: 3306
volumeMounts:
- mountPath: "/var/lib/mysql"
name: volumesnew
volumes:
- name: volumesnew
hostPath:
path: "/Users/eric/Volumes/mysql"
</code></pre>
<p><code>kubectl get pods</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
db 0/1 Error 1 3s
</code></pre>
<p><code>kubectl logs db</code>:</p>
<pre><code>2016-08-29 20:05:55 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-08-29 20:05:55 0 [Note] mysqld (mysqld 5.6.32) starting as process 1 ...
2016-08-29 20:05:55 1 [Warning] Setting lower_case_table_names=2 because file system for /var/lib/mysql/ is case insensitive
</code></pre>
<p><code>kubectl describe pods db</code>:</p>
<pre><code>Name: db
Namespace: default
Node: minikubevm/10.0.2.15
Start Time: Wed, 31 Aug 2016 07:48:39 -0700
Labels: name=msyql
Status: Running
IP: 172.17.0.3
Controllers: <none>
Containers:
mysqldev:
Container ID: docker://af0937edcd9aa00ebc278bc8be00bc37d60cbaa403c69f71bc1b378182569d3d
Image: mysql/mysql-server:5.6.32
Image ID: docker://sha256:0fb418d5a10c9632b7ace0f6e7f00ec2b8eb58a451ee77377954fedf6344abc5
Port: 3306/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 31 Aug 2016 07:48:42 -0700
Finished: Wed, 31 Aug 2016 07:48:43 -0700
Ready: False
Restart Count: 1
Environment Variables:
MYSQL_ROOT_PASSWORD: test
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
volumesnew:
Type: HostPath (bare host directory volume)
Path: /Users/eric/Volumes/newmysql
default-token-il74e:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-il74e
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
7s 7s 1 {default-scheduler } Normal Scheduled Successfully assigned db to minikubevm
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id 568f9112dce0
6s 6s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id 568f9112dce0
6s 4s 2 {kubelet minikubevm} spec.containers{mysqldev} Normal Pulled Container image "mysql/mysql-server:5.6.32" already present on machine
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Created Created container with docker id af0937edcd9a
4s 4s 1 {kubelet minikubevm} spec.containers{mysqldev} Normal Started Started container with docker id af0937edcd9a
3s 2s 2 {kubelet minikubevm} spec.containers{mysqldev} Warning BackOff Back-off restarting failed docker container
3s 2s 2 {kubelet minikubevm} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysqldev" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysqldev pod=db_default(012d5178-6f8a-11e6-97e8-c2daf2e2520c)"
</code></pre>
<p>I was able to mount the data directory from the host to the container in a test directory, but I'm having trouble mounting to the MySQL data directory. Also, I tried to mount an empty directory to the container's data dir with the appropriate MySQL environment variables set, which in Docker for Mac allowed me to perform a SQL dump in the new dir, but I'm seeing the same errors in Minikube.</p>
<p>Any thought on what might be the cause, or if I'm not setting up my dev environment the preferred Kubernetes/Minikube way, please share your thoughts.</p>
| <p>I was able to resolve this with the following:</p>
<pre><code>echo "/Users -network 192.168.99.0 -mask 255.255.255.0 -alldirs -maproot=root:wheel" | sudo tee -a /etc/exports
sudo nfsd restart
minikube start
minikube ssh -- sudo umount /Users
minikube ssh -- sudo /usr/local/etc/init.d/nfs-client start
minikube ssh -- sudo mount 192.168.99.1:/Users /Users -o rw,async,noatime,rsize=32768,wsize=32768,proto=tcp
</code></pre>
<p>I am running Minikube in VirtualBox. I don't know if this will work with other VM drivers - xhyve, etc.</p>
<p>Reference: <a href="https://github.com/kubernetes/minikube/issues/2" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2</a></p>
<p>EDIT: I should mention that this works for minikube v0.14.0.</p>
|
<p>We have 254 NodePorts currently in use (we have a number of environments sharing a cluster) and are using the default range. Is there an API parameter that I'm not seeing to bump this limit up?</p>
| <p>You probably need to change the <code>--service-cluster-ip-range=</code> on the <a href="https://kubernetes.io/docs/admin/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a>: every NodePort service also consumes a cluster IP, so a /24 range caps you at 254 services. The NodePort range itself is controlled by the apiserver's <code>--service-node-port-range</code> flag.</p>
|
<p>The current Docker registry appears to support only one remote URL in config.yml, like this:</p>
<pre><code>proxy:
remoteurl: https://registry-1.docker.io
</code></pre>
<p>So if Docker asks for some other image, like "gcr.io/google_containers/pause-amd64:3.0", it will not go through the mirror registry.</p>
<p>Is it possible to configure multiple remote urls in one docker registry config.yml?</p>
| <p>You need to set up a separate pull-through registry cache for each remote registry you want to proxy. If you were to do a pull of <code>gcr.io/google_containers/pause-amd64:3.0</code>, it would go directly to <code>gcr.io</code>. To use the pull-through cache, you need to point to your local cache server instead.</p>
<p>If you didn't limit the server to only proxy a single source, since you are specifying the cache hostname instead of the remote server hostname, you would create the risk of name collisions with the same image from multiple sources. So only proxying a single source is a good thing.</p>
<p>Since the registry is shipped as a container, you can always run multiple instances, one for each upstream source, on the same host, either with different exposed ports or placed behind a reverse proxy that sends traffic to each one depending on the hostname or path in the request. See nginx-proxy and traefik for examples of reverse proxies.</p>
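<p>Because the registry also reads its configuration from environment variables, one cache per upstream can be started without separate config files. A sketch (container names and host ports are arbitrary):</p>
<pre><code># pull-through cache for Docker Hub
docker run -d -p 5000:5000 --name mirror-dockerhub \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# pull-through cache for gcr.io
docker run -d -p 5001:5000 --name mirror-gcr \
  -e REGISTRY_PROXY_REMOTEURL=https://gcr.io \
  registry:2
</code></pre>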
|
<p>The example of <a href="https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/" rel="nofollow noreferrer">running ZooKeeper in Kubernetes</a> shows how pods can be rescheduled on different nodes following a node being drained from connections, typically because maintenance has to be performed on it.</p>
<p>The pods are rescheduled with the same identity that they have before, in this case the <code>myid</code> ZooKeeper servers, corresponding to the incremental number of each pod <code>zk-0</code>, <code>zk-1</code> and so on.</p>
<p>If a node is not responding (possibly due to overload or a network problem), is it possible for Kubernetes to reschedule a pod on another node while the original pod is still running?</p>
<p>It seems <a href="https://github.com/kubernetes/kubernetes/issues/27427" rel="nofollow noreferrer">some of this behavior has been specified explicitly</a>, but I don't know how to verify it short of setting up multiple nodes on a cloud provider and trying it.</p>
| <p>If a node is unresponsive, Kubernetes >=1.5 will <strong>not</strong> reschedule a pod with the same identity until it is confirmed that it has been terminated. The behavior with respect to StatefulSet is detailed <a href="https://kubernetes.io/docs/tasks/manage-stateful-set/delete-pods/#statefulset-considerations" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>Kubernetes (versions 1.5 or newer) will not delete Pods just because a
Node is unreachable. The Pods running on an unreachable Node enter the
‘Terminating’ or ‘Unknown’ state after a timeout. Pods may also enter
these states when the user attempts graceful deletion of a Pod on an
unreachable Node. The only ways in which a Pod in such a state can be
removed from the apiserver are as follows: </p>
<ul>
<li>The Node object is
deleted (either by you, or by the Node Controller). </li>
<li>The kubelet on
the unresponsive Node starts responding, kills the Pod and removes the
entry from the apiserver. </li>
<li>Force deletion of the Pod by the user.</li>
</ul>
</blockquote>
<p>Since the name of the pod is a lock, we never create two pods with the same identity, giving us 'at-most-one' semantics for StatefulSets. The user can override this behavior by performing force-deletions (and manually guaranteeing that the pod is fenced), but there is no automation that can lead to two pods with the same identity.</p>
<p>The changes in Kubernetes 1.5 ensure that we prioritize safety in case of StatefulSets. <a href="https://kubernetes.io/docs/admin/node/#condition" rel="nofollow noreferrer">Node Controller documentation</a> has some details about the change. You can read the full proposal <a href="https://github.com/kubernetes/kubernetes/pull/34160/files" rel="nofollow noreferrer">here</a>.</p>
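<p>For reference, the force deletion mentioned above looks like this (only do it after making sure the node really is down or fenced; the pod name follows the ZooKeeper example):</p>
<pre><code>kubectl delete pod zk-1 --grace-period=0 --force
</code></pre>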
|
<p>I have multiple Kubernetes pods running on a server. One of the pods contains a database application that only accepts connections from a specific subnet (i.e. other Kubernetes pods).</p>
<p>I'm trying to connect to the DB application from the server itself, but the connection is refused because the server's IP is not part of the allowed subnet.</p>
<p>Is there a way to create a simple pod that accepts connections from the server and forwards them to the pod containing the DB app?</p>
<p>Unfortunately, the DB app cannot be reconfigured to accept other connections.</p>
<p>Thank you</p>
| <p>The easiest solution is probably to add another container to your pod running socat or something similar and make it listen and connect to your local pod's IP (important: connect to the pod ip, not 127.0.0.1 if your database program is configured to only accept connections from the overlay network).
Then modify the service you have for these pods and add the extra port.</p>
<p>The example below assumes port 2000 is running your program and 2001 will be the port that is forwarded to 2000 inside the pod.</p>
<p>Example (the example is running netcat simulating your database program):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: database
labels:
app: database
spec:
containers:
- name: alpine
image: alpine
command: ["nc","-v","-n","-l","-p","2000"]
ports:
- containerPort: 2000
- name: socat
image: toughiq/socat
ports:
- containerPort: 2001
env:
- name: LISTEN_PROTO
value: "TCP4"
- name: LISTEN_PORT
value: "2001"
- name: TARGET_PROTO
value: "TCP4"
- name: TARGET_HOST
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: TARGET_PORT
value: "2000"
---
apiVersion: v1
kind: Service
metadata:
name: database
spec:
selector:
app: database
ports:
- name: myport
port: 2000
targetPort: 2000
protocol: TCP
- name: socat
port: 2001
targetPort: 2001
protocol: TCP
externalIPs: [xxxxxx]
</code></pre>
|
<p>I have a Kubernetes <code>Service</code> with a static <code>External IP</code> assigned to a <code>Replication Controller</code> managing 1 application distributed to 2 <code>Pods</code>. I can access the application using the external IP, this part works fine.</p>
<p>I'd like now to have the application inside the Pods using the same IP when making HTTP requests to external applications (outside the cluster).</p>
<p>A simple call to <code>https://api.ipify.org/</code> shows that the IP of this application is completely different from the external IP it answers at. How can I make it use the same IP?</p>
| <p>According to the <a href="https://kubernetes.io/docs/user-guide/services/#external-ips" rel="nofollow noreferrer">documentation</a>, the <code>externalIP</code> assignment for a <code>Service</code> is only meant for ingress traffic. Along with that, the somewhat related <a href="https://docs.openshift.org/latest/dev_guide/integrating_external_services.html" rel="nofollow noreferrer">Integrating External Services</a> documentation from OpenShift doesn't mention any option to proxy egress traffic through the defined <code>Endpoint</code>. Therefore it seems that you're trying something which doesn't work with Kubernetes out of the box.</p>
|
<p>I have a k8 cluster deployed in AWS using kube-aws. When I deploy a service, a new ELB is added for exposing the service to internet. Can I use ingress-controller to replace ELB or is there any other way to expose services other than ELB?</p>
| <p>First, replace <code>type: LoadBalancer</code> with <code>type: ClusterIP</code> in your service definition. Then you have to configure the <a href="https://kubernetes.io/docs/user-guide/ingress/" rel="nofollow noreferrer">ingress</a> and deploy a controller, like <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow noreferrer">Nginx</a></p>
<p>If you are looking for a full example, I have one here: <a href="https://github.com/camilb/prometheus-kubernetes/tree/master/definitions/k8s/ingress" rel="nofollow noreferrer">nginx-ingress-controller</a>.</p>
<p>The ingress will expose your services using some of your workers' public IPs, usually 2 of them. Just check your ingress with <code>kubectl get ing -o wide</code> and create the DNS records.</p>
|
<p>I was playing around with Kubernetes on AWS with t2.medium EC2 instances having 20GB of disk space and one of the nodes ran out of disk space after a few days. It seems to be caused by a combination of Docker images and logs. </p>
<p>From what I've read, Kubernetes has its own Docker GC to manage Docker's disk usage, and log rotation. I'm guessing 20GB is not enough for Kubernetes to self-manage disk usage. What's a safe disk size for a production environment?</p>
| <p>When following the standard installation with the GKE as described in the <a href="https://cloud.google.com/container-engine/docs/quickstart" rel="noreferrer">quickstart guide</a> you'll end up with 3 x n1-standard-1 nodes (see <a href="https://cloud.google.com/compute/docs/machine-types#standard_machine_types" rel="noreferrer">machine types</a>) with 100 GB of storage per node.</p>
<p>Looking at the nodes right after the cluster creation then gives you these numbers for the diskspace:</p>
<pre><code>$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.2G 455M 767M 38% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmp 1.9G 24K 1.9G 1% /tmp
run 1.9G 684K 1.9G 1% /run
shmfs 1.9G 0 1.9G 0% /dev/shm
/dev/sda1 95G 2.4G 92G 3% /var
/dev/sda8 12M 28K 12M 1% /usr/share/oem
media 1.9G 0 1.9G 0% /media
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 256K 0 256K 0% /mnt/disks
tmpfs 1.0M 120K 904K 12% /var/lib/cloud
overlayfs 1.0M 124K 900K 13% /etc
</code></pre>
<p>These numbers might give you a starting point, but as others have pointed out already, the rest depends on your specific requirements.</p>
|
<p><strong>The question:</strong>
I have a VOIP application running in a kubernetes pod. I need to set in the code the public IP of the host machine on which the pod is running. How can I get this IP via an API call or an environmental variable in kubernetes? (I'm using Google Container Engine if that's relevant.) Thanks a lot!</p>
<p><strong>Why do I need this?</strong>
The application is a SIP-based VOIP application. When a SIP request comes in and does a handshake, the application needs to send a SIP invite request back out that contains the public IP and port which the remote server must use to set up a RTP connection for the audio part of the call.</p>
<p><strong>Please note:</strong>
I'm aware of kubernetes services and the general best-practise of exposing those via a load balancer. However I would like to use hostNetwork=true and expose ports on the host for the remote application to send RTP audio packets (via UDP) directly. This kubernetes issue (<a href="https://github.com/kubernetes/kubernetes/issues/23864" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/23864</a>) contains a discussion of various people running SIP-based VOIP applications on kubernetes and the general concessus is to use hostNetwork=true (primarily for performance and due to limitations of load balancing UDP I believe). </p>
| <p>You can query the API server for information about the Node running your pod, such as its addresses. Since you are using hostNetwork=true, the $HOSTNAME environment variable already identifies the node.</p>
<p>There is an example below that was tested on GCE.</p>
<p>The code needs to be run in a pod. You need to install some python dependencies first (in the pod):</p>
<pre><code>pip install kubernetes
</code></pre>
<p>There is more information available at:
<a href="https://github.com/kubernetes-incubator/client-python" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/client-python</a></p>
<pre class="lang-py prettyprint-override"><code>import os
from kubernetes import client, config
config.load_incluster_config()
v1=client.CoreV1Api()
for address in v1.read_node(os.environ['HOSTNAME']).status.addresses:
if address.type == 'ExternalIP':
print address.address
</code></pre>
|
<p>I am becoming more familiar with Kubernetes by the day, but am still at a basic level. I am also not a networking guy.</p>
<p>I am staring at the following snippet of a Service definition, and I can't form the right picture in my mind of what is being declared:</p>
<pre><code>spec:
type: NodePort
ports:
- port: 27018
targetPort: 27017
protocol: TCP
</code></pre>
<p>Referencing the <a href="https://kubernetes.io/docs/resources-reference/v1.5/#serviceport-v1" rel="noreferrer">ServicePort documentation</a>, which reads in part:</p>
<pre><code>nodePort (integer)
    The port on each node on which this service is exposed when type=NodePort or LoadBalancer.
    Usually assigned by the system. If specified, it will be allocated to the service if unused
    or else creation of the service will fail. Default is to auto-allocate a port if the
    ServiceType of this Service requires one.
    More info: http://kubernetes.io/docs/user-guide/services#type--nodeport

port (integer)
    The port that will be exposed by this service.

targetPort (IntOrString)
    Number or name of the port to access on the pods targeted by the service. Number must be in
    the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked
    up as a named port in the target Pod's container ports. If this is not specified, the value
    of the 'port' field is used (an identity map). This field is ignored for services with
    clusterIP=None, and should be omitted or set equal to the 'port' field.
    More info: http://kubernetes.io/docs/user-guide/services#defining-a-service
</code></pre>
<p>My understanding is that the port that a client outside of the cluster will "see" will be the dynamically assigned one in the range of <code>30000</code>-<code>32767</code>, as defined <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="noreferrer">in the documentation</a>. This will, using some black magic that I do not yet understand, flow to the <code>targetPort</code> on a given node (<code>27017</code> in this case).</p>
<p>So what is the <code>port</code> used for here? </p>
| <p><code>nodePort</code> is the port that a client outside of the cluster will "see". <code>nodePort</code> is opened on every node in your cluster via <a href="https://kubernetes.io/docs/user-guide/services/#type-nodeport" rel="noreferrer">kube-proxy</a>. With iptables magic Kubernetes (k8s) then routes traffic from that port to a matching service pod (even if that pod is running on a completely different node).</p>
<p><code>port</code> is the port your service listens on inside the cluster. Let's take this example: </p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- port: 8080
targetPort: 8070
nodePort: 31222
protocol: TCP
selector:
component: my-service-app
</code></pre>
<p>From inside my k8s cluster this service will be reachable via <code>my-service.default.svc.cluster.local:8080</code> (service to service communication inside your cluster) and any request reaching there is forwarded to a running pod on <code>targetPort</code> 8070.</p>
<p><code>targetPort</code> also defaults to the same value as <code>port</code> if not specified otherwise.</p>
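<p>From outside the cluster the same service is reached through the <code>nodePort</code> on any node's IP. A quick sketch — the node address below is just a placeholder for one of your real node IPs:</p>
<pre><code># kube-proxy listens on 31222 on every node and forwards to a pod on targetPort 8070
curl http://<node-ip>:31222/
</code></pre>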
|
<p>I am creating a MongoDB cluster on Kubernetes using Stateful sets. I have 3 mongo replicas configured in my Stateful set. </p>
<p>Now I create Stateful set and service using just one command</p>
<blockquote>
<p>kubectl create -f mongo-stateful.yaml</p>
</blockquote>
<p>Then I use the mongo client to initiate mongo replica set members.</p>
<pre><code>rs.initiate(
{
_id: "replicaset1",
version: 1,
members: [
{ _id: 0, host:port1 },
{ _id: 1, host : host:port1 },
{ _id: 2, host : host:port1 }
]
}
)
</code></pre>
<p>All of this works except I would like to automate this step of configuring replica set members.</p>
<p>My questions is whether this step can be automated and if we can add this to the yaml file?</p>
| <p>This might be what you're looking for: </p>
<p><a href="http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html" rel="nofollow noreferrer">http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html</a></p>
|
<p>I have a Dockerfile for a Ruby app that runs a puma server on port 5000. (It <code>EXPOSE</code>s the port and also uses the <code>RUN</code> command to run <code>puma -p 5000</code>.)</p>
<p>In my deployment.yaml I have to set <code>containerPort</code> to be 5000 to match this port.</p>
<p>It seems odd to me that my configuration lists the port in two different places. If I need to change the port, I have to change the configuration in multiple places, which goes against the principles of the 12-factor app, where config is kept in one place.</p>
<p>Is there a way to set the port in only 1 place?</p>
| <p>In your deployment.yaml you actually don't have to specify the <code>containerPort</code>; all ports are exposed. From the <a href="https://kubernetes.io/docs/api-reference/v1.5/#container-v1" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>ports <em>ContainerPort array</em></p>
<p>List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.</p>
</blockquote>
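<p>So a minimal sketch of a container spec that simply relies on whatever port the image listens on — the image name here is a placeholder, not a real one — could look like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-ruby-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-ruby-app
    spec:
      containers:
      - name: puma
        image: registry.example.com/my-ruby-app:latest
        # no "ports:" section - port 5000 from the Dockerfile is reachable anyway
</code></pre>
<p>The only remaining place the port number has to live is the Service's <code>targetPort</code> (or the image itself), so at least the duplication in the Deployment goes away.</p>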
|
<p>We have a case where we need to make sure that pods in k8s have the latest version possible. <em>What is the best way to accomplish this</em>?</p>
<p>First idea was to kill the pod after some point, knowing that the new ones will come up pulling the latest image. Here is <a href="https://github.com/kubernetes/kubernetes/pull/7868/commits/82163326117f095e75b4f1f1d830eb95667e86e6" rel="nofollow noreferrer">what we found so far</a>. Still don't know how to do it.</p>
<p>Another idea is having <code>rolling-update</code> executed in intervals, like every 5 hours. Is there a way to do this?</p>
| <p>As mentioned by @svenwltr, using <code>activeDeadlineSeconds</code> is an easy option but comes with the risk of losing all pods at once. To mitigate that risk I'd use a <code>deployment</code> to manage the pods and their rollout, and configure a small second container along with the actual application. The small helper could be configured like this (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">following the official docs</a>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: app-liveness
spec:
containers:
- name: liveness
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep $(( RANDOM % (3600) + 1800 )); rm -rf /tmp/healthy; sleep 600
image: gcr.io/google_containers/busybox
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
- name: yourapplication
imagePullPolicy: Always
image: nginx:alpine
</code></pre>
<p>With this configuration every pod would break randomly within the configured timeframe (here between 30 and 90mins) and that would trigger the start of a new pod. The <code>imagePullPolicy: Always</code> would then make sure that the image is updated during that cycle.</p>
<p>This of course assumes that your application versions are always available under the same name/tag.</p>
|
<p>In kubernetes pod yaml specification file, you can set a pod to use the host machine's network using <code>hostNetwork:true</code>.</p>
<p>I can't find anywhere a good (suitable for a beginner) explanation of what the <code>hostPID:true</code> and <code>hostIPC:true</code> options mean. Please could someone explain this, assuming little knowledge in linux networking and such. Thanks.</p>
<pre><code>spec:
template:
metadata:
labels:
name: podName
spec:
hostPID: true
hostIPC: true
hostNetwork: true
containers:
</code></pre>
<p>Source: <a href="https://github.com/kubernetes/kubernetes/blob/master/examples/newrelic/newrelic-daemonset.yaml" rel="nofollow noreferrer">github link here</a></p>
| <p>They're roughly described within the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="noreferrer">Pod Security Policies</a>:</p>
<blockquote>
<p><code>hostPID</code> - Use the host’s pid namespace. Optional: Default to false.</p>
<p><code>hostIPC</code> - Use the host’s ipc namespace. Optional: Default to false.</p>
</blockquote>
<p>Those are related to the <code>SecurityContext</code> of the <code>Pod</code>. You'll find some more information in the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/pod-security-context.md" rel="noreferrer">Pod Security design document</a>.</p>
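<p>A quick way to see the effect is a throwaway pod with <code>hostPID: true</code> — this is just a sketch for experimenting, not something to run in production:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hostpid-demo
spec:
  hostPID: true
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
</code></pre>
<p>Running <code>kubectl exec hostpid-demo ps aux</code> should then list the node's own processes (kubelet, docker, ...) instead of only the container's, because the pod shares the host's PID namespace. <code>hostIPC: true</code> does the same for the host's IPC namespace (System V IPC objects and POSIX message queues).</p>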
|
<p>I have a Kubernetes <code>Service</code> with a static <code>External IP</code> assigned to a <code>Replication Controller</code> managing 1 application distributed to 2 <code>Pods</code>. I can access the application using the external IP, this part works fine.</p>
<p>I'd like now to have the application inside the Pods using the same IP when making HTTP requests to external applications (outside the cluster).</p>
<p>A simple call to <code>https://api.ipify.org/</code> shows that the IP of this application is completely different from the external IP it answers at. How can I make it use the same IP?</p>
| <p>The short and simple answer is - you can't. Your pods are assigned IPs that are internal to your cluster, most likely from a per-node range of a cluster-wide network address space. When talking to the external world, pod traffic originates on an internally bridged interface/IP and, if the destination is not part of the cluster, it leaves via that <em>node's</em> default route, ending up SNATed to the node IP, or to the IP of the NAT gateway if your traffic goes through one.</p>
<p>Even if you were to create custom SNAT rules for pods by means of some k8s watching etc. (much like an ingress controller does), the traffic would still bounce off the load balancer rather than reach your pods.</p>
<p>If you need a fixed IP address, what you can do is pass your pods' traffic via a NAT gateway and make sure it NATs as you expect (this would not give you the same IP as your service, just a "stable" one), or send traffic like HTTP requests via a proxy with a stable IP(s).</p>
<p>All in all, while to some extent possible, it's unlikely to be worth the headache of setting it up.</p>
|
<p>I have a 5-nodes Kubernetes cluster of which 1 is the master (set up with kubeadm). When I first deployed the master node I deployed also the kubernetes dashboard so it's running on the same machine. After that I joined the other nodes to the cluster.</p>
<p>Now when I deploy a pod using a YAML file it stays in the <code>ContainerCreating</code> state. So I described the pod and saw the machine where it was deployed. I SSH'd into the machine and first checked <code>docker ps -a</code>; I could see that the image was not started, nor even pulled. So I looked into the kubelet logs (I didn't copy everything but this will give a pretty good idea):</p>
<pre><code>E0131 11:05:40.486422 2873 server.go:459] Kubelet needs to run as uid `0`. It is being run as 1000
W0131 11:05:40.486616 2873 server.go:469] write /proc/self/oom_score_adj: permission denied
W0131 11:05:40.486978 2873 server.go:669] No api server defined - no events will be sent to API server.
W0131 11:05:40.491423 2873 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0131 11:05:40.491498 2873 kubelet.go:477] Hairpin mode set to "hairpin-veth"
W0131 11:05:40.495353 2873 plugins.go:210] can't set sysctl net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: permission denied
I0131 11:05:40.503259 2873 docker_manager.go:257] Setting dockerRoot to /var/lib/docker
I0131 11:05:40.503308 2873 docker_manager.go:260] Setting cgroupDriver to cgroupfs
I0131 11:05:40.506028 2873 server.go:770] Started kubelet v1.5.2
E0131 11:05:40.506209 2873 server.go:481] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use
E0131 11:05:40.506300 2873 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
I0131 11:05:40.506413 2873 server.go:123] Starting to listen on 0.0.0.0:10250
W0131 11:05:40.506445 2873 kubelet.go:1224] No api server defined - no node status update will be sent.
E0131 11:05:40.507209 2873 kubelet.go:1228] error creating pods directory: mkdir /var/lib/kubelet/pods: permission denied
I0131 11:05:40.509613 2873 status_manager.go:125] Kubernetes client is nil, not starting status manager.
I0131 11:05:40.509656 2873 kubelet.go:1714] Starting kubelet main sync loop.
I0131 11:05:40.509710 2873 kubelet.go:1725] skipping pod synchronization - [error creating pods directory: mkdir /var/lib/kubelet/pods: permission denied container runtime is down]
F0131 11:05:40.509522 2873 server.go:148] listen tcp 0.0.0.0:10255: bind: address already in use
</code></pre>
<p>There are a lot of permission issues. I have no idea how to fix this. I've added root and the user account to the docker group to see if it fixes it, but it doesn't.</p>
<h1>Update</h1>
<p>Above I did a <code>kubelet logs</code> and that is why you get the uid message. When I execute <code>sudo kubelet logs</code> I get these results:</p>
<pre><code>I0201 15:36:01.386564 5082 feature_gate.go:181] feature gates: map[]
W0201 15:36:01.386861 5082 server.go:400] No API client: no api servers specified
I0201 15:36:01.386953 5082 docker.go:356] Connecting to docker on unix:///var/run/docker.sock
I0201 15:36:01.386991 5082 docker.go:376] Start docker client with request timeout=2m0s
I0201 15:36:01.401737 5082 manager.go:143] cAdvisor running in container: "/user.slice"
W0201 15:36:01.415664 5082 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I0201 15:36:01.431725 5082 fs.go:117] Filesystem partitions: map[/dev/mmcblk0p2:{mountpoint:/var/lib/docker/aufs major:179 minor:2 fsType:ext4 blockSize:0}]
I0201 15:36:01.440439 5082 manager.go:198] Machine: {NumCores:4 CpuFrequency:1920000 MemoryCapacity:3519315968 MachineID:a9807123b38d1f069a44f0b7588b5884 SystemUUID:03000200-0400-0500-0006-000700080009 BootID:7e71fe9b-a9d8-4921-80c7-9d09e49ed1ef Filesystems:[{Device:/dev/mmcblk0p2 Capacity:57295605760 Type:vfs Inodes:3563520 HasInodes:true}] DiskMap:map[179:0:{Name:mmcblk0 Major:179 Minor:0 Size:62545461248 Scheduler:deadline} 179:8:{Name:mmcblk0boot0 Major:179 Minor:8 Size:4194304 Scheduler:deadline} 179:16:{Name:mmcblk0boot1 Major:179 Minor:16 Size:4194304 Scheduler:deadline} 179:24:{Name:mmcblk0rpmb Major:179 Minor:24 Size:4194304 Scheduler:deadline}] NetworkDevices:[{Name:datapath MacAddress:72:36:99:b2:ba:be Speed:0 Mtu:1410} {Name:dummy0 MacAddress:ea:c7:5e:6d:29:75 Speed:0 Mtu:1500} {Name:enp1s0 MacAddress:00:07:32:3e:17:8c Speed:1000 Mtu:1500} {Name:vxlan-6784 MacAddress:5a:81:bb:f6:00:d7 Speed:0 Mtu:1500} {Name:weave MacAddress:92:64:f5:c5:57:a7 Speed:0 Mtu:1410}] Topology:[{Id:0 Memory:3519315968 Cores:[{Id:0 Threads:[0] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:1 Threads:[1] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:2 Threads:[2] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:3 Threads:[3] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0201 15:36:01.442170 5082 manager.go:204] Version: {KernelVersion:4.4.0-31-generic ContainerOsVersion:Ubuntu 16.04.1 LTS DockerVersion:1.12.3 CadvisorVersion: CadvisorRevision:}
I0201 15:36:01.444559 5082 cadvisor_linux.go:152] Failed to register cAdvisor on port 4194, retrying. Error: listen tcp :4194: bind: address already in use
W0201 15:36:01.449146 5082 container_manager_linux.go:205] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
W0201 15:36:01.449653 5082 server.go:669] No api server defined - no events will be sent to API server.
W0201 15:36:01.457574 5082 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0201 15:36:01.457658 5082 kubelet.go:477] Hairpin mode set to "hairpin-veth"
I0201 15:36:01.471512 5082 docker_manager.go:257] Setting dockerRoot to /var/lib/docker
I0201 15:36:01.471570 5082 docker_manager.go:260] Setting cgroupDriver to cgroupfs
I0201 15:36:01.474678 5082 server.go:770] Started kubelet v1.5.2
E0201 15:36:01.474926 5082 server.go:481] Starting health server failed: listen tcp 127.0.0.1:10248: bind: address already in use
E0201 15:36:01.475062 5082 kubelet.go:1145] Image garbage collection failed: unable to find data for container /
W0201 15:36:01.475208 5082 kubelet.go:1224] No api server defined - no node status update will be sent.
I0201 15:36:01.475702 5082 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
I0201 15:36:01.479587 5082 server.go:123] Starting to listen on 0.0.0.0:10250
F0201 15:36:01.481605 5082 server.go:148] listen tcp 0.0.0.0:10255: bind: address already in use
</code></pre>
| <p>You need to run kubelet as root (see the first line of the log). This is a known limitation at the moment:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/4869" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/4869</a></p>
|
<p>While running Prometheus in Kubernetes, I'm pushing out new config via a <code>ConfigMap</code>. ConfigMaps are exposed as files in the container.</p>
<p>I would love Prometheus to automatically reload its configuration when the file changes.</p>
<p>Would something like this work?</p>
<pre><code>inotifywait -q -m -e close_write /etc/prometheus/config.yml |
while read -r filename event; do
curl -X POST http://localhost:9090/-/reload
done
</code></pre>
| <p>(Edit: I took some time to get this fully working.) This works with a small sidecar container. The configuration could look like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: prometheus
spec:
replicas: 1
template:
....
spec:
containers:
... (your actual container config goes here) ...
- name: refresh
imagePullPolicy: Always
args:
- /etc/prometheus/config.yml
- http://localhost:9090/-/reload
image: tolleiv/k8s-prometheus-reload
volumeMounts:
- name: config-volume
mountPath: /etc/prometheus
volumes:
- name: config-volume
configMap:
name: prometheus
</code></pre>
<p>The actual check is done with this script where the observed file and the URL are passed as parameters:</p>
<pre><code>#!/bin/sh
while true; do
inotifywait "$(readlink -f $1)"
echo "[$(date +%s)] Trigger refresh"
curl -sSL -X POST "$2" > /dev/null
done
</code></pre>
<p>Everything can be found in <a href="https://hub.docker.com/r/tolleiv/k8s-prometheus-reload/" rel="nofollow noreferrer">this container on Dockerhub</a></p>
<p>Keeping a single <code>inotifywait</code> with the <code>-m</code> didn't work because of symlink juggling which is done by Kubernetes when the <code>ConfigMap</code> changes.</p>
|
<p>I have the following questions:</p>
<ol>
<li><p>I am logged into a Kubernetes pod using the following command:</p>
<pre><code> ./cluster/kubectl.sh exec my-nginx-0onux -c my-nginx -it bash
</code></pre>
<p>The 'ip addr show' command shows it's assigned the IP of the pod. Since a pod is a logical concept, I am assuming I am logged into a Docker container and not a pod, in which case the pod IP is the same as the Docker container IP. Is that understanding correct?</p>
</li>
<li><p>from a Kubernetes node, I do <code>sudo docker ps</code> and then do the following:-</p>
<pre><code> sudo docker exec 71721cb14283 -it '/bin/bash'
</code></pre>
<p>This doesn't work. Does someone know what I am doing wrong?</p>
</li>
<li><p>I want to access the nginx service I created from within the pod using curl. How can I install curl within this pod or container to access the service from inside? I want to do this to understand the network connectivity.</p>
</li>
</ol>
| <p>Here is how you get a curl command line within a kubernetes network to test and explore your internal REST endpoints.</p>
<p>To get a prompt of a busybox running inside the network, execute the following command. (A tip is to use one unique container per developer.)</p>
<pre class="lang-bash prettyprint-override"><code>kubectl run curl-<YOUR NAME> --image=radial/busyboxplus:curl -i --tty --rm
</code></pre>
<p>You may omit the --rm and keep the instance running for later re-usage. To reuse it later, type:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl attach <POD ID> -c curl-<YOUR NAME> -i -t
</code></pre>
<p>Using the command <code>kubectl get pods</code> you can see all running POD's. The <code><POD ID></code> is something similar to <code>curl-yourname-944940652-fvj28</code>.</p>
<p><strong>EDIT:</strong> Note that you need to login to google cloud from your terminal (once) before you can do this! Here is an example, make sure to put in your zone, cluster and project:</p>
<pre class="lang-bash prettyprint-override"><code>gcloud container clusters get-credentials example-cluster --zone europe-west1-c --project example-148812
</code></pre>
|
<p>I have a kubernetes cluster with 5 nodes. When I add a simple nginx pod it will be scheduled to one of the nodes but it will not start up. It will not even pull the image.</p>
<p>This is the nginx.yaml file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>When I describe the pod there is one event: <code>Successfully assigned busybox to up02</code>. When I log in to up02 and check whether any images have been pulled, I see the image wasn't pulled, so I pulled it manually (I thought maybe it needed some kick start ;) ).</p>
<p>The pod always stays in the <code>ContainerCreating</code> state. It's not only this pod; the problem occurs with any pod I try to add.</p>
<p>There are some pods running on the machine that are necessary for Kubernetes to operate:</p>
<pre><code>up@up01:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox 0/1 ContainerCreating 0 11m
default nginx 0/1 ContainerCreating 0 22m
kube-system dummy-2088944543-n1cd5 1/1 Running 0 5d
kube-system etcd-up01 1/1 Running 0 5d
kube-system kube-apiserver-up01 1/1 Running 0 5d
kube-system kube-controller-manager-up01 1/1 Running 0 5d
kube-system kube-discovery-1769846148-xfpls 1/1 Running 0 5d
kube-system kube-dns-2924299975-5rzz8 4/4 Running 0 5d
kube-system kube-proxy-17bpl 1/1 Running 2 3d
kube-system kube-proxy-3pk63 1/1 Running 0 3d
kube-system kube-proxy-h3wrj 1/1 Running 0 5d
kube-system kube-proxy-wzqv4 1/1 Running 0 3d
kube-system kube-proxy-z3xxx 1/1 Running 0 3d
kube-system kube-scheduler-up01 1/1 Running 0 5d
kube-system kubernetes-dashboard-3203831700-3xfbd 1/1 Running 0 5d
kube-system weave-net-6c0nr 2/2 Running 0 3d
kube-system weave-net-dchhf 2/2 Running 0 5d
kube-system weave-net-hshvg 2/2 Running 4 3d
kube-system weave-net-n684c 2/2 Running 1 3d
kube-system weave-net-r5319 2/2 Running 0 3d
</code></pre>
| <p>You can do</p>
<pre><code>kubectl describe pods <pod>
</code></pre>
<p>to get more info on what's happening.</p>
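<p>The events at the bottom of that output usually explain why the container never starts (image pull errors, network plugin problems, volume mounts, ...). If they don't, a couple of follow-ups that tend to help with <code>ContainerCreating</code> — assuming the kubelet runs under systemd, as it does with a kubeadm setup:</p>
<pre><code># cluster-wide events, often more telling than the pod description alone
kubectl get events --all-namespaces

# on the node the pod was scheduled to: what is the kubelet complaining about?
journalctl -u kubelet -f
</code></pre>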
|
<p>We are running a Kubernetes Cluster in AWS and we are collecting the metrics in DataDog using the dd-agent DaemonSet.</p>
<p>We have a Pod being displayed in our metrics tagged as "no_pod" and it is using a lot of resources, Memory/CPU/NetworkTx/NetworkRX.</p>
<p>Is there any explanation to what this pod is, how I can find it, kill it, restart it etc?</p>
<p>I have found the dd-agent <a href="https://github.com/DataDog/dd-agent/blob/master/checks.d/docker_daemon.py#L410" rel="noreferrer">source code</a> which seems to define the "no_pod" label but I can't make much sense of why it is there, where it is coming from and how I can find it through kubectl etc.</p>
<p><a href="https://i.stack.imgur.com/ZRY2W.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZRY2W.png" alt="enter image description here"></a></p>
| <p>After speaking to the support team at DataDog, I managed to find out the following information relating to what the no_pod pods were.</p>
<blockquote>
<p>Our Kubernetes check is getting the list of containers from the Kubernetes API, which exposes aggregated data. In the metric explorer configuration here, you can see a couple of containers named /docker and / that are getting picked up along with the other containers. Metrics with pod_name:no_pod that come from container_name:/ and container_name:/docker are just metrics aggregated across multiple containers. (So it makes sense that these are the highest values in your graphs.) If you don't want your graphs to show these aggregated container metrics though, you can clone the dashboard and then exclude these pods from the query. To do so, on the cloned dashboard, just edit the query in the JSON tab, and in the tag scope, add !pod_name:no_pod.</p>
</blockquote>
<p>So it appears that these pods are the docker and root level containers running outside of the cluster and will always display unless you want to filter them out specifically which I now do.</p>
<p>Many thanks to the support guys at DataDog for looking into the issue for me and giving me a great explanation as to what the pods were and essentially confirming that I can just safely filter these out and not worry about them.</p>
|
<p>I looked into the service account ca.crt files that get deployed into the container and they don't contain any SAN with the IP or dns name of the master nodes. This results in SSL errors like: <a href="https://0.0.0.0:10250/metrics" rel="nofollow noreferrer">https://0.0.0.0:10250/metrics</a>: x509: cannot validate certificate for 0.0.0.0 because it doesn't contain any IP SANs.</p>
<p>Did anybody else run into this problem? How did you solve it other than allowing insecure SSL?</p>
| <p>The ca cert provided by a service account (<code>/run/secrets/kubernetes.io/serviceaccount/ca.crt</code>) is for communicating with the apiserver (served by the master on GKE).</p>
<pre><code># curl --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)"
{
"paths": [
"/api",
...
]
</code></pre>
<p>It looks like you are trying to talk to the kubelet API (port 10250). The kubelet API serves a self-signed cert, so insecure is the only way.</p>
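<p>If that error comes from a Prometheus scrape of the kubelet (the message format suggests something along those lines), the usual workaround is to skip certificate verification just for that job. A hedged sketch of the relevant scrape config — the job name and discovery role are only examples:</p>
<pre><code>scrape_configs:
- job_name: 'kubelet'
  scheme: https
  kubernetes_sd_configs:
  - role: node
  tls_config:
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
</code></pre>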
|