<p>I am trying to deploy a "Hello world" Spring Boot app on Kubernetes (Minikube). The app is really simple, just one method, which is mapped on a GET resource. I even do not specify a port.</p> <p>I am now trying to deploy the app on Minikube, and making it available using a Service:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: server spec: selector: app: server ports: - protocol: TCP port: 8080 type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: server spec: selector: matchLabels: app: server replicas: 3 template: metadata: labels: app: server spec: containers: - name: server image: kubernetes-server:latest imagePullPolicy: Never ports: - name: http containerPort: 8080 </code></pre> <p>If I start the deployment using this configuration (i. e. the service is started first, then the Deployment), the pods fail during startup. In the logs, I can find the following message:</p> <pre><code>*************************** APPLICATION FAILED TO START *************************** Description: Binding to target org.springframework.boot.autoconfigure.web.ServerProperties@42f93a98 failed: Property: server.port Value: tcp://10.98.151.181:8080 Reason: Failed to convert property value of type 'java.lang.String' to required type 'java.lang.Integer' for property 'port'; nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.String] to type [java.lang.Integer] </code></pre> <p>Note: 10.98.151.181 is the cluster IP of the Service, as a can see in the Minikube dashboard.</p> <p>If I first trigger the actual Deployment, the app starts successfully, and after that, I can start the Service. However, the official documentation recommends to start the service first, and after that the deplyoment: <a href="https://kubernetes.io/docs/concepts/configuration/overview/#services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/overview/#services</a></p> <p>For me, it looks like the Service sets a property <em>server.port</em> as environment variable, and the Spring Boot app, which start after the Service, interprets that accidentially as the Spring <em>server.port</em>.</p> <p>Any ideas how to solve that?</p>
<blockquote> <p>For me, it looks like the Service sets a property server.port as environment variable</p> </blockquote> <p>No, Kubernetes is exposing "docker compatible" <a href="https://docs.docker.com/network/links/#environment-variables" rel="noreferrer">link env-vars</a> which, because your <code>Service</code> is named <code>server</code>, end up being <code>SERVER_PORT=tcp://thing:8080</code>, because Kubernetes is trying to be "helpful".</p> <p>The solution is either to give your <code>Service</code> a more descriptive name, or to mask off the offending env-var:</p> <pre class="lang-yaml prettyprint-override"><code>containers:
- name: server
  env:
  - name: SERVER_PORT
    value: ''   # you can try the empty string,
                # or actually place the port value with
                # value: '8080'
                # ensure it is a **string** and not `value: 8080`
</code></pre>
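<p>As a quick sanity check (a sketch, assuming the pod is already running and you substitute its real name), you can print the variables Kubernetes injected and confirm which one Spring Boot is picking up:</p> <pre><code># list the docker-link style variables injected for the Service named "server"
kubectl exec &lt;server-pod-name&gt; -- env | grep -i '^server_'
# typically shows SERVER_PORT=tcp://10.98.151.181:8080, which Spring Boot reads as server.port
</code></pre>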
<p>I just set up Kubernetes using Minikube on my local Mac.</p> <p>I created a Service of type NodePort and am able to access it from outside the cluster using the URL <code>&lt;Cluster_IP&gt;:&lt;NodePort&gt;</code>.</p> <p>I enabled ingress on Minikube and am able to route calls from outside the cluster to the service inside the cluster.</p> <p>We are now going to set up a Kubernetes cluster on our private cloud. We are not using AWS/Google/Azure; it is our own cloud with Linux VMs.</p> <p>We use NetScaler for creating VIPs and routing requests to the applications deployed in those Linux VMs.</p> <p>Do I still need to create a VIP for my application, and should the VIP route calls to the Ingress or to the NodePort?</p> <p>Is there a better approach that does not require creating a VIP in NetScaler?</p>
<p>The best approach by far is to use your infrastructure's potential to the fullest. If you have a network load balancer where you can configure a VIP and point it to your nodes (NodePort), then for HTTP(S) services I would strongly advise doing just that.</p> <p>For convenience, I would configure one VIP and point it to the NodePort service of your cluster's ingress controller, and then use Ingress resources to expose your services to the outside world.</p>
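<p>For illustration, a minimal Ingress of the kind described above could look like this (a sketch; the host name <code>app.example.com</code> and the service name <code>my-service</code> are placeholders, not something from your setup):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
spec:
  rules:
  - host: app.example.com          # DNS name that resolves to the VIP on the NetScaler
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service  # plain ClusterIP service inside the cluster
          servicePort: 80
</code></pre> <p>With this pattern the NetScaler VIP only forwards ports 80/443 to the NodePort of the ingress controller; routing to the individual applications is handled by Ingress rules like the one above.</p>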
<p>My goal is to model a hybrid/heterogeneous Kubernetes cluster, where I have the following setup:</p> <ul> <li>Master node runs on AWS (cloud) - ip-172-31-28-6</li> <li>Slave node runs on my laptop - osboxes</li> <li>Slave node runs on a Raspberry Pi - edge-1</li> </ul> <p>Running a Kubernetes cluster with three VMs locally on my laptop is no problem and works fine with both Weave Net. However, there are some communication problems (I guess), when modelling my Kubernetes cluster as depicted above.</p> <p>As Kubernetes is designed to run on nodes, such that all nodes are located in the same network, I set up an OpenVPN server on AWS and connect with both my laptop and Raspberry Pi to it. I was hoping that this would be enough to run Kubernetes on a heterogeneous setup, when the slave nodes are in a different network. Of course, this was an incorrect assumption.</p> <p>If I run the Kubernetes dashboard on a slave node and try to access it, I get a timeout. If I run it on the Master node, everything works as expected.</p> <p>I set up the cluster on AWS with kubeadm init --apiserver-advertise-address= and used kubeadm join to connect with the nodes.</p> <p>$ kubectl get pods --all-namespaces -o wide:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE kube-system etcd-ip-172-31-28-6 1/1 Running 0 5m 172.31.28.6 ip-172-31-28-6 kube-system kube-apiserver-ip-172-31-28-6 1/1 Running 0 5m 172.31.28.6 ip-172-31-28-6 kube-system kube-controller-manager-ip-172-31-28-6 1/1 Running 0 5m 172.31.28.6 ip-172-31-28-6 kube-system kube-dns-6f4fd4bdf-w6ctf 0/3 ContainerCreating 0 15h &lt;none&gt; osboxes kube-system kube-proxy-2pl2f 1/1 Running 0 15h 172.31.28.6 ip-172-31-28-6 kube-system kube-proxy-7b89c 0/1 CrashLoopBackOff 15 15h 192.168.2.106 edge-1 kube-system kube-proxy-qg69g 1/1 Running 1 15h 10.0.2.15 osboxes kube-system kube-scheduler-ip-172-31-28-6 1/1 Running 0 5m 172.31.28.6 ip-172-31-28-6 kube-system weave-net-pqxfp 1/2 CrashLoopBackOff 189 15h 172.31.28.6 ip-172-31-28-6 kube-system weave-net-thhzr 1/2 CrashLoopBackOff 12 36m 192.168.2.106 edge-1 kube-system weave-net-v69hj 2/2 Running 7 15h 10.0.2.15 osboxes </code></pre> <p>$ kubectl -n kube-system logs --v=7 kube-dns-6f4fd4bdf-w6ctf -c kubedns:</p> <pre><code>... I0321 09:04:25.620580 23936 round_trippers.go:414] GET https://&lt;PUBLIC_IP&gt;:6443/api/v1/namespaces/kube-system/pods/kube-dns-6f4fd4bdf-w6ctf/log?container=kubedns I0321 09:04:25.620605 23936 round_trippers.go:421] Request Headers: I0321 09:04:25.620611 23936 round_trippers.go:424] Accept: application/json, */* I0321 09:04:25.620616 23936 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15 I0321 09:04:25.713821 23936 round_trippers.go:439] Response Status: 400 Bad Request in 93 milliseconds I0321 09:04:25.714106 23936 helpers.go:201] server response object: [{ "metadata": {}, "status": "Failure", "message": "container \"kubedns\" in pod \"kube-dns-6f4fd4bdf-w6ctf\" is waiting to start: ContainerCreating", "reason": "BadRequest", "code": 400 }] F0321 09:04:25.714134 23936 helpers.go:119] Error from server (BadRequest): container "kubedns" in pod "kube-dns-6f4fd4bdf-w6ctf" is waiting to start: ContainerCreating </code></pre> <p>kubectl -n kube-system logs --v=7 kube-proxy-7b89c:</p> <pre><code>... 
I0321 09:06:51.803852 24289 round_trippers.go:414] GET https://&lt;PUBLIC_IP&gt;:6443/api/v1/namespaces/kube-system/pods/kube-proxy-7b89c/log I0321 09:06:51.803879 24289 round_trippers.go:421] Request Headers: I0321 09:06:51.803891 24289 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15 I0321 09:06:51.803900 24289 round_trippers.go:424] Accept: application/json, */* I0321 09:08:59.110869 24289 round_trippers.go:439] Response Status: 500 Internal Server Error in 127306 milliseconds I0321 09:08:59.111129 24289 helpers.go:201] server response object: [{ "metadata": {}, "status": "Failure", "message": "Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out", "code": 500 }] F0321 09:08:59.111156 24289 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out </code></pre> <p>kubectl -n kube-system logs --v=7 weave-net-pqxfp -c weave:</p> <pre><code>... I0321 09:12:08.047206 24847 round_trippers.go:414] GET https://&lt;PUBLIC_IP&gt;:6443/api/v1/namespaces/kube-system/pods/weave-net-pqxfp/log?container=weave I0321 09:12:08.047233 24847 round_trippers.go:421] Request Headers: I0321 09:12:08.047335 24847 round_trippers.go:424] Accept: application/json, */* I0321 09:12:08.047347 24847 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15 I0321 09:12:08.062494 24847 round_trippers.go:439] Response Status: 200 OK in 15 milliseconds DEBU: 2018/03/21 09:11:26.847013 [kube-peers] Checking peer "fa:10:a4:97:7e:7b" against list &amp;{[{6e:fd:f4:ef:1e:f5 osboxes}]} Peer not in list; removing persisted data INFO: 2018/03/21 09:11:26.880946 Command line options: map[expect-npc:true ipalloc-init:consensus=3 db-prefix:/weavedb/weave-net http-addr:127.0.0.1:6784 ipalloc-range:10.32.0.0/12 nickname:ip-172-31-28-6 host-root:/host name:fa:10:a4:97:7e:7b no-dns:true status-addr:0.0.0.0:6782 datapath:datapath docker-api: port:6783 conn-limit:30] INFO: 2018/03/21 09:11:26.880995 weave 2.2.1 FATA: 2018/03/21 09:11:26.881117 Inconsistent bridge state detected. Please do 'weave reset' and try again </code></pre> <p>kubectl -n kube-system logs --v=7 weave-net-thhzr -c weave:</p> <pre><code>... 
I0321 09:15:13.787905 25113 round_trippers.go:414] GET https://&lt;PUBLIC_IP&gt;:6443/api/v1/namespaces/kube-system/pods/weave-net-thhzr/log?container=weave I0321 09:15:13.787932 25113 round_trippers.go:421] Request Headers: I0321 09:15:13.787938 25113 round_trippers.go:424] Accept: application/json, */* I0321 09:15:13.787946 25113 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15 I0321 09:17:21.126863 25113 round_trippers.go:439] Response Status: 500 Internal Server Error in 127338 milliseconds I0321 09:17:21.127140 25113 helpers.go:201] server response object: [{ "metadata": {}, "status": "Failure", "message": "Get https://192.168.2.106:10250/containerLogs/kube-system/weave-net-thhzr/weave: dial tcp 192.168.2.106:10250: getsockopt: connection timed out", "code": 500 }] F0321 09:17:21.127167 25113 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/weave-net-thhzr/weave: dial tcp 192.168.2.106:10250: getsockopt: connection timed out </code></pre> <p>$ ifconfig (Kubernetes master on AWS):</p> <pre><code>datapath Link encap:Ethernet HWaddr ae:90:9a:b2:7e:d9 inet6 addr: fe80::ac90:9aff:feb2:7ed9/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1376 Metric:1 RX packets:29 errors:0 dropped:0 overruns:0 frame:0 TX packets:14 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:1904 (1.9 KB) TX bytes:1188 (1.1 KB) docker0 Link encap:Ethernet HWaddr 02:42:50:39:1f:c7 inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth0 Link encap:Ethernet HWaddr 06:a3:d0:8e:19:72 inet addr:172.31.28.6 Bcast:172.31.31.255 Mask:255.255.240.0 inet6 addr: fe80::4a3:d0ff:fe8e:1972/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1 RX packets:10323322 errors:0 dropped:0 overruns:0 frame:0 TX packets:9418208 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3652314289 (3.6 GB) TX bytes:3117288442 (3.1 GB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:11388236 errors:0 dropped:0 overruns:0 frame:0 TX packets:11388236 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:2687297929 (2.6 GB) TX bytes:2687297929 (2.6 GB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:97222 errors:0 dropped:0 overruns:0 frame:0 TX packets:164607 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:13381022 (13.3 MB) TX bytes:209129403 (209.1 MB) vethwe-bridge Link encap:Ethernet HWaddr 12:59:54:73:0f:91 inet6 addr: fe80::1059:54ff:fe73:f91/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1376 Metric:1 RX packets:18 errors:0 dropped:0 overruns:0 frame:0 TX packets:36 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1476 (1.4 KB) TX bytes:2940 (2.9 KB) vethwe-datapath Link encap:Ethernet HWaddr 8e:75:1c:92:93:0d inet6 addr: fe80::8c75:1cff:fe92:930d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1376 Metric:1 RX packets:36 errors:0 dropped:0 overruns:0 frame:0 TX packets:18 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2940 (2.9 KB) TX bytes:1476 (1.4 
KB) vxlan-6784 Link encap:Ethernet HWaddr a6:02:da:5e:d5:2a inet6 addr: fe80::a402:daff:fe5e:d52a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:65485 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:8 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) </code></pre> <p>$ sudo systemctl status kubelet.service (on AWS):</p> <pre><code>Mar 21 09:34:59 ip-172-31-28-6 kubelet[19676]: W0321 09:34:59.202058 19676 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d Mar 21 09:34:59 ip-172-31-28-6 kubelet[19676]: E0321 09:34:59.202452 19676 kubelet.go:2109] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.535541 19676 kuberuntime_manager.go:514] Container {Name:weave Image:weaveworks/weave-kube:2.2.1 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value: ValueFrom:&amp;EnvVarSource{FieldRef:&amp;ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:&lt;nil&gt;} s:10m Format:DecimalSI}]} VolumeMounts:[{Name:weavedb ReadOnly:false MountPath:/weavedb SubPath: MountPropagation:&lt;nil&gt;} {Name:cni-bin ReadOnly:false MountPath:/host/opt SubPath: MountPropagation:&lt;nil&gt;} {Name:cni-bin2 ReadOnly:false MountPath:/host/home SubPath: MountPropagation:&lt;nil&gt;} {Name:cni-conf ReadOnly:false MountPath:/host/etc SubPath: MountPropagation:&lt;nil&gt;} {Name:dbus ReadOnly:false MountPath:/host/var/lib/dbus SubPath: MountPropagation:&lt;nil&gt;} {Name:lib-modules ReadOnly:false MountPath:/lib/modules SubPath: MountPropagation:&lt;nil&gt;} {Name:weave-net-token-vn8rh ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] LivenessProbe:&amp;Probe{Handler:Handler{Exec:nil,HTTPGet:&amp;HTTPGetAction{Path:/status,Port:6784,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&amp;SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. 
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.536504 19676 kuberuntime_manager.go:758] checking backoff for container "weave" in pod "weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)" Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.536636 19676 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=weave pod=weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972) Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: E0321 09:35:01.536664 19676 pod_workers.go:186] Error syncing pod c6450070-2c61-11e8-a50d-06a3d08e1972 ("weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "StartContainer" for "weave" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=weave pod=weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)" </code></pre> <p>$ sudo systemctl status kubelet.service (on Laptop)</p> <pre><code>Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.662670 715 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.663412 715 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.663869 715 kuberuntime_manager.go:647] createPodSandbox for pod "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.664295 715 pod_workers.go:186] Error syncing pod 11886465-2c61-11e8-a50d-06a3d08e1972 ("kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "CreatePodSandbox" for "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)\" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded" Mar 21 05:47:20 osboxes kubelet[715]: W0321 05:47:20.536161 715 pod_container_deletor.go:77] Container "bbf490835face43b70c24dbcb67c3f75872e7831b5e2605dc8bb71210910e273" not found in pod's containers </code></pre> <p>$ sudo systemctl status kubelet.service (on Raspberry Pi):</p> <pre><code>Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.188199 339 kuberuntime_manager.go:514] Container {Name:kube-proxy Image:gcr.io/google_containers/kube-proxy-amd64:v1.9.5 Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:kube-proxy ReadOnly:false MountPath:/var/lib/kube-proxy SubPath: MountPropagation:&lt;nil&gt;} {Name:xtables-lock ReadOnly:false MountPath:/run/xtables.lock SubPath: MountPropagation:&lt;nil&gt;} {Name:lib-modules ReadOnly:true MountPath:/lib/modules SubPath: MountPropagation:&lt;nil&gt;} {Name:kube-proxy-token-px7dt ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent 
SecurityContext:&amp;SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.189023 339 kuberuntime_manager.go:758] checking backoff for container "kube-proxy" in pod "kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)" Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.190174 339 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972) Mar 21 09:29:01 edge-1 kubelet[339]: E0321 09:29:01.190518 339 pod_workers.go:186] Error syncing pod 5bebafa1-2c61-11e8-a50d-06a3d08e1972 ("kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "StartContainer" for "kube-proxy" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)" Mar 21 09:29:02 edge-1 kubelet[339]: W0321 09:29:02.278342 339 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d Mar 21 09:29:02 edge-1 kubelet[339]: E0321 09:29:02.282534 339 kubelet.go:2120] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized </code></pre>
<p>You definitely have a problem with networking between the Kubernetes master and the nodes.</p> <p>But first of all, creating that kind of hybrid installation is not the best idea. You must have stable networking between the master(s) and the nodes, or it will cause many problems, and that is hard to achieve over an Internet connection.</p> <p>If you want to build a hybrid installation, you can use <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">Federation</a> between a Kubernetes cluster in AWS and one on your local hardware.</p> <p>Regarding your actual problem, I see that Weave Net is failing on the master and on the <code>edge-1</code> node. It is not clear from the logs exactly which kind of problem you have; try running the Weave container with the <code>WEAVE_DEBUG=1</code> environment variable. Without working networking, other pods like <code>kube-dns</code> will not work properly.</p> <p>Also, how did you set up OpenVPN? You need routing between the subnet on AWS and client-to-client routing between the VPN clients, so that all the addresses you use to set up your cluster are routable between all nodes. Check one more time which addresses the Kubernetes components and Weave bind to, and whether those addresses are routable.</p>
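<p>A minimal way to check both points (a sketch; 10.8.0.1 is the master's <code>tun0</code> address from your output, the other VPN addresses are placeholders):</p> <pre><code># from each node, verify the other VPN peers are reachable, not only the VPN server
ping -c 3 10.8.0.1                 # master
ping -c 3 &lt;laptop-vpn-ip&gt;
ping -c 3 &lt;raspberry-vpn-ip&gt;

# the kubelet port must be reachable from the master, otherwise `kubectl logs` times out
nc -zv 192.168.2.106 10250

# enable extra Weave logging via the DaemonSet, as suggested above
kubectl -n kube-system set env daemonset/weave-net WEAVE_DEBUG=1
</code></pre> <p>On the OpenVPN side this usually means enabling <code>client-to-client</code> and pushing routes for the AWS subnet, so the laptop and the Raspberry Pi can reach each other and the master through the tunnel.</p>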
<p>My goal is to model a hybrid/heterogeneous Kubernetes cluster, where I have the following setup:</p> <ul> <li>Master node runs on AWS (cloud) - ip-172-31-28-6</li> <li>Slave node runs on my laptop - osboxes</li> <li>Slave node runs on a Raspberry Pi - edge-1</li> </ul> <p>Running a Kubernetes cluster with three VMs locally on my laptop is no problem and works fine with both Weave Net. However, there are some communication problems (I guess), when modelling my Kubernetes cluster as depicted above.</p> <p>As Kubernetes is designed to run on nodes, such that all nodes are located in the same network, I set up an OpenVPN server on AWS and connect with both my laptop and Raspberry Pi to it. I was hoping that this would be enough to run Kubernetes on a heterogeneous setup, when the slave nodes are in a different network. Of course, this was an incorrect assumption.</p> <p>If I run the Kubernetes dashboard on a slave node and try to access it, I get a timeout. If I run it on the Master node, everything works as expected.</p> <p>I set up the cluster on AWS with kubeadm init --apiserver-advertise-address= and used kubeadm join to connect with the nodes.</p> <p>$ kubectl get pods --all-namespaces -o wide:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE kube-system etcd-ip-172-31-28-6 1/1 Running 0 5m 172.31.28.6 ip-172-31-28-6 kube-system kube-apiserver-ip-172-31-28-6 1/1 Running 0 5m 172.31.28.6 ip-172-31-28-6 kube-system kube-controller-manager-ip-172-31-28-6 1/1 Running 0 5m 172.31.28.6 ip-172-31-28-6 kube-system kube-dns-6f4fd4bdf-w6ctf 0/3 ContainerCreating 0 15h &lt;none&gt; osboxes kube-system kube-proxy-2pl2f 1/1 Running 0 15h 172.31.28.6 ip-172-31-28-6 kube-system kube-proxy-7b89c 0/1 CrashLoopBackOff 15 15h 192.168.2.106 edge-1 kube-system kube-proxy-qg69g 1/1 Running 1 15h 10.0.2.15 osboxes kube-system kube-scheduler-ip-172-31-28-6 1/1 Running 0 5m 172.31.28.6 ip-172-31-28-6 kube-system weave-net-pqxfp 1/2 CrashLoopBackOff 189 15h 172.31.28.6 ip-172-31-28-6 kube-system weave-net-thhzr 1/2 CrashLoopBackOff 12 36m 192.168.2.106 edge-1 kube-system weave-net-v69hj 2/2 Running 7 15h 10.0.2.15 osboxes </code></pre> <p>$ kubectl -n kube-system logs --v=7 kube-dns-6f4fd4bdf-w6ctf -c kubedns:</p> <pre><code>... I0321 09:04:25.620580 23936 round_trippers.go:414] GET https://&lt;PUBLIC_IP&gt;:6443/api/v1/namespaces/kube-system/pods/kube-dns-6f4fd4bdf-w6ctf/log?container=kubedns I0321 09:04:25.620605 23936 round_trippers.go:421] Request Headers: I0321 09:04:25.620611 23936 round_trippers.go:424] Accept: application/json, */* I0321 09:04:25.620616 23936 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15 I0321 09:04:25.713821 23936 round_trippers.go:439] Response Status: 400 Bad Request in 93 milliseconds I0321 09:04:25.714106 23936 helpers.go:201] server response object: [{ "metadata": {}, "status": "Failure", "message": "container \"kubedns\" in pod \"kube-dns-6f4fd4bdf-w6ctf\" is waiting to start: ContainerCreating", "reason": "BadRequest", "code": 400 }] F0321 09:04:25.714134 23936 helpers.go:119] Error from server (BadRequest): container "kubedns" in pod "kube-dns-6f4fd4bdf-w6ctf" is waiting to start: ContainerCreating </code></pre> <p>kubectl -n kube-system logs --v=7 kube-proxy-7b89c:</p> <pre><code>... 
I0321 09:06:51.803852 24289 round_trippers.go:414] GET https://&lt;PUBLIC_IP&gt;:6443/api/v1/namespaces/kube-system/pods/kube-proxy-7b89c/log I0321 09:06:51.803879 24289 round_trippers.go:421] Request Headers: I0321 09:06:51.803891 24289 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15 I0321 09:06:51.803900 24289 round_trippers.go:424] Accept: application/json, */* I0321 09:08:59.110869 24289 round_trippers.go:439] Response Status: 500 Internal Server Error in 127306 milliseconds I0321 09:08:59.111129 24289 helpers.go:201] server response object: [{ "metadata": {}, "status": "Failure", "message": "Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out", "code": 500 }] F0321 09:08:59.111156 24289 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out </code></pre> <p>kubectl -n kube-system logs --v=7 weave-net-pqxfp -c weave:</p> <pre><code>... I0321 09:12:08.047206 24847 round_trippers.go:414] GET https://&lt;PUBLIC_IP&gt;:6443/api/v1/namespaces/kube-system/pods/weave-net-pqxfp/log?container=weave I0321 09:12:08.047233 24847 round_trippers.go:421] Request Headers: I0321 09:12:08.047335 24847 round_trippers.go:424] Accept: application/json, */* I0321 09:12:08.047347 24847 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15 I0321 09:12:08.062494 24847 round_trippers.go:439] Response Status: 200 OK in 15 milliseconds DEBU: 2018/03/21 09:11:26.847013 [kube-peers] Checking peer "fa:10:a4:97:7e:7b" against list &amp;{[{6e:fd:f4:ef:1e:f5 osboxes}]} Peer not in list; removing persisted data INFO: 2018/03/21 09:11:26.880946 Command line options: map[expect-npc:true ipalloc-init:consensus=3 db-prefix:/weavedb/weave-net http-addr:127.0.0.1:6784 ipalloc-range:10.32.0.0/12 nickname:ip-172-31-28-6 host-root:/host name:fa:10:a4:97:7e:7b no-dns:true status-addr:0.0.0.0:6782 datapath:datapath docker-api: port:6783 conn-limit:30] INFO: 2018/03/21 09:11:26.880995 weave 2.2.1 FATA: 2018/03/21 09:11:26.881117 Inconsistent bridge state detected. Please do 'weave reset' and try again </code></pre> <p>kubectl -n kube-system logs --v=7 weave-net-thhzr -c weave:</p> <pre><code>... 
I0321 09:15:13.787905 25113 round_trippers.go:414] GET https://&lt;PUBLIC_IP&gt;:6443/api/v1/namespaces/kube-system/pods/weave-net-thhzr/log?container=weave I0321 09:15:13.787932 25113 round_trippers.go:421] Request Headers: I0321 09:15:13.787938 25113 round_trippers.go:424] Accept: application/json, */* I0321 09:15:13.787946 25113 round_trippers.go:424] User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15 I0321 09:17:21.126863 25113 round_trippers.go:439] Response Status: 500 Internal Server Error in 127338 milliseconds I0321 09:17:21.127140 25113 helpers.go:201] server response object: [{ "metadata": {}, "status": "Failure", "message": "Get https://192.168.2.106:10250/containerLogs/kube-system/weave-net-thhzr/weave: dial tcp 192.168.2.106:10250: getsockopt: connection timed out", "code": 500 }] F0321 09:17:21.127167 25113 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/weave-net-thhzr/weave: dial tcp 192.168.2.106:10250: getsockopt: connection timed out </code></pre> <p>$ ifconfig (Kubernetes master on AWS):</p> <pre><code>datapath Link encap:Ethernet HWaddr ae:90:9a:b2:7e:d9 inet6 addr: fe80::ac90:9aff:feb2:7ed9/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1376 Metric:1 RX packets:29 errors:0 dropped:0 overruns:0 frame:0 TX packets:14 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:1904 (1.9 KB) TX bytes:1188 (1.1 KB) docker0 Link encap:Ethernet HWaddr 02:42:50:39:1f:c7 inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth0 Link encap:Ethernet HWaddr 06:a3:d0:8e:19:72 inet addr:172.31.28.6 Bcast:172.31.31.255 Mask:255.255.240.0 inet6 addr: fe80::4a3:d0ff:fe8e:1972/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1 RX packets:10323322 errors:0 dropped:0 overruns:0 frame:0 TX packets:9418208 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3652314289 (3.6 GB) TX bytes:3117288442 (3.1 GB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:11388236 errors:0 dropped:0 overruns:0 frame:0 TX packets:11388236 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:2687297929 (2.6 GB) TX bytes:2687297929 (2.6 GB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:97222 errors:0 dropped:0 overruns:0 frame:0 TX packets:164607 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:13381022 (13.3 MB) TX bytes:209129403 (209.1 MB) vethwe-bridge Link encap:Ethernet HWaddr 12:59:54:73:0f:91 inet6 addr: fe80::1059:54ff:fe73:f91/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1376 Metric:1 RX packets:18 errors:0 dropped:0 overruns:0 frame:0 TX packets:36 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1476 (1.4 KB) TX bytes:2940 (2.9 KB) vethwe-datapath Link encap:Ethernet HWaddr 8e:75:1c:92:93:0d inet6 addr: fe80::8c75:1cff:fe92:930d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1376 Metric:1 RX packets:36 errors:0 dropped:0 overruns:0 frame:0 TX packets:18 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2940 (2.9 KB) TX bytes:1476 (1.4 
KB) vxlan-6784 Link encap:Ethernet HWaddr a6:02:da:5e:d5:2a inet6 addr: fe80::a402:daff:fe5e:d52a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:65485 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:8 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) </code></pre> <p>$ sudo systemctl status kubelet.service (on AWS):</p> <pre><code>Mar 21 09:34:59 ip-172-31-28-6 kubelet[19676]: W0321 09:34:59.202058 19676 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d Mar 21 09:34:59 ip-172-31-28-6 kubelet[19676]: E0321 09:34:59.202452 19676 kubelet.go:2109] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.535541 19676 kuberuntime_manager.go:514] Container {Name:weave Image:weaveworks/weave-kube:2.2.1 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value: ValueFrom:&amp;EnvVarSource{FieldRef:&amp;ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:&lt;nil&gt;} s:10m Format:DecimalSI}]} VolumeMounts:[{Name:weavedb ReadOnly:false MountPath:/weavedb SubPath: MountPropagation:&lt;nil&gt;} {Name:cni-bin ReadOnly:false MountPath:/host/opt SubPath: MountPropagation:&lt;nil&gt;} {Name:cni-bin2 ReadOnly:false MountPath:/host/home SubPath: MountPropagation:&lt;nil&gt;} {Name:cni-conf ReadOnly:false MountPath:/host/etc SubPath: MountPropagation:&lt;nil&gt;} {Name:dbus ReadOnly:false MountPath:/host/var/lib/dbus SubPath: MountPropagation:&lt;nil&gt;} {Name:lib-modules ReadOnly:false MountPath:/lib/modules SubPath: MountPropagation:&lt;nil&gt;} {Name:weave-net-token-vn8rh ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] LivenessProbe:&amp;Probe{Handler:Handler{Exec:nil,HTTPGet:&amp;HTTPGetAction{Path:/status,Port:6784,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&amp;SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. 
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.536504 19676 kuberuntime_manager.go:758] checking backoff for container "weave" in pod "weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)" Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.536636 19676 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=weave pod=weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972) Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: E0321 09:35:01.536664 19676 pod_workers.go:186] Error syncing pod c6450070-2c61-11e8-a50d-06a3d08e1972 ("weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "StartContainer" for "weave" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=weave pod=weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)" </code></pre> <p>$ sudo systemctl status kubelet.service (on Laptop)</p> <pre><code>Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.662670 715 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.663412 715 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.663869 715 kuberuntime_manager.go:647] createPodSandbox for pod "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.664295 715 pod_workers.go:186] Error syncing pod 11886465-2c61-11e8-a50d-06a3d08e1972 ("kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "CreatePodSandbox" for "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)\" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded" Mar 21 05:47:20 osboxes kubelet[715]: W0321 05:47:20.536161 715 pod_container_deletor.go:77] Container "bbf490835face43b70c24dbcb67c3f75872e7831b5e2605dc8bb71210910e273" not found in pod's containers </code></pre> <p>$ sudo systemctl status kubelet.service (on Raspberry Pi):</p> <pre><code>Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.188199 339 kuberuntime_manager.go:514] Container {Name:kube-proxy Image:gcr.io/google_containers/kube-proxy-amd64:v1.9.5 Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:kube-proxy ReadOnly:false MountPath:/var/lib/kube-proxy SubPath: MountPropagation:&lt;nil&gt;} {Name:xtables-lock ReadOnly:false MountPath:/run/xtables.lock SubPath: MountPropagation:&lt;nil&gt;} {Name:lib-modules ReadOnly:true MountPath:/lib/modules SubPath: MountPropagation:&lt;nil&gt;} {Name:kube-proxy-token-px7dt ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:&lt;nil&gt;}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent 
SecurityContext:&amp;SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.189023 339 kuberuntime_manager.go:758] checking backoff for container "kube-proxy" in pod "kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)" Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.190174 339 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972) Mar 21 09:29:01 edge-1 kubelet[339]: E0321 09:29:01.190518 339 pod_workers.go:186] Error syncing pod 5bebafa1-2c61-11e8-a50d-06a3d08e1972 ("kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "StartContainer" for "kube-proxy" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)" Mar 21 09:29:02 edge-1 kubelet[339]: W0321 09:29:02.278342 339 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d Mar 21 09:29:02 edge-1 kubelet[339]: E0321 09:29:02.282534 339 kubelet.go:2120] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized </code></pre>
<ol> <li>This message explains one of the crashes:</li> </ol> <p><code>FATA: 2018/03/21 09:11:26.881117 Inconsistent bridge state detected. Please do 'weave reset' and try again </code></p> <p>Since it's slightly complicated to run the <code>weave</code> command on a Kubernetes node, just reboot the node and the bridge should be recreated from scratch.</p> <ol start="2"> <li>This message says it couldn't contact the node to get logs:</li> </ol> <p><code>F0321 09:08:59.111156 24289 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out</code></p> <p>Consider whether those hosts can reach each other on their regular network.</p>
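<p>A possible sequence for point 1 (a sketch, assuming you can tolerate the node being briefly unavailable; replace the node name with the affected one):</p> <pre><code># move regular workloads off the node, then reboot it so the weave bridge is recreated cleanly
kubectl drain &lt;node-name&gt; --ignore-daemonsets
sudo reboot          # run on that node
kubectl uncordon &lt;node-name&gt;

# for point 2, check from the master whether the kubelet port on edge-1 is reachable at all
nc -zv 192.168.2.106 10250
</code></pre>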
<p>I am trying to follow the instructions at <a href="https://docs.docker.com/docker-for-windows/kubernetes/#use-docker-commands" rel="nofollow noreferrer">https://docs.docker.com/docker-for-windows/kubernetes/#use-docker-commands</a> for running a docker-compose.yml file against kubernetes on Docker for Windows.</p> <p>I am using the Edge version of Docker for Windows -- 18.03.0-ce-rc4 -- and I have kubernetes enabled.</p> <p>I am using the example docker-compose app at <a href="https://docs.docker.com/compose/gettingstarted/#step-3-define-services-in-a-compose-file" rel="nofollow noreferrer">https://docs.docker.com/compose/gettingstarted/#step-3-define-services-in-a-compose-file</a>, i.e.</p> <pre><code>version: '3.3' services: web: build: . ports: - '5000:5000' redis: image: redis </code></pre> <p>This example works fine with <code>docker-compose build</code> and <code>docker-compose up</code></p> <p>But following the documentation linked above for <code>docker stack</code>, I get the following:</p> <pre><code>PS C:\dev\projects\python\kubetest&gt; docker stack deploy --compose-file .\docker-compose.yml mystack Ignoring unsupported options: build Stack mystack was created Waiting for the stack to be stable and running... - Service redis has one container running PS C:\dev\projects\python\kubetest&gt; kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 5d redis ClusterIP None &lt;none&gt; 55555/TCP 8s </code></pre> <p>Note that it doesn't create the web service, along with that "ignoring unsupported options: build" error</p> <p>I also tried using the sample docker-compose.yml file in that documentation linked above, and it didn't work, either, with a totally different error.</p> <p>In short, by following the documentation, I'm unable to get anything deployed to kubernetes on Docker for Windows.</p>
<p>Because <code>build</code> is not supported, there is no <code>image</code> to run for the <code>web</code> service containers.</p> <p>Compose can manage the build for you on a single Docker host. As Swarm and Kubernetes are normally run across multiple nodes, an <code>image</code> should reference a registry available on the network so all nodes can access the same image.</p> <p><a href="https://docs.docker.com/engine/swarm/stack-deploy/#create-the-example-application" rel="nofollow noreferrer">Docker's <code>stack deploy</code> example</a> includes a step to <a href="https://docs.docker.com/engine/swarm/stack-deploy/#set-up-a-docker-registry" rel="nofollow noreferrer">set up a private registry</a> and use that as the source of the image:</p> <pre><code>services:
  web:
    image: 127.0.0.1:5000/stackdemo
</code></pre> <h3>Workaround</h3> <p>In this instance, it <em>might</em> be possible to get away with building the image manually and referencing that image name, since everything runs under the one Docker instance; it depends on how Kubernetes is set up.</p> <pre><code>version: '3.3'

services:
  web:
    build: .
    image: me/web
    ports:
      - '5000:5000'
  redis:
    image: redis
</code></pre> <p>Build the image externally:</p> <pre><code>docker-compose build web
</code></pre> <p>or directly with <code>docker</code>:</p> <pre><code>docker build -t me/web .
</code></pre>
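<p>A hedged end-to-end sketch of the registry variant (it assumes you change the compose file to <code>image: 127.0.0.1:5000/web</code>; the registry address and tag are examples, not something your setup already has):</p> <pre><code>docker run -d -p 5000:5000 --name registry registry:2    # throwaway local registry
docker-compose build web                                 # builds and tags 127.0.0.1:5000/web
docker push 127.0.0.1:5000/web
docker stack deploy --compose-file docker-compose.yml mystack
</code></pre>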
<p>Using test config with Ignite 2.4 and k8s 1.9:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd"&gt; &lt;bean class="org.apache.ignite.configuration.IgniteConfiguration"&gt; &lt;property name="discoverySpi"&gt; &lt;bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"&gt; &lt;property name="ipFinder"&gt; &lt;bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/&gt; &lt;/property&gt; &lt;/bean&gt; &lt;/property&gt; &lt;/bean&gt; &lt;/beans&gt; </code></pre> <p>Unable to find Kubernetes API Server at <a href="https://kubernetes.default.svc.cluster.local:443" rel="noreferrer">https://kubernetes.default.svc.cluster.local:443</a> Can I set the API Server URL in the XML config file? How?</p>
<p>@Denis was right.</p> <p>Kubernetes uses the RBAC access control system, and you need to authorize your pod to access the API.</p> <p>For that, you need to add a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Service Account</a> to your pod.</p> <p>To do that, you need to:</p> <ol> <li><p>Create a service account and set a role for it:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: &lt;Your namespace&gt;
</code></pre></li> <li><p>I am not sure that permission to access only pods will be enough for Ignite, but if not, you can add as many permissions as you want. <a href="https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-roles.yaml" rel="nofollow noreferrer">Here</a> is an example of different kinds of roles with a large list of permissions. Now we create a ClusterRole for your app:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite
  namespace: &lt;Your namespace&gt;
rules:
- apiGroups:
  - ""
  resources:
  - pods     # these are the resources you can access
  verbs:     # this is what you can do with them
  - get
  - list
  - watch
</code></pre></li> <li><p>Create a binding for that role:</p> <pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: ignite
  namespace: &lt;Your namespace&gt;
</code></pre></li> <li><p>Now you need to associate the ServiceAccount with the pods of your application:</p> <pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  ....
spec:
  template:
    spec:
      serviceAccountName: ignite
</code></pre></li> </ol> <p>After that, your application will have access to the K8s API. P.S. Do not forget to change <code>&lt;Your namespace&gt;</code> to the namespace where you are running Ignite.</p>
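<p>Once the objects above are applied, you can verify the access (a sketch; it assumes your own kubectl user is allowed to impersonate service accounts, and <code>&lt;Your namespace&gt;</code> still has to be replaced):</p> <pre><code>kubectl auth can-i list pods \
  --namespace &lt;Your namespace&gt; \
  --as system:serviceaccount:&lt;Your namespace&gt;:ignite
# expected output: yes
</code></pre>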
<p>I am using a Kubernetes cluster on AWS to run Spark jobs with Spark 2.3. Now I want to run spark-submit from an AWS Lambda function against the k8s master. Is there any REST interface to run Spark submit on the k8s master?</p>
<p>Unfortunately, it is not possible for Spark 2.3 if you are using native Kubernetes support.</p> <p>Based on the description in the deployment instructions, the submission process consists of several steps:</p> <ul> <li>Spark creates a Spark driver running within a Kubernetes pod.</li> <li><p>The driver creates executors, which also run within Kubernetes pods.</p></li> <li><p>The driver connects to them and executes the application code.</p></li> </ul> <p>When the application completes, the executor pods terminate and are cleaned up, but the driver pod persists its logs and remains in “completed” state in the Kubernetes API until it’s eventually garbage collected or manually cleaned up.</p> <p>So, in fact, there is no endpoint to submit a job to until you start the submission process yourself, which launches the first Spark pod (the driver) for you. Only once the application completes is everything terminated.</p> <p>Please also see a similar answer to this question under <a href="https://stackoverflow.com/questions/49263299/spark-submit-2-3-on-kubernetes-cluster-from-python/49347108#49347108">the link</a>.</p>
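<p>For completeness, in Spark 2.3 the submission itself is a client-side <code>spark-submit</code> call rather than a REST request, roughly like this (a sketch based on the Spark on Kubernetes documentation; the image name and jar are placeholders):</p> <pre><code>bin/spark-submit \
  --master k8s://https://&lt;k8s-apiserver-host&gt;:&lt;port&gt; \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=&lt;your-spark-image&gt; \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre> <p>So from a Lambda function you would have to invoke this client yourself (or create the driver pod directly through the Kubernetes API); there is no Spark REST endpoint to call in this mode.</p>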
<p>I am trying to deploy my microservices using Kubernetes. I have one Ubuntu 16.04 machine as an AWS EC2 instance, and on that EC2 instance I need to use Kubernetes to deploy my microservices, which are developed with Spring Boot. I have already explored the architecture of Kubernetes, but I am still learning how to install Kubernetes on Ubuntu.</p> <p>The guides show that at least two machines are needed: one for the master and another for the nodes (worker machines). Here are a couple of the links that I read for installing Kubernetes:</p> <ol> <li><a href="https://medium.com/@Grigorkh/install-kubernetes-on-ubuntu-1ac2ef522a36" rel="nofollow noreferrer">https://medium.com/@Grigorkh/install-kubernetes-on-ubuntu-1ac2ef522a36</a></li> <li><a href="https://medium.com/@SystemMining/setup-kubenetes-cluster-on-ubuntu-16-04-with-kubeadm-336f4061d929" rel="nofollow noreferrer">https://medium.com/@SystemMining/setup-kubenetes-cluster-on-ubuntu-16-04-with-kubeadm-336f4061d929</a></li> </ol> <p>I need to clarify my confusion about Kubernetes and its installation. My questions are:</p> <ol> <li>Can I use one Ubuntu 16.04 machine as both master and worker for my microservice deployment?</li> <li>Can I integrate Kubernetes with Jenkins on the same Ubuntu 16.04 machine, since I am planning to choose an EC2 Ubuntu 16.04 LTS instance for this?</li> <li>If master and node on the same machine is possible (question 1), how can I create a different number of nodes when I initialize my cluster with kubeadm init?</li> </ol> <p><a href="https://i.stack.imgur.com/Z6N42.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z6N42.png" alt="enter image description here"></a></p> <p>I am only a beginner with this.</p>
<p>Let's clarify one by one.</p> <blockquote> <p>Can I use one Ubuntu 16.04 machine as both master and worker for my microservice deployment?</p> </blockquote> <p>Yes, you can use one server for all components, but only if you run your master and node in different VMs or containers. Theoretically, it is possible to create an all-in-one server without that, but it is tricky and I don't recommend it.</p> <blockquote> <p>Can I integrate Kubernetes with Jenkins on the same Ubuntu 16.04 machine, since I am planning to choose an EC2 Ubuntu 16.04 LTS instance for this?</p> </blockquote> <p>You can, for example, install Jenkins inside Kubernetes, or install it somewhere else and integrate it. So, yes, you can. <a href="https://akomljen.com/set-up-jenkins-ci-cd-pipeline-with-kubernetes/" rel="nofollow noreferrer">Here</a> is one of the articles about it.</p> <blockquote> <p>If master and node on the same machine is possible (question 1), how can I create a different number of nodes when I initialize my cluster with kubeadm init?</p> </blockquote> <p>You cannot create multiple nodes on a single machine without a Docker-in-Docker solution or VMs.</p> <p>Actually, I highly recommend <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">Minikube</a> for single-node Kubernetes. It will automatically create a local cluster in a VM for you in one click.</p>
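<p>If you do want to try the all-in-one kubeadm setup mentioned above despite its drawbacks, the usual trick (a sketch, for test environments only) is to allow regular workloads on the master by removing its taint:</p> <pre><code>kubeadm init
# ... install a pod network add-on (e.g. Weave or Flannel) ...
kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre> <p>After that the single EC2 instance acts as both master and worker, which is fine for experiments but not for anything production-like.</p>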
<p>I have a Kubernetes cluster that is running on AWS EC2 instances with Weave as the networking (CNI). I have disabled the Docker networking (ipmasq and iptables), as it is managed by Weave (to avoid network conflicts).</p> <p>I have deployed Jenkins on this cluster as a K8s pod, and this Jenkins uses the Jenkins Kubernetes plugin to spawn dynamic slaves based on a pod and container template I have defined. These slave containers have a Docker client in them which connects to the host Docker engine via docker.sock.</p> <p>So when I run any job in Jenkins, it starts a slave, which clones a git repo and starts building the Dockerfile present inside the repo.</p> <p>My sample Dockerfile looks like this:</p> <pre><code>FROM abc:123
RUN yum update
</code></pre> <p>When Docker starts building this, it tries connecting to the Red Hat repo to update the local package index and fails there. To debug, I logged in to the container and tried wget/curl on some packages, and found that there is no internet connectivity in this container.</p> <p>I suspect that while building, Docker starts intermediate containers, and those containers are not managed by Weave, so they do not have internet connectivity.</p> <p>Need suggestions.</p> <p>Related question: <a href="https://stackoverflow.com/questions/47003225/internet-connection-inside-docker-container-in-kubernetes/47008722#47008722">Internet connection inside Docker container in Kubernetes</a></p>
<p>OK, finally after a lot of struggle I found the solution.<br> Whenever K8s starts a pod, it starts a sidecar (pause) container whose role is basically to provide the network to the pod's containers.<br> So while running docker build, if I pass that container's ID as the network, my intermediate containers get internet connectivity via it. The change looks something like this:</p> <pre><code>docker build -t "some name" --network container:\$(docker ps | grep \$(hostname) | grep k8s_POD | cut -d\" \" -f1) -f infra/docker/Dockerfile .
</code></pre> <p>Hope this helps. :D</p>
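<p>For readability, the same idea without the pipeline escaping (a sketch; it assumes the slave pod's hostname matches the pod name, which is how the Jenkins Kubernetes plugin names its agents, and <code>some-image:latest</code> is a placeholder tag):</p> <pre><code># find the pause ("k8s_POD") container that holds the network namespace of this slave pod
POD_NET_CONTAINER=$(docker ps | grep "$(hostname)" | grep k8s_POD | cut -d' ' -f1)

# build inside that network namespace so intermediate containers get the pod's (weave) network
docker build -t some-image:latest --network "container:${POD_NET_CONTAINER}" -f infra/docker/Dockerfile .
</code></pre>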
<p>I have 3 nodes in a Kubernetes cluster. I created a <code>daemonset</code> and deployed it on all 3 nodes. The <code>daemonset</code> created 3 pods and they were running successfully. But for some reason, one of the pods failed.</p> <p>I need to know how we can restart this pod without affecting the other pods in the DaemonSet, and without creating another DaemonSet deployment.</p> <p>Thanks</p>
<p><code>kubectl delete pod &lt;podname&gt;</code> will delete this one pod, and the Deployment/StatefulSet/ReplicaSet/DaemonSet controller will reschedule a new one in its place.</p>
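<p>For example (a sketch; the label and pod name are placeholders that depend on how the DaemonSet was created):</p> <pre><code># find the failed pod of the DaemonSet
kubectl get pods -o wide -l name=my-daemonset

# delete only that pod; the DaemonSet controller recreates it on the same node
kubectl delete pod my-daemonset-abc12

# watch the replacement come up while the other pods stay untouched
kubectl get pods -w -l name=my-daemonset
</code></pre>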
<p>Can someone please let me know the steps that need to be followed to mount an Azure file share that already has files in it onto a pod in Kubernetes?</p>
<p>As I see in the <a href="https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-linux" rel="nofollow noreferrer">documentation</a>, Azure uses the SMB protocol to access the data:</p> <blockquote> <p>Azure Files can be mounted either via SMB 2.1 and SMB 3.0. For connections coming from clients on-premises or in other Azure regions, Azure Files will reject SMB 2.1 (or SMB 3.0 without encryption). If secure transfer required is enabled for a storage account, Azure Files will only allow connections using SMB 3.0 with encryption.</p> </blockquote> <p>Kubernetes does not support SMB directly, but it has support for a special type of volume, <code>AzureFile</code>, which will provide the SMB configuration automatically for you.</p> <p>Next, to mount the storage, you need <a href="https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md" rel="nofollow noreferrer">to</a>:</p> <ol> <li><p>Install the packages on your nodes: <code>yum -y install cifs-utils</code>. If you are using Debian-like distributions such as Ubuntu, check how to install those packages in your OS (they probably have the same names).</p></li> <li><p>Now, you need to:</p> <blockquote> <p>Obtain a Microsoft Azure storage account and create a secret that contains the base64 encoded Azure Storage account name and key. In the secret file, base64-encode Azure Storage account name and pair it with name azurestorageaccountname, and base64-encode Azure Storage access key and pair it with name azurestorageaccountkey.</p> </blockquote> <p>After that, you can create a Kubernetes secret with that file:</p></li> </ol> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
data:
  azurestorageaccountname: azhzdGVzdA==
  azurestorageaccountkey: eElGMXpKYm5ub2pGTE1Ta0JwNTBteDAyckhzTUsyc2pVN21GdDRMMTNob0I3ZHJBYUo4akQ2K0E0NDNqSm9nVjd5MkZVT2hRQ1dQbU02WWFOSHk3cWc9PQ==
</code></pre> <ol start="3"> <li>Now you can mount a share into your pod:</li> </ol> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: azure
spec:
  containers:
  - image: kubernetes/pause
    name: azure
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureFile:
      secretName: azure-secret
      shareName: k8stest
      readOnly: false
</code></pre>
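<p>If you prefer not to base64-encode the values by hand, the same secret can be created directly from the CLI (a sketch; the account name and key are placeholders for your own):</p> <pre><code>kubectl create secret generic azure-secret \
  --from-literal=azurestorageaccountname=&lt;storage-account-name&gt; \
  --from-literal=azurestorageaccountkey=&lt;storage-account-key&gt;
</code></pre>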
<p>We have a Kubernetes 1.7.8 cluster deployed with Kops 1.7 in HA with three masters. The cluster has 10 nodes and around 400 pods.</p> <p>The cluster has Heapster, Prometheus, and ELK (collecting logs for some pods).</p> <p>We are seeing very high activity on the masters, with over 90% of CPU used by the api-server.</p> <p>Checking Prometheus numbers, we can see that nearly 5000 requests to the kube-apiserver are WATCH verbs; the rest are fewer than 50 requests each (GET, LIST, PATCH, PUT).</p> <p>Almost all requests are reported with the client "Go-Http-client/2.0" (the default User-Agent of the Go HTTP library).</p> <p>Is this a normal situation?</p> <p>How can we debug which pods are sending these requests? (How can we add the source IP to the kube-apiserver logs?)</p> <p>[kube-apiserver.manifest][1]</p> <p>Thanks, Charles</p> <pre><code>[1]: https://pastebin.com/nGxSXuZb </code></pre>
<p>Regarding the Kubernetes architecture, this is normal behavior, because all kubernetes cluster components call the api-server to watch for changes. </p> <p>That is why you have more than 5000 WATCH entries in your logs. Please take a look at how the <a href="https://kubernetes.io/docs/admin/high-availability/building/" rel="nofollow noreferrer">kubernetes cluster is managed by kube api server</a> and how the <a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/" rel="nofollow noreferrer">master-node communication is organized</a>.</p>
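<p>To narrow down who is actually sending those requests, you can break the apiserver metrics down by client user agent and verb. A sketch, assuming you already scrape the apiserver with Prometheus (the metric name is the 1.7-era <code>apiserver_request_count</code>):</p> <pre><code># requests per second, grouped by client user agent and verb
sum(rate(apiserver_request_count[5m])) by (client, verb)

# the same raw data can also be pulled ad hoc without Prometheus:
kubectl get --raw /metrics | grep '^apiserver_request_count'
</code></pre>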
<p>I am experimenting with GKE cluster upgrades in a 6 nodes (in two node pools) test cluster before I try it on our staging or production cluster. Upgrading when I only had a 12 replicas nginx deployment, the nginx ingress controller and cert-manager (as helm chart) installed took 10 minutes per node pool (3 nodes). I was very satisfied. I decided to try again with something that looks more like our setup. I removed the nginx deploy and added 2 node.js deployments, the following helm charts: mongodb-0.4.27, mcrouter-0.1.0 (as a statefulset), redis-ha-2.0.0, and my own www-redirect-0.0.1 chart (simple nginx which does redirect). The problem seems to be with mcrouter. Once the node starts draining, the status of that node changes to <code>Ready,SchedulingDisabled</code> (which seems normal) but the following pods remains:</p> <ul> <li>mcrouter-memcached-0</li> <li>fluentd-gcp-v2.0.9-4f87t</li> <li>kube-proxy-gke-test-upgrade-cluster-default-pool-74f8edac-wblf</li> </ul> <p>I do not know why those two kube-system pods remains, but that mcrouter is mine and it won't go quickly enough. If I wait long enough (1 hour+) then it eventually work, I am not sure why. The current node pool (of 3 nodes) started upgrading 2h46 minutes ago and 2 nodes are upgraded, the 3rd one is still upgrading but nothing is moving... I presume it will complete in the next 1-2 hours... I tried to run the drain command with <code>--ignore-daemonsets --force</code> but it told me it was already drained. I tried to delete the pods, but they just come back and the upgrade does not move any faster. Any thoughts?</p> <h1>Update #1</h1> <p>The mcrouter helm chart was installed like this:</p> <p><code>helm install stable/mcrouter --name mcrouter --set controller=statefulset</code></p> <p>The statefulsets it created for mcrouter part is:</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: labels: app: mcrouter-mcrouter chart: mcrouter-0.1.0 heritage: Tiller release: mcrouter name: mcrouter-mcrouter spec: podManagementPolicy: OrderedReady replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: mcrouter-mcrouter chart: mcrouter-0.1.0 heritage: Tiller release: mcrouter serviceName: mcrouter-mcrouter template: metadata: labels: app: mcrouter-mcrouter chart: mcrouter-0.1.0 heritage: Tiller release: mcrouter spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: mcrouter-mcrouter release: mcrouter topologyKey: kubernetes.io/hostname containers: - args: - -p 5000 - --config-file=/etc/mcrouter/config.json command: - mcrouter image: jphalip/mcrouter:0.36.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 tcpSocket: port: mcrouter-port timeoutSeconds: 5 name: mcrouter-mcrouter ports: - containerPort: 5000 name: mcrouter-port protocol: TCP readinessProbe: failureThreshold: 3 initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 tcpSocket: port: mcrouter-port timeoutSeconds: 1 resources: limits: cpu: 256m memory: 512Mi requests: cpu: 100m memory: 128Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/mcrouter name: config dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 name: mcrouter-mcrouter name: config updateStrategy: type: OnDelete </code></pre> <p>and here is the memcached 
statefulset:</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: labels: app: mcrouter-memcached chart: memcached-1.2.1 heritage: Tiller release: mcrouter name: mcrouter-memcached spec: podManagementPolicy: OrderedReady replicas: 5 revisionHistoryLimit: 10 selector: matchLabels: app: mcrouter-memcached chart: memcached-1.2.1 heritage: Tiller release: mcrouter serviceName: mcrouter-memcached template: metadata: labels: app: mcrouter-memcached chart: memcached-1.2.1 heritage: Tiller release: mcrouter spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: mcrouter-memcached release: mcrouter topologyKey: kubernetes.io/hostname containers: - command: - memcached - -m 64 - -o - modern - -v image: memcached:1.4.36-alpine imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 tcpSocket: port: memcache timeoutSeconds: 5 name: mcrouter-memcached ports: - containerPort: 11211 name: memcache protocol: TCP readinessProbe: failureThreshold: 3 initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 tcpSocket: port: memcache timeoutSeconds: 1 resources: requests: cpu: 50m memory: 64Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 updateStrategy: type: OnDelete status: replicas: 0 </code></pre>
<p>That is a bit of a complex question and I am definitely not sure that it is exactly as I think, but... let's try to understand what is happening.</p> <p>You have an upgrade process and 6 nodes in the cluster. The system will upgrade them one by one, using drain to remove all workload from the node.</p> <p>The drain process itself respects your settings and number of replicas, and the <strong>desired state</strong> of the workload has a <strong>higher priority</strong> than the drain of the node itself. </p> <p>During the drain process, Kubernetes will try to schedule all your workload on resources where scheduling is available. Scheduling on a node which the system wants to drain is disabled; you can see it in its state - <code>Ready,SchedulingDisabled</code>.</p> <p>So, the Kubernetes scheduler is trying to find the right place for your workload on all available nodes. It will wait as long as it needs to place everything you describe in the cluster configuration.</p> <p>Now the most important thing. You set <code>replicas: 5</code> for your <code>mcrouter-memcached</code>. It cannot run more than one replica per node because of <code>podAntiAffinity</code>, and a node to run it on must have enough resources, which is calculated using the <code>resources:</code> block of the <code>StatefulSet</code>.</p> <p>So, I think your cluster just does not have enough resources to run a new replica of <code>mcrouter-memcached</code> on the remaining 5 nodes. As an example, on the last node where a replica of it is still not running, you do not have enough memory because of other workloads.</p> <p>I think if you set <code>replicas</code> for <code>mcrouter-memcached</code> to 4, it will solve the problem. Or you can try to use a bit more powerful instances for that workload, or add one more node to the cluster; that should also help.</p> <p>Hope I gave enough explanation of my logic; ask me if something is not clear to you. But first please try to solve the issue with the provided solution :)</p>
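<p>A quick way to check this theory before the next upgrade (names taken from the question, thresholds arbitrary) is to compare what each node has allocatable against what is already requested, and then lower the replica count so every replica can fit:</p> <pre><code># show requested vs. allocatable CPU/memory per node
kubectl describe nodes | grep -A 5 "Allocated resources"

# scale the memcached StatefulSet down one replica
kubectl scale statefulset mcrouter-memcached --replicas=4
</code></pre>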
<p>I have nodejs code running inside a pod. From inside the pod I want to find the zone of the node where this pod is running. What is the best way do do that? Do I need extra permissions?</p>
<p>I have not been able to find a library, but I post the code that does it below. The getContent function was slightly adapted from this <a href="https://www.tomas-dvorak.cz/posts/nodejs-request-without-dependencies/" rel="nofollow noreferrer">post</a>. This code should work inside a GKE pod or a GCE host.</p> <p>Use it as follows:</p> <pre><code>const gcp = require('./gcp.js')
gcp.zone().then(z =&gt; console.log('Zone is: ' + z))
</code></pre> <p>Module: gcp.js</p> <pre><code>const getContent = function(lib, options) {
  // return new pending promise
  return new Promise((resolve, reject) =&gt; {
    // select http or https module, depending on requested url
    const request = lib.get(options, (response) =&gt; {
      // handle http errors
      if (response.statusCode &lt; 200 || response.statusCode &gt; 299) {
        reject(new Error('Failed to load page, status code: ' + response.statusCode));
      }
      // temporary data holder
      const body = [];
      // on every content chunk, push it to the data array
      response.on('data', (chunk) =&gt; body.push(chunk));
      // we are done, resolve promise with those joined chunks
      response.on('end', () =&gt; resolve(body.join('')));
    });
    // handle connection errors of the request
    request.on('error', (err) =&gt; reject(err))
  })
};

exports.zone = () =&gt; {
  return getContent(
    require('http'),
    {
      hostname: 'metadata.google.internal',
      path: '/computeMetadata/v1/instance/zone',
      headers: {
        'Metadata-Flavor': 'Google'
      },
      method: 'GET'
    })
}
</code></pre>
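<p>If you just want to verify the value from inside the pod before wiring it into code, the same metadata endpoint can be queried with curl. No extra Kubernetes permissions are needed, since this talks to the GCE metadata server rather than the Kubernetes API server:</p> <pre><code># returns something like projects/123456789/zones/us-central1-b
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/zone"
</code></pre>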
<p>I saw some example where the Kubernetes cluster is installed with ingress controller and then the ingress class is added with annotations and host as below.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: testsvc.k8s.privatecloud.com http: </code></pre> <p>I am not sure which service is installed and which IP is configured with the DNS <strong>"k8s.privatecloud.com"</strong> so as to route requests? How the DNS routing <strong>"k8s.privatecloud.com"</strong> routes requests to Kubernetes cluster? How the ingress to kubernetes bridging works?</p> <p>Also, There could be many services configured with the hosts rule like,</p> <pre><code>testsvc.k8s.privatecloud.com testsvc1.k8s.privatecloud.com testsvc2.k8s.privatecloud.com </code></pre> <p>How the subdomain routing works here when we hit the service testsvc.k8s.privatecloud.com or testsvc1.k8s.privatecloud.com ...</p> <p>Thanks</p>
<p>The DNS for all the hostnames in your given example (e.g. <code>testsvc.k8s.privatecloud.com</code>) would point to the machine or load-balancer through which traffic will reach the Ingress controller's nginx, as is described in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">the kuberetes Ingress documentation</a></p> <p>Subdomain routing is traditionally done via "virtual-hosting", sometimes called "v-host-ing", and the nginx ingress uses the HTTP <code>Host:</code> header to know which backend service should receive that traffic. Some Ingress controllers are able to use SNI for that same trick over https.</p>
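<p>A simple way to see the virtual-hosting behaviour is to point a wildcard DNS record (e.g. <code>*.k8s.privatecloud.com</code>) at the machine or load-balancer in front of the ingress controller, and then test each host rule explicitly. The hostnames below are just the ones from the question, and the IP is a placeholder:</p> <pre><code># hit the ingress IP directly, but tell nginx which virtual host you want
curl -H "Host: testsvc.k8s.privatecloud.com"  http://&lt;ingress-or-lb-ip&gt;/
curl -H "Host: testsvc1.k8s.privatecloud.com" http://&lt;ingress-or-lb-ip&gt;/
</code></pre>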
<p>Posting this question on behalf of a customer:</p> <p>We are trying to integrate Kubernetes OIDC authentication with Azure AD. According to the documentation in order to use User groups we need to pass the following option to the Kubernetes API service: <code>--oidc-groups-claim user_roles</code> Api service uses <code>user_roles</code> to look up the group names of the groups the user is a member of in the JWT returned by Azure AD.</p> <p>However, when we decode the JWT returned by Azure AD, we can't find any field called <code>user_roles</code> in the returned JWT. The decoded JWT looks like this (redacted): </p> <pre><code>{ "aud": "spn:XXX", "iss": "https://sts.windows.net/XXX/ ", "iat": XXX, "nbf": XXX, "exp": XXX, "acr": "1", "aio": "XXX", "amr": [ "pwd", "mfa" ], "appid": "XXX", "appidacr": "0", "family_name": "Foo", "given_name": "Bar", "groups": [ "gid1", "gid2" ], "ipaddr": "XXX", "name": "Foo Bar", "oid": "XXX", "onprem_sid": "XXX", "scp": "user_impersonation", "sub": "XXX", "tid": "XXX", "unique_name": "XXX", "upn": "XXX", "uti": "XXX", "ver": "1.0" } </code></pre> <p>As you can see there is no user_role field present in the returned JWT. Is there anything we are missing ie. should we enable some settings in the Azure AD that will get the Azure return user_role populated with the group names the user is a member of? JWT we are hoping to get should look something like this (please note the user_role field): </p> <p><a href="https://github.com/kubernetes/kubernetes/issues/33290#issue-178672086" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/33290#issue-178672086</a> </p> <pre><code>{ "iss": "XXX", "aud": "XXX", "exp": XXX, "jti": "XXX", "iat": XXX, "nbf": XXX, "sub": "mmosley", "user_role": [ "admin", "users", "approvers" ], "email": "XXX" } </code></pre> <p>Any help or pointers would be greatly appreciated.</p>
<p>First, as far as I know, the AAD <code>id_token</code> only supports the <code>roles</code> claim, NOT <code>user_role</code>. It can be added to the <code>id_token</code> by adding the <code>appRoles</code> property to the AAD application manifest. A bit of configuration is also needed to match the audience of the tokens retrieved from Azure AD. </p> <p>Second, <code>--oidc-groups-claim</code> should not use the <code>user_role</code> claim. According to my understanding, it should be <code>groups</code>, which matches the <code>groups</code> claim in the <code>id_token</code>.</p> <p>Also, you can refer to <a href="https://pepperprovesapoint.com/2018/01/03/integrating-kubernetes-rbac-with-azure-ad/" rel="nofollow noreferrer"><strong>this blog</strong></a> and <a href="https://github.com/colemickens/azure-ad-k8s-oidc-example" rel="nofollow noreferrer"><strong>this sample</strong></a> to integrate Kubernetes RBAC with Azure AD.</p> <p>See more details about RBAC authentication for Kubernetes in <a href="https://kubernetes.io/docs/admin/authorization/#rbac-mode" rel="nofollow noreferrer">this document</a>.</p>
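<p>As a rough sketch (the role value and GUID are made up), the <code>appRoles</code> section in the AAD application manifest looks like this, and the kube-apiserver flags then have to point at the claim that actually appears in the token. The exact flags for your own apiserver setup may differ:</p> <pre><code>"appRoles": [
  {
    "allowedMemberTypes": [ "User" ],
    "description": "Cluster administrators",
    "displayName": "admin",
    "id": "&lt;some-unique-guid&gt;",
    "isEnabled": true,
    "value": "admin"
  }
]
</code></pre> <pre><code>--oidc-issuer-url=https://sts.windows.net/&lt;tenant-id&gt;/
--oidc-client-id=&lt;app-id&gt;
--oidc-username-claim=upn
--oidc-groups-claim=groups
</code></pre>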
<p>Working on getting development environment setup in Minikube and ran across an issue pulling images from the <code>https://quay.io/v2/</code> registry. </p> <p>I have ran the command:<br> <code>eval $(minikube docker-env)</code> . </p> <p>Which allows me to build my local <code>Dockerfile</code> in Minikube and it does a great job with that and deployments work great with local images.</p> <p>I then used helm to install <code>helm install stable/mssql-linux</code> . </p> <p>Which worked fine and its image points to this <code>microsoft/mssql-server-linux:2017-CU3</code> <a href="https://hub.docker.com/r/microsoft/mssql-server-linux/" rel="nofollow noreferrer">HERE</a></p> <p>I am also working with <a href="https://github.com/kubernetes/charts/tree/master/stable/redis-ha" rel="nofollow noreferrer">redis-ha</a> and installed like so:<br> <code>helm install stable/redis-ha --set="rbac.create=false"</code> </p> <p>The <code>rbac.create=false</code> seems to allow it to install in Minikube without causing all sorts of issues. However, despite creating deployments and services...the deployments ultimately fail because it cant pull the image.</p> <p>I get the following error: <code> Failed to pull image "quay.io/smile/redis:4.0.8r0": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></p> <p>The deployments point to this <a href="https://quay.io/repository/smile/redis?tab=info" rel="nofollow noreferrer">registry</a> image: <code>quay.io/smile/redis:4.0.8r0</code></p> <p>I have changed my DNS pretty much everywhere I could to point to <code>8.8.8.8</code> as it seems like it cant resolve the URL. It could also just be that I need to add the registry someplace? I kind of feel that its registry specific since Minikube docker daemon appears to be able to pull from <code>docker hub</code> but not <code>quay.io</code>. </p> <p>If I use a terminal that is not running <code>eval $(minikube docker-env)</code> and use the docker daemon on my host computer I can pull the <code>quay.io/smile/redis:4.0.8r0</code> image just fine...ssh into minikube and try and it cant pull.</p> <p><strong>Minikube version</strong> <code>minikube version: v0.25.0</code></p> <p><strong>Docker for Mac</strong> <code>Version 17.12.0-ce-mac55 (23011)</code></p>
<blockquote> <p>as it seems like it cant resolve the URL</p> </blockquote> <p>What lead you to believe that, when the error clearly states that it has a <code>Client.Timeout exceeded while awaiting headers</code>? It resolved the registry to an IP address, and even apparently opened a network connection to what it thinks is the registry's IP and port. But after that, the networking stack in minikube did not, in fact, allow the traffic out. Observe that the error wasn't DNS, and it wasn't connection refused, it was connection timed out. That is almost always a firewall-esque behavior.</p> <p>That smells very, very much like a corporate HTTP proxy, since your machine can interact with the Internet but minikube cannot.</p> <p>There are a ton of troubleshooting steps one could go through, however, if you are interested in a very quick win, you can, from your working host computer, run <code>docker save quay.io/smile/redis:4.0.8r0 | ssh-into-minikube "docker load"</code> and treat minikube as if it were airgapped.</p>
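<p>Concretely, the "airgapped" workaround can be done with minikube's own helper commands, and if a corporate proxy really is the culprit, minikube's docker daemon can be pointed at it when the VM is created. A sketch only; the proxy address is a placeholder and flag names may vary slightly between minikube versions:</p> <pre><code># copy the image from the host docker daemon into minikube's daemon
docker save quay.io/smile/redis:4.0.8r0 | \
  ssh -i $(minikube ssh-key) docker@$(minikube ip) docker load

# or recreate minikube with proxy settings for its docker daemon
minikube start \
  --docker-env HTTP_PROXY=http://proxy.example.com:3128 \
  --docker-env HTTPS_PROXY=http://proxy.example.com:3128
</code></pre>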
<p>I saw some example where the Kubernetes cluster is installed with ingress controller and then the ingress class is added with annotations and host as below.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: testsvc.k8s.privatecloud.com http: </code></pre> <p>I am not sure which service is installed and which IP is configured with the DNS <strong>"k8s.privatecloud.com"</strong> so as to route requests? How the DNS routing <strong>"k8s.privatecloud.com"</strong> routes requests to Kubernetes cluster? How the ingress to kubernetes bridging works?</p> <p>Also, There could be many services configured with the hosts rule like,</p> <pre><code>testsvc.k8s.privatecloud.com testsvc1.k8s.privatecloud.com testsvc2.k8s.privatecloud.com </code></pre> <p>How the subdomain routing works here when we hit the service testsvc.k8s.privatecloud.com or testsvc1.k8s.privatecloud.com ...</p> <p>Thanks</p>
<p>In addition to @Matthew L Daniel's answer: the kubernetes Ingress works as a proxy between the external network and your cluster. Its behavior is described by the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress object</a>. For example: </p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
</code></pre> <p>The example above shows how to route traffic between 2 backends, s1 and s2. The Ingress does not hold any information about the services except their name and port; every time it needs more details, they have to be requested from the api-server. </p>
<p>I'm doing a prototype where one service depends on an availability of other. Scenario:</p> <ul> <li>Service A is assumed to be already available in a local network. It was either deployed by K8S or manually (or even a managed one provided by AWS etc.).</li> <li>Service B depends on environment variable <code>SERVICE_A_IP</code> and won't start without it. It's treated as a black box and can't be modified.</li> </ul> <p>I want to pass Service A IP to Service B through K8S YAML configuration file. Perfect syntax for this occasion:</p> <pre><code>... env: - name: SERVICE_A_IP valueFrom: k8sDeployment: name: service_a key: deploymentIP ... </code></pre> <p>During the prototyping stage Service A is an another K8S deployment but it might not be so in a production environment. Thus I need to decouple from <code>SERVICE_A_SERVICE_IP</code> that will be available to Service B (given it's deployed after Service A). I'm not into DNS discovery as well as it would require container modification which is far from a perfect solution.</p> <p>If I would do it manually with <code>kubectl</code> (or with a shell script) it would be like the following:</p> <pre><code>$ kubectl run service_a --image=service_a:latest --port=8080 $ kubectl expose deployment service_a $ SERVICE_A_IP="$(kubectl describe service service_a | \ grep IP: | \ cut -f2 -d ':' | \ xargs)" $ kubectl run service_b --image=service_b:latest --port=8080 \ --env="SERVICE_A_IP=${SERVICE_A_IP}" </code></pre> <p>It works. Though I want to do the same using YAML configuration without injecting <code>SERVICE_A_IP</code> into configuration file with shell (basically modifying the file).</p> <p>Is there any way to do so? Please take the above setting as set in stone.</p> <p><strong>UPDATE</strong></p> <p>Not the best way though still:</p> <pre><code>$ kubectl create -f service_a.yml deployment "service_a" created service "service_a" created $ SERVICE_A_IP="$(kubectl describe service service_a | \ grep IP: | \ cut -f2 -d ':' | \ xargs)" $ kubectl create configmap service_a_meta \ --from-literal="SERVICE_A_IP=${SERVICE_A_IP}" </code></pre> <p>And then in <code>service_b.yml</code>:</p> <pre><code>... env: - name: SERVICE_A_IP valueFrom: configMapKeyRef: name: service_a_meta key: SERVICE_A_IP ... </code></pre> <p>That will work but still involves some shell and generally feels way too hax.</p>
<p>You can use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">handlers attached to container lifecycle events</a> to update your environment variables on start.</p> <p>Here is an example:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: appB
spec:
  containers:
  - name: appB
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "export SERVICE_A_IP=$(host &lt;SERVICE_A&gt;.&lt;SERVICE_A_NAMESPACE&gt;.svc.cluster.local)"]
</code></pre> <p>Kubernetes will run the <code>postStart</code> script each time a pod with your appB container starts, right inside the appB container, before execution of the main application.</p> <p>But, because of <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">that description</a>:</p> <blockquote> <p><strong>PostStart</strong></p> <p>This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.</p> </blockquote> <p>You need to add some <code>sleep</code> to your main app before the real start, just to be sure that the hook finishes before the application starts.</p>
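<p>As a side note on the shell part of the question: if you do end up scripting it, <code>kubectl</code> can return just the cluster IP directly via jsonpath, which is less fragile than the grep/cut pipeline (service and configmap names taken from the question):</p> <pre><code>SERVICE_A_IP="$(kubectl get service service_a -o jsonpath='{.spec.clusterIP}')"
kubectl create configmap service_a_meta --from-literal="SERVICE_A_IP=${SERVICE_A_IP}"
</code></pre>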
<p>I've created a Kubernetes deployment. However, there seem to be additional pods running - that I'm hoping to be able to delete the unnecessary ones. </p> <p>I see no need to run the dashboard container. I'd like to remove it to free up CPU resources.</p> <p>How can I disable this container from starting up? Preferably from the deployment config.</p> <p>Essentially the following pod:</p> <pre><code>kubectl get pods --all-namespaces | grep "dashboard" kube-system kubernetes-dashboard-490794276-sb6qs 1/1 Running 1 3d </code></pre> <p><strong>Additional information:</strong></p> <p>Output of <code>kubectl --namespace kube-system get deployment</code>:</p> <pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE heapster-v1.3.0 1 1 1 1 3d kube-dns 2 2 2 2 3d kube-dns-autoscaler 1 1 1 1 3d kubernetes-dashboard 1 1 1 1 11m l7-default-backend 1 1 1 1 3d </code></pre> <p>Output of <code>kubectl --namespace kube-system get rs</code>:</p> <pre><code>NAME DESIRED CURRENT READY AGE heapster-v1.3.0-191291410 1 1 1 3d heapster-v1.3.0-3272732411 0 0 0 3d heapster-v1.3.0-3742215525 0 0 0 3d kube-dns-1829567597 2 2 2 3d kube-dns-autoscaler-2501648610 1 1 1 3d kubernetes-dashboard-490794276 1 1 1 12m l7-default-backend-3574702981 1 1 1 3d </code></pre>
<h2>Update 2023-03</h2> <p>To have a clean removal you must delete a lot of objects. Over time, removing the dashboard has been a common problem, so you can now do this:</p> <pre><code>kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml </code></pre> <p>If you don't want to do this blindly and want to know what you are removing, just execute this:</p> <pre><code>kubectl get secret,sa,role,rolebinding,services,deployments --namespace=kube-system | grep dashboard </code></pre> <p>If the output is empty, just double check your dashboard's namespace with the command</p> <pre><code>kubectl get namespaces </code></pre> <p>The dashboard is stored in a separate namespace and, depending on your context, it is not always the same namespace. If you want to have a deeper look, start with <code>kubernetes-dashboard</code> or <code>kube-system</code> and always specify the namespace while calling <code>kubectl</code>.</p>
<p>I'm receiving the following error when the runner is trying to retrieve a resource:</p> <pre><code>checking failed - Expected to find variables: git </code></pre> <p>my resource looks similar to:</p> <pre><code>- name: resource-repo type: git source: uri: https://[url] branch: master tag_filter: '*' username: ((git.username)) password: ((git.password)) </code></pre> <p>my values.yaml for the helm chart includes:</p> <pre><code>rbac: create: false credentialManager: kubernetes: namespacePrefix: concourse </code></pre> <p>(regardless, the release name is concourse)</p> <p>under namespace <code>concourse-main</code> i have the the secret: </p> <pre><code>Details Name: git Namespace: concourse-main Type: Opaque Data password: bytes username: bytes </code></pre> <p>further information:</p> <ul> <li>k8s 1.8.6</li> <li>kops 1.8.1 </li> <li>weavenet </li> <li>Concourse 3.9.1</li> </ul>
<p>Based on information from the <a href="https://github.com/kubernetes/charts/tree/master/stable/concourse#kubernetes-secrets" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>By default, this chart will use Kubernetes Secrets as a credential manager. For a given Concourse team, a pipeline will look for secrets in a namespace named [namespacePrefix][teamName]. The namespace prefix is the release name hyphen by default, and can be overridden with the value <code>credentialManager.kubernetes.namespacePrefix</code>.</p> </blockquote> <p>In your configuration, the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">namespace</a> of your secret is <code>concourse-main</code>, but the default <code>namespacePrefix</code> is <code>concourse</code>. </p> <p>So, Concourse is trying to get your secret from the wrong namespace.</p> <p>You can fix it in one of two ways:</p> <ol> <li>Create the secret in the namespace <code>concourse</code>.</li> <li>Change <code>namespacePrefix</code> to <code>concourse-main</code>.</li> </ol>
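<p>For the first option, a minimal sketch of the commands (the git credentials are placeholders) would be:</p> <pre><code>kubectl create namespace concourse        # if it does not already exist
kubectl create secret generic git \
  --from-literal=username=&lt;git-user&gt; \
  --from-literal=password=&lt;git-password&gt; \
  --namespace=concourse
</code></pre>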
<p>We have deployed a K8S cluster using ACS engine in an Azure public cloud. We are able to create deployments and services but when we enter a pod using "<strong>kubectl exec -ti (pod name) (command)</strong>" we are receiving the below error,</p> <blockquote> <p><strong>Error from server: error dialing backend: dial tcp: lookup (node hostname) on 168.63.129.16:53: no such host</strong></p> </blockquote> <p>I looked all over the internet and performed all I could to fix this issue but no luck so far. The OS is Ubuntu and 168.63.129.16 is a public IP from Azure used for DNS.(refer below link)</p> <blockquote> <p><a href="https://blogs.msdn.microsoft.com/mast/2015/05/18/what-is-the-ip-address-168-63-129-16/" rel="noreferrer">https://blogs.msdn.microsoft.com/mast/2015/05/18/what-is-the-ip-address-168-63-129-16/</a></p> </blockquote> <p>I've already added host entries to <strong>/etc/hosts</strong> and entries into <strong>resolv.conf</strong> of the master/node server and nslookup resolves the same. I've also tested by adding <strong>--resolv-conf</strong> flag to the kubelet but still it fails. I'm hoping that someone from this community can help us fix this issue.</p>
<p>Verify that the node on which your pod is running can be resolved and reached from inside the API server container. If you added entries to <code>/etc/resolv.conf</code> on the master node, verify that they are visible in the API server container; if they are not, restarting the API server pod might help. </p>
<p>My service dispatches to multiple replicas of a deployment. Usually the load will of course be balanced round-robin style (K8s default).</p> <p>But what happens if one of the backend instances is temporarily offline, i.e. it closes its port (<code>80</code> in that case) for some time but the pod still running? Will the service automatically skip it for new requests and include it again when it is listening on its port again? Or will requests still continue to go to this pod and fail?</p> <p>I was unable to find the answer in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">docs</a>.</p>
<p>I think you're looking for <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">readiness probes</a>. By configuring a readiness probe for your pod, Kubernetes will periodically check your pod to figure out if it's able to serve traffic. You can configure how it determines if a pod is ready (ping a port, run a command, make an http request, etc) as well as how often it checks. If the readiness probe fails, the pod will be marked as Not Ready and traffic won't be routed to it until it's Ready again.</p>
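<p>A minimal sketch of what that looks like in the backend's pod spec, assuming the container serves plain HTTP on port 80 (the image, path and thresholds are placeholders to adjust for your app):</p> <pre><code>containers:
- name: backend
  image: my-backend:latest          # placeholder image
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /healthz                # any cheap endpoint that only answers when ready
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
    failureThreshold: 3
</code></pre>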
<p>I'm getting the following error</p> <pre><code>Error: found in requirements.yaml, but missing in charts/ directory: dependency-chart </code></pre> <p>when I try to install a chart. The chart has a dependency on <code>dependency-chart</code>. </p> <p><code>requirements.yaml</code>:</p> <pre><code>dependencies: - name: dependency-chart repository: "@some-repo" version: 0.1.0 </code></pre> <p>Commands performed:</p> <pre><code>rm -rf charts helm dep up helm upgrade --install chart-to-install . --debug </code></pre> <p>Output:</p> <pre><code>Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "some-repo" chart repository Update Complete. ⎈Happy Helming!⎈ Saving 1 charts Downloading dependency-chart from repo gs://some-repo Deleting outdated charts [debug] Created tunnel using local port: '65477' [debug] SERVER: "127.0.0.1:65477" Error: found in requirements.yaml, but missing in charts/ directory: dependency-chart </code></pre> <p><code>charts/</code> directory contains <code>dependency-chart-0.1.0.tgz</code></p> <p>I have many other charts which depends on <code>dependency-chart</code> and they work just fine. <code>helm lint</code> does not help:</p> <pre><code>==&gt; Linting . [ERROR] Chart.yaml: directory name (helm) and chart name (dependency-chart) must be the same [INFO] Chart.yaml: icon is recommended [WARNING] templates/: directory not found Error: 1 chart(s) linted, 1 chart(s) failed </code></pre> <p>The ERROR is reported by the linter in other charts where the helm install command works, so it's not really helping me.</p> <p>I've tried to point to the local chart in the <code>requirements.yaml</code> instead of from the aliased repository, same result.</p> <p>I've run out of things to try to debug the issue as well, any suggestion?</p>
<p>I've started debugging the Helm project locally and I stumbled upon this:</p> <pre><code>// If a .helmignore file matches, skip this file.
if rules.Ignore(n, fi) {
    return nil
}
</code></pre> <p>This reminded me that, for some reason, I've added <code>charts/</code> to the <code>.helmignore</code> file. And that is the reason why it was not seeing the downloaded dependencies :(</p>
<p>There is a Kubernetes cluster (IBM Cloud Private) with two workers. I have one deployment which creates two pods. How can I force the deployment to place its pods on two different workers? In this case, if I lose one ICP worker, I always have another one running the needed pod.</p>
<p>If you want pods to not schedule on the same node, the correct concept that you will want to use is inter-pod anti-affinity. <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature</a></p> <p>Observe:</p> <pre><code>spec:
  replicas: 2
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: kubernetes.io/hostname
</code></pre>
<p>We decided to move our apps from Service:LoadBalancer to Ingress, and I've chosen <strong>Nginx Ingress Controller</strong>, as I'm familiar with it, and because it's one of the most popular controllers in Kubernetes world</p> <p>Previously we had Nginx => Uwsgi combination that stands behind ELB. We compile nginx from source, as we need some 3rd party modules and Lua support.</p> <pre><code>ELB =&gt; Nginx Server =&gt; UWSGI ELB =&gt; Nginx Ingress (Load Balancer) =&gt; Nginx (Server) =&gt; UWSGI </code></pre> <p>My question is: is it okay to have 2 nginx in a proxy chain? </p> <p>I understand that one plays the role of LoadBalancer, and another is a server itself. But for me it comes with a pain, like if I change some options in Server nginx.conf, like increase the size of client body to 8MB, I should do the same on Nginx-Ingress. Also I'm wondering how to set timeouts: as there is a timeout between ingress=>server and server=>uwsgi, and in general how to tune the performance while having 3 proxies before request hits the app?</p> <p>Is it a good practice to remove Nginx Server, so Ingress Controller acts like a server and loadbalancer at the same time? What about 3rd party modules that we use?</p>
<ol> <li>There's nothing wrong in principle with having 2 or more nginx in a proxy chain, other than, as alluded to in the question and in the below, the extra complexity.</li> <li>It is a pain to maintain consistent configuration across multiple proxies, and in particular to have upstream configuration bleed into ingress. It can get very complicated when the same ingress serves multiple upstreams each with different traffic requirements. But this is often nevertheless unavoidable. </li> <li>Each hop will have its own distinct timeout and retry configuration, and managing them can be complicated, especially the downstream timeout when upstream has retries. One can wind up with very strange failure patterns. </li> <li>It is not a good idea to bundle an application with an ingress controller. Ingress is about offering a stable entry point into the cluster for out-of-cluster traffic, and distributing that traffic to multiple upstream applications in the cluster. If there is only one upstream application, one really does not need ingress, so if possible much better to just expose it as a Service, either using NodePort or LoadBalancer, depending on circumstance. </li> </ol>
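<p>On the concrete example of keeping <code>client_max_body_size</code> consistent: the nginx ingress controller exposes most of these knobs either as per-Ingress annotations or globally in its ConfigMap, so the generated nginx.conf never has to be edited by hand. A sketch only; the annotation prefix depends on the controller version you run:</p> <pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
</code></pre>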
<p>I have a backend nodeJS application running in a kubernetes cluster. Now I want to run two cron jobs to be scheduled every month. The cron jobs are in a JS file. How do I create a job that runs those JS files in the pod running that service every month using Kubernetes ? <br> This link gives a basic understanding of how it works but I am a little confused on how to run it for a specific service and on a specific pod<br><a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#writing-a-cron-job-spec" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#writing-a-cron-job-spec</a></p>
<p>Unfortunately, you cannot run the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> inside a container of your application.</p> <p>The only thing you can do is create a container image which contains your cron job code and the environment needed to run it, and schedule that pod to run with a CronJob.</p> <p>Here is an example of the configuration:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"   # same syntax as in system cron jobs
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: image_with_cronjob_code   # here is an image with your cronjob
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
</code></pre>
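<p>Since the question asks for a monthly schedule rather than every minute, the only thing that changes is the cron expression; you can then check the runs with the usual commands (names follow the example above, the job name is a placeholder):</p> <pre class="lang-yaml prettyprint-override"><code>  # run at 00:00 on the 1st day of every month
  schedule: "0 0 1 * *"
</code></pre> <pre><code>kubectl get cronjobs
kubectl get jobs --watch
kubectl logs job/&lt;job-name&gt;    # inspect the output of a finished run
</code></pre>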
<p>A <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> dispatches to multiple replicas of a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a> (default round-robin).</p> <p>The backend instances temporarily have to go offline, i.e. they will close their port 80, take some time, and then open the port again.</p> <p><code>deployment.yaml</code> is using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer"><code>readinessProbe</code></a> to regularly check which backend instances are ready to serve requests.</p> <p>But what happens in the following scenario?</p> <pre><code>1) readiness check backend A: OK 2) backend A goes offline 3) requests to service is forwarded to backend A 4) readiness check backend A: fail </code></pre> <p>Will the service send the request again but to another backend instance, or is it lost?</p>
<p>It depends on the type of Service. </p> <p>If the Service is a ClusterIP or NodePort, it's instantiated as iptables rules. Packets destined for the now offline pod will be undeliverable, causing the request to timeout. </p> <p>If the Service is a LoadBalancer, the implementation is an application, like nginx or an equivalent. It will watch for timeouts, and generally speaking- though dependent on configuration- will retry, allowing the request to make it to an online pod.</p>
<ol> <li>I use a local kubernetes bundled with docker on Mac OS.</li> <li>I've installed the <a href="https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes" rel="nofollow noreferrer">nginx-ingress-controller</a>.</li> <li>I managed to send external http <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="nofollow noreferrer">request via ingress</a> to my kubernetes managed containers (e.g. from my local browser). All request are sent via the nginx ports 80 or 443.</li> </ol> <p>The problem is, that I can only route http or https requests via my ngnix controller. How can I send non HTTP Requests (e.g. database or corba) via ingress to my containers?</p>
<p>This is not well supported via the ingress mechanism and is an <a href="https://github.com/kubernetes/kubernetes/issues/23291" rel="noreferrer">open issue</a>.<br> There is a work around for tcp or udp traffic using nginx-ingress which will map an exposed port to a kubernetes service using a configmap.<br> See <a href="https://github.com/kubernetes/ingress-nginx/blob/e8d81034b8f6b3308caac85280e4cf3d93baee1c/docs/user-guide/exposing-tcp-udp-services.md" rel="noreferrer">this doc</a>. </p> <p>Start the ingress controller with the <code>tcp-services-configmap</code> (and/or <code>udp-services-configmap</code>) argument. </p> <pre><code>args:
- "/nginx-ingress-controller"
- "--tcp-services-configmap=default/nginx-tcp-configmap"
- "--v=2"
</code></pre> <p>deploy configmap:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-tcp-configmap
data:
  9000: "default/example-service:8080"
</code></pre> <p>where <code>9000</code> is the exposed port and <code>8080</code> is the service port</p>
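<p>One extra step that is easy to miss: the port also has to be opened on whatever exposes the ingress controller itself (its Service, or a hostPort on the controller pods), otherwise traffic never reaches nginx. A sketch of the Service part, reusing port 9000 from the example above; the names and selector are placeholders that must match your own controller deployment:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx          # must match your controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: proxied-tcp-9000
    port: 9000
    targetPort: 9000
</code></pre>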
<p>I noticed every node in a cluster has an external IP assigned to it. That seems to be the default behavior of Google Kubernetes Engine.</p> <p>I thought the nodes in my cluster should be reachable from the local network only (through its virtual IPs), but I could even connect directly to a mongo server running on a pod from my home computer just by connecting to its hosting node (without using a LoadBalancer).</p> <p>I tried to make Container Engine not to assign external IPs to newly created nodes by changing the cluster instance template settings (changing property "External IP" from "Ephemeral" to "None"). But after I did that GCE was not able to start any pods (Got "Does not have minimum availability" error). The new instances did not even show in the list of nodes in my cluster. </p> <p>After switching back to the default instance template with external IP everything went fine again. So it seems for some reason Google Kubernetes Engine requires cluster nodes to be public.</p> <p>Could you explain why is that and whether there is a way to prevent GKE exposing cluster nodes to the Internet? Should I set up a firewall? What rules should I use (since nodes are dynamically created)?</p> <p>I think Google not allowing private nodes is kind of a security issue... Suppose someone discovers a security hole on a database management system. We'd feel much more comfortable to work on fixing that (applying patches, upgrading versions) if our database nodes are not exposed to the Internet.</p>
<p>GKE recently added a new feature allowing you to create <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">private clusters</a>, which are clusters where nodes do not have public IP addresses. </p>
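<p>At the time of writing this is a beta feature, so the exact flags may change; a sketch of creating such a cluster with the gcloud CLI looks roughly like this (cluster name and CIDR are placeholders):</p> <pre><code>gcloud beta container clusters create my-private-cluster \
  --private-cluster \
  --master-ipv4-cidr 172.16.0.16/28 \
  --enable-ip-alias \
  --create-subnetwork ""
</code></pre>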
<p>I'm new to Kubernetes and I'm having trouble with a Deployment that specifies the backend database for the system. For some reason, the Pod hosting the database container gets randomly recreated, and so the data get wiped out. I'm adding a config for the database deployment to use a separate <code>emptyDir</code> volume, but I'm not sure how persistent that is. Will it survive recreation of the Pod? Also, is there a way to check why the Pod seems to get randomly recreated? How do I force Kubernetes to recreate the Pod so I can check if <code>emptyDir</code> volume persists?</p>
<p>Kubernetes' Deployment abstraction is not a good fit for stateful programs such as a database. It's not entirely impossible to get that right, but it's not the right tool for the job.</p> <p>I suggest you look into <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, which, as the name suggests, is meant for stateful workloads.</p> <p>With either, to force a recreation of a pod, simply delete the pod hosting your database, and observe the difference between a pod created by a Deployment and one created by a StatefulSet controller.</p>
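<p>Also note that an <code>emptyDir</code> volume lives and dies with the pod, so it will not survive the recreation you are seeing. A minimal sketch of the StatefulSet alternative (image, mount path and sizes are placeholders): each replica claims its own PersistentVolume via <code>volumeClaimTemplates</code>, and that volume is re-attached when the pod is recreated:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:10                  # placeholder database image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
</code></pre>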
<p>I installed minikube on my windows laptop and everything was fine, but when I tried to run <code>kubectl get pod</code> or any other kubectl commands I am getting this message:</p> <pre><code>kubectl get pod error: You must be logged in to the server (Unauthorized) </code></pre> <p>I do not know what am I doing wrong even though I added the credentials to my configuration:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority: C:\Users\robert\.minikube\ca.crt server: https://192.168.99.100:8443 name: minikube contexts: - context: cluster: minikube user: minikube name: minikube current-context: minikube kind: Config preferences: {} users: - name: minikube user: as-user-extra: {} client-certificate: C:\Users\robert\.minikube\client.crt client-key: C:\Users\robert\.minikube\client.key </code></pre> <h2>minikube info</h2> <pre><code>minikube version minikube version: v0.25.1 </code></pre> <h2>minikube upgrade</h2> <pre><code>minikube version: v0.25.2 </code></pre> <h2>Kubernetes info</h2> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"} error: You must be logged in to the server (the server has asked for the client to provide credentials) </code></pre> <h2>minikube directories and files</h2> <pre><code> Directory: C:\Users\robert\.minikube Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 3/22/2018 1:40 PM addons d----- 3/22/2018 1:40 PM cache d----- 3/22/2018 1:40 PM certs d----- 3/22/2018 1:47 PM config d----- 3/22/2018 1:40 PM files d----- 3/22/2018 1:40 PM logs d----- 3/22/2018 9:42 PM machines d----- 3/22/2018 2:32 PM profiles -a---- 3/22/2018 10:56 PM 1298 apiserver.crt -a---- 3/22/2018 10:56 PM 1679 apiserver.key -a---- 3/22/2018 2:33 PM 1066 ca.crt -a---- 3/22/2018 2:33 PM 1675 ca.key -ar--- 3/22/2018 10:55 PM 1054 ca.pem -ar--- 3/22/2018 10:55 PM 1094 cert.pem -a---- 3/22/2018 10:56 PM 1103 client.crt -a---- 3/22/2018 10:56 PM 1675 client.key -ar--- 3/22/2018 10:55 PM 1679 key.pem -a---- 3/22/2018 7:29 PM 29 last_update_check -a---- 3/22/2018 2:33 PM 1074 proxy-client-ca.crt -a---- 3/22/2018 2:33 PM 1675 proxy-client-ca.key -a---- 3/22/2018 10:56 PM 1103 proxy-client.crt -a---- 3/22/2018 10:56 PM 1675 proxy-client.key </code></pre> <h2>minikube logs</h2> <pre><code>Mar 23 03:04:18 minikube localkube[2997]: I0323 03:04:18.212816 2997 ready.go:30] Performing healthcheck on https://localhost:8443/healthz Mar 23 03:04:18 minikube localkube[2997]: E0323 03:04:18.219072 2997 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: x509: certificate has expired or is not yet valid Mar 23 03:04:18 minikube localkube[2997]: http: TLS handshake error from 127.0.0.1:41884: remote error: tls: bad certificate Mar 23 03:04:19 minikube localkube[2997]: I0323 03:04:19.212786 2997 ready.go:30] Performing healthcheck on https://localhost:8443/healthz Mar 23 03:04:19 minikube localkube[2997]: E0323 03:04:19.219637 2997 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: x509: certificate has expired or is not yet valid Mar 23 03:04:19 minikube localkube[2997]: http: TLS handshake error from 127.0.0.1:41886: remote error: tls: bad certificate </code></pre> <h2>Certificate details</h2> <pre><code>openssl x509 -in apiserver.crt -text -noout Certificate: Data: Version: 3 (0x2) Serial Number: 2 (0x2) 
Signature Algorithm: sha256WithRSAEncryption Issuer: CN=minikubeCA Validity Not Before: Mar 23 18:05:14 2018 GMT Not After : Mar 23 18:05:14 2019 GMT Subject: O=system:masters, CN=minikube Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:fa:...fa:e7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Alternative Name: DNS:minikubeCA, DNS:kubernetes.default.svc.cluster.local, DNS:kubernetes.default.svc, DNS:kubernetes.default, DNS:kubernetes, DNS:localhost, IP Address:192.168.99.100, IP Address:10.96.0.1, IP Address:10.0.0.1 Signature Algorithm: sha256WithRSAEncryption 47:88:26:93:d2..... ab:b1:8a:43 </code></pre>
<p>First, as reported <a href="https://github.com/kubernetes/minikube/issues/2317#issuecomment-351497305" rel="nofollow noreferrer">here</a> or <a href="https://github.com/kubernetes/minikube/issues/2464#issuecomment-361143511" rel="nofollow noreferrer">here</a>, check if the issue persists in the latest version (0.25.2, which <em>just</em> got released a few hours ago)</p> <p>Second, check how your certificate is formed (as <a href="https://github.com/kubernetes/minikube/issues/2361#issuecomment-356208073" rel="nofollow noreferrer">in this issue</a>)</p> <pre><code>openssl x509 -in apiserver.crt -text -noout </code></pre> <p>And/or try to <a href="https://github.com/kubernetes/minikube/issues/288#issuecomment-231423876" rel="nofollow noreferrer">regenerate the token</a>.</p>
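<p>If upgrading doesn't help, the blunt but reliable fix is to throw away the VM and the cached certificates so minikube regenerates them on the next start (note that this deletes the local cluster state):</p> <pre><code>minikube stop
minikube delete
rm -rf ~/.minikube
minikube start
</code></pre>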
<p>We are using Docker and Kubernetes to containerize my .Net web application and scale up to 8-9 replicas. So we installed Docker (on my Win 10 machine) and used Linux containers.</p> <p>Orchestration is done by Kubernetes. I would like to get the name/id of the container in which my web application is running to serve the current user request.</p> <p>I could not find a way to get the id/name of the container from my .Net web application. Please share your thoughts on this.</p> <p>Regards</p>
<p>You can expose Pod fields as environment variables, the field <code>metadata.name</code> will contain the name of the Pod.</p> <p>Documentation and example: <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-container-fields-as-values-for-environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-container-fields-as-values-for-environment-variables</a></p> <p>Edit: You can also use <code>System.Environment.MachineName</code> as @vahdet commented earlier. The hostname of the container is the pod id.</p>
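<p>A sketch of the pod spec change (container name and image are placeholders); the value then shows up in .NET via <code>Environment.GetEnvironmentVariable("POD_NAME")</code>:</p> <pre><code>containers:
- name: webapp
  image: my-dotnet-app:latest       # placeholder image
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
</code></pre>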
<p>K8s has this feature to <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="nofollow noreferrer">encrypt secret data</a>, which requires modification of kube-apiserver config, how can i do this in GKE?</p>
<p>Short answer is, you can't.</p> <p>The Kubernetes Engine master is managed by Google, so you can't change its runtime parameters. Nonetheless, while the data may not be encrypted inside the etcd running on the master node, <a href="https://cloud.google.com/security/encryption-at-rest/default-encryption/#encryption_of_data_at_rest" rel="nofollow noreferrer">the contents of the master node itself are encrypted</a> as the link Will pointed to explains.</p>
<p>I have an Elasticsearch setup.</p> <p>I need to deploy the same in Kubernetes.</p> <p>Purpose: Automatically scale the ES server up to 3 instances, with all its data, if CPU/RAM reaches 90%, and scale it back down to one instance with all its data.</p> <p>Scaling should be possible using a <code>ReplicationController</code>.</p> <p>I need to know how to set up ES with all its current data in Kubernetes.</p> <p>Can anyone help me out on this?</p>
<p>You will want <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer"><code>StatefulSet</code></a>s (<a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">tutorial</a>) for part of that problem (they honor <code>kubectl scale sts --replicas=$foo</code>, just like an <code>rc</code>)</p> <p>Having members with predictable hostnames (as a <code>StatefulSet</code> will do) makes the <code>discovery.zen.ping.hostnames</code> a ton easier to configure -- even if there is only one ES node actually alive at the time (that is: you can specify <code>my-es-0.my-es.my-ns.svc.cluster.local,my-es-1.my-es.my-ns.svc.cluster.local,etc</code> in the <code>zen</code> list, and the first ES node will whine, but the 2nd and upward will benefit from the actual <em>discovery</em> part)</p> <blockquote> <p>scale it down to one instance with all its data</p> </blockquote> <p>Good luck with that; you'd have to re-balance the shards of every index with each resizing operation, since ES wants to distribute the load out across all members of its cluster -- the exact opposite of your consolidation objective.</p> <p>I'd bet you could make a custom Horizontal Pod Autoscaler implementation that waited until the rebalancing operations had completed before destroying the Pods, but I'm not aware right now of any out-of-the-box mechanism for doing that.</p>
<p>I am using the Kubernetes-client java client to create Deployments on a Kubernetes cluster. THis is the code</p> <pre><code>Deployment deployment = new DeploymentBuilder() .withNewMetadata() .withName("first-deployment") .endMetadata() .withNewSpec() .withReplicas(3) .withNewTemplate() .withNewMetadata() .addToLabels(namespaceID, "hello-world-example") .endMetadata() .withNewSpec() .addNewContainer() .withName("nginx-one") .withImage("nginx") .addNewPort() .withContainerPort(80) .endPort() .withResources(resourceRequirements) .endContainer() .endSpec() .endTemplate() .endSpec() .build(); deployment = client.extensions().deployments().inNamespace(namespace).create(deployment); </code></pre> <p>I add a3 min wait time and then test the status of the pod</p> <pre><code>PodList podList = client.pods().withLabel(namespaceID, "hello-world-example").list(); System.out.println("Number of pods " + podList.getItems().size()); for (Pod pod : podList.getItems()) { System.out.println("Name " + pod.getMetadata().getName() + " Status " + pod.getStatus().getPhase() + " Reason " + pod.getStatus().getReason() + " Containers " + pod.getSpec().getContainers().get(0).getResources().getLimits()); } </code></pre> <p>This returns the following sttaus</p> <pre><code>Name first-deployment-2418943216-9915m Status Pending Reason null Containers null Name first-deployment-2418943216-fnk21 Status Pending Reason null Containers null Name first-deployment-2418943216-zb5hr Status Pending Reason null Containers null </code></pre> <p>However from the commandline if I get <code>kubectl get pods --all-namespaces</code>. It returns the pod state as running . Am I using the right API? what did I miss?</p>
<p>Maybe a better way to check this is to have a loop with a sleep inside it, and keep checking the status until all pods are up and running. I did something similar to check whether all the required pods were up by checking their status. You might also want to consider adding liveness and readiness probes to the pods before you make such a check. There are additional details provided here.</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a></p>
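<p>A rough sketch of such a poll loop, reusing only the client calls already shown in the question (the timeout, interval and expected replica count are arbitrary, and exception handling is omitted):</p> <pre><code>boolean allRunning = false;
long deadline = System.currentTimeMillis() + 5 * 60 * 1000; // give up after 5 minutes

while (!allRunning &amp;&amp; System.currentTimeMillis() &lt; deadline) {
    // re-list the pods created by the deployment
    PodList podList = client.pods().withLabel(namespaceID, "hello-world-example").list();
    allRunning = podList.getItems().size() == 3
            &amp;&amp; podList.getItems().stream()
                   .allMatch(pod -&gt; "Running".equals(pod.getStatus().getPhase()));
    if (!allRunning) {
        Thread.sleep(5000); // wait a bit before polling again
    }
}
</code></pre>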
<p>normally I think it's a best practice for an incorrectly configured application to simply die on startup with a detailed error message describing the problem.</p> <p>For example, if an expected environment variable is missing, meaning the application can't run properly, instead of letting it run in a zombie state where it will never function, I advocate for failing loudly and killing the application with an error message: </p> <p><code>Critical Error: Environment variable [REDIS_HOST] not set.</code></p> <p>In kubernetes, this ends up in a constant <code>CrashLoop Backoff</code> loop. This isn't great as it's hard to get to that error message as the pod keeps getting restarted and the logs disappear.</p> <p>Any thoughts or suggestions on the proper way to handle this?</p> <p>thanks</p>
<p>You can customise a container's termination message by writing to <code>/dev/termination-log</code> by default. When your container terminates you can use <code>kubectl get pods &lt;podName&gt; -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"</code> to retrieve the message. More information about this can be found <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure#customizing-the-termination-message" rel="nofollow noreferrer">here</a>.</p> <p>You can also use <code>kubectl logs &lt;podName&gt; -c &lt;containerName&gt; --previous</code> to view the output of the previous instance of a particular container in a Pod - this may be more useful for you as you won't have to change your application to write error messages to <code>/dev/termination-log</code>.</p>
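<p>If you'd rather not change the application at all, there is also a pod-spec-only option: setting the termination message policy so that the last chunk of the container's log is used as the termination message when the container exits with an error. A sketch (the container name and image are placeholders; the field is available in recent Kubernetes versions):</p> <pre><code>containers:
- name: myapp
  image: myapp:latest                           # placeholder image
  terminationMessagePolicy: FallbackToLogsOnError
</code></pre> <p>After a crash, <code>kubectl describe pod &lt;podName&gt;</code> then shows that message under the container's last state, and <code>kubectl logs &lt;podName&gt; --previous</code> still works as described above.</p>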
<p>Background:</p> <p>I'm using Drone to test an application. Drone is deployed to Kubernetes, with with a <code>docker</code> (dind / docker-in-docker) container side-carred.</p> <p>After the test completes, I use drone again to build &amp; push several docker images of about ~40mb each to us.gcr.io</p> <p>When Drone creates the docker container to test my application, and the separate container to build my application and images, it creates a docker network to link the containers to build services, like a temporary test database (pretty standard in a CI pipeline).</p> <p>However, the combination of Kubernetes pod networking, and Docker-in-Docker results in the following when trying to push to gcr:</p> <pre><code>time="2018-03-19T03:31:12.037507241Z" level=error msg="Upload failed, retrying: net/http: HTTP/1.x transport connection broken: write tcp w.x.y.z:39662-&gt;z.y.x.w:443: write: broken pipe" time="2018-03-19T03:31:17.208009069Z" level=error msg="Upload failed, retrying: net/http: HTTP/1.x transport connection broken: write tcp w.x.y.z:39662-&gt;z.y.x.w:443: write: broken pipe" time="2018-03-19T03:31:17.216232506Z" level=error msg="Upload failed, retrying: net/http: HTTP/1.x transport connection broken: write tcp w.x.y.z:39662-&gt;z.y.x.w:443: write: broken pipe" time="2018-03-19T03:31:17.407608372Z" level=error msg="Upload failed, retrying: net/http: HTTP/1.x transport connection broken: write tcp w.x.y.z:39662-&gt;z.y.x.w:443: write: broken pipe" time="2018-03-19T03:31:17.410403394Z" level=error msg="Upload failed, retrying: net/http: HTTP/1.x transport connection broken: write tcp w.x.y.z:39662-&gt;z.y.x.w:443: write: broken pipe" time="2018-03-19T03:31:23.432621075Z" level=error msg="Upload failed, retrying: unexpected EOF" </code></pre> <p>However, when pushing to (what I assume is) an older registry version, then it works perfectly.</p> <p>When pushing to gcr while there is no docker container networking enabled, then it also works perfectly.</p> <p>Here are the docker commands being ran. Obviously the sensitive data has been omitted.</p> <pre><code>docker network create test-network &amp;&amp; \ docker run --network=test-network -d cockroachdb/cockroach:v1.1.2 -c /cockroach sql --insecure &amp;&amp; \ docker run --rm -it -e GKE_CLUSTER_NAME=my-cluster-1 -e GKE_CLUSTER_ZONE=us-east1-b -e GCP_PROJECT=my-gcp-project -e DOCKER_USE_GCP=true -v /var/run/docker.sock:/var/run/docker.sock --network=test-network us.gcr.io/my-project/runner /bin/sh -c 'mkdir -p src/git.example.com/project &amp;&amp; git clone https://user:[email protected]/project/project $GOPATH/src/git.example.com/project/project &amp;&amp; cd $GOPATH/src/git.example.com/project/project &amp;&amp; git checkout gcr &amp;&amp; jules -stage deploy_docker' </code></pre> <p>The <code>jules -stage deploy_docker</code> command runs a <code>go build</code>, <code>docker build</code>, and then <code>gcloud docker -- push...</code> on 8 different directories simultaneously.</p> <p>So, summary:</p> <p>Kubernetes pod + docker-in-docker + gcloud docker push results in a consistently interrupted connection.</p> <p>Is there something I could do with docker daemon or kubernetes network settings or something to mitigate this? At the very least I want to understand why this is happening.</p> <p>Thanks!</p> <hr> <p>Update:</p> <p>This doesn't even require Kubernetes to happen!</p> <p>I just tried it with a fresh GCE instance running Ubuntu and it happens there, too.</p>
<p>I contacted GCR support about this issue, as it seemed to only happen with GCR, and they informed me that the IAM account that was attempting to push to the registry was actually the default service account for GCE instances, and not the account that I provided to my Dockerfile.</p> <p>However, that did not explain the "Broken pipe" and "EOF" errors when I should have been getting <code>401 - Unauthorized</code>.</p> <p>I attempted the same push with the <code>google/cloud-sdk</code> docker image <a href="https://hub.docker.com/r/google/cloud-sdk/" rel="nofollow noreferrer">here</a> and it worked fine when I provided it the same key in a similar environment, so that told me that the way I installed gcloud on my docker image was bad.</p> <p>Here's what I had:</p> <pre><code>RUN wget https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.tar.gz RUN tar -xvf google-cloud-sdk.tar.gz RUN rm google-cloud-sdk.tar.gz RUN google-cloud-sdk/install.sh --usage-reporting=false \ --path-update=false \ --bash-completion=false ENV PATH="/go/google-cloud-sdk/bin:${PATH}" RUN gcloud components install kubectl RUN gcloud components install docker-credential-gcr </code></pre> <p>And here's what <code>google/cloud-sdk</code> had. Updating my Dockerfile to install it this way fixed my problem.</p> <pre><code># Install gcloud ENV CLOUD_SDK_VERSION 193.0.0 ARG INSTALL_COMPONENTS RUN easy_install -U pip &amp;&amp; \ pip install -U crcmod &amp;&amp; \ export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" &amp;&amp; \ echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" &gt; /etc/apt/sources.list.d/google-cloud-sdk.list &amp;&amp; \ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - &amp;&amp; \ apt-get update &amp;&amp; apt-get install -y google-cloud-sdk=${CLOUD_SDK_VERSION}-0 $INSTALL_COMPONENTS &amp;&amp; \ gcloud config set core/disable_usage_reporting true &amp;&amp; \ gcloud config set component_manager/disable_update_check true &amp;&amp; \ gcloud config set metrics/environment github_docker_image &amp;&amp; \ gcloud --version </code></pre> <p>I'm still clueless as to <em>why</em> this did it for me, so if anyone has any insight that'd be great.</p>
<p>I have a use case where I need a Docker container under kubernetes to access a hostPath. I'm using minikube, and the container is able to access a folder in the minikube VirtualBox VM. But I can't figure out how to get it to access a folder on the host itself.</p> <p>I do these commands on the host to create /opt/foo for sharing in the VM:</p> <pre><code>$ sudo touch /opt/foo/FOO $ ls /opt/foo FOO $ minikube mount -v 5 /opt/foo:/opt/foo Mounting /opt/foo into /opt/foo on the minikubeVM This daemon process needs to stay alive for the mount to still be accessible... ufs starting </code></pre> <p>In another window I look in the minikube VM</p> <pre><code>$ minikube ssh -- sudo ls -la /opt/foo total 0 drwxrwxr-x 2 root root 0 Jun 1 14:44 . drwxr-xr-x 5 root root 0 Jun 1 14:44 .. </code></pre> <p>Is there another step needed to make the files in that directory accessible?</p> <p>FYI - use case is a container process creating files that a host process is harvesting. Thus I do not want to use nfs or PersistentVolumes. Host is Centos7. minikube version: v0.19.0.</p>
<p>The problem is the firewall, in Ubuntu this worked for me:</p> <pre><code>sudo ufw allow in on virbr1 sudo ufw reload </code></pre> <p>But you need to figure out the correct interface name via <code>ifconfig</code>.</p> <p>In my case I did <code>minikube ip</code> to realize the interface was <code>virbr1</code></p> <p>I found the solution because in the past I had connectivity problems with docker which got resolved with <code>sudo ufw allow in on docker0</code></p>
<p>In this <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-work-by-ip" rel="nofollow noreferrer">doc</a> they use <code>u@node$</code> to define that the command is done from a node in a cluster. But how do you get to the node from kubectl?</p> <p>It is well described how to get to a pod <code>u@pod$</code></p>
<p>The apiserver can be used as a HTTP proxy (as described <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster#accessing-services-running-on-the-cluster" rel="nofollow noreferrer">here</a>) to hit endpoints on Nodes, but I assume you need an SSH session which won't help.</p> <p>On GKE you can SSH into your nodes using <code>gcloud</code> as follows:</p> <ul> <li><code>gcloud compute instances list</code></li> <li><code>gcloud compute ssh &lt;nodeName&gt;</code></li> </ul>
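<p>A short sketch of both approaches (node name and zone are placeholders; the proxy path assumes your RBAC rules allow <code>nodes/proxy</code> access and that the kubelet serves its usual <code>/healthz</code> endpoint):</p>
<pre><code># SSH into a GKE node
gcloud compute instances list
gcloud compute ssh &lt;nodeName&gt; --zone &lt;zone&gt;

# Or reach a node endpoint through the apiserver proxy instead of a shell
kubectl proxy &amp;
curl http://127.0.0.1:8001/api/v1/nodes/&lt;nodeName&gt;/proxy/healthz
</code></pre>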
<p>I'm a noob with Kubernetes. I'm trying to follow some recipes to get a small cluster up and running, but I'm having troubles ...</p> <p>I have a master and (4) nodes, all running Ubuntu 16.04</p> <p>installed docker on all nodes:</p> <pre><code>$ sudo apt-get update $ sudo apt-get install -y \ apt-transport-https \ ca-certificates \ curl \ software-properties-common $ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - $ sudo add-apt-repository \ "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \ $(lsb_release -cs) \ stable" $ sudo apt-get update &amp;&amp; apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}') $ sudo docker version Client: Version: 17.12.1-ce API version: 1.35 Go version: go1.9.4 Git commit: 7390fc6 Built: Tue Feb 27 22:17:40 2018 OS/Arch: linux/amd64 Server: Engine: Version: 17.12.1-ce API version: 1.35 (minimum version 1.12) Go version: go1.9.4 Git commit: 7390fc6 Built: Tue Feb 27 22:16:13 2018 OS/Arch: linux/amd64 Experimental: false </code></pre> <p>turned off swap on all nodes</p> <pre><code>$ sudo swapoff -a </code></pre> <p>commented out the swap mounts in /etc/fstab</p> <pre><code>$ sudo vi /etc/fstab $ mount -a </code></pre> <p>installed kubeadm &amp; kubectl on all nodes:</p> <pre><code>$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - $ sudo cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main EOF $ sudo apt-get update $ sudo apt-get install -y kubeadm kubectl $ kubeadm version kubeadm version: &amp;version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:21:35Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>downloaded and unpacked this into /usr/local/bin on master and all nodes: <a href="https://github.com/kubernetes-incubator/cri-tools/releases" rel="noreferrer">https://github.com/kubernetes-incubator/cri-tools/releases</a></p> <p>installed etcd 3.3.0 on all nodes:</p> <pre><code>$ sudo groupadd --system etcd $ sudo useradd --home-dir "/var/lib/etcd" \ --system \ --shell /bin/false \ -g etcd \ etcd $ sudo mkdir -p /etc/etcd $ sudo chown etcd:etcd /etc/etcd $ sudo mkdir -p /var/lib/etcd $ sudo chown etcd:etcd /var/lib/etcd $ sudo rm -rf /tmp/etcd &amp;&amp; mkdir -p /tmp/etcd $ sudo curl -L https://github.com/coreos/etcd/releases/download/v3.3.0/etcd- v3.3.0-linux-amd64.tar.gz -o /tmp/etcd-3.3.0-linux-amd64.tar.gz $ sudo tar xzvf /tmp/etcd-3.3.0-linux-amd64.tar.gz -C /tmp/etcd --strip-components=1 $ sudo cp /tmp/etcd/etcd /usr/bin/etcd $ sudo cp /tmp/etcd/etcdctl /usr/bin/etcdctl </code></pre> <p>noted the IP of the master:</p> <pre><code>$ sudo ifconfig -a eth0 eth0 Link encap:Ethernet HWaddr 1e:00:51:00:00:28 inet addr:172.20.43.30 Bcast:172.20.43.255 Mask:255.255.254.0 inet6 addr: fe80::27b5:3d06:94c9:9d0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3194023 errors:0 dropped:0 overruns:0 frame:0 TX packets:3306456 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:338523846 (338.5 MB) TX bytes:3682444019 (3.6 GB) </code></pre> <p>initialized kubernetes on the master:</p> <pre><code>$ sudo kubeadm init --pod-network-cidr=172.20.43.0/16 \ --apiserver-advertise-address=172.20.43.30 \ --ignore-preflight-errors=cri \ --kubernetes-version stable-1.9 [init] Using Kubernetes version: v1.9.4 
[init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1 [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [jenkins-kube- master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.20.43.30] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf" [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests". [init] This might take a minute or longer if the control plane images have to be pulled. [apiclient] All control plane components are healthy after 37.502640 seconds [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [markmaster] Will mark node jenkins-kube-master as master by adding a label and a taint [markmaster] Master jenkins-kube-master tainted and labelled with key/value: node-role.kubernetes.io/master="" [bootstraptoken] Using token: 6be4b1.9a8dacf89f71e53c [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: kube-dns [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. 
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join --token 6be4b1.9a8dacf89f71e53c 172.20.43.30:6443 --discovery-token-ca-cert-hash sha256:524d29b032d7bfd319b147ab03a936bd429805258425bccca749de71bcb1efaf </code></pre> <p>on the master node:</p> <pre><code>$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config $ sudo chown $(id -u):$(id -g) $HOME/.kube/config $ export KUBECONFIG=$HOME/.kube/config $ echo "export KUBECONFIG=$HOME/.kube/config" | tee -a ~/.bashrc </code></pre> <p>setup flannel for networking on master:</p> <pre><code>$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml clusterrole "flannel" created clusterrolebinding "flannel" created serviceaccount "flannel" created configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created $ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml clusterrole "flannel" configured clusterrolebinding "flannel" configured </code></pre> <p>join the nodes to the cluster running this on each:</p> <pre><code>$ sudo kubeadm join --token 6be4b1.9a8dacf89f71e53c 172.20.43.30:6443 \ --discovery-token-ca-cert-hash sha256:524d29b032d7bfd319b147ab03a936bd429805258425bccca749de71bcb1efaf \ --ignore-preflight-errors=cri </code></pre> <p>installed the dashboard on the master:</p> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml secret "kubernetes-dashboard-certs" created serviceaccount "kubernetes-dashboard" created role "kubernetes-dashboard-minimal" created rolebinding "kubernetes-dashboard-minimal" created deployment "kubernetes-dashboard" created service "kubernetes-dashboard" created </code></pre> <p>started the proxy:</p> <pre><code>$ kubectl proxy Starting to serve on 127.0.0.1:8001 </code></pre> <p>opened another ssh to master with -L 8001:127.0.0.1:8001 and opened a local browser window for <a href="http://localhost:8001/ui" rel="noreferrer">http://localhost:8001/ui</a></p> <p>it redirects to <a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a> and says:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "no endpoints available for service \"https:kubernetes- dashboard:\"", "reason": "ServiceUnavailable", "code": 503 } </code></pre> <p>checking the pods ...</p> <pre><code>$ sudo kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default guids-74487d79cf-zsj8q 1/1 Running 0 4h kube-system etcd-jenkins-kube-master 1/1 Running 1 21h kube-system kube-apiserver-jenkins-kube-master 1/1 Running 1 21h kube-system kube-controller-manager-jenkins-kube-master 1/1 Running 2 21h kube-system kube-dns-6f4fd4bdf-7pr9q 3/3 Running 0 1d kube-system kube-flannel-ds-pvk8m 1/1 Running 0 4h kube-system kube-flannel-ds-q4fsl 1/1 Running 0 4h kube-system kube-flannel-ds-qhxn6 1/1 Running 0 21h kube-system kube-flannel-ds-tkspz 1/1 Running 0 4h kube-system kube-flannel-ds-vgqsb 1/1 Running 0 4h kube-system kube-proxy-7np9b 1/1 Running 0 4h kube-system kube-proxy-9lx8h 1/1 Running 1 1d kube-system 
kube-proxy-f46d8 1/1 Running 0 4h kube-system kube-proxy-fdtx9 1/1 Running 0 4h kube-system kube-proxy-kmnjf 1/1 Running 0 4h kube-system kube-scheduler-jenkins-kube-master 1/1 Running 1 21h kube-system kubernetes-dashboard-5bd6f767c7-xf42n 0/1 CrashLoopBackOff 53 4h </code></pre> <p>checking the log ...</p> <pre><code>$ sudo kubectl logs kubernetes-dashboard-5bd6f767c7-xf42n --namespace=kube-system 2018/03/20 17:56:25 Starting overwatch 2018/03/20 17:56:25 Using in-cluster config to connect to apiserver 2018/03/20 17:56:25 Using service account token for csrf signing 2018/03/20 17:56:25 No request provided. Skipping authorization 2018/03/20 17:56:55 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ </code></pre> <p>I find this reference to 10.96.0.1 rather odd. I don't have that on my network anywhere that I'm aware of.</p> <p>I put the output of <code>sudo kubectl describe pod --namespace=kube-system</code> on pastebin: <a href="https://pastebin.com/cPppPkRw" rel="noreferrer">https://pastebin.com/cPppPkRw</a></p> <p>Thanks in advance for any pointers.</p> <p>-Steve Maring</p> <p>Orlando, FL</p>
<blockquote> <p><code>--service-cluster-ip-range=10.96.0.0/12</code></p> </blockquote> <p>Line 76 of your pastebin shows the Service CIDR to be that, which squares with how kubernetes thinks of the world: <code>.1</code> in the Service CIDR is always kubernetes (IIRC <code>kube-dns</code> gets a pretty low IP assignment, too, but I can't recall if it is always fixed like the kubernetes one is)</p> <p>You'll want to either change both the Service and Pod CIDRs to fit within the <a href="https://github.com/coreos/flannel/blob/c0648a8ff/Documentation/kube-flannel.yml#L76" rel="nofollow noreferrer">10.244.0.0/16 subnet</a> that flannel created as a side-effect of deploying that yaml, or change its <code>ConfigMap</code> (err, at your peril now that the network has already been pushed into <code>etcd</code>) to align with the Service and Pod CIDR specified to your <code>apiserver</code>.</p>
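<p>As a hedged sketch of the usual flannel-friendly setup (assuming you can rebuild the cluster rather than edit what is already stored in etcd), you would initialise with the Pod CIDR the flannel manifest expects and leave the default Service CIDR alone:</p>
<pre><code>sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=172.20.43.30 \
    --kubernetes-version stable-1.9
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>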
<p>I'm trying to use minikube and kitematic for testing kubernetes on my local machine. However, kubernetes fail to pull image in my local repository (<code>ImagePullBackOff</code>).</p> <p>I tried to solve it with this : <a href="https://stackoverflow.com/questions/38748717/can-not-pull-docker-image-from-private-repo-when-using-minikube">Can not pull docker image from private repo when using Minikube</a></p> <p>But I have no <code>/etc/init.d/docker</code>, I think it's because of kinematic ? (I am on OS X)</p> <p><strong>EDIT :</strong></p> <p>I installed <a href="https://github.com/docker/docker-registry" rel="noreferrer">https://github.com/docker/docker-registry</a>, and</p> <pre><code>docker tag local-image-build localhost:5000/local-image-build docker push localhost:5000/local-image-build </code></pre> <p>My kubernetes yaml contains :</p> <pre><code>spec: containers: - name: backend-nginx image: localhost:5000/local-image-build:latest imagePullPolicy: Always </code></pre> <p>But it's still not working... Logs :</p> <pre><code>Error syncing pod, skipping: failed to "StartContainer" for "backend-nginx" with ErrImagePull: "Error while pulling image: Get http://127.0.0.1:5000/v1/repositories/local-image-build/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused </code></pre> <p><strong>EDIT 2 :</strong></p> <p>I don't know if I'm on the good path, but I find this :</p> <p><a href="http://kubernetes.io/docs/user-guide/images/" rel="noreferrer">http://kubernetes.io/docs/user-guide/images/</a></p> <p>But I don't know what is my DOCKER_USER...</p> <pre><code>kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL </code></pre> <p><strong>EDIT 3</strong></p> <p>now I got on my pod :</p> <pre><code>Failed to pull image "local-image-build:latest": Error: image library/local-image-build not found Error syncing pod, skipping: failed to "StartContainer" for "backend-nginx" with ErrImagePull: "Error: image library/local-image-build not found" </code></pre> <p>Help me I'm going crazy.</p> <p><strong>EDIT 4</strong></p> <pre><code>Error syncing pod, skipping: failed to "StartContainer" for "backend-nginx" with ErrImagePull: "Error response from daemon: Get https://192.168.99.101:5000/v1/_ping: tls: oversized record received with length 20527" </code></pre> <p>I added :</p> <pre><code>EXTRA_ARGS=' --label provider=virtualbox --insecure-registry=192.168.99.101:5000 </code></pre> <p>to my docker config, but it's still don't work, the same message....</p> <p>By the way, I changed my yaml :</p> <pre><code> spec: containers: - name: backend-nginx image: 192.168.99.101:5000/local-image-build:latest imagePullPolicy: Always </code></pre> <p>And I run my registry like that :</p> <pre><code>docker run -d -p 5000:5000 --restart=always --name myregistry registry:2 </code></pre>
<p>Use the minikube Docker daemon instead of your local Docker daemon.</p> <p><a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/#create-a-docker-container-image" rel="noreferrer">https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/#create-a-docker-container-image</a></p> <h2>Point docker at minikube</h2> <p><code>eval $(minikube docker-env)</code></p> <h2>Build the image against minikube's Docker daemon</h2> <p><code>docker build -t hello-node:v1 .</code></p> <h2>Set your deployment's imagePullPolicy to IfNotPresent</h2> <p>The Kubernetes default for <code>:latest</code> images is "Always"; change it to "IfNotPresent" so the locally built image is used.</p> <p><code>imagePullPolicy: IfNotPresent</code></p> <p><a href="https://stackoverflow.com/questions/40144138/pull-a-local-image-to-run-a-pod-in-kubernetes">Related Issue</a></p>
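<p>A minimal sketch (deployment and image names are illustrative) of a spec consuming an image that was built directly against minikube's Docker daemon:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: hello-node:v1          # built via the minikube Docker daemon, no registry involved
        imagePullPolicy: IfNotPresent
</code></pre>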
<p>I have an app deployment called 'backend-app' running in pods that are on several different nodes. I also have a service that exposes the 'backend-app' to be accessed by other cluster internal pods as my 'frontend-app' pods.</p> <p>If I use DNS to connect to the 'backend-app' from my different app deployment called 'frontend-app' will the requests be load balanced to each 'backend-app' pod on each node?</p> <p>It sounds like a NodePort service will only connect to one node and not load balance my requests to others.</p>
<p>For each Service with <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer"><code>type: NodePort</code></a> a port is opened on all nodes (the same port on each). The port is open whether a pod of that service is running on a node or not. The load balancing is done among all pods of all nodes with no preference to a pod that happens to run on the same node to which you connected on the node port (if there is one there at all).</p>
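<p>For illustration (names and ports are placeholders), a NodePort Service fronting the backend pods could look like this; connecting to the node port on any node still load balances across all backend pods:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend-app
spec:
  type: NodePort
  selector:
    app: backend-app
  ports:
  - port: 80          # ClusterIP port used by in-cluster clients such as frontend-app
    targetPort: 8080  # container port
    nodePort: 30080   # optional; allocated from the node port range if omitted
</code></pre>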
<p>Recently we've experienced issues with both Non-Production and Production clusters where the nodes encountered 'System OOM encountered' issue.</p> <p>The nodes within the Non-Production cluster don't seem to be sharing the pods. It seems like a given node is running all the pods and putting a load on the system.</p> <p>Also, the Pods are stuck in this status: 'Waiting: ContainerCreating'.</p> <p>Any help/guidance with the above issues would be greatly appreciated. We are building more and more services in this cluster and want to make sure there's no instability and/or environment issues and place proper checks/configuration in place before we go live.</p>
<p>"I would recommend you manage container compute resources properly within your Kubernetes cluster. When creating a Pod, you can optionally specify how much CPU and memory (RAM) each Container needs to avoid OOM situations.</p> <p>When Containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on. And when Containers have their limits specified, contention for resources on a node can be handled in a specified manner. CPU specifications are in units of cores, and memory is specified in units of bytes.</p> <p>An event is produced each time the scheduler fails, use the command below to see the status of events:</p> <pre><code>$ kubectl describe pod &lt;pod-name&gt;| grep Events </code></pre> <p>Also, read the official Kubernetes guide on “Configure Out Of Resource Handling”. Always make sure to:</p> <p>reserve <code>10-20%</code> of memory capacity for system daemons like kubelet and OS kernel identify pods which can be evicted at <code>90-95%</code> memory utilization to reduce thrashing and incidence of system OOM.</p> <p>To facilitate this kind of scenario, the kubelet would be launched with options like below:</p> <pre><code>--eviction-hard=memory.available&lt;xMi --system-reserved=memory=yGi </code></pre> <p>Replacing x and y with actual memory values.</p> <p>Having <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer">Heapster container monitoring</a> in place should be helpful for visualization".</p> <p>Read more reading on <a href="https://www.aquasec.com/wiki/display/containers/Kubernetes" rel="nofollow noreferrer">Kubernetes and Docker Administration</a></p>
<p>I'm a newbie here, and i am in the process of preparing my development and production environment for a webapp Microservices architecture. I have injected environment variables through my deployment files in order to control the environment by which the app will be running. For example a Microservice running in development mode will work with development databases. So my question is, what do you think is the best practice to control multiple environment deployments. Should i have multiple deployment files, one for each environment for the same Microservice? Or is there a better practice? Thanks</p>
<p>Typically you want applications running in the <code>dev</code> environment not to interfere with applications running in the <code>production</code> environment. Using Kubernetes, you can get this kind of isolation <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">using different namespaces</a> for different environments. That way, objects in the <code>production</code> namespace are different from objects in the <code>dev</code> namespace. Another, more expensive approach would be to use different k8s clusters for different environments.</p> <p>Having this setup, you would deploy your application in the namespace for the specific environment you want to deploy to, creating the <code>Deployment</code> object in that namespace. This <code>Deployment</code> would make use of a <code>ConfigMap</code> object containing your application environment variables. The variables inside the <code>ConfigMap</code> will be different in each namespace/environment. That way, your <code>Deployment</code> object is the same across different environments, which lets you be more confident that what you are testing on <code>dev</code> is the same as what you will use on <code>production</code>. The <code>Deployment</code> object is the same, but it uses different variables to run.</p> <p>Using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">the example Pod from the documentation</a>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: special.how
  restartPolicy: Never
</code></pre> <p>Saving that content into <code>pod.yaml</code>, we can create it in our namespaces with <code>kubectl -n my-namespace-development create -f pod.yaml</code> and <code>kubectl -n my-namespace-production create -f pod.yaml</code>.</p> <p>This Pod would use a ConfigMap called <code>special-config</code>. This ConfigMap can be created with <code>kubectl -n my-namespace-production create configmap special-config --from-literal=special.how=very</code> in production, and <code>kubectl -n my-namespace-development create configmap special-config --from-literal=special.how=very-dev</code> in development.</p> <p>As you can see, the ConfigMap content is different in each namespace. So the Pod created in the namespace called <code>my-namespace-production</code> will use the value <code>very</code>, but the Pod created in the namespace <code>my-namespace-development</code> will use the value <code>very-dev</code>.</p>
<p>I am working on an application that is hosted on Kubernetes on google cloud, it uses Ingress to manage the routing to each service, then inside each service there's some extra routing using golang.</p> <p><strong>My problem is:</strong> when I try sending an API request to <code>/api/auth/healthz</code> I am expecting to reach the <code>statusHandler()</code> function, instead I always hit the <code>requestHandler()</code> function, even though I have an http.HandleFunc in there: <code>http.HandleFunc("/healthz", statusHandler)</code></p> <p>Here's my code in go</p> <pre><code>func main() { flag.Parse() // initialize the "database" err := store.Initialise() if err != nil { glog.Error(err) glog.Error("Store failed to initialise") } defer store.Uninitialise() // Initialize the default TTL for JWT access tokens to 1 day tokenTTL = 24 * time.Hour glog.Infof("JWT access tokens time-to-live set to: %s", tokenTTL.String()) http.HandleFunc("/healthz", statusHandler) http.HandleFunc("/", requestHandler) glog.Infof("Authentication service started on port 8080...\n") http.ListenAndServe(":8080", nil) } func statusHandler(w http.ResponseWriter, r *http.Request) { glog.Infof("Reached the status handler") statusObj := serviceStatus{ GitSHA: os.Getenv("GIT_SHA"), BuildTimestamp: os.Getenv("BUILD_TIMESTAMP"), DeployEnv: os.Getenv("DEPLOY_ENV"), } statusJSON, _ := json.Marshal(statusObj) w.Header().Set("Content-Type", "application/json") fmt.Fprintf(w, "%s", statusJSON) } func requestHandler(w http.ResponseWriter, r *http.Request) { glog.Infof("Reached the request handler") ... } </code></pre> <p>And here's my Ingress code:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: # https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-staging nginx.ingress.kubernetes.io/enable-cors: "false" nginx.ingress.kubernetes.io/cors-enable-origin: "*" nginx.ingress.kubernetes.io/auth-type: "basic" nginx.ingress.kubernetes.io/auth-secret: "ci-ingress-auth" nginx.ingress.kubernetes.io/proxy-body-size: "4g" name: ci-api-ingress namespace: ci spec: rules: - host: ci.omitted.io http: paths: - backend: serviceName: auth-service &lt;&lt; This is the service I am working on servicePort: 8080 path: /api/auth - backend: serviceName: data-import-service servicePort: 8080 path: /api/data-import - backend: serviceName: log-service servicePort: 8080 path: /api/log - backend: serviceName: project-service servicePort: 8080 path: /api/project - backend: serviceName: orchestration-service servicePort: 8080 path: /api/orchestration - backend: serviceName: public servicePort: 80 path: / </code></pre> <hr> <p>Based on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">Kubernetes' documentation</a> that should work as what they have in there is something similar to my implementation:</p> <pre><code>http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) { duration := time.Now().Sub(started) if duration.Seconds() &gt; 10 { w.WriteHeader(500) w.Write([]byte(fmt.Sprintf("error: %v", duration.Seconds()))) } else { w.WriteHeader(200) w.Write([]byte("ok")) } }) </code></pre>
<p><strike>based on your ingress rules the path for requests should be: <code>/api/auth/healthz</code> not <code>/api/auth-service/healthz</code></strike></p> <p>The ingress path <code>/api/auth/</code> is kept in the request uri passed to the application server. </p> <p>Adding <code>rewrite-target</code> to the ingress annotations will ensure the path is passed to your underlying server as you'd expect e.g. as <code>/&lt;path&gt;</code> rather than <code>/api/auth/&lt;path&gt;</code>. Inspecting the application server logs should show this is the case.</p> <p><code>ingress.kubernetes.io/rewrite-target: /</code> </p>
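<p>For illustration, the annotation added to the Ingress from the question could look like the snippet below; with older ingress-nginx releases this rewrites <code>/api/auth/healthz</code> to <code>/healthz</code> before it reaches the Go server (newer releases expect a capture-group based rewrite instead):</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>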
<p>I have a Spring Boot app running with Spring Actuator enabled. I am using the Spring Actuator health endpoint to serve as the readiness and liveliness checks. All works fine with a single replica. When I scale out to 2 replicas both pods crash. They both fail readiness checks and end up in an endless destroy/re-create loop. If I scale them back in to 1 replica the cluster recovers and the Spring Boot app becomes available. Any ideas what might be causing this issue?</p> <p>Here is the deployment config (the context root of the Spring Boot app is /dept):</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: gl-dept-deployment labels: app: gl-dept spec: replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 selector: matchLabels: app: gl-dept template: metadata: labels: app: gl-dept spec: containers: - name: gl-dept image: zmad5306/gl-dept:latest imagePullPolicy: Always ports: - containerPort: 8080 livenessProbe: httpGet: path: /dept/actuator/health port: 8080 initialDelaySeconds: 15 periodSeconds: 10 timeoutSeconds: 10 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /dept/actuator/health port: 8080 initialDelaySeconds: 15 periodSeconds: 10 timeoutSeconds: 10 successThreshold: 1 failureThreshold: 5 </code></pre>
<blockquote> <p>The curl command hangs. It appears the entire minikube server hangs, dashboard quits responding</p> </blockquote> <p>So in that case, I would guess the VM backing <code>minikube</code> is sized too small to handle all the items that are deployed inside it. I haven't played with minikube in order to know how much it carries over from its libmachine underpinnings, but in the case of <code>docker-machine</code>, one can provide <code>--virtualbox-memory=4096</code> (or set an environment variable <code>env VIRTUALBOX_MEMORY_SIZE=4096 docker-machine ...</code>). And, of course, one should use the memory settings that correspond to the driver in use by minikube (so, HyperKit, xhyve, HyperV, whatever).</p>
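<p>For minikube itself, a hedged sketch of resizing the VM (values are illustrative; an existing VM usually has to be deleted before new sizes take effect):</p>
<pre><code>minikube delete
minikube start --memory 4096 --cpus 2
</code></pre>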
<p>I have my kubernetes cluster setup on AWS where I am trying to monitor several pods, using cAdvisor + Prometheus + Alert manager. What I want to do it launch an email alert (with service/container name) if a container/pod goes down or stuck in Error or CarshLoopBackOff state or stcuk in anyother state apart from running.</p>
<p>Prometheus collects <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md" rel="noreferrer">a wide range of metrics</a>. As an example, you can use the metric <code>kube_pod_container_status_restarts_total</code> for monitoring restarts, which will reflect your problem.</p> <p>It carries labels which you can use in the alert:</p> <ul> <li>container=<code>container-name</code></li> <li>namespace=<code>pod-namespace</code></li> <li>pod=<code>pod-name</code></li> </ul> <p>So, everything you need is to configure your <code>alertmanager.yaml</code> <a href="https://github.com/prometheus/alertmanager/blob/master/doc/examples/simple.yml" rel="noreferrer">config</a> with the correct SMTP settings and a receiver, like this:</p> <pre><code>global:
  # The smarthost and SMTP sender used for mail notifications.
  smtp_smarthost: 'localhost:25'
  smtp_from: '[email protected]'
  smtp_auth_username: 'alertmanager'
  smtp_auth_password: 'password'

# Only one default receiver
route:
  receiver: team-X-mails

receivers:
- name: 'team-X-mails'
  email_configs:
  - to: '[email protected]'
</code></pre> <p>And then add an alerting rule to the rule files loaded by Prometheus (not by Alertmanager):</p> <pre><code># Example group with one alert
groups:
- name: example-alert
  rules:
  # Alert about restarts
  - alert: RestartAlerts
    expr: kube_pod_container_status_restarts_total &gt; 5
    for: 10m
    annotations:
      summary: "More than 5 restarts in pod {{ $labels.pod }}"
      description: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has restarted {{ $value }} times"
</code></pre>
<p>I have a requirement where i push bunch of key value pairs to a text/json file. Post that, i want to import the key value data into a configMap and consume this configMap within a POD using kubernetes-client API's.</p> <p>Any pointers on how to get this done would be great.</p> <p>TIA</p>
<p>You can do it in two ways.</p> <h3>Create a ConfigMap from a file as is.</h3> <p>In this case you will get a ConfigMap with the filename as the key and the file data as the value.</p> <p>For example, you have a file <code>your-file.json</code> with content <code>{key1: value1, key2: value2, keyN: valueN}</code>, and a file <code>your-file.txt</code> with content:</p> <pre><code>key1: value1
key2: value2
keyN: valueN
</code></pre> <pre><code>kubectl create configmap name-of-your-configmap --from-file=your-file.json
kubectl create configmap name-of-your-configmap-2 --from-file=your-file.txt
</code></pre> <p>As a result:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap
data:
  your-file.json: |
    {key1: value1, key2: value2, keyN: valueN}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap-2
data:
  your-file.txt: |
    key1: value1
    key2: value2
    keyN: valueN
</code></pre> <p>After this you can mount any of these ConfigMaps into a Pod, for example let's mount <code>your-file.json</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh","-c","cat /etc/config/keys" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: name-of-your-configmap
        items:
        - key: your-file.json
          path: keys
  restartPolicy: Never
</code></pre> <p>Now you can read the data from <code>/etc/config/keys</code> inside the Pod (the <code>items</code> mapping above exposes the <code>your-file.json</code> key under the path <code>keys</code>). Remember that the data is read-only.</p> <h2>Create a ConfigMap from a file with environment variables.</h2> <p>You can use special syntax to define pairs of <code>key: value</code> in a file. These syntax rules apply:</p> <ul> <li>Each line in the file has to be in VAR=VAL format.</li> <li>Lines beginning with # (i.e. comments) are ignored.</li> <li>Blank lines are ignored.</li> <li>There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).</li> </ul> <p>You have a file <code>your-env-file.txt</code> with content:</p> <pre><code>key1=value1
key2=value2
keyN=valueN
</code></pre> <pre><code>kubectl create configmap name-of-your-configmap-3 --from-env-file=your-env-file.txt
</code></pre> <p>As a result:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: name-of-your-configmap-3
data:
  key1: value1
  key2: value2
  keyN: valueN
</code></pre> <p>Now you can use the ConfigMap data as Pod environment variables:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-2
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: name-of-your-configmap-3
              key: key1
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: name-of-your-configmap-3
              key: key2
        - name: SOME_VAR
          valueFrom:
            configMapKeyRef:
              name: name-of-your-configmap-3
              key: keyN
  restartPolicy: Never
</code></pre> <p>Now you can use these variables inside the Pod.</p> <p>For more information check the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noreferrer">documentation</a>.</p>
<p>Is there a way to download the container image from kuberentes pod? Let say we have multiple containers running in a Pod and we want to download the container so we can do some stuff offline in that container. The reason we want to download from Kubernetes pod directly since most of the environment settings are already set and the container should able to start with a single docker command.</p> <p>It is like asking docker compose file from the kubernetes pod environment.</p>
<p>You cannot do it by Kubernetes itself, by any special command, etc. Here is a <a href="https://github.com/kubernetes/kubernetes/issues/14561" rel="noreferrer">discussion</a> about it.</p> <p>But you still can use <code>docker commit</code> <a href="https://docs.docker.com/engine/reference/commandline/commit/" rel="noreferrer">command</a> right on the node and save a state of your container as a new image like that:</p> <ol> <li>Get container's ID, and node where it is running in detailed pod information: <code>kubectl describe $POD</code>. You need <code>Container ID</code> and <code>Node</code> values.</li> <li>Connect to a node which you got on a previous step and call: <code>docker commit $CONTAINER_ID saved-image:1</code>.</li> </ol>
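<p>A rough, hypothetical sketch of the whole round trip (names are placeholders; the docker commands run on the node itself, not through kubectl):</p>
<pre><code># 1. Find the node and the container ID
kubectl describe pod $POD | grep -E 'Node:|Container ID:'

# 2. On that node, snapshot the running container and export it
docker commit &lt;container-id&gt; saved-image:1
docker save saved-image:1 -o saved-image.tar

# 3. Load and run it elsewhere for offline work
# (note: environment variables injected by Kubernetes at runtime are not part of the committed image)
docker load -i saved-image.tar
docker run -it saved-image:1 sh
</code></pre>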
<p><strong>The environment:</strong> I have a kubernetes cluster set up with namespaces for "dev", "sit" and "prod". In each of these namespaces i have multiple services of type:LoadBalancer which target a specific deployment of a dockerised application (i have multiple applications) so i can access each of these by just using the exposed ip address of the service of whichever namespace i want. Example service looks like this an is very simple:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: application1 spec: ports: - port: 80 targetPort: 3000 protocol: TCP name: http type: LoadBalancer selector: app: application1 </code></pre> <p><strong>The problem:</strong> I now want to be able to support multiple versions of all applications (ip:/v1/<em>, ip:/v2/</em> etc) as to allow the users to migrate to the new version when they are ready and i've been trying to implement path-based routing following this <a href="https://anthonychu.ca/post/api-versioning-kubernetes-nginx-ingress/" rel="nofollow noreferrer">guide</a>. I have managed to restructure my architecture so that i have ReplicationControllers and an ingress which looks at the rules of the path to route to the correct service. </p> <p>This seems to work if i'd only have one exposed service and a single namespace because i only have DNS host names for production environment and want to use the individual ip address of a service for other environments and i can't figure out how to specify the ingress rules for a service which doesn't have a hostname.</p> <p>I could just have a loadbalancer for every environment and use path based routing to route to each different services for dev and sit which is not ideal because to access any service we'd have to now use something like this ip/application1 and ip/application2 instead of directly using the service ip address of each application. But my biggest problem is that when i followed the guide and created the ingress, replicationController and a service in my SIT namespace it started affecting the loadbalancer services in my other two environments (as i understand the kubernetes would sometimes try to use the nginx controller from SIT environment on my DEV services and therefore would fail, other times it would use the GCE default configuration and would work).</p> <p>I tried adding the arg "- --watch-namespace=sit" to limit the scope of the ingress controller to only affect sit but it does not seem to work.</p>
<blockquote> <p>I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.)</p> </blockquote> <p>That is exactly what Ingress can do, but the problem is that you want to use IP addresses for routing, while Ingress uses DNS names for that. </p> <p>I think the best way to implement this is to use an Ingress which will handle requests. On GCE, Ingress uses the HTTP(S) load balancer. Yes, you will need a DNS name for that, but it will let you create the routing you need.<br> Also, I highly recommend using TLS encryption for connections.<br> You can check LetsEncrypt to get a free SSL certificate. </p> <p>So, the solution should look like the one below: </p> <p>1. Deploy your Services with type "ClusterIP" instead of "LoadBalancer". You can have more than one Service object for an application, so you can do it in parallel with your current configuration.<br> 2. Select any namespace (even a special one), for instance - "ingress-ns". There we need to create Service objects which point to your services in the other namespaces. Here is an example of a service (let the new DNS name be "my.shiny.new.domain"): </p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: service-v1
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: &lt;service&gt;.&lt;namespace&gt;.svc.cluster.local # service name and namespace of your service with version v1
  ports:
  - port: 80
</code></pre> <p>3. Now we have a namespace with several services which point to different versions of your application in different namespaces. Now we can create an Ingress object which will create an HTTP(S) Load Balancer on GCE with path-based routing: </p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-ns
spec:
  rules:
  - host: my.shiny.new.domain
    http:
      paths:
      - path: /v1
        backend:
          serviceName: service-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: service-v2
          servicePort: 80
</code></pre> <p>Kubernetes will create a new HTTP(S) balancer with the rules you set up in the Ingress object, and you will have an entry point with cross-namespace path-based routing, without having to use multiple IP addresses for that. </p> <p>You can also manage the primary version of an application with that Ingress, using your primary domain with the "/" path to handle requests to the production version.</p>
<p>I'm deploying a helm chart that consists of a service with three replica containers. I've been following <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">these directions</a> for exposing a service to an external IP address.</p> <p>How do I expose a port per container or per pod? I explicitly do not want to expose a load balancer that maps that port onto some (but any) pod in the service. The service in question is part of a stateful set, and to clients on the outside it matters which of the three are being contacted, so I can't abstract that away behind a load balancer.</p>
<p>You need to create a new Service for every pod in your stateful set. To distinguish the pods you need to label them with their names, as described <a href="https://github.com/kubernetes/kubernetes/issues/44103#issuecomment-326851272" rel="nofollow noreferrer">here</a>.<br> When you have separate Services you can use them individually in an Ingress.</p>
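<p>A hedged example of such a per-pod Service, assuming the StatefulSet is named <code>myapp</code> and that your Kubernetes version adds the <code>statefulset.kubernetes.io/pod-name</code> label to each pod (one Service per replica, shown here for pod <code>myapp-0</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-0
spec:
  type: NodePort                  # or LoadBalancer, if each pod needs its own external address
  selector:
    statefulset.kubernetes.io/pod-name: myapp-0
  ports:
  - port: 80
    targetPort: 3000
</code></pre>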
<p>We're hosting a lot of different applications on our Kubernetes cluster already - mostly Java based.</p> <p>For PHP-FPM + Nginx our approach is currently, that we're building a container, which includes PHP-FPM, Nginx and the PHP application source code. But this actually breaks with the one-process-per-container docker rule, so we were thinking on how to improve it. We tried to replace it by using a pod with multiple containers - a nginx and a PHP container.</p> <p>The big question is now where to put the source code. My initial idea was to use a data-only container, which we mount to the nginx and PHP-FPM container. The problem is, that there's seems to be no way to do this in Kubernetes <a href="https://github.com/kubernetes/kubernetes/issues/831" rel="noreferrer">yet</a>.</p> <p>The only approach that I see is creating a sidecar container, which contains the source code and copies it to an emptyDir volume which is shared between the containers in the pod.</p> <p>My question: Is there a good approach for PHP-FPM + Nginx and a data container on Kubernetes, or what is best practice to host PHP on Kubernetes (maybe still using one container for everything)?</p>
<p>This is a good question because there is an important distinction that gets elided in most coverage of container architecture- that between multithreaded or event-driven service applications and multiprocess service applications. </p> <p>Multithreaded and event-driven service applications are able with a single process to handle multiple service requests concurrently. </p> <p>Multiprocess service applications are not.</p> <p>Kubernetes workload management machinery is completely agnostic as to the real request concurrency level a given service is facing- agnostic in the sense that different concurrency rates by themselves do not have any impact on automated workload sizing or scaling. </p> <p>The underlying assumption, however, is that a given unit of deployment- a pod- is able to handle multiple requests concurrently.</p> <p>PHP in nearly all deployment models is multiprocess. It requires multiple processes to be able to handle concurrent requests in a single deployment unit. Whether those processes are coordinated by FPM or by some other machinery is an implementation detail. </p> <p>So- it's fine to run nginx + FPM + PHP in a single container, even though it's not a single process. The number of processes itself doesn't matter- there is actually no rule in Docker about this. The ability to support concurrency does matter. One wants to deploy in a container/pod the minimal system to support concurrent requests, and in the case of PHP, usually putting it all in a single container is simplest.</p>
<p>Best practice for managing 3 or 4 secrets for a single Kubernetes deployment. </p> <p>We have a deployment with some secrets repeated in all namespaces, and others that are environment specific. </p> <p>We are trying to decide between one secret file and changing only the ones that need to be, OR running 2+ secrets where one is foo-dev-secrets and the other is foo-universal-secrets.</p> <p>We can't find any examples of what to do in these cases, more specifically how to manage these secrets, we know you can only have "one secret per volume", but in all honesty we aren't sure what that means. </p> <p>Feel free to respond as if we are dumb children. ^_^</p>
<blockquote> <p>running 2+ secrets where one is foo-dev-secrets and the other is foo-universal-secrets</p> </blockquote> <p>As a for-your-consideration, a well-constructed RBAC policy would ensure that only accounts with the correct permissions would be able to read secrets if they are decomposed, which would (of course) be much harder if they were all lumped into one "all-the-secrets" bucket</p> <blockquote> <p>we know you can only have "one secret per volume", but in all honesty we aren't sure what that means.</p> </blockquote> <p>Then you're in good company, because I don't know what that means, either :-D If you mean there are global secrets, but the dev secrets are named the same but more specific, I <em>think</em> <code>docker -v</code> will tolerate that:</p> <pre><code>containers:
- name: foo
  volumeMounts:
  - name: global-secrets
    mountPath: /run/secrets/global
    readOnly: true
  - name: foo-secret-override
    mountPath: /run/secrets/global/no-really
    readOnly: true
</code></pre> <p>... with the disadvantage being that your <code>volumes:</code> and <code>volumeMounts:</code> will become quite chatty</p> <p>That said, I would bet the more general, and less mind-bend-y, solution is to mount them as peers and then do the application equivalent of <code>find /run/secrets/ -not -type d</code> to slurp them all up:</p> <pre><code>volumeMounts:
- name: global-secrets
  mountPath: /run/secrets/0-global
  readOnly: true
- name: foo-secrets
  mountPath: /run/secrets/1-foo
  readOnly: true
</code></pre> <p>Or, if possible, have the application read the path from an environment variable or <code>ConfigMap</code>-ed situation, meaning one could project them as peers (still) but point out to the application which of the two values it should use.</p> <p>Of course, the devil's in the details, so feel free to chime in if you are able to share more specifics about the hurdles you are encountering</p>
<p>I am using a x509 authentication for a user in Kubernetes, which works fine. However, while provide access to the deployments does not seem to be working fine, as shown below:</p> <p>Roles:</p> <pre><code># kubectl get rolebindings devops-rb -n demo -o yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: creationTimestamp: 2018-03-26T13:43:49Z name: devops-rb namespace: demo resourceVersion: "2530329" selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/demo/rolebindings/devops-rb uid: b6c17e28-30fb-11e8-b530-000d3a11bb2f roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: devops-role subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: devops </code></pre> <p>Role Bindings:</p> <pre><code># kubectl get roles devops-role -n demo -o yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: 2018-03-26T13:43:49Z name: devops-role namespace: demo resourceVersion: "2538402" selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/demo/roles/devops-role uid: b6bee0fb-30fb-11e8-b530-000d3a11bb2f rules: - apiGroups: - "" resources: - pods - secrets - services - replicasets - persistentvolumeclaims - deployments verbs: - get - list - watch </code></pre> <p>Trying to list deployments using user config:</p> <pre><code># kubectl --kubeconfig /root/.kube/config-tesla get deploy -n demo Error from server (Forbidden): deployments.extensions is forbidden: User "tesla" cannot list deployments.extensions in the namespace "demo" </code></pre> <p>Trying to list deployments using the admin config:</p> <pre><code># kubectl get deploy -n demo NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE wordpress 1 1 1 1 13d wordpress-mysql 1 1 1 1 13d </code></pre> <p>Trying to list pods using user config:</p> <pre><code># kubectl --kubeconfig /root/.kube/config-tesla get po -n demo NAME READY STATUS RESTARTS AGE ncp-centos-pod 1/1 Running 0 12d wordpress-77d578745-vdgr9 1/1 Running 0 13d wordpress-mysql-58cf8dc9f9-pzvbs 1/1 Running 0 13d </code></pre> <p>Trying to list pods using admin config:</p> <pre><code># kubectl get pods -n demo NAME READY STATUS RESTARTS AGE ncp-centos-pod 1/1 Running 0 12d wordpress-77d578745-vdgr9 1/1 Running 0 13d wordpress-mysql-58cf8dc9f9-pzvbs 1/1 Running 0 13d </code></pre>
<p>replicasets and deployments exist in the "extensions" and "apps" API groups, not in the legacy "" group</p> <p>try:</p> <pre><code>rules: - apiGroups: - "" resources: - pods - secrets - services - persistentvolumeclaims verbs: - get - list - watch - apiGroups: - extensions - apps resources: - deployments - replicasets verbs: - get - list - watch </code></pre>
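<p>To verify the fix without switching kubeconfigs back and forth, something like the following should work (the second form uses impersonation, which requires that your admin credentials are allowed to impersonate users and groups):</p>
<pre><code>kubectl --kubeconfig /root/.kube/config-tesla auth can-i list deployments -n demo
kubectl auth can-i list deployments -n demo --as tesla --as-group devops
</code></pre>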
<p>Is there a hard limit on the number of service accounts that can be created in Kubernetes?</p> <p>I couldn't find any documentation where this is stated.</p>
<p>It depends on the storage behind a service account registry, as coded in <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/registry/core/serviceaccount" rel="nofollow noreferrer"><code>kubernetes/kubernetes/pkg/registry/core/serviceaccount</code></a></p> <pre><code>// storage puts strong typing around storage calls type storage struct { rest.StandardStorage } // NewRegistry returns a new Registry interface for the given Storage. Any mismatched // types will panic. func NewRegistry(s rest.StandardStorage) Registry { return &amp;storage{s} } </code></pre> <p>A storage is a REST call to, for instance, <a href="https://github.com/kubernetes/kubernetes/blob/9bd4f12c336af339c35e773e1d71a37cad342f57/pkg/registry/core/serviceaccount/storage/storage_test.go#L33" rel="nofollow noreferrer">an ETCD storage</a>.</p> <pre><code>func newStorage(t *testing.T) (*REST, *etcdtesting.EtcdTestServer) { etcdStorage, server := registrytest.NewEtcdStorage(t, "") </code></pre> <p>So this is limited by the <strong><a href="https://coreos.com/etcd/docs/latest/dev-guide/limit.html" rel="nofollow noreferrer">limits of an ETCD</a></strong>, not so much the number of entries, but rather the storage <em>size</em>. </p>
<p>I am trying to setup NetworkPolicy to control access to pods running in acs cluster. The cluster is created using acs-engine and networkPolicy=azure. </p> <p>The contents of the json file used to create the cluster:</p> <pre><code>{ "apiVersion": "vlabs", "properties": { "orchestratorProfile": { "orchestratorType": "Kubernetes", "kubernetesConfig": { "networkPolicy": "azure" }, "orchestratorRelease": "1.9" }, ... } </code></pre> <p>The network policy that I am applying is to deny all ingress traffic to pods in my namespace:</p> <pre><code> apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny spec: podSelector: {} policyTypes: - Ingress </code></pre> <p>The required restrictions are not effected with this NetworkPolicy.</p> <p>If this even possible with "networkPolicy": "azure"?</p> <p>Can someone point to right direction to achieve this?</p>
<p>After exploring I found that the Azure CNI plugin does not support Kubernetes network policies. The solution is to use a CNI that supports them, e.g. Calico.</p> <p><a href="https://learn.microsoft.com/en-us/azure/virtual-network/container-networking" rel="nofollow noreferrer">Container networking capabilities of Azure CNI</a></p>
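<p>For reference, the only change in the acs-engine API model from the question would be the policy value, assuming the acs-engine release in use supports the Calico option:</p>
<pre><code>"orchestratorProfile": {
  "orchestratorType": "Kubernetes",
  "kubernetesConfig": {
    "networkPolicy": "calico"
  },
  "orchestratorRelease": "1.9"
}
</code></pre>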
<p>I created a namespace inside Kubernetes and tried to create a container using the following command: </p> <p><code>kubectl run busybox -it --image=busybox -- sh</code></p> <p>But now, every time I delete the pod using <code>kubectl delete pods --all</code>, it deletes the pod that was just created and automatically recreates a new pod. I looked through the documentation but am unable to figure out what flag will stop this incessant creation of these containers. </p>
<p>The reason it does this is because <code>kubectl run</code> implicitly creates a deployment for the pod. Deployments are tasked with ensuring a certain number of pods are always running, so when Kubernetes detects a misalignment in the number of pods the deployment <em>should</em> be running vs the number that are actually running, it'll spin up a new one. You can remedy this by deleting the deployment: <code>kubectl delete deployment busybox</code></p> <p>Alternatively, you can temporarily kill the pods (but keep the deployment) by scaling down the deployment to run 0 pods: <code>kubectl scale deployment busybox --replicas=0</code>.</p> <p>Documentation: <a href="https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_run/" rel="nofollow noreferrer">https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_run/</a></p> <blockquote> <p>Create and run a particular image, possibly replicated. Creates a deployment or job to manage the created container(s).</p> </blockquote>
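<p>If all you wanted was a throwaway interactive pod with no controller behind it, one option (a sketch; the exact behaviour of <code>kubectl run</code> generators varies a little between kubectl versions) is to ask for a bare Pod explicitly:</p> <pre><code>kubectl run busybox -it --image=busybox --restart=Never -- sh
</code></pre> <p>With <code>--restart=Never</code> no Deployment is created, so <code>kubectl delete pod busybox</code> removes it for good.</p>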
<p>My application has some aggregation/window operation, so it has some state store which stores in the <code>state.dir</code>. AFAIK, it also writes the changelog of state store to the broker, so is that OK to consider the Kafka Stream application as a stateless POD?</p>
<blockquote> <p>My application has some aggregation/window operation, so it has some state store which stores in the <code>state.dir</code>. AFAIK, it also writes the changelog of state store to the broker, so is that OK to consider the Kafka Stream application as a stateless POD?</p> </blockquote> <p><strong>Stateless pod and data safety (= no data loss):</strong> Yes, you can consider the application as a stateless pod as far as <em>data safety</em> is concerned; i.e. regardless of what happens to the pod Kafka and Kafka Streams guarantee that you will not lose data (and if you have enabled exactly-once processing, they will also guarantee the latter).</p> <p>That's because, as you already said, state changes in your application are always continuously backed up to Kafka (brokers) via changelogs of the respective state stores -- unless you explicitly disabled this changelog functionality (it is enabled by default).</p> <p>Note: The above is even true when you are not using Kafka's Streams default storage engine (RocksDB) but the alternative in-memory storage engine. Many people don't realize this because they read "in-memory" and (falsely) conclude "data will be lost when a machine crashes, restarts, etc.".</p> <p><strong>Stateless pod and application restoration/recovery time:</strong> The above being said, you should understand how having vs. not-having local state available after pod restarts will impact restoration/recovery time of your application (or rather: application instance) until it is fully operational again.</p> <p>Imagine that one instance of your stateful application runs on a machine. It will store its local state under <code>state.dir</code>, and it will also continuously backup any changes to its local state to the remote Kafka cluster (brokers).</p> <ul> <li>If the app instance is being restarted and <em>does not</em> have access to its previous <code>state.dir</code> (probably because it is restarted on a different machine), it will fully reconstruct its state by restoring from the associated changelog(s) in Kafka. Depending on the size of your state this may take milliseconds, seconds, minutes, or more. Only once its state is fully restored it will begin processing new data.</li> <li>If the app instance is being restarted and <em>does</em> have access to its previous <code>state.dir</code> (probably because it is restarted on the same, original machine), it can recover much more quickly because it can re-use all or most of the existing local state, so only a small delta needs to restored from the associated changelog(s). Only once its state is fully restored it will begin processing new data.</li> </ul> <p>In other words, if your application is able to re-use existing local state then this is good because it will minimize application recovery time.</p> <p><strong>Standby replicas to the rescue in stateless environments:</strong> But even if you are running stateless pods you have options to minimize application recovery times by configuring your application to use <a href="https://kafka.apache.org/documentation/streams/developer-guide/config-streams.html#num-standby-replicas" rel="noreferrer">standby replicas</a> via the <code>num.standby.replicas</code> setting:</p> <blockquote> <p><strong>num.standby.replicas</strong></p> <p>The number of standby replicas. Standby replicas are shadow copies of local state stores. Kafka Streams attempts to create the specified number of replicas and keep them up to date as long as there are enough instances running. 
Standby replicas are used to minimize the latency of task failover. A task that was previously running on a failed instance is preferred to restart on an instance that has standby replicas so that the local state store restoration process from its changelog can be minimized.</p> </blockquote> <p>See also the documentation section <a href="https://kafka.apache.org/documentation/streams/developer-guide/running-app.html#state-restoration-during-workload-rebalance" rel="noreferrer">State restoration during workload rebalance</a></p> <p><strong>Update 2018-08-29</strong>: Arguably the most convenient option to run Kafka/Kafka Streams/KSQL on Kubernetes is to use Confluent Operator or the Helm Charts provided by Confluent, see <a href="https://www.confluent.io/confluent-operator/" rel="noreferrer">https://www.confluent.io/confluent-operator/</a>. (Disclaimer: I work for Confluent.)</p> <p><strong>Update 2019-01-10</strong>: There's also a Youtube video that demoes how to <a href="https://www.youtube.com/watch?v=4hAVEy01dGY" rel="noreferrer">Scale Kafka Streams with Kubernetes</a>.</p>
<p>I have three namespaces dev, test and staging. test and staging have no pods in them. In dev I have nginx, ingress and a frontend service. For all requests to the nginx it's forwarded to the frontend service. </p> <p>But the issue is nginx in dev trying to find frontend service in test and staging namespaces also. It's doing round robin between the 3 namespaces. So sometimes the page is loading and sometimes it's 503 error.</p> <p>Here is the ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: frontend-ingress namespace: dev annotations: nginx.ingress.kubernetes.io/rewrite-target: / ingress.kubernetes.io/ssl-redirect: "false" spec: rules: - http: paths: - path: / backend: serviceName: frontend servicePort: 80 </code></pre> <p>And here is the log of nginx:</p> <pre><code>I0327 07:54:50.867120 1 command.go:76] change in configuration detected. Reloading... W0327 07:54:50.867339 1 controller.go:841] service test/frontend does not have any active endpoints W0327 07:54:50.867370 1 controller.go:841] service staging/frontend does not have any active endpoints W0327 07:54:50.868198 1 controller.go:777] upstream test-frontend-80 does not have any active endpoints. Using default backend W0327 07:54:50.868219 1 controller.go:777] upstream staging-frontend-80 does not have any active endpoints. Using default backend </code></pre>
<p>Specify the <code>--force-namespace-isolation=true</code> argument when deploying the nginx ingress controller pod, and update the image to <code>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0</code>.</p>
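<p>As a sketch of where that flag goes in a typical nginx ingress controller Deployment (the container name and the other args shown are illustrative, not taken from your manifests):</p> <pre><code>spec:
  containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
    args:
    - /nginx-ingress-controller
    - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
    - --force-namespace-isolation=true
</code></pre>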
<p>We recently upgraded from nginx ingress controller from 0.8.2 to 0.11.0, and started getting 502 bad gateway error on large file uploads around 10 MB or higher, we have set the <strong>client_max_body_size</strong> to <strong>500m</strong> through <strong>proxy-body-size</strong> in the configmap and <strong>verified</strong> its set. The smaller files around 5-6 MB works fine. </p> <p>There are no errors in the logs, just these messages. </p> <blockquote> <p>redacted - [redacted] - - [25/Mar/2018:02:08:49 +0000] "POST /redacted/upload HTTP/1.1" 000 0 "<a href="https://redacted/" rel="nofollow noreferrer">https://redacted/</a>" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36" 3371263 10.850 [uploader-443] ----</p> </blockquote> <p>and </p> <blockquote> <p>[warn] 30684#30684: *42090 a client request body is buffered to a temporary file /var/lib/nginx/body/0000000482, client: redacted, server: redacted, request: "POST /redacted/upload HTTP/1.1", host: "redacted", referrer: "<a href="https://redacted/" rel="nofollow noreferrer">https://redacted/</a>"</p> </blockquote> <p>The proxied server is tomcat and requests do not make it to tomcat. We have tried increasing:</p> <ul> <li><em>timeouts</em> </li> <li><em>proxy_buffers</em> </li> <li><em>proxy_buffer_size</em></li> </ul> <p>but nothing worked.</p> <p>Going back to <strong>0.8.2</strong> version resolves the issue. </p> <p><strong>update 1: nginx.conf snippet</strong> </p> <blockquote> <p>location /redacted/ {</p> <p>port_in_redirect off;</p> <p>set $proxy_upstream_name "redacted-443";</p> <p>....</p> <p>....</p> <p>client_max_body_size "500m";</p> </blockquote>
<p>It looks like you may need to set a bigger allowed body size on the affected Ingress rule itself.</p> <p>This can be achieved by using the <strong><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#custom-max-body-size" rel="nofollow noreferrer">proxy-body-size</a></strong> annotation, i.e.:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/proxy-body-size: 20m .... </code></pre> <p>Afterwards it should work for files bigger than 10MB.</p>
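<p>If you want the limit applied globally rather than per Ingress, the same key belongs in the controller's ConfigMap (a sketch; the ConfigMap name and namespace depend on how the controller was deployed):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "500m"
</code></pre>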
<p>I am writing a spark job which uses kubernetes instead of yarn.</p> <pre><code>val spark = SparkSession.builder().appName("Demo").master(????).getOrCreate() </code></pre> <p>So what should be my master, is it kubernetes or something else</p> <p>Thanks</p>
<p>It should be in the format of <code>k8s://https://&lt;k8s-apiserver-host&gt;:&lt;k8s-apiserver-port&gt;</code></p> <p><strong>From the Doc <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">here</a></strong></p> <blockquote> <p>URL with the format k8s://. Prefixing the master string with k8s:// will cause the Spark application to launch on the Kubernetes cluster, with the API server being contacted at api_server_url. If no HTTP protocol is specified in the URL, it defaults to https. For example, setting the master to k8s://example.com:443 is equivalent to setting it to k8s://<a href="https://example.com:443" rel="nofollow noreferrer">https://example.com:443</a>, but to connect without TLS on a different port, the master would be set to k8s://<a href="http://example.com:8080" rel="nofollow noreferrer">http://example.com:8080</a>.</p> </blockquote>
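<p>A quick way to find the API server host and port to plug into that URL (assuming <code>kubectl</code> is already configured against the cluster):</p> <pre><code>kubectl cluster-info
# Kubernetes master is running at https://&lt;k8s-apiserver-host&gt;:&lt;k8s-apiserver-port&gt;
</code></pre>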
<p>I have created a deployment with two replicas in my K8S cluster with the volumes section to clone a git repo. The repo is cloned, the pods are created, deployment is created. I can login to the pod and run git commits are things look ok.</p> <p>My assumption: If I do a git push to the repository managed by gitRepo volume mount, the pod or K8S deployment will be automatically redeployed but this is not happening? Is my assumption wrong? Should I do something more to do an auto-redeploy once a push is done like a CD pipeline? If the latter is true, I am trying to understand the purpose of gitRepo volume mounts now.</p> <p>Thanks for your inputs.</p> <p>Abdul.</p>
<p>After some research, I think I understand the scope of gitRepo volumes now. My requirement to auto-update the pods/deployments when changes happen to the git repo can be handled using a microservice such as this: <strong><a href="https://github.com/fabric8io/gitcontroller" rel="nofollow noreferrer">https://github.com/fabric8io/gitcontroller</a></strong></p> <p><strong>The issues with the above microservice:</strong></p> <ol> <li>Not able to find the gitcontroller binary (see the GitHub issues as well; someone posted about this earlier)</li> <li>When I try to build it manually after installing Go and moving the microservice to the location it expects, I eventually run into memory errors.</li> </ol> <p>So I believe the above microservice is broken and probably not maintained now (as of writing). Until it's fixed, I am going back to using Spring Cloud Config server with a git-backed repo to handle my configs. So when my configs change, I can run the Fabric8 maven plugin (<a href="https://maven.fabric8.io/" rel="nofollow noreferrer">https://maven.fabric8.io/</a>) to build a new Docker image and deploy that in my cluster. All of this can be automated using a simple Jenkins CD pipeline.</p>
<p>We have default Ingress <code>apiVersion: extensions/v1beta1</code> serving multiple Pods in Google Kubernetes Engine. The problem appears on our website when we click link which will go through the same Ingress with large header size. It works with smaller header size.</p> <p>I have tried switching to ingress-nginx and modified <code>client_max_body_size</code> with annotation/configmap <code>proxy-body-size: "500m"</code> without success. We are using Kubernetes versions <code>1.7.12-gke.1</code> for master and <code>1.7.10-gke.0</code> for node.</p> <p>Is there any solution to modify allowed header size directly? I'm out of ideas.</p>
<p>The default load balancer behind an Ingress in GKE blocks requests to the backend services if the combined URL and header size exceeds roughly 15KB, for general <a href="https://cloud.google.com/compute/docs/load-balancing/http/#illegal_request_handling" rel="nofollow noreferrer">security reasons</a>. Please consider that if this link sends such a large amount of data in the headers, something may be wrong with the backend application, for example an infinite redirect loop or a misconfigured web service. If you are sure that everything is fine on the backend side, you can consider deploying a custom Kubernetes Ingress controller from <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">github</a> and tuning the client header buffer size, which is mentioned <a href="https://github.com/kubernetes/ingress-nginx/pull/150" rel="nofollow noreferrer">here</a>.</p>
<p>I have saved a docker image as a tar file locally using the command,</p> <pre><code>docker save -o ./dockerImage:version.tar docker.io/image:latest-1.0 </code></pre> <p>How to specify this file in my pod.yaml to use this tarball and start the pod instead of pulling / already pulled image to launch the container.</p> <p><strong>Current pod.yaml file:</strong></p> <pre><code> apiVersion: myApp/v1 kind: myKind metadata: name: myPod2 spec: baseImage: docker.io/image version: latest-1.0 </code></pre> <p><strong>I want similar to this</strong></p> <pre><code> apiVersion: myApp/v1 kind: myKind metadata: name: myPod2 spec: baseImage: localDockerImage.tar:latest-1.0 version: latest-1.0 </code></pre>
<p>There's no direct way to achieve that in Kubernetes.</p> <p>See the discussions here: <a href="https://github.com/kubernetes/kubernetes/issues/1668" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/1668</a></p> <p>They have finally closed that issue because of the following reasons:</p> <blockquote> <p>Given that there are a number of ways to do this (your own cluster startup scripts, run a daemonset to side load your custom images, create VM images with images pre-loaded, run a cluster-local docker registry), and the fact that there have been no substantial updates in over two years, I'm going to close this as obsolete.</p> </blockquote>
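<p>As a sketch of the "side load your custom images" workaround (assuming a single-node setup such as Minikube, or that you repeat the load on every node that may run the pod; the tag must match what the manifest references):</p> <pre><code># on each node that may run the pod
docker load -i ./dockerImage:version.tar

# then reference the image as usual in the spec and make sure the
# kubelet does not try to pull it from a registry, e.g.
#   image: docker.io/image:latest-1.0
#   imagePullPolicy: Never
</code></pre>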
<p>I have a multi-regional testing setup on GKE k8s 1.9.4. Every cluster has:</p> <ul> <li>an ingress, configured with <code>kubemci</code></li> <li>3 node pools with different node labels: <ul> <li>default-pool <code>system</code> (1vCPU / 2GB RAM)</li> <li>frontend-pool <code>frontend</code> (2vCPU / 2GB RAM)</li> <li>backend-pool <code>backend</code> (1vCPU / 600Mb RAM)</li> </ul></li> <li>HPA with scaling by the custom metric</li> </ul> <p>So stuff like <code>prometheus-operator</code>, <code>prometheus-server</code>, <code>custom-metrics-api-server</code> and <code>kube-state-metrics</code> attached to a node with <code>system</code> label.</p> <p>Frontend and backend pod attached to nodes with <code>frontend</code> and <code>backend</code> labels respectively (single pod to a single node), see <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature" rel="noreferrer">podantiaffinity</a>. </p> <p>After autoscaling scales <code>backend</code> or <code>frontend</code> pods down, them nodes remains to stay, as there appear to be pods from <code>kube-system</code> namespace, i.e <code>heapster</code>. This leads to a situation when node with <code>frontend</code> / <code>backend</code> label stays alive after downscaling even there's no backend or frontend pod left on it. </p> <p>The question is: how can I avoid creating <code>kube-system</code> pods on the nodes, that serving my application (if this is really sane and possible)? </p> <p>Guess, I should use taints and tolerations for <code>backend</code> and <code>frontend</code> nodes, but how it can be combined with HPA and in-cluster node autoscaler?</p>
<p>Seems like <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">taints and tolerations</a> did the trick. </p> <p>Create a cluster with a default node pool (for monitoring and <code>kube-system</code>):</p> <pre><code>gcloud container --project "my-project-id" clusters create "app-europe" \ --zone "europe-west1-b" --username="admin" --cluster-version "1.9.4-gke.1" --machine-type "custom-2-4096" \ --image-type "COS" --disk-size "10" --num-nodes "1" --network "default" --enable-cloud-logging --enable-cloud-monitoring \ --maintenance-window "01:00" --node-labels=region=europe-west1,role=system </code></pre> <p>Create node pool for your application:</p> <pre><code>gcloud container --project "my-project-id" node-pools create "frontend" \ --cluster "app-europe" --zone "europe-west1-b" --machine-type "custom-2-2048" --image-type "COS" \ --disk-size "10" --node-labels=region=europe-west1,role=frontend \ --node-taints app=frontend:NoSchedule \ --enable-autoscaling --num-nodes "1" --min-nodes="1" --max-nodes="3" </code></pre> <p>then add <code>nodeAffinity</code> and <code>tolerations</code> sections to a pods template <code>spec</code> in your deployment manifest:</p> <pre><code> tolerations: - key: "app" operator: "Equal" value: "frontend" effect: "NoSchedule" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/instance-type operator: In values: - custom-2-2048 - matchExpressions: - key: role operator: In values: - frontend </code></pre>
<p>I'm new to kubernetes and have setup a service with a few pods. I got this working, however I would like to remove or override the <code>Host</code> Header in the request. Can be overridden from the service somehow? I read into ingress but that will just reroute based on rules</p> <p>Thanks in advance</p>
<blockquote> <p>I read into ingress but that will just reroute based on rules</p> </blockquote> <p>With the addition that many of them offer support for <a href="https://github.com/kubernetes/ingress-nginx/blob/nginx-0.12.0/docs/user-guide/annotations.md#configuration-snippet" rel="nofollow noreferrer">custom configuration snippets</a> to solve that very problem.</p> <p>While provisioning Graylog2 last week, I took advantage of that annotation to add a custom header between the Ingress controller and the Graylog2 cluster, which would have otherwise required standing up a custom nginx or http server just for the one simple header. I highly recommend Ingress controllers, I think they're awesome.</p>
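<p>For example, with the nginx Ingress controller linked above, overriding the <code>Host</code> header for a single Ingress could look roughly like this (a sketch; the header value is only an illustration, and depending on the controller version you may want to check the generated nginx.conf to make sure the header is not sent twice):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Host "internal-name.example.com";
</code></pre>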
<p>I am on Jenkins 2.73.2.1 and using Kubernetes-jenkins plugin 1.4 to spin up dynamic slaves . However ,I am not able to start up parallel builds. Jenkins always puts it to build queue and executes one at a time.</p> <p>I have tried setting this while starting jenkins but it doesn't help either:</p> <pre><code>-Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 </code></pre> <p>Is there any other setting we have to do at plugin configuration end for parallel pods running ?</p>
<p>The solution that worked for us was a combination of the above JVM arguments and selecting "use jobs matching the current label only" in the Kubernetes container template section.</p> <p>We were then able to run the required jobs in parallel in different pods.</p>
<p>I have two kubernetes deployments say backend and frontend. frontend deployment depends on the backend deployments. Means after backend deployment pods are ready then the pods for frontend should be created. How can I specify this in the deployment yaml?</p>
<p>The solution you are looking for is <strong>Init container</strong>. Pod can have one or more Init containers and they run one after another before main Pod containers are started. Please be aware that each Init container runs until completion.</p> <p>So you can use Init containers to check availability of your back-end applications. Here is an example:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: front-end labels: app: front-end spec: containers: - name: front-end image: node:boron initContainers: - name: init-backend image: busybox command: ['sh', '-c', 'until &lt;put check condition for your back-end&gt;; do echo waiting for back-end; sleep 2; done;'] </code></pre> <p>For more information you can go through <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">documentation</a>.</p>
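<p>As a concrete example of the check, if the back-end is exposed through a Service (the Service name <code>backend</code> below is just an assumption), waiting until its DNS name resolves is a common pattern:</p> <pre><code>initContainers:
- name: init-backend
  image: busybox
  command: ['sh', '-c', 'until nslookup backend; do echo waiting for back-end; sleep 2; done;']
</code></pre> <p>You could also probe an HTTP health endpoint instead, e.g. with <code>wget -q -O /dev/null http://backend:8080/health</code>, if the Service registers endpoints before the back-end is really ready.</p>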
<p>I created a 200G disk with the command <code>gcloud compute disks create --size 200GB my-disk</code></p> <p>then created a PersistentVolume</p> <pre> apiVersion: v1 kind: PersistentVolume metadata: name: my-volume spec: capacity: storage: 200Gi accessModes: - ReadWriteOnce gcePersistentDisk: pdName: my-disk fsType: ext4 </pre> <p>then created a PersistentVolumeClaim</p> <pre> apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi </pre> <p>then created a StatefulSet and mount the volume to /mnt/disks, which is an existing directory. statefulset.yaml:</p> <pre> apiVersion: apps/v1beta2 kind: StatefulSet metadata: name: ... spec: ... spec: containers: - name: ... ... volumeMounts: - name: my-volume mountPath: /mnt/disks volumes: - name: my-volume emptyDir: {} volumeClaimTemplates: - metadata: name: my-claim spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 200Gi </pre> <p>I ran command <code>kubectl get pv</code> and saw that disk was successfully mounted to each instance</p> <pre> NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE my-volume 200Gi RWO Retain Available 19m pvc-17c60f45-2e4f-11e8-9b77-42010af0000e 200Gi RWO Delete Bound default/my-claim-xxx_1 standard 13m pvc-5972c804-2e4e-11e8-9b77-42010af0000e 200Gi RWO Delete Bound default/my-claim standard 18m pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e 200Gi RWO Delete Bound default/my-claimxxx_0 standard 18m </pre> <p>but when I ssh into an instance and run <code>df -hT</code>, I do not see the mounted volume. below is the output:</p> <pre> Filesystem Type Size Used Avail Use% Mounted on /dev/root ext2 1.2G 447M 774M 37% / devtmpfs devtmpfs 1.9G 0 1.9G 0% /dev tmpfs tmpfs 1.9G 0 1.9G 0% /dev/shm tmpfs tmpfs 1.9G 744K 1.9G 1% /run tmpfs tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup tmpfs tmpfs 1.9G 0 1.9G 0% /tmp tmpfs tmpfs 256K 0 256K 0% /mnt/disks /dev/sda8 ext4 12M 28K 12M 1% /usr/share/oem /dev/sda1 ext4 95G 3.5G 91G 4% /mnt/stateful_partition tmpfs tmpfs 1.0M 128K 896K 13% /var/lib/cloud overlayfs overlay 1.0M 148K 876K 15% /etc </pre> <p>anyone has any idea?</p> <p>Also worth mentioning that I'm trying to mount the disk to a docker image which is running in kubernete engine. The pod was created with below commands:</p> <pre> docker build -t gcr.io/xxx . gcloud docker -- push gcr.io/xxx kubectl create -f statefulset.yaml </pre> <p>The instance I sshed into is the one that runs the docker image. 
I do not see the volume in both instance and the docker container</p> <p><strong>UPDATE</strong> I found the volume, I ran <code>df -ahT</code> in the instance, and saw the relevant entries</p> <pre> /dev/sdb - - - - - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e /dev/sdb - - - - - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e /dev/sdb - - - - - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e /dev/sdb - - - - - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e /dev/sdb - - - - - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e /dev/sdb - - - - - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e /dev/sdb - - - - - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e /dev/sdb - - - - - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e </pre> <p>then I went into the docker container and ran <code>df -ahT</code>, I got</p> <pre> Filesystem Type Size Used Avail Use% Mounted on /dev/sda1 ext4 95G 3.5G 91G 4% /mnt/disks </pre> <p>Why I'm seeing 95G total size instead of 200G, which is the size of my volume?</p> <p>More info: <code>kubectl describe pod</code></p> <pre> Name: xxx-replicaset-0 Namespace: default Node: gke-xxx-cluster-default-pool-5e49501c-nrzt/10.128.0.17 Start Time: Fri, 23 Mar 2018 11:40:57 -0400 Labels: app=xxx-replicaset controller-revision-hash=xxx-replicaset-755c4f7cff Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"default","name":"xxx-replicaset","uid":"d6c3511f-2eaf-11e8-b14e-42010af0000... 
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container xxx-deployment Status: Running IP: 10.52.4.5 Created By: StatefulSet/xxx-replicaset Controlled By: StatefulSet/xxx-replicaset Containers: xxx-deployment: Container ID: docker://137b3966a14538233ed394a3d0d1501027966b972d8ad821951f53d9eb908615 Image: gcr.io/sampeproject/xxxstaging:v1 Image ID: docker-pullable://gcr.io/sampeproject/xxxstaging@sha256:a96835c2597cfae3670a609a69196c6cd3d9cc9f2f0edf5b67d0a4afdd772e0b Port: 8080/TCP State: Running Started: Fri, 23 Mar 2018 11:42:17 -0400 Ready: True Restart Count: 0 Requests: cpu: 100m Environment: Mounts: /mnt/disks from my-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-hj65g (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: my-claim: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: my-claim-xxx-replicaset-0 ReadOnly: false my-volume: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-hj65g: Type: Secret (a volume populated by a Secret) SecretName: default-token-hj65g Optional: false QoS Class: Burstable Node-Selectors: Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s node.alpha.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 10m (x4 over 10m) default-scheduler PersistentVolumeClaim is not bound: "my-claim-xxx-replicaset-0" (repeated 5 times) Normal Scheduled 9m default-scheduler Successfully assigned xxx-replicaset-0 to gke-xxx-cluster-default-pool-5e49501c-nrzt Normal SuccessfulMountVolume 9m kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt MountVolume.SetUp succeeded for volume "my-volume" Normal SuccessfulMountVolume 9m kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt MountVolume.SetUp succeeded for volume "default-token-hj65g" Normal SuccessfulMountVolume 9m kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt MountVolume.SetUp succeeded for volume "pvc-902c57c5-2eb0-11e8-b14e-42010af0000e" Normal Pulling 9m kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt pulling image "gcr.io/sampeproject/xxxstaging:v1" Normal Pulled 8m kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt Successfully pulled image "gcr.io/sampeproject/xxxstaging:v1" Normal Created 8m kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt Created container Normal Started 8m kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt Started container </pre> <p>Seems like it did not mount the correct volume. I ran <code>lsblk</code> in docker container</p> <pre> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 100G 0 disk ├─sda1 8:1 0 95.9G 0 part /mnt/disks ├─sda2 8:2 0 16M 0 part ├─sda3 8:3 0 2G 0 part ├─sda4 8:4 0 16M 0 part ├─sda5 8:5 0 2G 0 part ├─sda6 8:6 0 512B 0 part ├─sda7 8:7 0 512B 0 part ├─sda8 8:8 0 16M 0 part ├─sda9 8:9 0 512B 0 part ├─sda10 8:10 0 512B 0 part ├─sda11 8:11 0 8M 0 part └─sda12 8:12 0 32M 0 part sdb 8:16 0 200G 0 disk </pre> <p>Why this is happening?</p>
<p>When you use PVCs, K8s manages persistent disks for you.</p> <p>Exactly how the PVs are provisioned is defined by the provisioner in the storage class. Since you use GKE, your default StorageClass uses the kubernetes.io/gce-pd provisioner (<a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#gce" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/storage-classes/#gce</a>).</p> <p>In other words, a new PV (and a new disk) is created for each pod.</p> <p>If you would like to use the existing disk, you can use Volumes instead of PVCs (<a href="https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk</a>).</p>
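<p>A minimal sketch of that second option, mounting the pre-created disk from the question directly in the pod template (note a GCE PD can only be attached read-write to a single node, so this does not fit replicas spread across nodes):</p> <pre><code>spec:
  containers:
  - name: ...
    volumeMounts:
    - name: my-volume
      mountPath: /mnt/disks
  volumes:
  - name: my-volume
    gcePersistentDisk:
      pdName: my-disk
      fsType: ext4
</code></pre>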
<p>When trying to run <code>minikube</code> with hyperkit, I was getting errors about xhyve not being installed. I installed that and reran <code>minikube start --vm-driver hyperkit</code> with no issues.</p> <p>I was under the impression that hyperkit was a replacement for xhyve, not a supplement to it.</p> <p>When I run <code>ps</code> I see both <code>com.docker.hyperkit</code> and <code>docker-machine-driver-xhyve</code> running.</p> <p>How can I confirm that minikube is correctly using hyperkit?</p>
<p><a href="https://www.docker.com/docker-mac" rel="noreferrer">Docker for Mac</a> changed virtualization layer few times last years, and it can confuse users after updates of environment.</p> <p>If the process list shows both <a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperkit-driver" rel="noreferrer">com.docker.hyperkit</a> and <a href="https://github.com/mist64/xhyve" rel="noreferrer">xhyve</a> processes is probably due to docker-machine environment which was previously set up using <a href="https://github.com/machine-drivers/docker-machine-driver-xhyve" rel="noreferrer">docker-machine-driver-xhyve</a>.</p> <p>You may consider cleaning up installation by</p> <ul> <li>stopping Docker (from command line or from tray icon),</li> <li>next removing machines created by <code>docker-machine</code> tool. </li> </ul> <p>I can also suggest to remove current minikube installation using</p> <pre><code>minikube stop &amp;&amp; minikube delete </code></pre> <p>and start fresh one with: </p> <pre><code>minikube start --v=10 --vm-driver=hyperkit" </code></pre> <p>That will add additional verbose output of building minikube environment.</p>
<p>I am trying to set some environment variables in a docker container. Below is the env definition part from the kubernetes pod definition.</p> <pre><code> env: - name: NRIA_LICENSE_KEY value: NRIA_LICENSE_KEY - name: NRIA_DISPLAY_NAME value: abc-$HOSTNAME </code></pre> <p>When I echo the value of the NRIA_DISPLAY_NAME variable in the container, however, I am getting the below output, which is not expected.</p> <pre><code>echo $NRIA_DISPLAY_NAME NRIA_DISPLAY_NAME=abc-\$HOSTNAME </code></pre> <p><strong>expected output:</strong></p> <pre><code> echo $NRIA_DISPLAY_NAME NRIA_DISPLAY_NAME=abc-myhostname ($HOSTNAME should be replaced with the actual hostname value) </code></pre> <p>I tried setting it in the command tag as well, but that didn't work.</p> <pre><code> command: [ "bash", "-c", "export NRIA_DISPLAY_NAME=abc-$HOSTNAME" ] </code></pre>
<pre><code>- name: NRIA_DISPLAY_NAME value: abc-$HOSTNAME </code></pre> <p>should have two changes made to it:</p> <ol> <li><p>Ensure the thing you wish to have substituted is <em>also</em> present in the <code>env:</code> block -- I know you might think <code>$HOSTNAME</code> is universally populated, but that's not always true, and it is for sure not true in the kubernetes mental model</p></li> <li><p>Use the syntax <code>$(INTERPOLATION_HERE)</code> rather than the bash-centric <code>$VAR</code> or <code>${VAR}</code>. That syntax is documented in the <code>PodSpec</code> reference</p></li> </ol> <p>Thus:</p> <pre><code>env: - name: HOSTNAME valueFrom: fieldRef: fieldPath: metadata.name # or "status.hostIP" or "spec.nodeName" etc - name: NRIA_DISPLAY_NAME value: abc-$(HOSTNAME) </code></pre>
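<p>You can confirm the substitution worked by printing the variable inside one of the pods (the pod name below is illustrative):</p> <pre><code>kubectl exec my-pod -- printenv NRIA_DISPLAY_NAME
</code></pre>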
<p>I'm deploying a helm chart that consists of a service with three replica containers. I've been following <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">these directions</a> for exposing a service to an external IP address.</p> <p>How do I expose a port per container or per pod? I explicitly do not want to expose a load balancer that maps that port onto some (but any) pod in the service. The service in question is part of a stateful set, and to clients on the outside it matters which of the three are being contacted, so I can't abstract that away behind a load balancer.</p>
<p>Just adding the official Kubernetes documentation about creating a service:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p> <p>A Service in Kubernetes is a REST object, similar to a Pod. Like all of the REST objects, a Service definition can be POSTed to the apiserver to create a new instance. For example, suppose you have a set of Pods that each expose port 9376 and carry a label "app=MyApp".</p> <pre><code>kind: Service apiVersion: v1 metadata: name: my-service spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376 </code></pre> <p>This specification will create a new Service object named “my-service” which targets TCP port 9376 on any Pod with the "app=MyApp" label. This Service will also be assigned an IP address (sometimes called the “cluster IP”), which is used by the service proxies (see below). The Service’s selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named “my-service”.</p> <p>Note that a Service can map an incoming port to any targetPort. By default the targetPort will be set to the same value as the port field. Perhaps more interesting is that targetPort can be a string, referring to the name of a port in the backend Pods. The actual port number assigned to that name can be different in each backend Pod. This offers a lot of flexibility for deploying and evolving your Services. For example, you can change the port number that pods expose in the next version of your backend software, without breaking clients.</p> <p>Kubernetes Services support TCP and UDP for protocols. The default is TCP.</p>
<p>Trying to catch up with the Spark 2.3 documentation on how to deploy jobs on a Kubernetes 1.9.3 cluster : <a href="http://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="noreferrer">http://spark.apache.org/docs/latest/running-on-kubernetes.html</a></p> <p>The Kubernetes 1.9.3 cluster is operating properly on offline bare-metal servers and was installed with <code>kubeadm</code>. The following command was used to submit the job (<code>SparkPi</code> example job):</p> <pre><code>/opt/spark/bin/spark-submit --master k8s://https://k8s-master:6443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=2 --conf spark.kubernetes.container.image=spark:v2.3.0 local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar </code></pre> <p>Here is the stacktrace that we all love:</p> <pre><code>++ id -u + myuid=0 ++ id -g + mygid=0 ++ getent passwd 0 + uidentry=root:x:0:0:root:/root:/bin/ash + '[' -z root:x:0:0:root:/root:/bin/ash ']' + SPARK_K8S_CMD=driver + '[' -z driver ']' + shift 1 + SPARK_CLASSPATH=':/opt/spark/jars/*' + env + grep SPARK_JAVA_OPT_ + sed 's/[^=]*=\(.*\)/\1/g' + readarray -t SPARK_JAVA_OPTS + '[' -n /opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar ']' + SPARK_CLASSPATH=':/opt/spark/jars/*:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar' + '[' -n '' ']' + case "$SPARK_K8S_CMD" in + CMD=(${JAVA_HOME}/bin/java "${SPARK_JAVA_OPTS[@]}" -cp "$SPARK_CLASSPATH" -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $SPARK_DRIVER_ARGS) + exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -Dspark.kubernetes.driver.pod.name=spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver -Dspark.driver.port=7078 -Dspark.submit.deployMode=cluster -Dspark.master=k8s://https://k8s-master:6443 -Dspark.kubernetes.executor.podNamePrefix=spark-pi-b6f8a60df70a3b9d869c4e305518f43a -Dspark.driver.blockManager.port=7079 -Dspark.app.id=spark-7077ad8f86114551b0ae04ae63a74d5a -Dspark.driver.host=spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc -Dspark.app.name=spark-pi -Dspark.kubernetes.container.image=spark:v2.3.0 -Dspark.jars=/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar,/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar -Dspark.executor.instances=2 -cp ':/opt/spark/jars/*:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar' -Xms1g -Xmx1g -Dspark.driver.bindAddress=10.244.1.17 org.apache.spark.examples.SparkPi 2018-03-07 12:39:35 INFO SparkContext:54 - Running Spark version 2.3.0 2018-03-07 12:39:36 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable 2018-03-07 12:39:36 INFO SparkContext:54 - Submitted application: Spark Pi 2018-03-07 12:39:36 INFO SecurityManager:54 - Changing view acls to: root 2018-03-07 12:39:36 INFO SecurityManager:54 - Changing modify acls to: root 2018-03-07 12:39:36 INFO SecurityManager:54 - Changing view acls groups to: 2018-03-07 12:39:36 INFO SecurityManager:54 - Changing modify acls groups to: 2018-03-07 12:39:36 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set() 2018-03-07 12:39:36 INFO Utils:54 - Successfully started service 'sparkDriver' on port 7078. 2018-03-07 12:39:36 INFO SparkEnv:54 - Registering MapOutputTracker 2018-03-07 12:39:36 INFO SparkEnv:54 - Registering BlockManagerMaster 2018-03-07 12:39:36 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 2018-03-07 12:39:36 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up 2018-03-07 12:39:36 INFO DiskBlockManager:54 - Created local directory at /tmp/blockmgr-7f5370ad-b495-4943-ad75-285b7ead3e5b 2018-03-07 12:39:36 INFO MemoryStore:54 - MemoryStore started with capacity 408.9 MB 2018-03-07 12:39:36 INFO SparkEnv:54 - Registering OutputCommitCoordinator 2018-03-07 12:39:36 INFO log:192 - Logging initialized @1936ms 2018-03-07 12:39:36 INFO Server:346 - jetty-9.3.z-SNAPSHOT 2018-03-07 12:39:36 INFO Server:414 - Started @2019ms 2018-03-07 12:39:36 INFO AbstractConnector:278 - Started ServerConnector@4215838f{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2018-03-07 12:39:36 INFO Utils:54 - Successfully started service 'SparkUI' on port 4040. 
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5b6813df{/jobs,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@495083a0{/jobs/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5fd62371{/jobs/job,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2b62442c{/jobs/job/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@66629f63{/stages,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@841e575{/stages/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@27a5328c{/stages/stage,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6b5966e1{/stages/stage/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@65e61854{/stages/pool,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1568159{/stages/pool/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4fcee388{/storage,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6f80fafe{/storage/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3af17be2{/storage/rdd,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@f9879ac{/storage/rdd/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@37f21974{/environment,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5f4d427e{/environment/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6e521c1e{/executors,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@224b4d61{/executors/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5d5d9e5{/executors/threadDump,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@303e3593{/executors/threadDump/json,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4ef27d66{/static,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@62dae245{/,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4b6579e8{/api,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3954d008{/jobs/job/kill,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2f94c4db{/stages/stage/kill,null,AVAILABLE,@Spark} 2018-03-07 12:39:36 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:4040 2018-03-07 12:39:36 INFO SparkContext:54 - Added JAR /opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar at 
spark://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:7078/jars/spark-examples_2.11-2.3.0.jar with timestamp 1520426376949 2018-03-07 12:39:37 WARN KubernetesClusterManager:66 - The executor's init-container config map is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies. 2018-03-07 12:39:37 WARN KubernetesClusterManager:66 - The executor's init-container config map key is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies. 2018-03-07 12:39:42 ERROR SparkContext:91 - Error initializing SparkContext. org.apache.spark.SparkException: External scheduler cannot be instantiated at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2747) at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:492) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31) at org.apache.spark.examples.SparkPi.main(SparkPi.scala) Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver] in namespace: [default] failed. at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62) at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.&lt;init&gt;(KubernetesClusterSchedulerBackend.scala:70) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:120) at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741) ... 
8 more Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) at java.net.InetAddress.getAllByName0(InetAddress.java:1276) at java.net.InetAddress.getAllByName(InetAddress.java:1192) at java.net.InetAddress.getAllByName(InetAddress.java:1126) at okhttp3.Dns$1.lookup(Dns.java:39) at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171) at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:137) at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:82) at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:171) at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:121) at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:100) at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185) at okhttp3.RealCall.execute(RealCall.java:69) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217) ... 12 more 2018-03-07 12:39:42 INFO AbstractConnector:318 - Stopped Spark@4215838f{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2018-03-07 12:39:42 INFO SparkUI:54 - Stopped Spark web UI at http://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:4040 2018-03-07 12:39:42 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped! 
2018-03-07 12:39:42 INFO MemoryStore:54 - MemoryStore cleared 2018-03-07 12:39:42 INFO BlockManager:54 - BlockManager stopped 2018-03-07 12:39:42 INFO BlockManagerMaster:54 - BlockManagerMaster stopped 2018-03-07 12:39:42 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running 2018-03-07 12:39:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped! 2018-03-07 12:39:42 INFO SparkContext:54 - Successfully stopped SparkContext Exception in thread "main" org.apache.spark.SparkException: External scheduler cannot be instantiated at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2747) at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:492) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921) at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31) at org.apache.spark.examples.SparkPi.main(SparkPi.scala) Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver] in namespace: [default] failed. at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62) at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.&lt;init&gt;(KubernetesClusterSchedulerBackend.scala:70) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:120) at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741) ... 
8 more Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) at java.net.InetAddress.getAllByName0(InetAddress.java:1276) at java.net.InetAddress.getAllByName(InetAddress.java:1192) at java.net.InetAddress.getAllByName(InetAddress.java:1126) at okhttp3.Dns$1.lookup(Dns.java:39) at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171) at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:137) at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:82) at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:171) at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:121) at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:100) at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67) at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185) at okhttp3.RealCall.execute(RealCall.java:69) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217) ... 12 more 2018-03-07 12:39:42 INFO ShutdownHookManager:54 - Shutdown hook called 2018-03-07 12:39:42 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-64fe7ad8-669f-4591-a3f6-67440d450a44 </code></pre> <p>So apparently the Kubernetes Scheduler Backend cannot contact the pod because it is unable to resolve <code>kubernetes.default.svc</code>. Hum.. why?</p> <p>I also configured RBAC with a <code>spark</code> service account as mentionned in the documentation but the same problem occurs. 
(also tried on a different namespace, same problem)</p> <p>Here are the logs from <code>kube-dns</code>:</p> <pre><code>I0306 16:04:04.170889 1 dns.go:555] Could not find endpoints for service "spark-pi-b9e8b4c66fe83c4d94a8d46abc2ee8f5-driver-svc" in namespace "default". DNS records will be created once endpoints show up. I0306 16:04:29.751201 1 dns.go:555] Could not find endpoints for service "spark-pi-0665ad323820371cb215063987a31e05-driver-svc" in namespace "default". DNS records will be created once endpoints show up. I0306 16:06:26.414146 1 dns.go:555] Could not find endpoints for service "spark-pi-2bf24282e8033fa9a59098616323e267-driver-svc" in namespace "default". DNS records will be created once endpoints show up. I0307 08:16:17.404971 1 dns.go:555] Could not find endpoints for service "spark-pi-3887031e031732108711154b2ec57d28-driver-svc" in namespace "default". DNS records will be created once endpoints show up. I0307 08:17:11.682218 1 dns.go:555] Could not find endpoints for service "spark-pi-3d84127226393fc99e2fe035db56bfb5-driver-svc" in namespace "default". DNS records will be created once endpoints show up. </code></pre> <p>I really can't figure out why those errors come up.</p>
<p>Try switching the pod network to a plugin other than Calico, and check whether kube-dns works well.</p> <p>To create a custom service account, a user can use the kubectl create serviceaccount command. For example, the following command creates a service account named spark:</p> <pre><code>$ kubectl create serviceaccount spark </code></pre> <p>To grant a service account a Role or ClusterRole, a RoleBinding or ClusterRoleBinding is needed. To create a RoleBinding or ClusterRoleBinding, a user can use the kubectl create rolebinding (or clusterrolebinding for ClusterRoleBinding) command. For example, the following command creates a ClusterRoleBinding that grants the edit ClusterRole to the spark service account created above:</p> <pre><code>$ kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default </code></pre> <p>Depending on the version and setup of Kubernetes deployed, this default service account may or may not have the role that allows driver pods to create pods and services under the default Kubernetes RBAC policies. Sometimes users may need to specify a custom service account that has the right role granted. Spark on Kubernetes supports specifying a custom service account to be used by the driver pod through the configuration property spark.kubernetes.authenticate.driver.serviceAccountName=. For example to make the driver pod use the spark service account, a user simply adds the following option to the spark-submit command:</p> <pre><code>spark-submit --master k8s://https://192.168.1.5:6443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=5 --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.kubernetes.container.image=leeivan/spark:latest local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar </code></pre>
<p>I have two containers inside a Pod with label app=read-write. container-1 is reading on 7110 port and container-2 is listening on 7111. Now I need to connect with these containers via Services NodePorts. I create two separate services to assign NodePorts on each port (7110 and 7111).</p> <pre><code>--- apiVersion: v1 kind: Service metadata: labels: app: read tier: service name: read spec: ports: - port: 7110 nodePort: 32000 protocol: TCP type: NodePort selector: app: read --- apiVersion: v1 kind: Service metadata: labels: app: write tier: service name: write spec: ports: - port: 7111 nodePort: 32001 protocol: TCP type: NodePort selector: app: write </code></pre> <p>but Services connect with Pods by matching label. So currently these two services will NOT connect with any pod because pod label is <code>read-write</code> and services are looking for any pod with label <code>read</code> or <code>write</code>. Is there any way in kubernetes where we can label containers inside a Pod ? </p> <p>The only solution I have is to create single container per Pod and assign read and write label to each. But I really want to keep these two containers inside single pod. </p>
<p>The solution was rather simple, I've added two labels (with different keys) in both services, Instead of <code>app:read</code> and <code>app:write</code>, I've added <code>read: read-service</code> and <code>write: write-service</code> in my services. and in the Pod I've added both of these keys under label and selector;</p> <p>Services readwrite.yml</p> <pre><code>--- apiVersion: v1 kind: Service metadata: labels: read: read-service tier: service name: read-service spec: ports: - port: 7110 nodePort: 32000 protocol: TCP type: NodePort selector: read: read-service --- apiVersion: v1 kind: Service metadata: labels: write: write-service tier: service name: write-service spec: ports: - port: 7111 nodePort: 32001 protocol: TCP type: NodePort selector: write: write-service </code></pre> <p>deployment.yml</p> <pre><code>kind: Deployment apiVersion: extensions/v1beta1 metadata: creationTimestamp: null labels: read: read-service write: write-service tier: service name: if11-read-write spec: replicas: 1 selector: matchLabels: read: read-service write: write-service template: metadata: labels: read: read-service write: write-service tier: service spec: containers: - name: read-container-name ... - name: write-container-name ... </code></pre> <p>Now since both containers are under one Pod and having two labels, So read-service will look-up for a Pod with label <code>read: read-service</code> and assign a port 7110 and NodePort 32000. and write-service will look-up for a Pod with label <code>write: write-service</code> and assign a port 7111 and NodePort 32001. </p>
<p>I am pulling an image from a private repository on gitlab and running a cronjob in kubernetes. Since it is a private repo I would also have to supply the imagePullSecrets. but I noticed it gives an error because a cronjob doesn't accept the imagePullSecrets tag. It gives the following error. Does that mean I can't use an image from a private repository in a cronjob?</p> <pre><code>error: error validating "cron.yml": error validating data: ValidationError(CronJob.spec.jobTemplate.spec.template.spec.containers[0]): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false </code></pre>
<p>The <code>imagePullSecrets</code> field is not a per container field - you need to set that at <code>CronJob.spec.jobTemplate.spec.template.spec.imagePullSecrets</code> instead of <code>CronJob.spec.jobTemplate.spec.template.spec.containers.imagePullSecrets</code>. You can see an example for a Pod <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry#create-a-pod-that-uses-your-secret" rel="noreferrer">here</a>.</p>
<p>I am trying to deploy a service in Kubernetes available through a network load balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a deployment definition that is working fine as is. My service definition without the nlb annotation looks something like this and is working fine:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: service1 annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 spec: type: LoadBalancer selector: app: some-app ports: - port: 80 protocol: TCP </code></pre> <p>However, when I switch to NLB, even when the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: service1 annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 service.beta.kubernetes.io/aws-load-balancer-type: "nlb" spec: type: LoadBalancer selector: app: some-app ports: - port: 80 protocol: TCP externalTrafficPolicy: Local </code></pre>
<p>It seems there was a rule missing in the k8s nodes security group, since the NLB forwards the client IP.</p>
<p>I have a Spring boot application which is configured with https with the below properties.</p> <pre><code>server.ssl.keyStore=/users/admin/certs/appcert.jks server.ssl.keyStorePassword=certpwd server.ssl.trustStore=/users/admin/certs/trustcert server.ssl.trustStorePassword=trustpwd </code></pre> <p>These applications were running in VM's and the certs were placed in the defined path. Now, trying to deploy this application into Kubernetes and not sure how to achieve this. </p> <p>I already have created mount for application.properties in configMap. In my dockerFile, </p> <blockquote> <p>--spring.config.location=file:/conf/application.properties</p> </blockquote> <p>and in deployments.yaml like the below one. </p> <pre><code>"spec": { "volumes": [ { "name": "app-prop", "configMap": { "name": "app-config", "items": [ { "key": "application.properties", "path": "application.properties" } ] } } ], "containers": [ { "name": "app-service", "image": "docker.com/app-service", "volumeMounts": [ { "name": "app-prop", "mountPath": "/conf" } ], "imagePullPolicy": "IfNotPresent" } ], </code></pre> <p>The property server.ssl.keyStore is in application.properties. </p>
<p>Since those keystores are password protected, arguably you could just bundle them into the docker image and reference them relative to the image's root, then inject the passwords via enviroment variables that are set from <code>Secret</code>s:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: spring-ssl data: keyPass: bAsE64TxT --- kind: Pod # etc containers: - env: - name: SERVER_SSL_KEY_STORE_PASSWORD valueFrom: secretKeyRef: key: spring-ssl name: keyPass </code></pre> <p>Spring Boot will grab those correctly formatted environment variables and apply them on top of any other configuration values, yielding (hopefully) the correct assignment without having to hard-code the passwords anywhere easily accessible</p> <p>However, if you'd prefer to keep even the jks out of the docker image, then one could feel free to stash the jks in the same <code>Secret</code>, or even a separate one, and <code>volumeMount</code> the jks in place</p>
<hr> <p>I'm trying to translate a bunch of docker-compose files into kubernetes yamls. I have used <a href="https://github.com/kubernetes/kompose" rel="nofollow noreferrer">kompose</a>, which has gotten me part way, but I'm getting stuck on one particular part for multiple containers.</p> <p>This is one of the containers. Notice the docker container is mounting <code>/u/data/. . .</code> to <code>/var/lib/mysql</code>. This is actually necessary as the mysql directory contains the database and configurations.</p> <pre><code>server1-backend-mysql: image: mysql container_name: server-backend-mysql restart: always volumes: - /u/data/server-backend-mysql:/var/lib/mysql networks: - eolnet </code></pre> <p>What is the correct way to make this happen in Kubernetes? Note that for k8 I will be mounting an nfs volume (this is only for testing purposes).</p> <p>I did look into <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">hostpath</a>, but so far no luck.</p>
<p>When declaring a Pod, specify the volume at <code>spec.volumes</code>, and then the volume mount at <code>spec.containers[*].volumeMounts</code>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: server1-backend-mysql spec: containers: - image: mysql name: mysql volumeMounts: - mountPath: /var/lib/mysql name: mysql-data volumes: - name: mysql-data hostPath: path: /u/data/... type: Directory </code></pre> <p>When declaring a <code>Deployment</code> or <code>StatefulSet</code> (which you <em>should</em> do instead of declaring a Pod), move the respective configurations to <code>spec.template.spec.volumes</code> and <code>spec.template.spec.containers[*].volumeMounts</code>. For more information, have a look at the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">documentation</a>.</p> <hr/> <p>As a side note unrelated to your question: if you're planning to run MySQL from a NFS volume, keep in mind that running MySQL from NFS is possible, but <a href="https://dev.mysql.com/doc/refman/5.7/en/disk-issues.html" rel="nofollow noreferrer">not something that MySQL is really optimized for</a>. Be sure to configure your MySQL server accordingly, and check if your environment permits you to use a networked block device (and not a network file system) like a Ceph RBD volume or similar.</p>
<p>I have deployed kubernetes on a virt-manager vm following this link </p> <p><a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/</a> </p> <p>When i join my another vm to the cluster i find that the kube-dns is in pending state.</p> <pre><code>root@ubuntu1:~# kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-ubuntu1 1/1 Running 0 7m kube-system kube-apiserver-ubuntu1 1/1 Running 0 8m kube-system kube-controller-manager-ubuntu1 1/1 Running 0 8m kube-system kube-dns-86f4d74b45-br6ck 0/3 Pending 0 8m kube-system kube-proxy-sh9lg 1/1 Running 0 8m kube-system kube-proxy-zwdt5 1/1 Running 0 7m kube-system kube-scheduler-ubuntu1 1/1 Running 0 8m root@ubuntu1:~# kubectl --namespace=kube-system describe pod kube-dns-86f4d74b45-br6ck Name: kube-dns-86f4d74b45-br6ck Namespace: kube-system Node: &lt;none&gt; Labels: k8s-app=kube-dns pod-template-hash=4290830601 Annotations: &lt;none&gt; Status: Pending IP: Controlled By: ReplicaSet/kube-dns-86f4d74b45 Containers: kubedns: Image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 Ports: 10053/UDP, 10053/TCP, 10055/TCP Host Ports: 0/UDP, 0/TCP, 0/TCP Args: --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2 Limits: memory: 170Mi Requests: cpu: 100m memory: 70Mi Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3 Environment: PROMETHEUS_PORT: 10055 Mounts: /kube-dns-config from kube-dns-config (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro) dnsmasq: Image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 Ports: 53/UDP, 53/TCP Host Ports: 0/UDP, 0/TCP Args: -v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 Requests: cpu: 150m memory: 20Mi Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: &lt;none&gt; Mounts: /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro) sidecar: Image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 Port: 10054/TCP Host Port: 0/TCP Args: --v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV Requests: cpu: 10m memory: 20Mi Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-4fjt4 (ro) Conditions: Type Status PodScheduled False Volumes: kube-dns-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: kube-dns Optional: true kube-dns-token-4fjt4: Type: Secret (a volume populated by a Secret) SecretName: kube-dns-token-4fjt4 Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: CriticalAddonsOnly node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 6m (x7 over 7m) default-scheduler 0/1 nodes are 
available: 1 node(s) were not ready.
  Warning  FailedScheduling  3s (x19 over 6m)  default-scheduler  0/2 nodes are available: 2 node(s) were not ready.
</code></pre>

<p>Can anyone help me deconstruct this and find the actual issue?</p>

<p>Any help would be of great use.</p>

<p>Thanks in advance.</p>
<p>In addition to what @justcompile has wrote you will need a minimum of <strong>2 CPU cores</strong> in order to run all pods from the <em>kube-system</em> namespace without issues. </p> <p>You need to verify how much resources you have on that box and compare it with CPU reservations which each of Pods make. </p> <p>For example in the provided by you output I can see that your DNS service tries to make a reservetion for 10% of CPU core:</p> <pre><code>Requests: cpu: 100m </code></pre> <p>You can check each of deployed pods and their CPU reservations using:</p> <pre><code>kubectl describe pods --namespace=kube-system </code></pre>
<p>i want to use local volume that is mounted on my node on path: /mnts/drive. so i created a storageclass (as shown in documentation for local storageclass), and created a PVC and a simple pod which uses that volume.</p> <p>so these are the configurations used:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: local-fast provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysampleclaim spec: storageClassName: local-fast accessModes: - ReadWriteOnce resources: requests: storage: 3Gi --- apiVersion: v1 kind: Pod metadata: name: mysamplepod spec: containers: - name: frontend image: nginx:1.13 volumeMounts: - mountPath: "/var/www/html" name: myvolume volumes: - name: myvolume persistentVolumeClaim: claimName: mysampleclaim </code></pre> <p>and when i try to create this yaml file gives me an error, don't know what i am missing:</p> <pre><code> Unable to mount volumes for pod "mysamplepod_default(169efc06-3141-11e8-8e58-02d4a61b9de4)": timeout expired list of unattached/unmounted volumes=[myvolume] </code></pre>
<p>If you want to use local volume that is mounted on the node on <code>/mnts/drive</code> path, you just need to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volume in your pod:</p> <blockquote> <p>A hostPath volume mounts a file or directory from the host node’s filesystem into your pod.</p> </blockquote> <p>The final <code>pod.yaml</code> is:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mysamplepod spec: containers: - name: frontend image: nginx:1.13 volumeMounts: - mountPath: "/var/www/html" name: myvolume volumes: - name: myvolume hostPath: # directory location on host path: /mnts/drive </code></pre>
<p>I'm using the official stable ZooKeeper Helm chart for Kubernetes <a href="https://github.com/kubernetes/charts/blob/b40c8c395d8acfa428a865d8aeb9c607e0cce69c/incubator/zookeeper/templates/statefulset.yaml#L39" rel="nofollow noreferrer">which pulls a ZooKeeper Docker image</a> from Google's sample images on Google Container Registry. </p> <p>That ZooKeeper image is available <a href="https://console.cloud.google.com/gcr/images/google-samples/GLOBAL/k8szk@sha256:32212dd754b6280ac6c96b615605300f1f060baad1fdf68abd370d2ffb07ae47/details/info?tag=v2" rel="nofollow noreferrer">here</a>, however, I can't seem to find any reference to the Dockerfile for how it is built or if its Dockerfile is generated from some other representation (e.g., <a href="https://github.com/bazelbuild/rules_docker" rel="nofollow noreferrer">via Bazel</a>). I'd like to know info like what else is installed on the image, what OS it's based on, etc.</p> <p>In general are Dockerfiles for the Google sample images publicly hosted on GCR available?</p> <p>For the ZooKeeper image specifically, I'd like to determine how it compares to <a href="https://hub.docker.com/r/confluentinc/cp-zookeeper/" rel="nofollow noreferrer">Confluent's ZooKeeper image</a>: is it similar? Does it bundle something extra for running ZooKeeper on top of Kubernetes? etc</p> <p>So far I've done quite a bit of Googling, read through the <a href="https://cloud.google.com/container-registry/docs/" rel="nofollow noreferrer">Google Container Registry docs</a>, poked around the <a href="https://github.com/google" rel="nofollow noreferrer">Google org on GitHub</a>, and <a href="https://stackoverflow.com/search?q=google%20container%20registry%20dockerfile">searched Stack Overflow</a> but haven't been able to locate this info.</p>
<p>Please do not use images from <code>gcr.io/google-samples</code> for production use.</p> <p>These images are used solely for GKE tutorials on cloud.google.com and they are not actively maintained, in the sense that we don't rebuild them for security vulnerabilities for the components on the images etc.</p> <p>Source codes for some of the images are at <a href="https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/</a>.</p>
<p>I am using <a href="https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation" rel="nofollow noreferrer">Mount propagation</a> feature of Kubernetes to check the health of mount points of certain type. I create a daemonset and run a script which would do a simple <code>ls</code> on these mount points. I noticed that new mount points are not getting listed from the pods. Is this the expected behaviour.</p> <pre><code>volumeMounts: - mountPath: /host name: host-kubelet mountPropagation: HostToContainer volumes: - name: host-kubelet hostPath: path: /var/lib/kubelet </code></pre> <p>Related Issue : <a href="https://github.com/kubernetes/kubernetes/issues/44713" rel="nofollow noreferrer">hostPath containing mounts do not update as they change on the host #44713</a></p>
<p>In brief, <strong>Mount propagation</strong> allows sharing volumes mounted by a Container to other Containers in the same Pod, or even to other Pods on the same node.<br> Mount propagation of a volume is controlled by <code>mountPropagation</code> field in <code>Container.volumeMounts</code>. Its values are:</p> <ul> <li><code>HostToContainer</code> - one way propagation, from host to container. If you mount anything inside the volume, the Container will see it there.</li> <li><code>Bidirectional</code> - In addition to propagation from host to container, all volume mounts created by the Container will be propagated back to the host, so all Containers of all Pods that use the same volume will see it as well.</li> </ul> <p>Based on <a href="https://v1-9.docs.kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">documentation</a> the Mount propagation feature is in alpha state for clusters v1.9, and going to be beta on v1.10</p> <p>I've reproduced your case on kubernetes v1.9.2 and found that it completely ignores <code>MountPropagation</code> configuration parameter. If you try to check current state of the <code>DaemonSet</code> or <code>Deployment</code>, you'll see that this option is missed from the listed yaml configuration</p> <pre><code>$ kubectl get daemonset --export -o yaml </code></pre> <p>If you try to run just docker container with mount propagation option you may see it is working as expected:</p> <pre><code>docker run -d -it -v /tmp/mnt:/tmp/mnt:rshared ubuntu </code></pre> <p>Comparing docker container configuration with kubernetes pod container in the volume mount section, you may see that the last flag (<code>shared</code>/<code>rshared</code>) is missing in kubernetes container.</p> <p>And that's why it happens in <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters#about_feature_stages" rel="noreferrer">Google kubernetes clusters</a> and may happen to clusters managed by other providers:</p> <blockquote> <p>To ensure stability and production quality, normal Kubernetes Engine clusters only enable features that are beta or higher. Alpha features are not enabled on normal clusters because they are not production-ready or upgradeable.</p> <p>Since Kubernetes Engine automatically upgrades the Kubernetes control plane, enabling alpha features in production could jeopardize the reliability of the cluster if there are breaking changes in a new version.</p> </blockquote> <p><a href="https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md#alpha-beta-and-stable-versions" rel="noreferrer">Alpha level features availability</a>: committed to main kubernetes repo; appears in an official release; feature is disabled by default, but may be enabled by flag (in case you are able to set flags)</p> <p>Before mount propagation can work properly on some deployments (CoreOS, RedHat/Centos, Ubuntu) mount share must be configured correctly in Docker as shown below.</p> <p>Edit your Docker’s <code>systemd</code> service file. Set <code>MountFlags</code> as follows:</p> <pre><code>MountFlags=shared </code></pre> <p>Or, remove <code>MountFlags=slave</code> if present. Then restart the Docker daemon:</p> <pre><code> $ sudo systemctl daemon-reload $ sudo systemctl restart docker </code></pre>
<p>Usually ingress rewrite target works as follows:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: / </code></pre> <p>This will rewrite the target of your service names as they are in the root directory. So if I have this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: demo-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: tls: rules: http: paths: - path: / backend: serviceName: front-main servicePort: 80 - path: /api backend: serviceName: back-main servicePort: 80 </code></pre> <p>My services are going to receive data as they are in <code>/</code>. However, I would like for my service <code>front-main</code> to send root <code>/</code> and for the server <code>back-main</code> to send <code>/someotherpath/</code>. How can I do this?</p> <p>Is there something like the following line?</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: "front-main: / ; back-main: /someotherpath" </code></pre> <p>I don't seem to find the answer in the documentation.</p>
<p>Unfortunately, Ingress based on free version of Nginx do not have that feature.</p> <p>But, if you can use <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Nginx Plus based Ingress</a>, you can do it by annotation.</p> <p>Here is an <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/rewrites" rel="nofollow noreferrer">example</a> from official repo:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: cafe-ingress annotations: nginx.org/rewrites: &quot;serviceName=tea-svc rewrite=/;serviceName=coffee-svc rewrite=/beans/&quot; spec: rules: - host: cafe.example.com http: paths: - path: /tea/ backend: serviceName: tea-svc servicePort: 80 - path: /coffee/ backend: serviceName: coffee-svc servicePort: 80 </code></pre> <p>Below are the examples of how the URI of requests to the <code>tea-svc</code> are rewritten (Note that the /tea requests are redirected to /tea/).</p> <pre><code>/tea/ -&gt; / /tea/abc -&gt; /abc </code></pre> <p>Below are the examples of how the URI of requests to the <code>coffee-svc</code> are rewritten (Note that the /coffee requests are redirected to /beans/).</p> <pre><code>/coffee/ -&gt; /beans/ /coffee/abc -&gt; /beans/abc </code></pre>
<p>I created a cluster with kops in the AWS.</p> <pre><code>sudo kops create cluster --name=k8s.ehh.fun --state=s3://kops-state-ehh000 --zones=us-east-1a --node-count=3 --node-size=t2.micro --master-size=t2.micro --dns-zone=k8s.ehh.fun </code></pre> <p>And now a Would like to change the node-count without destroy the cluster. How can I do that? I tried :</p> <pre><code> sudo kops update cluster --name=k8s.ehh.fun --state=s3://kops-state-ehh000 --node-count=3 --node-size=t2.micro </code></pre> <p>But I got : Error: unknown flag: --node-count</p>
<p>You can change the node count by editing the <code>nodes</code> instance group:</p> <pre><code>kops edit instancegroup nodes </code></pre> <p>This will open an editor in which you can edit your instance group's specification and increase the code count. After saving and exiting, call:</p> <pre><code>kops update cluster &lt;cluster-name&gt; --yes </code></pre> <p>This will automatically update your auto-scaling group and start additional instances (or terminate them if you decreased the node count).</p> <p>See <a href="https://github.com/kubernetes/kops/blob/master/docs/instance_groups.md#resize-an-instance-group" rel="noreferrer">the documentation</a> for more information.</p>
<p>I've a <code>Deployment</code> object where I expose the POD ID using the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="noreferrer">Downward API</a>. That works fine. However, I want to set up another env variable, log path, with reference to the POD ID. But, setting that variable value to <code>/var/log/mycompany/${POD_ID}/logs</code> isn't working, no logs are created in the container. I can make the entrypoint script or the app aware of the POD ID, and build up the log path, but I'd rather not do that.</p>
<p>The correct syntax is to use <code>$(FOO)</code>, as is described in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvar-v1-core" rel="noreferrer">v1.EnvVar <code>value:</code> documentation</a>; the syntax you have used is &quot;shell&quot; syntax, which isn't the way kubernetes interpolates variables. So:</p> <pre><code>containers: - env: - name: POD_ID valueFrom: # etc etc - name: LOG_PATH value: /var/log/mycompany/$(POD_ID)/logs </code></pre> <p>Also please note that, as mentioned in the Docs, the variable to expand must be defined before the variable referencing it.</p>
<p>I'm running a Kubernetes cluster and HA redis VMs on the same VPC on Google Cloud Platform. ICMP and traffic on all TCP and UDP ports is allowed on the subnet 10.128.0.0/20. Kubernetes has its own internal network, 10.12.0.0/14, but the cluster runs on VMs inside of 10.128.0.0/20, same as redis VM. </p> <p>However, even though the VMs inside of 10.128.0.0/20 see each other, I can't ping the same VM or connect to its ports while running commands from Kubernetes pod. What would I need to modify either in k8s or in GCP firewall rules to allow for this - I was under impression that this should work out of the box and pods would be able to access the same network that their nodes were running on?</p> <p>kube-dns is up and running, and this k8s 1.9.4 on GCP. </p>
<p>I've tried to reproduce your issue with the same configuration, but it works fine. I've create a network called "myservernetwork1" with subnet 10.128.0.0/20. I started a cluster in this subnet and created 3 firewall rules to allow icmp, tcp and udp traffic inside the network.</p> <pre><code>$ gcloud compute firewall-rules list --filter="myservernetwork1" myservernetwork1-icmp myservernetwork1 INGRESS 1000 icmp myservernetwork1-tcp myservernetwork1 INGRESS 1000 tcp myservernetwork1-udp myservernetwork1 INGRESS 1000 udp </code></pre> <p>I allowed all TCP, UDP and ICMP traffic inside the network. I created a rule for icmp protocol for my sub-net using this command:</p> <pre><code>gcloud compute firewall-rules create myservernetwork1-icmp \ --allow icmp \ --network myservernetwork1 \ --source-ranges 10.0.0.0/8 </code></pre> <p>I’ve used /8 mask because I wanted to cover all addresses in my network. Check your GCP firewall settings to make sure those are correct.</p>
<p>I'm trying to update the deployment from the application of Go in Cluster, but it fails with an authorization error.</p> <p>GKE Master version 1.9.4-gke.1</p> <pre class="lang-go prettyprint-override"><code>package main import ( "fmt" "github.com/pkg/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/rest" ) func updateReplicas(namespace string, name string, replicas int32) error { config, err := rest.InClusterConfig() if err != nil { return errors.Wrap(err, "failed rest.InClusterConfig") } clientset, err := kubernetes.NewForConfig(config) if err != nil { return errors.Wrap(err, "failed kubernetes.NewForConfig") } deployment, err := clientset.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{}) if err != nil { fmt.Printf("failed get Deployment %+v\n", err) return errors.Wrap(err, "failed get deployment") } deployment.Spec.Replicas = &amp;replicas fmt.Printf("Deployment %v\n", deployment) ug, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment) if err != nil { fmt.Printf("failed update Deployment %+v", err) return errors.Wrap(err, "failed update Deployment") } fmt.Printf("done update deployment %v\n", ug) return nil } </code></pre> <p>result message</p> <pre><code>failed get Deployment deployments.apps "land-node" is forbidden: User "system:serviceaccount:default:default" cannot get deployments.apps in the namespace "default": Unknown user "system:serviceaccount:default:default" </code></pre> <p>I have set the authority as follows, but is it not enough?</p> <p>deployment-editor.yaml</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: default name: deployment-editor rules: - apiGroups: [""] resources: ["deployments"] verbs: ["get", "list", "watch", "create", "update", "patch"] </code></pre> <p>editor-deployement.yaml</p> <pre><code>kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: editor-deployment namespace: default subjects: - kind: ServiceAccount name: default namespace: default roleRef: kind: ClusterRole name: deployment-editor apiGroup: rbac.authorization.k8s.io </code></pre>
<p>From <a href="https://stackoverflow.com/questions/49495189/unable-to-list-deployments-resources-using-rbac">Unable to list deployments resources using RBAC</a>.</p> <blockquote> <p>replicasets and deployments exist in the "extensions" and "apps" API groups, not in the legacy "" group</p> </blockquote> <pre><code>- apiGroups: - extensions - apps resources: - deployments - replicasets verbs: - get - list - watch - update - create - patch </code></pre>
<p>I am trying to mount a config file created as a secret to a specific path in a pod. However, the mount path on the pod is always generated as a directory. Could someone tell me what am I doing wrong? </p> <pre><code>&gt; kubectl get secrets config NAME TYPE DATA AGE config Opaque 1 29m </code></pre> <p>pod yaml: </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pd-plus-secret spec: containers: - image: ubuntu name: bash stdin: true tty: true volumeMounts: - name: "config" mountPath: "/mnt/configFile" subPath: "configFile" volumes: - name: "config" secret: secretName: "config" </code></pre> <p>After the pod is created, I try to read the file on the pod, and get: </p> <pre><code>cat: /mnt/configFile: Is a directory </code></pre> <p>I am using: kubernetes client version 1.9.0 and server version 1.8.6</p>
<p>The syntax you want is to select just the secret's <code>items:</code>, and not try to use <code>subPath</code> in that manner. It's <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#secretvolumesource-v1-core" rel="nofollow noreferrer">documented in <code>SecretVolumeSource</code></a></p> <pre><code> volumes: - name: config secret: secretName: config items: - key: configFile path: configFile </code></pre>
<p>I have defined a parent chart called base-microservice and is available at mycompany.github.com/pages/base-microservice</p> <p>Structure is as follows :</p> <pre><code> base-microservice - templates - deployment.yaml - ingress.yaml - service.yaml - Chart.yaml - values.yaml - index.yaml - base-microservice-0.1.0.tgz </code></pre> <p>I would like to define a customapp chart which inherits from the parent chart.</p> <p>Structure is as follows :</p> <pre><code>customapp-service - customapp - Chart.yaml - charts - requirements.yaml - values.yaml - src </code></pre> <p>requirements.yaml is as follows :</p> <pre><code>dependencies: - name: base-microservice repository: https://mycompany.github.com/pages/base-microservice version: 0.1.0 </code></pre> <p>When I do</p> <pre><code>helm install --repo https://mycompany.github.com/pages/base-microservice --name customapp --values customapp/values.yaml </code></pre> <p>It creates and deploys base-microservice instead of customapp.. in other words my Chart.yaml and values.yaml in custom app chart don’t override what was defined in the base one..</p> <p>Kindly advice how to structure the app ?</p>
<p>You may want to read the <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="noreferrer">Subcharts and Global Values</a> doc page within Helm's repo. It covers Creating a Subchart, Adding Values and a Template to the Subchart, Overriding Values from a Parent Chart, Global Chart Values, and Sharing Templates with Subcharts. It sounds like you want the example in <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#overriding-values-from-a-parent-chart" rel="noreferrer">Overriding Values from a Parent Chart</a>. Note that all values passed from the parent to the subchart are nested below a YAML key by the same name as the subchart. <code>--set</code> syntax is the same concept, just prefix the key with the subchart name (<code>--set subchartname.subchartkey=myvalue</code>.</p> <p>Also, docs.helm.sh has good, consolidated Helm documentation, and the <a href="https://helm.sh/docs/topics/charts/#scope-dependencies-and-values" rel="noreferrer">Scope, Dependencies, and Values</a> section of Intro To Charts gives more context to the use case above as well as others.</p>
<p>I am looking for any code example that deploys Helm chart without via CLI call. The reason behind this is:</p> <ol> <li>My company got couple existing pipelines written with AWS CodePipeline / CodeBuild / CodeDeploy. They don't like to investigate more time on re-writing all pipelines.</li> <li>My company does not have any plan to maintain extra instance(s) just for deployment.</li> <li>AWS CodePipeline could trigger Lambda, and theoretically I could write some Python code to do the job if Helm provides Python client.</li> </ol> <p>Currently I steal Lambda function from this:</p> <p><a href="https://github.com/aws-samples/aws-kube-codesuite" rel="nofollow noreferrer">https://github.com/aws-samples/aws-kube-codesuite</a></p> <p>Whereas this does not provide same level features as Helm does. We have to provide our release-name system, template system, etc. In other words, it functions poorly if I have big change on the manifest, and does not handle the first time deployment (means deploying manifest to an empty K8S cluster) Also we use Github, although not really relevant.</p> <p>For python client of Helm chart, the best I could find is pyhelm listed on pip. But it does not have sample code for calling deployment, and from some user group / forum feedback the installation process is painful. Somebody also points to azure/draft and another repo but I have no clue on how to come out a solid example that only use Python to deploy Helm chart.</p> <p>Please let me know where I miss. Thanks.</p>
<p>You can find my fork of pyhelm with examples and Python3 support.</p> <pre><code>git clone [email protected]:andriisoldatenko/pyhelm.git cd pyhelm &amp;&amp; python setup.py install </code></pre> <h2>How to use Pyhelm</h2> <h3>First you need repo_url and chart name to download chart</h3> <pre><code>from pyhelm.repo import from_repo chart_path = chart_versions = from_repo('https://kubernetes-charts.storage.googleapis.com/', 'mariadb') print(chart_path) "/tmp/pyhelm-kibwtj8d/mongodb" </code></pre> <h3>Now you can see that chart folder of mongodb::</h3> <pre><code>In [3]: ls -la /tmp/pyhelm-kibwtj8d/mongodb total 40 drwxr-xr-x 7 andrii wheel 224 Mar 21 17:26 ./ drwx------ 3 andrii wheel 96 Mar 21 17:26 ../ -rwxr-xr-x 1 andrii wheel 5 Jan 1 1970 .helmignore* -rwxr-xr-x 1 andrii wheel 261 Jan 1 1970 Chart.yaml* -rwxr-xr-x 1 andrii wheel 4394 Jan 1 1970 README.md* drwxr-xr-x 8 andrii wheel 256 Mar 21 17:26 templates/ </code></pre> <h3>Next step to build ChartBuilder instance to manipulate with Tiller::</h3> <pre><code>from pyhelm.chartbuilder import ChartBuilder chart = ChartBuilder({'name': 'mongodb', 'source': {'type': 'directory', 'location': '/tmp/pyhelm-kibwtj8d/mongodb'}}) # than we can get chart meta data etc In [9]: chart.get_metadata() Out[9]: name: "mongodb" version: "0.4.0" description: "Chart for MongoDB" </code></pre> <h3>Install chart::</h3> <pre><code>from pyhelm.chartbuilder import ChartBuilder from pyhelm.tiller import Tiller chart = ChartBuilder({'name': 'mongodb', 'source': {'type': 'directory', 'location': '/tmp/pyhelm-kibwtj8d/mongodb'}}) chart.install_release(chart.get_helm_chart(), dry_run=False, namespace='default') Out[9]: release { name: "fallacious-bronco" info { status { code: 6 } first_deployed { seconds: 1521647335 nanos: 746785000 } last_deployed { seconds: 1521647335 nanos: 746785000 } Description: "Dry run complete" } chart {.... } </code></pre>
<p>I'm new to k8s and trying to run 3-nodes (master + 2 workers) cluster (v1.9.6) in Vagrant (Ubuntu 16.04) from scratch without any automation. I believe this is a right way to get hands-on experience for the beginner like me. To be honest, I've already spent on this more than a week and feel desperate.</p> <p>My problem is that coredns pod (same with kube-dns) can't reach out to kube-apiserver via ClusterIP. It looks like this:</p> <pre><code>vagrant@master-0:~$ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.0.0.1 &lt;none&gt; 443/TCP 2d kube-system kube-dns ClusterIP 10.0.30.1 &lt;none&gt; 53/UDP,53/TCP 2h vagrant@master-0:~$ kubectl logs coredns-5c6d9fdb86-mffzk -n kube-system E0330 15:40:45.476465 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:319: Failed to list *v1.Namespace: Get https://10.0.0.1:443/api/v1/namespaces?limit=500&amp;resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout E0330 15:40:45.478241 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:312: Failed to list *v1.Service: Get https://10.0.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout E0330 15:40:45.478289 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:314: Failed to list *v1.Endpoints: Get https://10.0.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout </code></pre> <p>At the same time I can ping 10.0.0.1 from any machine and from inside the pods (used busybox to test) but curl doesn't work.</p> <p><strong>Master</strong></p> <p><em>interfaces</em></p> <pre><code>br-e468013fba9d Link encap:Ethernet HWaddr 02:42:8f:da:d3:35 inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) docker0 Link encap:Ethernet HWaddr 02:42:d7:91:fd:9b inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) enp0s3 Link encap:Ethernet HWaddr 02:74:f2:80:ad:a4 inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::74:f2ff:fe80:ada4/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3521 errors:0 dropped:0 overruns:0 frame:0 TX packets:2116 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:784841 (784.8 KB) TX bytes:221888 (221.8 KB) enp0s8 Link encap:Ethernet HWaddr 08:00:27:45:ed:ec inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe45:edec/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:322839 errors:0 dropped:0 overruns:0 frame:0 TX packets:329938 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:45879993 (45.8 MB) TX bytes:89279972 (89.2 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:249239 errors:0 dropped:0 overruns:0 frame:0 TX packets:249239 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:75677355 (75.6 MB) TX bytes:75677355 (75.6 MB) </code></pre> <p><em>iptables</em></p> 
<pre><code>-P INPUT ACCEPT -P FORWARD ACCEPT -P OUTPUT ACCEPT -N DOCKER -N DOCKER-ISOLATION -N DOCKER-USER -A FORWARD -j DOCKER-USER -A FORWARD -j DOCKER-ISOLATION -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o docker0 -j DOCKER -A FORWARD -i docker0 ! -o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -o br-e468013fba9d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o br-e468013fba9d -j DOCKER -A FORWARD -i br-e468013fba9d ! -o br-e468013fba9d -j ACCEPT -A FORWARD -i br-e468013fba9d -o br-e468013fba9d -j ACCEPT -A DOCKER-ISOLATION -i br-e468013fba9d -o docker0 -j DROP -A DOCKER-ISOLATION -i docker0 -o br-e468013fba9d -j DROP -A DOCKER-ISOLATION -j RETURN -A DOCKER-USER -j RETURN </code></pre> <p><em>routes</em></p> <pre><code>Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 enp0s3 10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3 172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0 172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-e468013fba9d 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s8 </code></pre> <p><em>kube-apiserver (docker-compose)</em></p> <pre><code>version: '3' services: kube_apiserver: image: gcr.io/google-containers/hyperkube:v1.9.6 restart: always network_mode: host container_name: kube-apiserver ports: - "8080" volumes: - "/var/lib/kubernetes/ca-key.pem:/var/lib/kubernetes/ca-key.pem" - "/var/lib/kubernetes/ca.pem:/var/lib/kubernetes/ca.pem" - "/var/lib/kubernetes/kubernetes.pem:/var/lib/kubernetes/kubernetes.pem" - "/var/lib/kubernetes/kubernetes-key.pem:/var/lib/kubernetes/kubernetes-key.pem" command: ["/usr/local/bin/kube-apiserver", "--admission-control", "Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota", "--advertise-address", "192.168.0.1", "--etcd-servers", "http://192.168.0.1:2379,http://192.168.0.2:2379,http://192.168.0.3:2379", "--insecure-bind-address", "127.0.0.1", "--insecure-port", "8080", "--kubelet-https", "true", "--service-cluster-ip-range", "10.0.0.0/16", "--allow-privileged", "true", "--runtime-config", "api/all", "--service-account-key-file", "/var/lib/kubernetes/ca-key.pem", "--client-ca-file", "/var/lib/kubernetes/ca.pem", "--tls-ca-file", "/var/lib/kubernetes/ca.pem", "--tls-cert-file", "/var/lib/kubernetes/kubernetes.pem", "--tls-private-key-file", "/var/lib/kubernetes/kubernetes-key.pem", "--kubelet-certificate-authority", "/var/lib/kubernetes/ca.pem", "--kubelet-client-certificate", "/var/lib/kubernetes/kubernetes.pem", "--kubelet-client-key", "/var/lib/kubernetes/kubernetes-key.pem"] </code></pre> <p><em>kube-controller-manager (docker-compose)</em></p> <pre><code>version: '3' services: kube_controller_manager: image: gcr.io/google-containers/hyperkube:v1.9.6 restart: always network_mode: host container_name: kube-controller-manager ports: - "10252" volumes: - "/var/lib/kubernetes/ca-key.pem:/var/lib/kubernetes/ca-key.pem" - "/var/lib/kubernetes/ca.pem:/var/lib/kubernetes/ca.pem" command: ["/usr/local/bin/kube-controller-manager", "--allocate-node-cidrs", "true", "--cluster-cidr", "10.10.0.0/16", "--master", "http://127.0.0.1:8080", "--port", "10252", "--service-cluster-ip-range", "10.0.0.0/16", "--leader-elect", "false", "--service-account-private-key-file", "/var/lib/kubernetes/ca-key.pem", "--root-ca-file", "/var/lib/kubernetes/ca.pem"] </code></pre> <p><em>kube-scheduler (docker-compose)</em></p> <pre><code>version: '3' services: kube_scheduler: 
image: gcr.io/google-containers/hyperkube:v1.9.6 restart: always network_mode: host container_name: kube-scheduler ports: - "10252" command: ["/usr/local/bin/kube-scheduler", "--master", "http://127.0.0.1:8080", "--port", "10251"] </code></pre> <p><strong>Worker0</strong></p> <p><em>interfaces</em></p> <pre><code>br-c5e101440189 Link encap:Ethernet HWaddr 02:42:60:ba:c9:81 inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) cbr0 Link encap:Ethernet HWaddr ae:48:89:15:60:fd inet addr:10.10.0.1 Bcast:10.10.0.255 Mask:255.255.255.0 inet6 addr: fe80::a406:b0ff:fe1d:1d85/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1149 errors:0 dropped:0 overruns:0 frame:0 TX packets:409 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:72487 (72.4 KB) TX bytes:35650 (35.6 KB) enp0s3 Link encap:Ethernet HWaddr 02:74:f2:80:ad:a4 inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::74:f2ff:fe80:ada4/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3330 errors:0 dropped:0 overruns:0 frame:0 TX packets:2269 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:770147 (770.1 KB) TX bytes:246770 (246.7 KB) enp0s8 Link encap:Ethernet HWaddr 08:00:27:07:69:06 inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe07:6906/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:268762 errors:0 dropped:0 overruns:0 frame:0 TX packets:258080 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:48488207 (48.4 MB) TX bytes:25791040 (25.7 MB) flannel.1 Link encap:Ethernet HWaddr 86:8e:2f:c4:98:82 inet addr:10.10.0.0 Bcast:0.0.0.0 Mask:255.255.255.255 inet6 addr: fe80::848e:2fff:fec4:9882/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:8 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:2955 errors:0 dropped:0 overruns:0 frame:0 TX packets:2955 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1 RX bytes:218772 (218.7 KB) TX bytes:218772 (218.7 KB) vethe5d2604 Link encap:Ethernet HWaddr ae:48:89:15:60:fd inet6 addr: fe80::ac48:89ff:fe15:60fd/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:10 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:828 (828.0 B) </code></pre> <p><em>iptables</em></p> <pre><code>-P INPUT ACCEPT -P FORWARD ACCEPT -P OUTPUT ACCEPT -N DOCKER -N DOCKER-ISOLATION -N DOCKER-USER -N KUBE-FIREWALL -N KUBE-FORWARD -N KUBE-SERVICES -A INPUT -j KUBE-FIREWALL -A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A FORWARD -j DOCKER-USER -A FORWARD -m comment --comment "kubernetes forward rules" -j KUBE-FORWARD -A FORWARD -s 10.0.0.0/16 -j ACCEPT -A FORWARD -d 10.0.0.0/16 -j ACCEPT -A OUTPUT -j KUBE-FIREWALL -A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A DOCKER-USER -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for 
dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT -A KUBE-FORWARD -s 10.10.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-FORWARD -d 10.10.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT </code></pre> <p><em>routes</em></p> <pre><code>Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 enp0s3 10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3 10.10.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cbr0 10.10.1.0 10.10.1.0 255.255.255.0 UG 0 0 0 flannel.1 172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-c5e101440189 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s8 </code></pre> <p><em>kubelet (systemd-service)</em></p> <pre><code>[Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes #After=docker.service #Requires=docker.service [Service] ExecStart=/usr/local/bin/kubelet \ --allow-privileged=true \ --anonymous-auth=false \ --authorization-mode=AlwaysAllow \ --cloud-provider= \ --cluster-dns=10.0.30.1 \ --cluster-domain=cluster.local \ --node-ip=192.168.0.2 \ --pod-cidr=10.10.0.0/24 \ --kubeconfig=/var/lib/kubelet/kubeconfig \ --runtime-request-timeout=15m \ --hostname-override=worker0 \ # --read-only-port=10255 \ --client-ca-file=/var/lib/kubernetes/ca.pem \ --tls-cert-file=/var/lib/kubelet/worker0.pem \ --tls-private-key-file=/var/lib/kubelet/worker0-key.pem Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target </code></pre> <p><em>kube-proxy (systemd-service)</em></p> <pre><code>[Unit] Description=Kubernetes Kube Proxy Documentation=https://github.com/kubernetes/kubernetes #After=docker.service #Requires=docker.service [Service] ExecStart=/usr/local/bin/kube-proxy \ --cluster-cidr=10.10.0.0/16 \ --kubeconfig=/var/lib/kube-proxy/kubeconfig \ --v=5 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target </code></pre> <p>Worker1 configuration is pretty similar to worker0.</p> <p>If any additional info required, please let me know.</p>
<p>According to the <a href="https://kubernetes.io/docs/reference/generated/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a> documentation:</p> <pre><code>--bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank, all interfaces will be used (0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces). (default 0.0.0.0) --secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 6443) </code></pre> <p>As far as I see, the flags <code>--bind-address</code> and <code>--secure-port</code> wasn't defined in your <code>kube-apiserver</code> configuration, so by default <code>kube-apiserver</code> listens https connections on <code>0.0.0.0:6443</code>. </p> <p>So, in order to solve your issue, just add <code>--secure-port</code> flag to the <code>kube-apiserver</code> configuration:</p> <pre><code>"--secure-port", "443", </code></pre>
<p>I am running a dotnet core app using Kubernetes with Docker.</p> <p>The setup is as follows:</p> <p><strong>APP</strong></p> <p>In the dotnet core app, I have Kestrel server listening on port 8080 by setting the following in Program.cs:</p> <pre><code>public static IWebHost BuildWebHost(string[] args) =&gt; WebHost.CreateDefaultBuilder(args) .UseStartup&lt;Startup&gt;() .UseKestrel(options =&gt; { options.Listen(IPAddress.Loopback, 8080); }) .Build(); </code></pre> <p>I have tested the app build locally and the endpoint works as expected on <code>localhost:8080/api/test</code>.</p> <p><strong>DOCKER IMAGE</strong></p> <p>In the Dockerfile I have the following:</p> <pre><code>EXPOSE 8080 </code></pre> <p>I understand this to mean that the container gets built with an exposed 8080 port.</p> <p>Question 1: ...does this necessarily mean that the container's 8080 is mapped to the app's 8080? If not, how do I map this?</p> <p><strong>KUBERNETES (MINIKUBE)</strong></p> <p>Within Kubernetes (running locally on Minikube), I then use a Replication Controller to create 3 pods that each contain 1 docker container with the app. My RC file looks like this:</p> <pre><code>{ "apiVersion": "v1", "kind": "ReplicationController", "spec": { "replicas": 3, "selector": { "app": "myApp" }, "template": { "metadata": { "labels": { "app": "myApp" } }, "spec": { "containers": [ { "name": "my-app", "image": "myname/myapp:1.0", "ports": [ { "containerPort": 8080 } ] } ] } } } } </code></pre> <p>Notice <code>"ports": [{ "containerPort": 8080 }]</code>. The way I understand it, this means that the container port I want to expose is 8080.</p> <p>I then have a Kubernetes Service that exposes my 3 pods' 8080 ports through the endpoint <code>[minikubeIPAddress]:30001</code>:</p> <pre><code>{ "apiVersion": "v1", "kind": "Service", "spec": { "type": "NodePort", "ports": [ { "port": 8080, "nodePort": 30001, "protocol": "TCP" } ], "selector": { "app": "myApp" } } } </code></pre> <p>When I try to hit the endpoint <code>[minikubeIPAddress]:30001/api/test</code> I am getting a 'site can't be reached' error.</p> <p><strong>I had it working this morning when I was using the default HTTP port 80.</strong> The only changes that have been made are to the port numbers.</p> <p>Question 2: ...have I missed something here? Is there a connection along the line here that is still mapped to the default port of 80?</p> <p>Any help would be much appreciated.</p>
<p>use "targetPort" to indicate what port your pod is listening on. Your yaml spec should be something like:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: myApp spec: selector: app: myApp ports: - name: http port: 8080 targetPort: 8080 nodePort: 30001 </code></pre>
<p>So when I run <code>kubectl get all --all-namespaces</code> on different machines, I get different output and I can't understand why.</p> <p>First machine (output trimmed for brevity):</p> <pre><code>kube-system po/kubernetes-dashboard-5569448c6d-w2bdb 1/1 Running 0 16h kube-system po/service-mesh-traefik-5bb8d58bf6-gfdqd 1/1 Running 0 2d kube-system po/tiller-deploy-78d74d4979-rh7nv 1/1 Running 0 23h </code></pre> <p>Second machine:</p> <pre><code>kube-system kubernetes-dashboard-5569448c6d-w2bdb 1/1 Running 0 16h kube-system service-mesh-traefik-5bb8d58bf6-gfdqd 1/1 Running 0 2d kube-system tiller-deploy-78d74d4979-rh7nv 1/1 Running 0 23h </code></pre> <p>Ideally, I would like the output from the first machine, with the resource type prefixed to the ouput. Trying to comprehend the output of <code>get all</code> without it is extremely frustrating and due to the nature of the words in what I am searching for, it is even more frustrating trying to use Google to search for "get all".</p> <p>What is different? The cluster is the same, so it should be returning the same data. The first machine is kubectl version 1.9.2, second machine is 1.10.0. The cluster is running 1.8.7.</p>
<p>The answer lies in the details you mentioned above. The difference in kubectl version displays in different output format between the two machines.</p>
<p>Fresh Kubernetes (1.10.0) cluster stood up using kubeadm (1.10.0) install on RHEL7 bare metal VMs</p> <pre><code>Linux 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Dec 28 14:23:39 EST 2017 x86_64 x86_64 x86_64 GNU/Linux kubeadm.x86_64 1.10.0-0 installed kubectl.x86_64 1.10.0-0 installed kubelet.x86_64 1.10.0-0 installed kubernetes-cni.x86_64 0.6.0-0 installed </code></pre> <p>and 1.12 docker</p> <pre><code>docker-engine.x86_64 1.12.6-1.el7.centos installed docker-engine-selinux.noarch 1.12.6-1.el7.centos installed </code></pre> <p>With Flannel v0.9.1 pod network installed</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml </code></pre> <p>kubeadm init comand I ran is</p> <pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version stable-1.10 </code></pre> <p>which completes successfully and kubeadm join on worker node also successful. I can deploy busybox pod on the master and nslookups are successful, but as soon as I deploy anything to the worker node I get failed API calls from the worker node on the master:</p> <pre><code>E0331 03:28:44.368253 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://172.30.0.85:6443/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 172.30.0.85:6443: getsockopt: connection refused E0331 03:28:44.368987 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://172.30.0.85:6443/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 172.30.0.85:6443: getsockopt: connection refused E0331 03:28:44.735886 1 event.go:209] Unable to write event: 'Post https://172.30.0.85:6443/api/v1/namespaces/default/events: dial tcp 172.30.0.85:6443: getsockopt: connection refused' (may retry after sleeping) E0331 03:28:51.980131 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list endpoints at the cluster scope I0331 03:28:52.048995 1 controller_utils.go:1026] Caches are synced for service config controller I0331 03:28:53.049005 1 controller_utils.go:1026] Caches are synced for endpoints config controller </code></pre> <p>and nslookup times out</p> <pre><code>kubectl exec -it busybox -- nslookup kubernetes Server: 10.96.0.10 Address 1: 10.96.0.10 nslookup: can't resolve 'kubernetes' command terminated with exit code 1 </code></pre> <p>I have looked at many similar post on stackoverflow and github and all seem to be resolved with setting iptables -A FORWARD -j ACCEPT but not this time. 
I have also included the iptables from the worker node</p> <pre><code>Chain PREROUTING (policy ACCEPT) target prot opt source destination KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */ DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */ DOCKER all -- anywhere !loopback/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT) target prot opt source destination KUBE-POSTROUTING all -- anywhere anywhere /* kubernetes postrouting rules */ MASQUERADE all -- 172.17.0.0/16 anywhere RETURN all -- 10.244.0.0/16 10.244.0.0/16 MASQUERADE all -- 10.244.0.0/16 !base-address.mcast.net/4 RETURN all -- !10.244.0.0/16 box2.ara.ac.nz/24 MASQUERADE all -- !10.244.0.0/16 10.244.0.0/16 Chain DOCKER (2 references) target prot opt source destination RETURN all -- anywhere anywhere Chain KUBE-MARK-DROP (0 references) target prot opt source destination MARK all -- anywhere anywhere MARK or 0x8000 Chain KUBE-MARK-MASQ (6 references) target prot opt source destination MARK all -- anywhere anywhere MARK or 0x4000 Chain KUBE-NODEPORTS (1 references) target prot opt source destination Chain KUBE-POSTROUTING (1 references) target prot opt source destination MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000 Chain KUBE-SEP-HZC4RESJCS322LXV (1 references) target prot opt source destination KUBE-MARK-MASQ all -- 10.244.0.18 anywhere /* kube-system/kube-dns:dns-tcp */ DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.18:53 Chain KUBE-SEP-JNNVSHBUREKVBFWD (1 references) target prot opt source destination KUBE-MARK-MASQ all -- 10.244.0.18 anywhere /* kube-system/kube-dns:dns */ DNAT udp -- anywhere anywhere /* kube-system/kube-dns:dns */ udp to:10.244.0.18:53 Chain KUBE-SEP-U3UDAUPXUG5BP2NG (2 references) target prot opt source destination KUBE-MARK-MASQ all -- box1.ara.ac.nz anywhere /* default/kubernetes:https */ DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-U3UDAUPXUG5BP2NG side: source mask: 255.255.255.255 tcp to:172.30.0.85:6443 Chain KUBE-SERVICES (2 references) target prot opt source destination KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- anywhere 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https KUBE-MARK-MASQ udp -- !10.244.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain KUBE-SVC-TCOU7JCQXEZGVUNU udp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references) target prot opt source destination KUBE-SEP-HZC4RESJCS322LXV all -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references) target prot opt source destination KUBE-SEP-U3UDAUPXUG5BP2NG all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: 
KUBE-SEP-U3UDAUPXUG5BP2NG side: source mask: 255.255.255.255 KUBE-SEP-U3UDAUPXUG5BP2NG all -- anywhere anywhere /* default/kubernetes:https */ Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references) target prot opt source destination KUBE-SEP-JNNVSHBUREKVBFWD all -- anywhere anywhere /* kube-system/kube-dns:dns */ Chain WEAVE (0 references) target prot opt source destination Chain cali-OUTPUT (0 references) target prot opt source destination Chain cali-POSTROUTING (0 references) target prot opt source destination Chain cali-PREROUTING (0 references) target prot opt source destination Chain cali-fip-dnat (0 references) target prot opt source destination Chain cali-fip-snat (0 references) target prot opt source destination Chain cali-nat-outgoing (0 references) target prot opt source destination </code></pre> <p>Also I can see packets getting dropped on the flannel interface</p> <pre><code>flannel.1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0 inet6 fe80::a096:47ff:fe58:e438 prefixlen 64 scopeid 0x20&lt;link&gt; ether a2:96:47:58:e4:38 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 198 bytes 14747 (14.4 KiB) TX errors 0 dropped 27 overruns 0 carrier 0 collisions 0 </code></pre> <p>I have installed the same versions of Kubernetes/Docker and Flannel on other VMs and it works, but not sure why I am getting these failed API calls to the master proxy from the worker nodes in this install? I have several fresh installs and tried weave and calico pod networks as well with the same result.</p>
<p>Right, so I got this going by switching the pod network from flannel to Weave Net, after a <code>kubeadm reset</code> and a restart of the VMs.</p> <p>I'm still not sure what the problem was between flannel and my VMs, but I'm happy to have it working.</p>
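<p>For anyone hitting the same thing, the rough sequence I ended up with is sketched below. Treat it as an outline rather than a recipe: the Weave Net install URL is the one Weaveworks documented at the time, so check it against their current instructions before relying on it.</p> <pre><code># on every node: tear down the old cluster state, then reboot the VM
kubeadm reset

# on the master: re-initialise (--pod-network-cidr was only needed for flannel)
kubeadm init --kubernetes-version stable-1.10

# install Weave Net as the pod network
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# on each worker: re-join with the token printed by kubeadm init
kubeadm join ...
</code></pre>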
<p>I want to create a 3-node Mongo replica set in Kubernetes. I have created a headless service as below:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mongo labels: name: mongo spec: ports: - port: 27017 targetPort: 27017 clusterIP: None selector: role: mongo </code></pre> <p>I have also created a 3-node StatefulSet as below:</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: mongo spec: serviceName: "mongo" replicas: 3 template: metadata: labels: role: mongo environment: test spec: terminationGracePeriodSeconds: 10 containers: - name: mongo image: mongo command: - mongod - "--replSet" - rs0 - "--smallfiles" - "--noprealloc" ports: - containerPort: 27017 volumeMounts: - name: mongo-persistent-storage mountPath: /data/db resources: limits: cpu: 500m memory: 512Mi requests: cpu: 400m memory: 256Mi volumeClaimTemplates: - metadata: name: mongo-persistent-storage persistentVolumeClaim: claimName: fast spec: accessModes: [ "ReadWriteMany" ] resources: requests: storage: 10Gi </code></pre> <p>I have created the StatefulSet and the pods are up and running. Now, if I log in to one of the containers and set the config for the replica set in the mongo shell, I get an error. The commands I enter are:</p> <pre><code>&gt; config = { ... "_id" : "rs0", ... "members" : [ ... { ... _id: 1, ... host: 'mongo-0.mongo.demo.svc.cluster.local:27017' ... }, ... { ... _id: 2, ... host: 'mongo-1.mongo.demo.svc.cluster.local:27017', ... }, ... { ... _id: 3, ... host: 'mongo-2.mongo.demo.svc.cluster.local:27017' ... } ... ] ... } &gt; rs.initiate(config) </code></pre> <p>When I do this, I get the error below:</p> <pre><code>"errmsg" : "replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo-1.mongo.demo.svc.cluster.local:27017 failed with Connection refused, mongo-2.mongo.demo.svc.cluster.local:27017 failed with Connection refused" "code" : 74, "codeName" : "NodeNotFound", </code></pre> <p>I don't know how to debug this, because the containers are up and running. Can somebody help me with this? Thanks</p>
<p>If you check the logs of any of the Pods, you will see a warning:</p> <blockquote> <p>[initandlisten] ** WARNING: This server is bound to localhost. <br/> [initandlisten] ** Remote systems will be unable to connect to this server. </p> </blockquote> <p>So, you have to provide the <code>--bind_ip</code> flag to ensure that MongoDB listens for connections on the configured addresses, not just localhost. See more about IP binding in the <a href="https://docs.mongodb.com/manual/tutorial/deploy-replica-set/" rel="nofollow noreferrer">official MongoDB docs</a>.</p> <p>After the correction, the improved YAML will look like this (unchanged parts elided with <code>---</code>):</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: mongo spec: --- template: --- spec: --- containers: - --- command: - mongod - "--replSet" - rs0 - "--bind_ip" # Add these two - 0.0.0.0 # lines...!! - "--smallfiles" - "--noprealloc" --- --- </code></pre> <p>The <code>host</code> DNS names need one slight modification too. The Service is deployed in the <code>default</code> namespace, not in <code>demo</code>, so the correct config will be:</p> <pre><code>&gt; config = { "_id" : "rs0", "members" : [ { _id: 1, host: 'mongo-0.mongo.default.svc.cluster.local:27017' }, { _id: 2, host: 'mongo-1.mongo.default.svc.cluster.local:27017', }, { _id: 3, host: 'mongo-2.mongo.default.svc.cluster.local:27017' } ] } </code></pre>
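<p>Once both fixes are in place, a quick way to confirm the replica set came up is to run <code>rs.status()</code> from the first pod. This is only a sketch; it assumes the pod is named <code>mongo-0</code> as in the question and that the <code>mongo</code> shell is available in the image (it is for the image used here):</p> <pre><code># after running rs.initiate(config) in the mongo shell of mongo-0:
kubectl exec mongo-0 -- mongo --eval 'rs.status()'

# all three members should eventually report PRIMARY or SECONDARY
</code></pre>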
<p>I have an Ingress like the one below.</p> <blockquote> <p>kubectl get ing test-ingress -o yaml</p> </blockquote> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"tectonic"},"name":"test-ingress","namespace":"nstest"},"spec":{"rules":[{"host":"test.nstest.k8s.privatecloud.com","http":{"paths":[{"backend":{"serviceName":"test","servicePort":8080},"path":"/"}]}}]}} kubernetes.io/ingress.class: tectonic creationTimestamp: 2018-03-27T17:57:02Z generation: 1 name: test-ingress namespace: "nstest" resourceVersion: "19985087" selfLink: /apis/extensions/v1beta1/namespaces/nstest/ingresses/test-ingress uid: 4100bd04-31e8-11e8-8f7b-5cb9018ebebc spec: rules: - host: test.nstest.k8s.privatecloud.com http: paths: - backend: serviceName: test servicePort: 8080 path: / status: loadBalancer: {} </code></pre> <p>My service is as follows:</p> <blockquote> <p>kubectl get svc test -o yaml</p> </blockquote> <pre><code>apiVersion: v1 kind: Service metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"test"},"name":"test","namespace":"nstest"},"spec":{"ports":[{"port":8080,"protocol":"TCP","targetPort":8080}],"selector":{"app":"test"}}} creationTimestamp: 2018-03-27T17:57:02Z labels: app: test name: test namespace: "nstest" resourceVersion: "19985067" selfLink: /api/v1/namespaces/nstest/services/test uid: 40f975f3-31e8-11e8-8f7b-5cb9018ebebc spec: clusterIP: 172.158.50.20 ports: - port: 8080 protocol: TCP targetPort: 8080 selector: app: test sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <p>The pods are running fine. What is wrong with this? Why is the routing from the Ingress to the Service not working?</p> <p>Error when accessing the Ingress endpoint:</p> <blockquote> <pre><code>Ingress Error No healthy backends could be found. Check pod liveness probes for more details. </code></pre> </blockquote> <p>Thanks,</p>
<p>You should give the port in your Service spec a "name", and refer to that in your Ingress:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: myApp spec: selector: app: myApp ports: - name: http port: 80 targetPort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: myApp spec: rules: - host: myapp.domain.com http: paths: - path: / backend: serviceName: myApp servicePort: http </code></pre>
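<p>The Ingress error ("No healthy backends could be found") also suggests verifying that the Service actually has endpoints and that the pods' liveness/readiness probes pass. A quick check, using the names from the question:</p> <pre><code>kubectl -n nstest get endpoints test
kubectl -n nstest describe ingress test-ingress
</code></pre> <p>If the endpoints list is empty, the problem is in the Service selector or the probes rather than in the Ingress definition.</p>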
<p>I need to map a volume while starting a container, and I am able to do so with a YAML file.</p> <p>Is there a way volume mapping can be done via the command line without using a YAML file, just like the<br> <code>-v</code> option in Docker?</p>
<blockquote> <p>without using yaml file</p> </blockquote> <p>Technically, yes: you would need a json file, as illustrated in "<a href="https://stackoverflow.com/q/37555281/6309">Create kubernetes pod with volume using <code>kubectl run</code></a>"</p> <p>See <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="nofollow noreferrer"><code>kubectl run</code></a>.</p> <pre><code>kubectl run -i --rm --tty ubuntu --overrides=' { "apiVersion": "batch/v1", "spec": { "template": { "spec": { "containers": [ { "name": "ubuntu", "image": "ubuntu:14.04", "args": [ "bash" ], "stdin": true, "stdinOnce": true, "tty": true, "volumeMounts": [{ "mountPath": "/home/store", "name": "store" }] } ], "volumes": [{ "name":"store", "emptyDir":{} }] } } } } ' --image=ubuntu:14.04 --restart=Never -- bash </code></pre>
<p>I am trying to upgrade my 1.9.0 cluster to 1.10. The <code>kubeadm upgrade plan</code> command gives the error message below. How can I resolve it?</p> <pre><code> kubeadm upgrade plan [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [upgrade/config] FATAL: could not decode configuration: unable to decode config from bytes: v1alpha1.MasterConfiguration: KubeProxy: v1alpha1.KubeProxy: Config: v1alpha1.KubeProxyConfiguration: FeatureGates: ReadMapCB: expect { or n, but found ", error found in #10 byte of ...|reGates":"","healthz|..., bigger context ...|24h0m0s"},"enableProfiling":false,"featureGates":"","healthzBindAddress":"0.0.0.0:10256","hostnameOv|... </code></pre> <p><strong>YAML config file output:</strong></p> <pre><code>apiVersion: v1 data: MasterConfiguration: | api: advertiseAddress: 192.168.16.211 bindPort: 6443 authorizationModes: - Node - RBAC certificatesDir: /etc/kubernetes/pki cloudProvider: "" etcd: caFile: "" certFile: "" dataDir: /var/lib/etcd endpoints: null image: "" keyFile: "" imageRepository: gcr.io/google_containers kubeProxy: config: bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /var/lib/kube-proxy/kubeconfig.conf qps: 5 clusterCIDR: 10.244.0.0/16 configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false featureGates: "" healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: minSyncPeriod: 0s scheduler: "" syncPeriod: 30s metricsBindAddress: 127.0.0.1:10249 mode: "" oomScoreAdj: -999 portRange: "" resourceContainer: /kube-proxy udpTimeoutMilliseconds: 250ms kubeletConfiguration: {} kubernetesVersion: v1.9.0 networking: dnsDomain: cluster.local podSubnet: 10.244.0.0/16 serviceSubnet: 10.96.0.0/12 nodeName: k8sm-01 token: "" tokenTTL: 24h0m0s unifiedControlPlaneImage: "" kind: ConfigMap metadata: creationTimestamp: 2017-10-06T20:44:05Z name: kubeadm-config namespace: kube-system resourceVersion: "2462269" selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config uid: 1818b79c-aad7-11e7-9ef5-525400ada096 </code></pre>
<p>This is covered in <a href="https://github.com/kubernetes/kubernetes/issues/61764" rel="nofollow noreferrer">kubernetes issue 61764</a>, which points to the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#before-upgrading" rel="nofollow noreferrer"><strong>Before upgrading</strong> section</a> of the 1.10 changelog:</p> <blockquote> <p><code>kube-proxy</code>: feature gates are now specified as a map when provided via a JSON or YAML <code>KubeProxyConfiguration</code>, rather than as a string of key-value pairs.<br> For example:</p> <p><code>KubeProxyConfiguration</code> Before:</p> <pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration featureGates: "SupportIPVSProxyMode=true" </code></pre> <p><code>KubeProxyConfiguration</code> After:</p> <pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration featureGates: SupportIPVSProxyMode: true </code></pre> </blockquote> <p>And:</p> <blockquote> <p>if <code>featureGates: ""</code>, replace it with <code>featureGates: {}</code></p> </blockquote> <p>Actually, the <a href="https://stackoverflow.com/users/344669/sfgroups">OP sfgroups</a> adds <a href="https://stackoverflow.com/questions/49602742/kubernetes-upgrade-from-1-9-0-to-1-10-fatal-could-not-decode-configuration-una/49603827#comment86246695_49603827">in the comments</a>:</p> <blockquote> <p>changed config like this: <code>featureGates: {""}</code></p> </blockquote>
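<p>In practice, for a kubeadm-managed cluster like the one in the question, that means editing the <code>kubeadm-config</code> ConfigMap before re-running the upgrade. A sketch of the procedure (the ConfigMap name and namespace come from the question's own output):</p> <pre><code>kubectl -n kube-system edit cm kubeadm-config

# in the editor, change the line
#   featureGates: ""
# to
#   featureGates: {}
# save, then re-run:
kubeadm upgrade plan
</code></pre>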
<p>I installed <code>ingress-nginx</code> in a cluster. I tried exposing the service with the <code>type: NodePort</code> option, but this only allows for a port range between <code>30000-32767</code> (AFAIK)... I need to expose the service at port <code>80</code> for HTTP and <code>443</code> for TLS, so that I can point <code>A records</code> for the domains directly at the service. Does anyone know how this can be done?</p> <p>I tried <code>type: LoadBalancer</code> before, which worked fine, but this creates a new external load balancer at my cloud provider for each cluster. In my current situation I want to spawn multiple mini clusters. It would be too expensive to create a new (DigitalOcean) load balancer for each of those, so I decided to run each cluster with its own internal ingress controller and expose that directly on <code>80/443</code>.</p>
<p>If you want a Service to be reachable on port 80 at a specific IP, you could use the <code>externalIPs</code> field in the Service's YAML. You can find how to write it here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">Kubernetes External IPs</a>.</p> <p>But if your use case is really just getting the ingress controller up and running, the backend services themselves do not need to be exposed externally; only the controller's Service does.</p>
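<p>A minimal sketch of what that could look like for the ingress controller's Service. The IP <code>203.0.113.10</code> is a placeholder for one of your node/VM addresses, and the <code>app: ingress-nginx</code> selector is an assumption; match it to whatever labels your controller pods actually carry:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    app: ingress-nginx        # adjust to your controller's pod labels
  externalIPs:
    - 203.0.113.10            # node IP that should accept traffic on 80/443
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
</code></pre>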