prompt | response
---|---|
<p>I'm using k8s v 1.7. Following the guide in <a href="https://kubernetes.io/docs/concepts/cluster-administration/sysctl-cluster/" rel="nofollow noreferrer">docs</a>, my deployment has the following in its spec:</p>
<pre><code>annotations:
security.alpha.kubernetes.io/unsafe-sysctls: net.core.somaxconn=16384
</code></pre>
<p>But when applying that using <code>kubectl</code>, I get the following error:</p>
<pre><code>forbidden sysctl: "net.core.somaxconn" not whitelisted
</code></pre>
<p>Now the docs say I should run something like this in the node:</p>
<pre><code>kubelet --experimental-allowed-unsafe-sysctls 'net.core.somaxconn'
</code></pre>
<p>But I don't know where to run that command. I tried inside <code>minikube ssh</code> but <code>kubelet</code> doesn't seem to be installed.</p>
<p>I'm looking for something like:</p>
<pre><code>minikube start --extra-config='allow net.core.somaxconn'
</code></pre>
| <p>I tried with this and it seemed to work</p>
<pre><code>minikube start --extra-config=kubelet.AllowedUnsafeSysctls=net.core.somaxconn
</code></pre>
<p>Extracted from here: <a href="https://github.com/kubernetes/kubernetes/blob/master/cmd/kubelet/app/options/options.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cmd/kubelet/app/options/options.go</a></p>
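<p>For completeness, a hedged way to verify that the flag took effect; <code>my-sysctl-pod</code> is just a placeholder for a pod whose deployment carries the annotation from the question:</p>
<pre><code># start minikube with the sysctl whitelisted on the kubelet
minikube start --extra-config=kubelet.AllowedUnsafeSysctls=net.core.somaxconn

# with the unsafe-sysctls annotation applied, check the value inside the pod
kubectl exec my-sysctl-pod -- cat /proc/sys/net/core/somaxconn   # expect 16384
</code></pre>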
|
<p>As per the documentation at - <a href="https://github.com/spring-cloud-incubator/spring-cloud-kubernetes/#configmap-propertysource" rel="nofollow noreferrer">https://github.com/spring-cloud-incubator/spring-cloud-kubernetes/#configmap-propertysource</a>, it is possible to make ConfigMaps available during application bootstrapping through adding <code>spring.cloud.kubernetes.config.name</code> to the <code>bootstrap.yaml/properties</code>.</p>
<p>Is it possible to consume multiple ConfigMaps in this manner? </p>
<p>I believe it is possible to do this in the pod specification through the use of <code>env-from</code> - <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/</a>. But it would be great to do this with the current setup that we have. </p>
| <p>As you can see in <a href="https://github.com/fabric8io/spring-cloud-kubernetes/blob/master/spring-cloud-kubernetes-core/src/main/java/io/fabric8/spring/cloud/kubernetes/config/ConfigMapPropertySource.java" rel="nofollow noreferrer">ConfigMapPropertySource.java</a>, only one ConfigMap will be used by this property source.</p>
<p>However, using <code>envFrom</code>, all entries in a ConfigMap can be provided as environment variables to the container and <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html" rel="nofollow noreferrer">Spring Boot can also read environment variables</a>, so maybe this will help you.</p>
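<p>A minimal sketch of the <code>envFrom</code> approach in a pod spec, assuming two ConfigMaps named <code>app-config</code> and <code>db-config</code> already exist (both names are placeholders):</p>
<pre><code>spec:
  containers:
  - name: my-spring-app          # placeholder container name
    image: my-spring-app:latest  # placeholder image
    envFrom:
    - configMapRef:
        name: app-config         # every key becomes an environment variable
    - configMapRef:
        name: db-config          # additional ConfigMaps can be listed the same way
</code></pre>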
|
<p>We have two pods in Kubernetes that, for the sake of conversation, we'll call pod1 and pod2. I created pv1 and pvc1 on pod1 and it's working fine.
In my opinion, the documentation is not clear enough about this scenario or I couldn't find the right wiki.
How can I access pv1 and pvc1 from pod2?</p>
| <p>From the k8s documentation:</p>
<blockquote>
<p>A <strong>PersistentVolume (PV)</strong> is a piece of storage in the cluster that
has been provisioned by an administrator. It is a resource in the
cluster just like a node is a cluster resource. PVs are volume plugins
like Volumes, but have a lifecycle independent of any individual pod
that uses the PV. This API object captures the details of the
implementation of the storage, be that NFS, iSCSI, or a
cloud-provider-specific storage system.</p>
<p>A <strong>PersistentVolumeClaim (PVC)</strong> is a request for storage by a user.
It is similar to a pod. Pods consume node resources and PVCs consume
PV resources. Pods can request specific levels of resources (CPU and
Memory). Claims can request specific size and access modes (e.g., can
be mounted once read/write or many times read-only).</p>
</blockquote>
<p>Meaning that, in the scenario pictured in the question, if PodA_deployment.yaml mounts a volume backed by a PersistentVolumeClaim:</p>
<pre><code>volumeMounts:
- name: myapp-data-pv-1
mountPath: /home/myappdata/mystuff
</code></pre>
<p>then PodB will be able to mount the same PV by referencing the same claim, like the following:</p>
<pre><code>volumes:
- name: myapp-data-pv-1
persistentVolumeClaim:
claimName: myapp-data-pvc-1
</code></pre>
<p>in PodB_deployment.yaml.
While it's clear once and it makes sense once you get to understand it, the documentation could explain it better.</p>
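<p>For reference, a sketch of the claim both pods would reference (the names are taken from the snippets above; the access mode and size are assumptions). To share a volume between two pods at the same time, the underlying PV must support an access mode such as <code>ReadWriteMany</code>, or the pods must land on the same node for <code>ReadWriteOnce</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data-pvc-1
spec:
  accessModes:
  - ReadWriteMany        # assumption: needed if both pods mount it read/write at once
  resources:
    requests:
      storage: 1Gi       # assumption: size for illustration only
</code></pre>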
|
<p>I'm running a gunicorn+flask service in a docker container with Google Container Engine. I set up the cluster following the tutorial at <a href="http://kubernetes.io/docs/hellonode/" rel="nofollow">http://kubernetes.io/docs/hellonode/</a></p>
<p>The <code>REMOTE_ADDR</code> environmental variable always contains an internal address in the Kubernetes cluster. What I was looking for is <code>HTTP_X_FORWARDED_FOR</code> but it's missing from the request headers. Is it possible to configure the service to retain the external client ip in the requests? </p>
| <p>If anyone gets stuck on this, there is a better approach.
You can use the following field or annotation, depending on your Kubernetes version:</p>
<pre><code>service.spec.externalTrafficPolicy: Local
</code></pre>
<p>on 1.7</p>
<p>or </p>
<pre><code>service.beta.kubernetes.io/external-traffic: OnlyLocal
</code></pre>
<p>on 1.5-1.6</p>
<p>Earlier versions do not support this.</p>
<p>source: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/</a></p>
<p>note that there are caveats:
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#caveats-and-limitations-when-preserving-source-ips" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#caveats-and-limitations-when-preserving-source-ips</a></p>
|
<p>I've been working at this, and I'm not making any progress.</p>
<p>The issue is that when I create a service out of a deployment, the ClusterIp that's created for the service isn't accessible within MiniKube as I expect it should be.</p>
<p>I can verify that it's not accessible by sshing into a different pod than the one I've exposed, and pinging the IP of the service.</p>
<p><code>kubectl expose deployment/foo --target-port=2500</code></p>
<p>This creates the service at 10.0.0.5, which routes to ${foo's IP}:2500</p>
<p><code>kubectl exec -it bar-5435435-sadasf -- bash
root@bar-5435435-sadasf:/# ping 10.0.0.5</code></p>
<p><code>PING 10.0.0.5 (10.0.0.5): 56 data bytes
^C--- 10.0.0.5 ping statistics ---
8 packets transmitted, 0 packets received, 100% packet loss</code></p>
<p>I have no issue pinging the pod IP ($foo's IP), but that's not what I want to do.</p>
<p>I've done enough reading to know that the issue is likely related to <code>proxy.go</code> which seems to be the <code>kube-proxy</code> equivalent in Minikube.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies</a>
<a href="https://github.com/kubernetes/minikube/blob/master/pkg/localkube/proxy.go" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/blob/master/pkg/localkube/proxy.go</a></p>
<p>I've checked out the Minikube logs and grepped for anything containing "proxy", and it seems this might point to the issue, but I don't know how to solve it.</p>
<p>My latest proxy logs: </p>
<p><code>Sep 06 18:13:06 minikube localkube[3373]: Starting proxy...
Sep 06 18:13:06 minikube localkube[3373]: Waiting for proxy to be healthy...
Sep 06 18:13:07 minikube localkube[3373]: proxy is ready!
Sep 06 18:46:47 minikube localkube[3373]: E0906 18:46:47.742324 3373 proxy.go:207] Error proxying data from backend to client: write tcp 192.168.99.101:8443->192.168.99.1:58760: write: broken pipe
Sep 06 19:11:41 minikube localkube[3373]: E0906 19:11:41.077014 3373 proxy.go:193] Error proxying data from client to backend: write tcp 127.0.0.1:44180->127.0.1.1:10250: write: broken pipe
Sep 06 19:11:41 minikube localkube[3373]: E0906 19:11:41.077220 3373 proxy.go:207] Error proxying data from backend to client: write tcp 192.168.99.101:8443->192.168.99.1:45586: write: broken pipe
Sep 06 19:22:07 minikube localkube[3373]: E0906 19:22:07.214287 3373 proxy.go:207] Error proxying data from backend to client: write tcp 192.168.99.101:8443->192.168.99.1:51558: write: broken pipe
Sep 06 19:22:07 minikube localkube[3373]: E0906 19:22:07.214459 3373 proxy.go:193] Error proxying data from client to backend: write tcp 127.0.0.1:45184->127.0.1.1:10250: write: broken pipe</code></p>
<p>Does anyone have an idea what the issue is and how to fix it?</p>
| <p>A Service gets its Virtual IP address using the ClusterIP. That IP address is used for communicating with the Service and is accessible only within the cluster. </p>
<p>Note that a ClusterIP is a virtual IP implemented by kube-proxy rules, so it generally does not respond to ping. Make sure that you connect to the service not only by its IP but also with the port that the service exposes.</p>
<p>In your case:</p>
<pre><code>$ kubectl expose deployment/foo --port=3030 --target-port=2500
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
foo 10.0.0.73 <nodes> 3030/TCP 1m
</code></pre>
<p>Now the service is reachable from within the cluster:</p>
<pre><code>$ minikube ssh
$ curl 10.0.0.73:3030
Hello World!
$ exit
$ kubectl exec -i -t bar-j26rd /bin/sh
$ curl 10.0.0.73:3030
Hello World!
$ exit
</code></pre>
|
<p>For example, I created a network in Docker:</p>
<pre><code>docker network create hello-rails
</code></pre>
<p>Then, I have MySQL, which is connected to this network:</p>
<pre><code>docker run -p 3306 -d --network=hello-rails --network-alias=db -e MYSQL_ROOT_PASSWORD=password --name hello-rails-db mysql
</code></pre>
<p>And also, I have a Rails server, which also relies on this network:</p>
<pre><code>docker run -it -p 3000:3000 --network=hello-rails -e MYSQL_USER=root -e MYSQL_PASSWORD=password -e MYSQL_HOST=db --name hello-rails benjamincaldwell/hello-docker-rails:latest
</code></pre>
<p>I want to write deployments on Kubernetes for these two containers with a YAML file, but I don't know how to specify the network for the <code>containers</code> in the file. Do you have any recommendations?</p>
| <p>In Kubernetes you would solve this by creating two services.</p>
<p>The MySQL service will look something like this:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: mysql
spec:
selector:
app: mysql
ports:
- port: 3306
</code></pre>
<p>In your Rails server, you can access the MySQL service by either using the <code>mysql</code> DNS name or using the <code>MYSQL_SERVICE_HOST</code> and <code>MYSQL_SERVICE_PORT</code> environment variables. There is no need to link the containers or specify a network, as would be done in Docker.</p>
<p>Your Rails service will look like this:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: rails
spec:
type: LoadBalancer
selector:
app: rails
ports:
- port: 3000
</code></pre>
<p>Notice the <code>type: LoadBalancer</code>, which specifies that this service will be published to the outside world. Depending on where you run Kubernetes, a public IP address will be automatically assigned to this service.</p>
<p>For more information, have a look at the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services documentation</a>.</p>
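<p>The Services above select pods by label, so the corresponding Deployments need matching labels. A hedged sketch for the MySQL side (the image and credentials mirror the <code>docker run</code> flags in the question; everything else is an assumption):</p>
<pre><code>apiVersion: extensions/v1beta1   # the Deployment API group commonly used at the time
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql               # must match the Service selector
    spec:
      containers:
      - name: mysql
        image: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
</code></pre>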
|
<p>I don't understand why a separate resource group is created for all of the infrastructure associated with an ACS cluster, and not the Resource Group I specify when creating the cluster. This leaves my defined Resource Group with one lonely entity (the ACS cluster definition) and a whole new Resource Group whose name I don't control. Not a fan of this.</p>
<p>I am currently using the Azure CLI to create my ACS cluster, so I'm "guessing" if I went the ARM route I'd have more control. Still, where does this limitation reside and why?</p>
<p>Here's my CLI command:</p>
<pre><code>az acs create -n=int-madraskube -g=internal-acs
--orchestrator-type=kubernetes --agent-count=2 --generate-ssh-keys --windows --admin-username={myadmin} --admin-password={mypassword} --service-principal={sp_guid} --client-secret={secret_guid}
</code></pre>
<p>And I end up with two resource groups:</p>
<blockquote>
<p>internal-acs</p>
<p>internal-acs_int-madraskube_westus2</p>
</blockquote>
| <p>This is the new design of ACS (v2) in some selected regions. In the past (v1), the created resources lived in the same resource group as the container service resource, which made it really hard to clean them up when you deleted the container service resource. With them in a separate resource group (preferably locked down and invisible, though that is not possible right now), it's one step closer to a managed service.</p>
<p>For more info about ACS v2, <a href="https://github.com/Azure/ACS" rel="nofollow noreferrer">https://github.com/Azure/ACS</a> </p>
|
<p>I've found jsonpath examples for testing multiple values but not extracting multiple values.</p>
<p>I want to get <code>image</code> and <code>name</code> from <code>kubectl get pods</code>.</p>
<p>this gets me <code>name</code>
<code>
kubectl get pods -o=jsonpath='{.items[*].spec.containers[*].name}' | xargs -n 1
</code></p>
<p>this gets me <code>image</code>
<code>
kubectl get pods -o=jsonpath='{.items[*].spec.containers[*].image}' | xargs -n 1
</code></p>
<p>but
<code>
kubectl get pods -o=jsonpath='{.items[*].spec.containers[*].[name,image}' | xargs -n 2
</code></p>
<p>complains <code>invalid array index image</code> - is there a syntax for getting a list of node-adjacent values?</p>
| <p>Use the command below to get the name and image of each pod's containers:</p>
<pre><code>kubectl get pods -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.spec.containers[*].image}{"\n"}{end}'
</code></pre>
<p>It will give output like below:</p>
<pre class="lang-none prettyprint-override"><code>name image
</code></pre>
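<p>If you prefer not to deal with jsonpath ranges at all, <code>custom-columns</code> output gives a similar result; a hedged alternative that prints one row per pod:</p>
<pre><code>kubectl get pods -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[*].image
</code></pre>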
|
<p>I have a Pod that occasionally needs to call itself by its own host-name. I have a Deployment set up that ultimately creates the Pod, and a Service in place. I'm able to use the service name from a different Pod within my cluster, but the Pod cannot call itself using its host-name. This is something that works with Minikube v0.17.1 with Kubernetes version 1.5.3, but upgrading either Minikube or the Kubernetes version seems to break things. This also works on our deployed / server version of Kubernetes.</p>
<p>Did something change that I need to take account of in my Pod / Service setup? How do I go about getting past this?</p>
<h2>How things should work</h2>
<p>Running Minikube version 0.17.1:</p>
<p>Start Minikube:</p>
<pre><code>$ minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}
$ minikube version
minikube version: v0.17.1
</code></pre>
<p>Deploy minimal test images (yaml definition below): </p>
<pre><code>kubectl apply -f deploy_python.yaml
</code></pre>
<p>Exec into the python2 image, and verify connection to python image:</p>
<pre><code>$ kubectl exec -it python2-1281934109-k015g bash
root@python2-1281934109-k015g:/# curl python:12345 --connect-timeout 10
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href=".dockerenv">.dockerenv</a></li>
<li><a href="bin/">bin/</a></li>
<li><a href="boot/">boot/</a></li>
<li><a href="dev/">dev/</a></li>
<li><a href="etc/">etc/</a></li>
<li><a href="home/">home/</a></li>
<li><a href="lib/">lib/</a></li>
<li><a href="lib64/">lib64/</a></li>
<li><a href="media/">media/</a></li>
<li><a href="mnt/">mnt/</a></li>
<li><a href="opt/">opt/</a></li>
<li><a href="proc/">proc/</a></li>
<li><a href="root/">root/</a></li>
<li><a href="run/">run/</a></li>
<li><a href="sbin/">sbin/</a></li>
<li><a href="srv/">srv/</a></li>
<li><a href="sys/">sys/</a></li>
<li><a href="tmp/">tmp/</a></li>
<li><a href="usr/">usr/</a></li>
<li><a href="var/">var/</a></li>
</ul>
<hr>
</body>
</html>
</code></pre>
<p>Exec into python image and verify connection to self:</p>
<pre><code>$ kubectl exec -it python-2555691705-5j0f9 bash
root@python-2555691705-5j0f9:/# curl python:12345 --connect-timeout 10
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href=".dockerenv">.dockerenv</a></li>
<li><a href="bin/">bin/</a></li>
<li><a href="boot/">boot/</a></li>
<li><a href="dev/">dev/</a></li>
<li><a href="etc/">etc/</a></li>
<li><a href="home/">home/</a></li>
<li><a href="lib/">lib/</a></li>
<li><a href="lib64/">lib64/</a></li>
<li><a href="media/">media/</a></li>
<li><a href="mnt/">mnt/</a></li>
<li><a href="opt/">opt/</a></li>
<li><a href="proc/">proc/</a></li>
<li><a href="root/">root/</a></li>
<li><a href="run/">run/</a></li>
<li><a href="sbin/">sbin/</a></li>
<li><a href="srv/">srv/</a></li>
<li><a href="sys/">sys/</a></li>
<li><a href="tmp/">tmp/</a></li>
<li><a href="usr/">usr/</a></li>
<li><a href="var/">var/</a></li>
</ul>
<hr>
</body>
</html>
</code></pre>
<p>That's a success. By creating a Deployment and a Service, I'm able to make requests to the referenced Pod from any other Pod in the cluster.</p>
<h2>The way it works in newer versions</h2>
<p>(Stop and delete running Minikube.)<br>
Start Minikube, specifying Kubernetes version 1.7.0:</p>
<pre><code>$ minikube start --kubernetes-version=v1.7.0
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Downloading localkube binary
137.48 MB / 137.48 MB [============================================] 100.00% 0s
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-30T10:17:58Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Deploy minimal test images (yaml definition below):</p>
<pre><code>$ kubectl apply -f deploy_python.yaml
service "python" created
deployment "python" created
deployment "python2" created
</code></pre>
<p>Exec into python 2 image, and verify connection to python image:</p>
<pre><code>$ kubectl exec -it python2-380393367-ztgkq bash
root@python2-380393367-ztgkq:/# curl python:12345 --connect-timeout 10
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href=".dockerenv">.dockerenv</a></li>
<li><a href="bin/">bin/</a></li>
<li><a href="boot/">boot/</a></li>
<li><a href="dev/">dev/</a></li>
<li><a href="etc/">etc/</a></li>
<li><a href="home/">home/</a></li>
<li><a href="lib/">lib/</a></li>
<li><a href="lib64/">lib64/</a></li>
<li><a href="media/">media/</a></li>
<li><a href="mnt/">mnt/</a></li>
<li><a href="opt/">opt/</a></li>
<li><a href="proc/">proc/</a></li>
<li><a href="root/">root/</a></li>
<li><a href="run/">run/</a></li>
<li><a href="sbin/">sbin/</a></li>
<li><a href="srv/">srv/</a></li>
<li><a href="sys/">sys/</a></li>
<li><a href="tmp/">tmp/</a></li>
<li><a href="usr/">usr/</a></li>
<li><a href="var/">var/</a></li>
</ul>
<hr>
</body>
</html>
</code></pre>
<p>Exec into python image and attempt connection to self:</p>
<pre><code>$ kubectl exec -it python-2168884431-gls2j bash
root@python-2168884431-gls2j:/# curl python:12345 --connect-timeout 10
curl: (28) Connection timed out after 10000 milliseconds
</code></pre>
<p>Yaml file, deploy_python.yaml:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: python
spec:
selector:
app: python
ports:
- port: 12345
targetPort: 12345
name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: python
labels:
app: python
spec:
replicas: 1
template:
metadata:
labels:
app: python
spec:
containers:
- image: python
name: python
command: ["python"]
args: ["-m", "http.server", "12345" ]
ports:
- containerPort: 12345
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: python2
labels:
app: python2
spec:
replicas: 1
template:
metadata:
labels:
app: python2
spec:
containers:
- image: python
name: python2
command: ["python"]
args: ["-m", "http.server", "12345" ]
ports:
- containerPort: 12345
</code></pre>
| <p>This is a known bug in newer versions of kubernetes / minikube and is being tracked here: </p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/20475" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/20475</a></p>
<p>The current accepted workaround (<a href="https://github.com/kubernetes/kubernetes/issues/20475#issuecomment-190995739" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/20475#issuecomment-190995739</a>) is to ssh into your minikube vm and run a specific ip link command.</p>
<pre><code>minikube ssh
sudo ip link set docker0 promisc on
</code></pre>
<p>I have tested this workaround and it seems to work for me.</p>
|
<p>I have configured minikube on my local machine and am going to use Kubernetes externally. I have created a Service Account in Kubernetes, and using its secret I can get the access token with the command below.</p>
<pre><code>kubectl get secret <service-account-secret> -o yaml -n mynamespace
</code></pre>
<p>My question is: how can I do this using the fabric8 Java client at runtime? What I want is to obtain the access token by giving the secret of the Service Account as a parameter.</p>
<p>I am initiating the config as below.</p>
<pre><code>Config config = new ConfigBuilder().withMasterUrl(masterURL)
.withClientCertFile(certFile).withOauthToken(serviceAccountAccessToken).build();
</code></pre>
<p>Can I know how to get the serviceAccountAccessToken as described above using fabric8 java client ?</p>
| <p>The client already does that for you.</p>
<p>If you just create an empty Config object:</p>
<pre><code>Config config = new ConfigBuilder().build();
</code></pre>
<p>or create the client, like:</p>
<pre><code>KubernetesClient client = new DefaultKubernetesClient();
</code></pre>
<p>from within a pod, it will automatically read the token for you.</p>
<p>If you need to pass it elsewhere, you can just:</p>
<pre><code>String token = config.getOauthToken();
</code></pre>
<p>or </p>
<pre><code>String token = client.getConfiguration().getOauthToken();
</code></pre>
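<p>If you do need the raw token outside the client (for example, to pass it explicitly into the <code>ConfigBuilder</code> shown in the question), you can pull it out of the secret with kubectl; a hedged one-liner assuming the secret name from the question:</p>
<pre><code>kubectl get secret <service-account-secret> -n mynamespace -o jsonpath='{.data.token}' | base64 --decode
</code></pre>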
|
<p>I'd like to show entries that have <code>.metadata.labels.app</code> set to the <code>"myapp"</code> value.</p>
<p>Command:</p>
<pre><code>kubectl get pods -o go-template --template="{{range .items}}{{if eq .metadata.labels.app "myapp"}}{{.metadata.name}} {{end}}{{end}}"
</code></pre>
<p>It gives an error:</p>
<blockquote>
<p>output:1: function "myapp" not defined</p>
</blockquote>
<p>The structures look like this:</p>
<pre><code>- apiVersion: v1
kind: Pod
metadata:
creationTimestamp: 2017-09-15T08:18:26Z
generateName: myapp-2830765207-
labels:
app: myapp
pod-template-hash: "2830765207"
name: myapp-2830765207-dh359
namespace: default
</code></pre>
| <p>I haven't used kubectl before, but I am familiar with shell commands in general, from which I can tell you one thing that's going wrong, and maybe that's all you need. (I'm also somewhat familiar with Go templates, and your string comparison looks fine to me.) By using double quotes around your template and within your template, you're actually closing the string you're passing in as the template at the first double quote in <code>"myapp"</code>. Using single quotes around the template should help:</p>
<pre><code>kubectl get pods -o go-template --template='{{range .items}}{{if eq .metadata.labels.app "myapp"}}{{.metadata.name}} {{end}}{{end}}'
</code></pre>
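<p>As a side note, a hedged alternative that avoids the quoting problem entirely is to filter with a label selector and only template the output:</p>
<pre><code>kubectl get pods -l app=myapp -o go-template='{{range .items}}{{.metadata.name}} {{end}}'
</code></pre>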
|
<p>I'm wondering if there is a proper naming convention for generated pod names in Kubernetes. By generated pod names I mean the name displayed in both <code>kubectl get pods</code> or, for instance, by querying the heapster api:</p>
<pre><code>$ curl -s http://192.168.99.100:32416/api/v1/model/namespaces/kube-system/pods
[
"kube-addon-manager-minikube",
"kube-dns-v20-8gsbl",
"kubernetes-dashboard-tp9kc",
"heapster-kj8hh",
"influxdb-grafana-stg3s"
]
$ curl -s http://192.168.99.100:32416/api/v1/model/namespaces/default/pods
[
"my-nginx-2723453542-065rx"
]
</code></pre>
<p>If there is no convention (as it looks like) are there any scenario(s) in which the common format: <code>pod name</code> + <code>5 alpha-numeric chars</code> is true?</p>
| <p>If you use a Deployment, the naming convention is as follows:</p>
<p>|--- Deployment: < name ><br>
|------ Replica Set: < name >-< rs ><br>
|--------- Pod: < name >-< rs >-< RandomString ></p>
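<p>You can also see the middle segment exposed as the <code>pod-template-hash</code> label on the generated pods, for example (output abbreviated and illustrative):</p>
<pre><code>kubectl get pods --show-labels
# NAME                        ...   LABELS
# my-nginx-2723453542-065rx   ...   pod-template-hash=2723453542,run=my-nginx
</code></pre>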
|
<p>I have Docker, Kubernetes (1.7) and Nginx all running on my RHEL7 server, with my own services being inside a docker container and being picked up by Kubernetes. I know Kubernetes is working right with Docker because I can make a GET request to the Kubernetes pod using its own IP:PORT address and it works. I set up Nginx with a default backend and have all of this working. I know this by calling the <code>get pods</code> and <code>get svc</code> commands and everything is running as it should. When I create the ingress, I know Nginx is picking it up because when I use the command <code>kubectl describe pods {NGNIX-CONTROLLER}</code> I see it updates its ingress and even logs what I named it. Now I get the IP address of the Kubernetes master using <code>kubectl cluster-info</code> and I use this IP address to attempt to call my services, something along the lines of <code>http://KUBEIPADDRESS/PATH/TO/MY/SERVICE</code>, with no port number, but it doesn't work. I have no idea what is going on. Can someone help me figure out why Ingress and/or Nginx isn't routing properly to my services? I'll give my ingress and nginx files down below. </p>
<p>(Note, for the nginx yaml file, the deployment of the nginx controller is all the way in the bottom.)</p>
<p>Ingress yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gateway-ingress
annotations:
kubernetes.io/ingress.class: nginx
ingress.kubernetes.io/rewrite-target: /
spec:
backend:
serviceName: default-http-backend
servicePort: 80
rules:
- host: testhost
http:
paths:
- path: /customer
backend:
serviceName: customer
servicePort: 9001
</code></pre>
<p>nginx controller yaml </p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: ingress
rules:
- apiGroups:
- ""
- "extensions"
resources:
- configmaps
- secrets
- services
- endpoints
- ingresses
- nodes
- pods
verbs:
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- apiGroups:
- ""
resources:
- events
- services
verbs:
- create
- list
- update
- get
- apiGroups:
- "extensions"
resources:
- ingresses/status
- ingresses
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: ingress-ns
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- apiGroups:
- ""
resources:
- services
verbs:
- get
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: ingress-ns-binding
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-ns
subjects:
- kind: ServiceAccount
name: ingress
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: ingress-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress
subjects:
- kind: ServiceAccount
name: ingress
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: default-http-backend
labels:
k8s-app: default-http-backend
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissable as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: kube-system
labels:
k8s-app: default-http-backend
spec:
ports:
- port: 80
targetPort: 8080
selector:
k8s-app: default-http-backend
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-controller
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: nginx-ingress-controller
spec:
# hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
# however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
# that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
# like with kubeadm
hostNetwork: true
terminationGracePeriodSeconds: 60
serviceAccountName: ingress
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3
name: nginx-ingress-controller
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
</code></pre>
<p>Also when I do <code>kubectl describe ing</code> I get</p>
<pre><code>Name: gateway-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
testhost
/customer customer:9001 ({IP}:9001,{IP}:9001)
Annotations:
rewrite-target: /
Events: <none>
</code></pre>
<p>Here are my deployment and service of the customer in case anyone needs that</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: customer
labels:
run: customer
spec:
replicas: 2
template:
metadata:
labels:
run: customer
spec:
containers:
- name: customer
image: customer
imagePullPolicy: Always
ports:
- containerPort: 9001
protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
name: customer
spec:
selector:
run: customer
type: NodePort
ports:
- name: port1
protocol: TCP
port: 9001
targetPort: 9001
</code></pre>
| <p>There are some issues with your setup as far as I can see:</p>
<ul>
<li><p><code>KUBEIPADDRESS</code> in the URL you call: an IP address won't work because you configured your Ingress to listen on <code>testhost</code>. So you need to call <code>http://testhost/customer</code>, and configure your network to resolve <code>testhost</code> to the correct IP address</p></li>
<li><p>but what is the correct IP address? You are trying to use the k8s master on port 80. That won't work without further configuration. For that you need to use a NodePort service for the Ingress Controller, which exposes it on port 80 (and probably 443). In order to use such low ports, you need to allow it with an option of kube-apiserver, see <code>--service-node-port-range</code> on <a href="https://kubernetes.io/docs/admin/kube-apiserver/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/kube-apiserver/</a>. Once that works, you can use any IP address of any node of your k8s cluster for <code>testhost</code>; a sketch of such a Service follows after this list. Note: be sure that no other application uses these ports on any node!</p></li>
</ul>
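<p>A hedged sketch of such a NodePort Service for the controller deployment in the question (node ports below 30000 only work after widening <code>--service-node-port-range</code> as mentioned above):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: nginx-ingress-controller   # matches the controller deployment labels
  ports:
  - name: http
    port: 80
    nodePort: 80        # assumption: requires the extended node port range
  - name: https
    port: 443
    nodePort: 443
</code></pre>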
|
<p>I try to run minikube v0.22.1 and kubectl v1.7.5 on MacOS with Virtualbox.</p>
<pre><code>$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ minikube version
minikube version: v0.22.1
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
</code></pre>
<p>However all <code>kubectl</code> commands fail with "connection refused - did you specify the right host or port?"</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
</code></pre>
<p>The solution proposed <a href="https://stackoverflow.com/a/45274285/4415889">here</a> (<code>sudo ifconfig vboxnet0 up</code>) did not help, the vboxnet0 interface is up.</p>
<p>Any ideas or suggestions are highly appreciated.</p>
| <p>If you run </p>
<pre><code>kubectl config get-contexts
</code></pre>
<p>Do you get the following?</p>
<pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
</code></pre>
<p>If not, that means your kubectl context is not correctly set up. To set up the context correctly, run this:</p>
<pre><code>kubectl config use-context minikube
</code></pre>
|
<p>I can already do spot deployments with kops, but it requires manually editing the instance groups (nodes):</p>
<pre><code>$ kops edit ig --name=test.dev.test.com nodes
machineType: t2.medium
maxSize: 2
minSize: 2
=>
machineType: t1.nano
maxSize: 1
minSize: 1
</code></pre>
<p>I need to look into a way of doing this automatically with the average spot price + 10%.</p>
<p>I would also like to have at least 1 master and 1 node running on normal instances, to survive a complete spot-overbid shutdown, and the rest at spot price.</p>
<p>Can anyone help me with this?</p>
| <p>You could use Argo minion-manager. It is part of <a href="http://argoproj.io" rel="nofollow noreferrer">Argo</a>, an open-source workflow engine for Kubernetes, but it can run on any Kubernetes cluster. Minion-manager runs as a deployment and periodically updates spot instance prices using the AWS pricing API.</p>
<p>More information about minion-manager is available here: <a href="https://blog.argoproj.io/use-spot-instances-with-your-kubernetes-clusters-on-aws-2a27f1887bb0" rel="nofollow noreferrer">https://blog.argoproj.io/use-spot-instances-with-your-kubernetes-clusters-on-aws-2a27f1887bb0</a></p>
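<p>As a side note on the kops part of the question: if your kops version supports it, the InstanceGroup spec also has a <code>maxPrice</code> field that turns the group into spot instances, so the manual edit could look roughly like the sketch below (the bid value is just an assumption, and you would keep a separate on-demand group for the master and one resilient node):</p>
<pre><code>apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
  labels:
    kops.k8s.io/cluster: test.dev.test.com
spec:
  role: Node
  machineType: t2.medium
  minSize: 2
  maxSize: 2
  maxPrice: "0.05"        # assumption: spot bid in USD per hour
</code></pre>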
|
<p>I want to have full control of what I do with my single-node cluster (savings...lol), but somehow I can't: even if I delete the deployment, it respawns.</p>
| <p>As mentioned in another answer, you cannot delete them directly via the Kubernetes API; however, you can delete them indirectly via the Google Container Engine API. </p>
<p>To remove the dashboard, run <code>gcloud container clusters update $CLUSTER_NAME --update-addons=KubernetesDashboard=DISABLED</code>. </p>
<p>To disable heapster you need to disable monitoring using <code>gcloud container clusters update $CLUSTER_NAME --monitoring-service=none</code> (it may actually require disabling another add-on too, I can't recall at the moment).</p>
<p>See <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/update" rel="nofollow noreferrer">https://cloud.google.com/sdk/gcloud/reference/container/clusters/update</a> for the commands referenced above. </p>
|
<p>I could create the container:</p>
<pre><code>$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=
deployment "hello-minikube" created
</code></pre>
<p>And I'm now trying to expose a service:</p>
<pre><code>$ kubectl expose deployment hello-minikube --type=NodePort
error: couldn't find port via --port flag or introspection
</code></pre>
<p>Even if I delete it, it still comes back on its own:</p>
<pre><code>$ kubectl delete pod hello-minikube-2138963058-2szl7
pod "hello-minikube-2138963058-2szl7" deleted
[stephane@stephane-ThinkPad-X201 ~]
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-minikube-2138963058-nhh1q 1/1 Running 0 3m
</code></pre>
<p>This is the pod:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-minikube-2138963058-2szl7 1/1 Running 0 16m
</code></pre>
<p>And its description:</p>
<pre><code>$ kubectl describe pod hello-minikube-2138963058-2szl7
Name: hello-minikube-2138963058-2szl7
Namespace: default
Node: minikube/192.168.42.196
Start Time: Thu, 14 Sep 2017 23:20:03 +0200
Labels: pod-template-hash=2138963058
run=hello-minikube
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"hello-minikube-2138963058","uid":"2b37ca13-9968-11e7-a720-525400...
Status: Running
IP: 172.17.0.3
Created By: ReplicaSet/hello-minikube-2138963058
Controlled By: ReplicaSet/hello-minikube-2138963058
Containers:
hello-minikube:
Container ID: docker://5e4ba407d8869e6e843ec3d7876e953147cc01104e980c7febfea218808ab379
Image: gcr.io/google_containers/echoserver:1.4
Image ID: docker-pullable://gcr.io/google_containers/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
Port: <none>
State: Running
Started: Thu, 14 Sep 2017 23:20:05 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-t8qx7 (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-t8qx7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-t8qx7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
17m 17m 1 default-scheduler Normal Scheduled Successfully assigned hello-minikube-2138963058-2szl7 to minikube
17m 17m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-t8qx7"
17m 17m 1 kubelet, minikube spec.containers{hello-minikube} Normal Pulled Container image "gcr.io/google_containers/echoserver:1.4" already present on machine
17m 17m 1 kubelet, minikube spec.containers{hello-minikube} Normal Created Created container
17m 17m 1 kubelet, minikube spec.containers{hello-minikube} Normal Started Started container
</code></pre>
<p>All of this on a host minikube on Linux.</p>
| <p>You get this error because you didn't set the container port in the command <code>kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=</code>, so the expose command doesn't know which container port to map to a node port, hence the error.</p>
<p>You have to set the exact container port, as follows: <code>kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=80</code>, assuming <code>80</code> is the port number, and then run the expose again.</p>
<p>See below a step-by-step of how I was able to replicate your error and then fix it:</p>
<pre><code>C:\Users\innocent.anigbo\.minikube>kubectl run hello-kube --image=gcr.io/google_
containers/echoserver:1.4 --port=
deployment "hello-kube" created
C:\Users\innocent.anigbo\.minikube>kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-kube-1448409582-c9sm5 1/1 Running 0 1m
hello-minikube-938614450-417hj 1/1 Running 1 8d
hello-nginx-3322088713-c4rp4 1/1 Running 0 6m
C:\Users\innocent.anigbo\.minikube>kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-kube 1 1 1 1 2m
hello-minikube 1 1 1 1 8d
hello-nginx 1 1 1 1 7m
C:\Users\innocent.anigbo\.minikube>kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-nginx 10.0.0.136 <nodes> 80:32155/TCP 4m
kubernetes 10.0.0.1 <none> 443/TCP 20d
C:\Users\innocent.anigbo\.minikube>kubectl expose deployment hello-kube --type=N
odePort
error: couldn't find port via --port flag or introspection
See 'kubectl expose -h' for help and examples.
C:\Users\innocent.anigbo\.minikube>kubectl delete deployment hello-kube
deployment "hello-kube" deleted
C:\Users\innocent.anigbo\.minikube>kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-minikube-938614450-417hj 1/1 Running 1 8d
hello-nginx-3322088713-c4rp4 1/1 Running 0 11m
C:\Users\innocent.anigbo\.minikube>kubectl run hello-kube --image=gcr.io/google_
containers/echoserver:1.4 --port=80
deployment "hello-kube" created
C:\Users\innocent.anigbo\.minikube>kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-kube-2715294448-0rxf2 1/1 Running 0 3s
hello-minikube-938614450-417hj 1/1 Running 1 8d
hello-nginx-3322088713-c4rp4 1/1 Running 0 11m
C:\Users\innocent.anigbo\.minikube>kubectl expose deployment hello-kube --type=N
odePort
service "hello-kube" exposed
C:\Users\innocent.anigbo\.minikube>kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kube 10.0.0.137 <nodes> 80:30004/TCP 3s
hello-nginx 10.0.0.136 <nodes> 80:32155/TCP 9m
kubernetes 10.0.0.1 <none> 443/TCP 20d
</code></pre>
|
<p>I installed kubernetes by following this <a href="https://blog.alexellis.io/kubernetes-in-10-minutes/" rel="nofollow noreferrer">tutorial</a>.</p>
<p>One of my containers tries to get resources from an external domain, such as google.com. But it fails because kubernetes dns doesn't use external name resolving.</p>
<p>How can I configure Kubernetes to use the DNS server 8.8.8.8?</p>
| <p>What are the results of <code>nslookup google.com</code> in the container and on the node?</p>
<p>If the pod's dnsPolicy is ClusterFirst, the google.com DNS query should be forwarded to the upstream DNS servers configured on the node.</p>
<p>It would also be useful to show the kube-dns container config and logs.</p>
<p><a href="https://i.stack.imgur.com/MnwHu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MnwHu.png" alt="Default lookup flow"></a></p>
|
<p>I'm creating an environment-variable with</p>
<pre><code> env :
- name: GCE_ENV
value: my-value
</code></pre>
<p>Is there a way to consume that from Java / Scala?</p>
<pre><code> "echo $GCE_ENV" !!
</code></pre>
<p>Did not grab it; I guess the JVM console session doesn't get it set?</p>
<p>Logging into the container did work </p>
<pre><code>kubectl exec -it POD -- /bin/bash
bash-4.3$ echo $GCE_ENV
my-value
</code></pre>
| <p>Getting an environment variable from Java is done through:</p>
<pre><code>System.getenv("GCE_ENV")
</code></pre>
<p>The shell-out in the question doesn't show it because <code>sys.process</code> runs the command directly, without a shell, so <code>$GCE_ENV</code> is never expanded; the variable itself is still present in the JVM's environment.</p>
|
<p>There is an error during Fission setup on minikube.
I went through these instructions: <a href="http://fission.io/docs/v0.2.1/install/" rel="nofollow noreferrer">http://fission.io/docs/v0.2.1/install/</a>.
On this command:</p>
<pre><code>helm install --namespace fission --set serviceType=NodePort https://github.com/fission/fission/releases/download/v0.2.1/fission-all-v0.2.1.tgz
</code></pre>
<p>There is an error:</p>
<pre><code>Error: apiVersion "rbac.authorization.k8s.io/v1beta1" in fission-all/templates/deployment.yaml is not available
</code></pre>
<p>My env is OSX Sierra 10.12.16</p>
<p>kubectl version is: 1.7.</p>
| <p>Finally I figured out that the problem was caused by an old minikube version (v0.16.0) being installed.</p>
<p>After upgrading to v0.22.1, everything works as expected.</p>
|
<p>Below are all the replica sets in my Kubernetes environment created by <strong>deployments</strong> (when using a <strong>deployment</strong>, it will first create a <strong>replica set</strong>): </p>
<pre><code>[root@master24 004-prometheus]# kubectl get rs --all-namespaces
NAMESPACE NAME DESIRED CURRENT READY AGE
dev test-ddd-1-0-4129700023 3 3 3 6d
dev test111-1-0-2606576459 3 3 3 6d
dev test2-1-0-568340644 3 3 0 6d
kube-system alertmanager-2876254624 0 0 0 12d
kube-system alertmanager-34609585 0 0 0 31d
kube-system alertmanager-646055916 1 1 1 12d
kube-system container-terminal-2048074043 1 1 1 26d
kube-system container-terminal-2798068035 0 0 0 26d
kube-system default-http-backend-2282004791 1 1 1 99d
kube-system elastic-hq-1088064035 0 0 0 62d
kube-system elastic-hq-143297398 1 1 1 32d
kube-system elastic-hq-2334099411 0 0 0 62d
kube-system elastic-hq-2453506004 0 0 0 62d
kube-system elastic-hq-963545625 0 0 0 35d
kube-system grafana-2729867605 1 1 1 32d
kube-system kibana-logging-2271207004 1 1 1 39d
kube-system kibana-logging-3117667162 0 0 0 65d
kube-system kube-dns-1277622866 1 1 1 103d
kube-system kube-dns-418314620 0 0 0 103d
kube-system kube-ops-view-2627512969 1 1 1 62d
kube-system kube-state-metrics-290271031 2 2 2 31d
kube-system kubernetes-dashboard-102631441 0 0 0 103d
kube-system kubernetes-dashboard-2251164715 0 0 0 102d
kube-system kubernetes-dashboard-2628062973 1 1 1 25d
kube-system kubernetes-dashboard-3038119623 0 0 0 102d
kube-system kubernetes-dashboard-4259055596 0 0 0 40d
kube-system kubernetes-dashboard-849151901 0 0 0 103d
kube-system nginx-ingress-controller-1663470174 0 0 0 65d
kube-system nginx-ingress-controller-2772479350 0 0 0 64d
kube-system nginx-ingress-controller-3812017923 4 4 4 32d
kube-system nginx-ingress-controller-878021172 0 0 0 99d
kube-system prometheus-core-1049380509 0 0 0 11d
kube-system prometheus-core-1057044748 0 0 0 11d
kube-system prometheus-core-124082880 0 0 0 12d
kube-system prometheus-core-1274300653 0 0 0 32d
kube-system prometheus-core-1496270586 0 0 0 10d
kube-system prometheus-core-2953594474 0 0 0 10d
kube-system prometheus-core-3816572233 1 1 1 10d
kube-system prometheus-core-87115933 0 0 0 22d
kube-system prometheus-core-87378077 0 0 0 31d
kube-system redmine-sqlite-1134748302 0 0 0 31d
kube-system redmine-sqlite-1236657579 1 1 1 31d
kube-system redmine-sqlite-2005852822 0 0 0 31d
kube-system redmine-sqlite-3063145196 0 0 0 31d
kube-system sonarqube-3729821934 0 0 0 31d
kube-system sonarqube-3857878650 0 0 0 31d
kube-system sonarqube-3908538607 1 1 1 30d
</code></pre>
<p>For some reason I need to analyze the names of the replica sets, and all of them end in some random digits, but the number of digits is not fixed (it may be 8, 9 or 10). So the questions are:</p>
<ol>
<li><p>Is the number of digits always 8, 9 or 10, or could it be any number, like 5 or 15? </p></li>
<li><p>Where can I find the naming rules for this?</p></li>
</ol>
<p><strong>Environment:</strong>
Kubernetes 1.6.8</p>
| <p>If I found the correct line, then the name of a ReplicaSet is generated here:
<a href="https://github.com/kubernetes/kubernetes/blob/16670f1a95a61434f8d6c96596a3c541b01fb8aa/pkg/controller/deployment/sync.go#L299" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/16670f1a95a61434f8d6c96596a3c541b01fb8aa/pkg/controller/deployment/sync.go#L299</a></p>
<pre><code>podTemplateSpecHash := fmt.Sprintf("%d", controller.ComputeHash(&newRSTemplate, d.Status.CollisionCount))
//...
Name: d.Name + "-" + rand.SafeEncodeString(podTemplateSpecHash),
</code></pre>
<p>Where <code>d</code> is a deployment.</p>
<p>The random method is implemented like this:</p>
<pre><code>// SafeEncodeString encodes s using the same characters as rand.String. This reduces the chances of bad words and
// ensures that strings generated from hash functions appear consistent throughout the API.
func SafeEncodeString(s string) string {
r := make([]rune, len(s))
for i, b := range []rune(s) {
r[i] = alphanums[(int(b) % len(alphanums))]
}
return string(r)
}
</code></pre>
<p>It looks to me that the length of the random suffix is based on the length of the podTemplateSpecHash, which is a value of type <code>uint32</code> (at most 4294967295, i.e. 10 decimal digits). Here is the method signature of the hashing function:</p>
<pre><code>func ComputeHash(template *v1.PodTemplateSpec, collisionCount *int32) uint32 {...}
</code></pre>
<p>Therefore the maximum number of digits should be 10. </p>
|
<p>I am migrating to Azure platform from GCP. I have a k8s cluster that needs to talk to external Cassandra cluster using internal IP(s), in the same Azure region but different VNET. I have the VNET(s) peered. I can reach the Cassandra cluster from the K8s nodes and vice versa but cannot reach them from the pods.</p>
<p>This seems to be some Azure networking issue. I have opened up firewall rules for the pods to reach Cassandra but with no luck. How best should I solve this?</p>
| <p>This happens because Azure can't route to the private IP addresses of your pods by default. We can use an Azure <strong>route table</strong> to connect them.</p>
<p>Here is my test: two resource groups, one for k8s and another one for a single VM.</p>
<p>Here is the information about pods:</p>
<pre><code>root@k8s-master-CA9C4E39-0:~# kubectl get pods --output=wide
NAME READY STATUS RESTARTS AGE IP NODE
influxdb 1/1 Running 0 59m 10.244.1.166 k8s-agent-ca9c4e39-0
my-nginx-858393261-jrz15 1/1 Running 0 1h 10.244.1.63 k8s-agent-ca9c4e39-0
my-nginx-858393261-wbpl6 1/1 Running 0 1h 10.244.1.62 k8s-agent-ca9c4e39-0
nginx 1/1 Running 0 52m 10.244.1.179 k8s-agent-ca9c4e39-0
nginx3 1/1 Running 0 43m 10.244.1.198 k8s-agent-ca9c4e39-0
</code></pre>
<p>The information about K8s agent and master :</p>
<p><a href="https://i.stack.imgur.com/loZrQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/loZrQ.png" alt="enter image description here"></a></p>
<p>The information about the single VM:</p>
<p><a href="https://i.stack.imgur.com/JDeuT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JDeuT.png" alt="enter image description here"></a></p>
<p>By default, we can't use <code>172.16.0.4</code> to ping <code>10.244.1.0/24</code>. We should add an Azure route table, then we can ping that pod IP address:</p>
<p><a href="https://i.stack.imgur.com/EVlF1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EVlF1.png" alt="enter image description here"></a></p>
<p>Here is my result:</p>
<pre><code>root@jasonvm2:~# ping 10.244.1.166
PING 10.244.1.166 (10.244.1.166) 56(84) bytes of data.
64 bytes from 10.244.1.166: icmp_seq=1 ttl=63 time=2.61 ms
64 bytes from 10.244.1.166: icmp_seq=2 ttl=63 time=1.42 ms
--- 10.244.1.166 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.424/2.019/2.614/0.595 ms
root@jasonvm2:~# ping 10.244.1.166
PING 10.244.1.166 (10.244.1.166) 56(84) bytes of data.
64 bytes from 10.244.1.166: icmp_seq=1 ttl=63 time=2.56 ms
64 bytes from 10.244.1.166: icmp_seq=2 ttl=63 time=1.10 ms
^C
--- 10.244.1.166 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.102/1.833/2.564/0.731 ms
root@jasonvm2:~# ping 10.244.1.63
PING 10.244.1.63 (10.244.1.63) 56(84) bytes of data.
64 bytes from 10.244.1.63: icmp_seq=1 ttl=63 time=2.89 ms
64 bytes from 10.244.1.63: icmp_seq=2 ttl=63 time=2.27 ms
--- 10.244.1.63 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 2.271/2.581/2.892/0.314 ms
</code></pre>
<p>About Azure route table, please refer to this <a href="https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview" rel="nofollow noreferrer">link</a>.</p>
|
<p>Is there a way to share secrets across namespaces in Kubernetes?</p>
<p>My use case is: I have the same private registry for all my namespaces and I want to avoid creating the same secret for each.</p>
| <p>Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Basically, you will have to create the secret for every namespace.</p>
<p>For more details, see this: <a href="https://kubernetes.io/docs/concepts/configuration/secret/#details" rel="noreferrer">Kubernetes Documentation / Concepts / Configuration / Secrets</a></p>
|
<p>I am trying to add Kubernetes as a cloud to a Jenkins server with the appropriate Kubernetes URL and other details. When I add the details and test the connection,
I get the following error:</p>
<blockquote>
<p>Error connecting to <a href="https://192.168.X.XX:6443" rel="nofollow noreferrer">https://192.168.X.XX:6443</a>: Failure executing: GET at: <a href="https://192.168.X.XX:6443/api/v1/namespaces/default/pods" rel="nofollow noreferrer">https://192.168.X.XX:6443/api/v1/namespaces/default/pods</a>. Message: User "system:anonymous" cannot list pods in the namespace "default".."</p>
</blockquote>
<p>I tried to perform curl with the --insecure option, but the same error is logged:</p>
<blockquote>
<p>Message: User "system:anonymous" cannot list pods in the namespace "default".."</p>
</blockquote>
<p>I tried to give the jenkins user (the credentials used to log in to Jenkins) the admin cluster role, using the following kubectl command:</p>
<blockquote>
<p>kubectl create rolebinding jenkins-admin-binding --clusterrole=admin --user=jenkins --namespace=default</p>
</blockquote>
<p>But still the same error.</p>
<p>Anything is missing?</p>
<p>EDIT 1: I have tried to do the following as suggested:</p>
<blockquote>
<p>openssl genrsa -out jenkins.key 2048</p>
<p>openssl req -new -key jenkins.key -out jenkins.csr -subj "/CN=jenkins/O=admin_jenkins"</p>
<p>openssl x509 -req -in jenkins.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out jenkins.crt -days 500</p>
<p>kubectl config set-credentials jenkins --client-certificate=/root/pods/admin_jenkins/.certs/jenkins.crt --client-key=/root/pods/admin_jenkins/.certs/jenkins.key</p>
<p>kubectl config set-context jenkins-context --cluster=kubernetes --namespace=default --user=jenkins</p>
<p>kubectl create -f role.yaml (Role file as described)</p>
<p>kubectl create -f role-binding.yaml</p>
</blockquote>
<p>even after this</p>
<pre><code>kubectl --context=jenkins-context get deployments
gives the following error
"Error from server (Forbidden): User "jenkins" cannot list deployments.extensions in the namespace "default". (get deployments.extensions)"
</code></pre>
<p>Update 2:</p>
<pre><code>after following above steps
"kubectl --context=jenkins-context get deployments" was successful.
i did the whole exercise after doing a kubeadm reset and it worked
</code></pre>
<p>But the problem still remains of integrating K8 with Jenkins when i am trying to add it as a cloud using its plugin.</p>
| <p>Did you define the role <code>admin</code>? If not, define the admin role. The document below describes how:</p>
<p><a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/</a></p>
<p>Update:
1. You can create a file <code>role.yaml</code> like this and create the role, then run <code>kubectl apply -f role.yaml</code>:</p>
<pre><code> kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: admin
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # You can also use ["*"]
</code></pre>
<p>you need to pass the client certificate with this role to authenticate. </p>
<p>From your second question, you are trying to use this account to authenticate the Jenkins application user. I am not sure this method will work for you.</p>
<p><strong>update on 9/25/17</strong></p>
<pre><code>Username: admin
Group: jenkins
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=jenkins"
#Run this as root user in master node
openssl x509 -req -in admin.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out admin.crt -days 500
mkdir .certs/
mv admin.* .certs/
kubectl config set-credentials admin --client-certificate=/home/jenkin/.certs/admin.crt --client-key=/home/jenkin/.certs/admin.key
kubectl config set-context admin-context --cluster=kubernetes --namespace=jenkins --user=admin
</code></pre>
<p><strong>Save this in a file and create the role</strong></p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: jenkins
name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # You can also use ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: deployment-manager-binding
namespace: jenkins
subjects:
- kind: User
name: admin
apiGroup: ""
roleRef:
kind: Role
name: deployment-manager
apiGroup: ""
</code></pre>
<p><strong>Run the get pods command</strong></p>
<pre><code>kubectl --context=admin-context get pods
</code></pre>
|
<p>After upgrading my cluster node image from <strong>CONTAINER_VM</strong> to <strong>CONTAINER_OPTIMIZED_OS</strong> I ran into performance degradation of the <strong>PHP application</strong> of up to 10 times.
Did I miss something in my configuration, or is it a common issue?
I tried machines with more CPU and memory, but that improved the performance only slightly.</p>
<p>Terraform configuration:</p>
<pre><code>resource "google_compute_address" "dev-cluster-address" {
name = "dev-cluster-address"
region = "europe-west1"
}
resource "google_container_cluster" "dev-cluster" {
name = "dev-cluster"
zone = "europe-west1-d"
initial_node_count = 2
node_version = "1.7.5"
master_auth {
username = "*********-dev"
password = "*********"
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/devstorage.full_control",
"https://www.googleapis.com/auth/sqlservice.admin"
]
machine_type = "n1-standard-1"
disk_size_gb = 20
image_type = "COS"
}
}
</code></pre>
<p>Kubernetes deployment for <strong>Symfony Application</strong>:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: deployment-dev
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: dev
spec:
containers:
- name: nginx
image: nginx:1.13.5-alpine
volumeMounts:
- name: application
mountPath: /var/www/web
- name: nginx-config
mountPath: /etc/nginx/conf.d
ports:
- containerPort: 80
resources:
limits:
cpu: "20m"
memory: "64M"
requests:
cpu: "5m"
memory: "16M"
- name: php
image: ********
lifecycle:
postStart:
exec:
command:
- "bash"
- "/var/www/provision/files/init_php.sh"
envFrom:
- configMapRef:
name: symfony-config-dev
volumeMounts:
- name: application
mountPath: /application
- name: logs
mountPath: /var/www/var/logs
- name: lexik-jwt-keys
mountPath: /var/www/var/jwt
ports:
- containerPort: 9000
resources:
limits:
cpu: "400m"
memory: "1536M"
requests:
cpu: "300m"
memory: "1024M"
- name: cloudsql-proxy-mysql
image: gcr.io/cloudsql-docker/gce-proxy:1.09
resources:
limits:
cpu: "10m"
memory: "64M"
requests:
cpu: "5m"
memory: "16M"
command:
- "/cloud_sql_proxy"
- "-instances=***:europe-west1:dev1=tcp:0.0.0.0:3306"
- name: cloudsql-proxy-analytics
image: gcr.io/cloudsql-docker/gce-proxy:1.09
resources:
limits:
cpu: "20m"
memory: "64M"
requests:
cpu: "10m"
memory: "16M"
command:
- "/cloud_sql_proxy"
- "-instances=***:europe-west1:analytics-dev1=tcp:0.0.0.0:3307"
- name: sidecar-logging
image: alpine:3.6
args: [/bin/sh, -c, 'tail -n+1 -f /var/www/var/logs/prod.log']
volumeMounts:
- name: logs
mountPath: /var/www/var/logs
resources:
limits:
cpu: "5m"
memory: "20M"
requests:
cpu: "5m"
memory: "20M"
volumes:
- name: application
emptyDir: {}
- name: logs
emptyDir: {}
- name: nginx-config
configMap:
name: config-dev
items:
- key: nginx
path: default.conf
- name: lexik-jwt-keys
configMap:
name: config-dev
items:
- key: lexik_jwt_private_key
path: private.pem
- key: lexik_jwt_public_key
path: public.pem
</code></pre>
| <p>One of the reasons could be the fact that Kubernetes actually started enforcing the CPU limits with Container-Optimized OS.</p>
<pre><code>resources:
limits:
cpu: "20m"
</code></pre>
<p>These were not enforced on the older ContainerVM images.</p>
<p>Could you please try removing or relaxing the CPU limits in your pod spec and see if it helps?</p>
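<p>For example, a minimal sketch of a relaxed nginx container spec; the concrete values here are illustrative assumptions, not recommendations:</p>
<pre><code># sketch only: raise the cpu limit or drop it entirely
- name: nginx
  image: nginx:1.13.5-alpine
  resources:
    requests:
      cpu: "5m"
      memory: "16M"
    limits:
      cpu: "200m"      # or omit the cpu limit altogether
      memory: "64M"
</code></pre>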
|
<p>I have a problem with multi-port services. I try to expose two ports, the first one works, the other does not. I am testing this with telnet (amongst others), and I always get "connection refused" for the second port.</p>
<p>This is the part about the ports in the service's yaml:</p>
<pre><code>spec:
clusterIP: 10.97.153.249
externalTrafficPolicy: Cluster
ports:
- name: port-1
nodePort: 32714
port: 8080
protocol: TCP
targetPort: 8080
- name: port-2
nodePort: 32715
port: 17176
protocol: TCP
targetPort: 17176
</code></pre>
| <p>I would first confirm that kubectl get svc shows the two NodePorts. If that is the case, then it is highly likely that the destination port inside the pods is not working. Could you check in the pods whether the ports are listening correctly? Then, I would also advise you to check access using the ClusterIP as well. </p>
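<p>A rough sketch of those checks, assuming the pod is named <code>my-pod</code> and its image ships a shell with netstat and nc (both assumptions):</p>
<pre><code># 1. confirm both NodePorts are registered on the service
kubectl get svc

# 2. check inside the pod whether both target ports are listening
kubectl exec -it my-pod -- netstat -tln

# 3. test the second port via the ClusterIP
kubectl exec -it my-pod -- nc -zv 10.97.153.249 17176
</code></pre>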
|
<p>I am in the middle of no where. Following this tutorial <a href="https://cloud.google.com/container-engine/docs/tutorials/persistent-disk" rel="nofollow noreferrer">https://cloud.google.com/container-engine/docs/tutorials/persistent-disk</a></p>
<p>I deployed wordpress to Google Container Engine. Now I have no idea how to access the wordpress files on this persistent disk, either with FTP or SFTP. I can access project files with SFTP in FileZilla, but can't find the wordpress core files there. Is there any way I can access these wordpress files?</p>
| <p>The persistent disk containing the wordpress files is attached to the wordpress pod. This disk is mounted on the /var/www/html folder inside the wordpress pod. You can access these files by connecting to the wordpress pod. First get the name of the running pod by executing the following command,</p>
<pre><code> kubectl get pods
</code></pre>
<p>Now use the name of the wordpress pod in the following command. This runs a remote shell on the wordpress pod,</p>
<pre><code>kubectl exec -it <POD_NAME> sh
</code></pre>
<p>In the shell, run the ls command to see the list of wordpress files,</p>
<pre><code># ls
</code></pre>
<p>This will list the wordpress files. If you want to edit these files, you need to install vim or nano.</p>
<pre><code># apt-get update
# apt-get install vim
# apt-get install nano
# vi wp-config.php
</code></pre>
<p>Note that vim/nano will be removed if the wordpress pod is restarted. If you really want them inside your pod, you will need to create a custom container image.</p>
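<p>If you want the editors to survive restarts, a minimal sketch of such a custom image could look like this (the base image tag is an assumption):</p>
<pre><code># extend the official wordpress image so the editors persist across restarts
FROM wordpress:latest
RUN apt-get update \
 && apt-get install -y --no-install-recommends vim nano \
 && rm -rf /var/lib/apt/lists/*
</code></pre>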
|
<p>I executed below command:</p>
<pre><code>kubectl proxy --port=8081 &
kubectl proxy --port=8082 &
</code></pre>
<p>and of course I have 2 accessible endpoints:</p>
<pre><code>curl http://localhost:8081/api/
curl http://localhost:8082/api/
</code></pre>
<p>But at the same time there are two running processes serving the same content.
How do I stop one of these processes in a "kubectl" manner?
Of course, I can kill the process, but that seems like a less elegant way...</p>
| <p>I believe the "kubectl way" is to not background the proxy at all as it is intended to be a short running process to access the API on your local machine without further authentication.</p>
<p>There is no way to stop it other than kill or ^C (if not in background).</p>
<p>You can use standard shell tricks though, so executing <code>fg</code> then ^C will work or <code>kill %1</code></p>
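<p>For example, using plain shell job control (this is shell behaviour, not a kubectl feature):</p>
<pre><code>$ jobs            # list the backgrounded proxies
[1]-  Running    kubectl proxy --port=8081 &
[2]+  Running    kubectl proxy --port=8082 &
$ kill %2         # stop the proxy listening on port 8082
</code></pre>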
|
<p>Sometimes there are variables in the deployment yaml file which are not pre-specified and will be known only during deployment (For example name and tag for the image of a container).</p>
<p>Normally we put a marker text (e.g. <code>{{IMAGE_NAME}}</code>) in the yaml file and use bash text manipulation tools to replace it with the actual value in the deployment file.</p>
<p>Is there a way to use environment variables or other methods (like using arguments when running <code>kubectl create</code>) instead of text-replace tools?</p>
| <p>What I've done is use envvars in the deployment configuration, then run <code>apply</code>/<code>create</code> with the output from an <a href="https://linux.die.net/man/1/envsubst" rel="nofollow noreferrer"><code>envsubst</code></a> command:</p>
<p>deployment.yaml file:</p>
<pre><code>[...]
spec:
replicas: $REPLICA_COUNT
revisionHistoryLimit: $HISTORY_LIM
[...]
</code></pre>
<p>during deploy:</p>
<pre><code>$ export REPLICA_COUNT=10 HISTORY_LIM=10
$ envsubst < deployment.yaml | kubectl apply -f -
</code></pre>
|
<p>I can't get Ingress to work on GKE, owing to health check failures. I've tried all of the debugging steps I can think of, including:</p>
<ul>
<li>Verified I'm not running low on any quotas</li>
<li>Verified that my service is accessible from within the cluster</li>
<li>Verified that my service works behind a k8s/GKE Load Balancer. </li>
<li>Verified that <code>healthz</code> checks are passing in Stackdriver logs</li>
</ul>
<p><strong>... I'd love any advice about how to debug or fix. Details below!</strong></p>
<hr>
<p>I have set up a service with type <code>LoadBalancer</code> on GKE. Works great via external IP:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: echoserver
namespace: es
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
type: LoadBalancer
selector:
app: echoserver
</code></pre>
<p>Then I try setting up an Ingress on top of this same service:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: echoserver-ingress
namespace: es
annotations:
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.global-static-ip-name: "echoserver-global-ip"
spec:
backend:
serviceName: echoserver
servicePort: 80
</code></pre>
<p>The Ingress gets created, but it thinks the backend nodes are unhealthy:</p>
<pre><code>$ kubectl --namespace es describe ingress echoserver-ingress | grep backends
backends: {"k8s-be-31102--<snipped>":"UNHEALTHY"}
</code></pre>
<p>Inspecting the state of the Ingress backend in the GKE web console, I see the same thing:</p>
<p><a href="https://i.stack.imgur.com/DZlIK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DZlIK.png" alt="0 of 3 healthy"></a></p>
<p>The health check details appear as expected:</p>
<p><a href="https://i.stack.imgur.com/iwWfv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iwWfv.png" alt="health check details"></a></p>
<p>... and from within a pod in my cluster I can call the service successfully:</p>
<pre><code># curl -vvv echoserver 2>&1 | grep "< HTTP"
< HTTP/1.0 200 OK
# curl -vvv echoserver/healthz 2>&1 | grep "< HTTP"
< HTTP/1.0 200 OK
</code></pre>
<p>And I can address the service by NodePort:</p>
<pre><code># curl -vvv 10.0.1.1:31102 2>&1 | grep "< HTTP"
< HTTP/1.0 200 OK
</code></pre>
<p>(Which goes without saying, because the Load Balancer service I set up in step 1 resulted in a web site that's working just fine.)</p>
<p>I also see <code>healthz</code> checks passing in Stackdriver logs:</p>
<p><a href="https://i.stack.imgur.com/gYZEb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gYZEb.png" alt="enter image description here"></a></p>
<p>Regarding quotas, I check and see I'm only using 3 of 30 backend services:</p>
<pre><code>$ gcloud compute project-info describe | grep -A 1 -B 1 BACKEND_SERVICES
- limit: 30.0
metric: BACKEND_SERVICES
usage: 3.0
</code></pre>
| <p>Had a similar issue a few weeks ago. What fixed it for me was to add a NodePort in the service description so that the Google Cloud Loadbalancer can probe this NodePort. The config that worked for me:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: some-service
spec:
selector:
name: some-app
type: NodePort
ports:
- port: 80
targetPort: 8080
nodePort: 32000
protocol: TCP
</code></pre>
<p>It might take some time for the ingress to pick this up. You can re-create the ingress to speed things up.</p>
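<p>For example, assuming the ingress manifest is saved as <code>echoserver-ingress.yaml</code> (the file name is an assumption):</p>
<pre><code>kubectl delete -f echoserver-ingress.yaml
kubectl apply -f echoserver-ingress.yaml
</code></pre>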
|
<p>I'm playing with Kubernetes inside 3 VirtualBox VMs with CentOS 7: 1 master and 2 minions. Unfortunately, installation manuals say something like <code>every service will be accessible from every node, every pod will see all other pods</code>, but I don't see this happening. I can access a service only from the node where its pod runs. Please help me find out what I am missing; I'm very new to Kubernetes.</p>
<p>Every VM has 2 adapters: <strong>NAT</strong> and <strong>Host-only</strong>. Second one has IPs 10.0.13.101-254.</p>
<ul>
<li>Master-1: 10.0.13.104</li>
<li>Minion-1: 10.0.13.105</li>
<li>Minion-2: 10.0.13.106</li>
</ul>
<hr>
<p>Get all pods from master:</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox 1/1 Running 4 37m
default nginx-demo-2867147694-f6f9m 1/1 Running 1 52m
default nginx-demo2-2631277934-v4ggr 1/1 Running 0 5s
kube-system etcd-master-1 1/1 Running 1 1h
kube-system kube-apiserver-master-1 1/1 Running 1 1h
kube-system kube-controller-manager-master-1 1/1 Running 1 1h
kube-system kube-dns-2425271678-kgb7k 3/3 Running 3 1h
kube-system kube-flannel-ds-pwsq4 2/2 Running 4 56m
kube-system kube-flannel-ds-qswt7 2/2 Running 4 1h
kube-system kube-flannel-ds-z0g8c 2/2 Running 12 56m
kube-system kube-proxy-0lfw0 1/1 Running 2 56m
kube-system kube-proxy-6263z 1/1 Running 2 56m
kube-system kube-proxy-b8hc3 1/1 Running 1 1h
kube-system kube-scheduler-master-1 1/1 Running 1 1h
</code></pre>
<p>Get all services:</p>
<pre><code>$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 1h
nginx-demo 10.104.34.229 <none> 80/TCP 51m
nginx-demo2 10.102.145.89 <none> 80/TCP 3s
</code></pre>
<p>Get Nginx pods IP info:</p>
<pre><code>$ kubectl get pod nginx-demo-2867147694-f6f9m -o json | grep IP
"hostIP": "10.0.13.105",
"podIP": "10.244.1.58",
$ kubectl get pod nginx-demo2-2631277934-v4ggr -o json | grep IP
"hostIP": "10.0.13.106",
"podIP": "10.244.2.14",
</code></pre>
<hr>
<p>As you see - one Nginx example is on the first minion, and the second example is on the second minion.</p>
<p>The problem is - I can access <strong>nginx-demo</strong> from node <strong>10.0.13.105</strong> only (with Pod IP and Service IP), with curl:</p>
<pre><code>curl 10.244.1.58:80
curl 10.104.34.229:80
</code></pre>
<p>, and <strong>nginx-demo2</strong> from <strong>10.0.13.106</strong> only, not vice versa and not from master-node. Busybox is on node <strong>10.0.13.105</strong>, so it can reach <strong>nginx-demo</strong>, but not <strong>nginx-demo2</strong>.</p>
<p>How do I make the services accessible independently of the node? Is flannel misconfigured?</p>
<p>Routing table on master:</p>
<pre><code>$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
10.0.13.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
</code></pre>
<p>Routing table on minion-1:</p>
<pre><code># route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
10.0.13.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel.1
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
</code></pre>
<p>Maybe the default gateway is a problem (NAT adapter)? Another problem: from the Busybox container I try to resolve service DNS names and that doesn't work either:</p>
<pre><code>$ kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1"
If you don't see a command prompt, try pressing enter.
/ #
/ # nslookup nginx-demo
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'nginx-demo'
/ #
/ # nslookup nginx-demo.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'nginx-demo.default.svc.cluster.local'
</code></pre>
<p>Guest OS security was relaxed as follows:</p>
<pre><code>setenforce 0
systemctl stop firewalld
</code></pre>
<p>Feel free to ask more info if you need.</p>
<hr>
<h2>Addional info</h2>
<p><strong>kube-dns</strong> logs:</p>
<pre><code>$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k kubedns
I0919 07:48:45.000397 1 dns.go:48] version: 1.14.3-4-gee838f6
I0919 07:48:45.114060 1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0919 07:48:45.114129 1 server.go:113] FLAG: --alsologtostderr="false"
I0919 07:48:45.114144 1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0919 07:48:45.114155 1 server.go:113] FLAG: --config-map=""
I0919 07:48:45.114162 1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0919 07:48:45.114169 1 server.go:113] FLAG: --config-period="10s"
I0919 07:48:45.114179 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0919 07:48:45.114186 1 server.go:113] FLAG: --dns-port="10053"
I0919 07:48:45.114196 1 server.go:113] FLAG: --domain="cluster.local."
I0919 07:48:45.114206 1 server.go:113] FLAG: --federations=""
I0919 07:48:45.114215 1 server.go:113] FLAG: --healthz-port="8081"
I0919 07:48:45.114223 1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0919 07:48:45.114230 1 server.go:113] FLAG: --kube-master-url=""
I0919 07:48:45.114238 1 server.go:113] FLAG: --kubecfg-file=""
I0919 07:48:45.114245 1 server.go:113] FLAG: --log-backtrace-at=":0"
I0919 07:48:45.114256 1 server.go:113] FLAG: --log-dir=""
I0919 07:48:45.114264 1 server.go:113] FLAG: --log-flush-frequency="5s"
I0919 07:48:45.114271 1 server.go:113] FLAG: --logtostderr="true"
I0919 07:48:45.114278 1 server.go:113] FLAG: --nameservers=""
I0919 07:48:45.114285 1 server.go:113] FLAG: --stderrthreshold="2"
I0919 07:48:45.114292 1 server.go:113] FLAG: --v="2"
I0919 07:48:45.114299 1 server.go:113] FLAG: --version="false"
I0919 07:48:45.114310 1 server.go:113] FLAG: --vmodule=""
I0919 07:48:45.116894 1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0919 07:48:45.117296 1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0919 07:48:45.117329 1 dns.go:147] Starting endpointsController
I0919 07:48:45.117336 1 dns.go:150] Starting serviceController
I0919 07:48:45.117702 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0919 07:48:45.117716 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0919 07:48:45.620177 1 dns.go:171] Initialized services and endpoints from apiserver
I0919 07:48:45.620217 1 server.go:129] Setting up Healthz Handler (/readiness)
I0919 07:48:45.620229 1 server.go:134] Setting up cache handler (/cache)
I0919 07:48:45.620238 1 server.go:120] Status HTTP port 8081
$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k dnsmasq
I0919 07:48:48.466499 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0919 07:48:48.478353 1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0919 07:48:48.697877 1 nanny.go:111]
W0919 07:48:48.697903 1 nanny.go:112] Got EOF from stdout
I0919 07:48:48.697925 1 nanny.go:108] dnsmasq[10]: started, version 2.76 cachesize 1000
I0919 07:48:48.697937 1 nanny.go:108] dnsmasq[10]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0919 07:48:48.697943 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0919 07:48:48.697947 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0919 07:48:48.697950 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0919 07:48:48.697955 1 nanny.go:108] dnsmasq[10]: reading /etc/resolv.conf
I0919 07:48:48.697959 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0919 07:48:48.697962 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0919 07:48:48.697965 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0919 07:48:48.697968 1 nanny.go:108] dnsmasq[10]: using nameserver 85.254.193.137#53
I0919 07:48:48.697971 1 nanny.go:108] dnsmasq[10]: using nameserver 92.240.64.23#53
I0919 07:48:48.697975 1 nanny.go:108] dnsmasq[10]: read /etc/hosts - 7 addresses
$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k sidecar
ERROR: logging before flag.Parse: I0919 07:48:49.990468 1 main.go:48] Version v1.14.3-4-gee838f6
ERROR: logging before flag.Parse: I0919 07:48:49.994335 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
ERROR: logging before flag.Parse: I0919 07:48:49.994369 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: I0919 07:48:49.994435 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
</code></pre>
<p><strong>kube-flannel</strong> logs from one pod. but is similar to the others:</p>
<pre><code>$ kubectl -n kube-system logs kube-flannel-ds-674mx kube-flannel
I0919 08:07:41.577954 1 main.go:446] Determining IP address of default interface
I0919 08:07:41.579363 1 main.go:459] Using interface with name enp0s3 and address 10.0.2.15
I0919 08:07:41.579408 1 main.go:476] Defaulting external address to interface address (10.0.2.15)
I0919 08:07:41.600985 1 kube.go:130] Waiting 10m0s for node controller to sync
I0919 08:07:41.601032 1 kube.go:283] Starting kube subnet manager
I0919 08:07:42.601553 1 kube.go:137] Node controller sync successful
I0919 08:07:42.601959 1 main.go:226] Created subnet manager: Kubernetes Subnet Manager - minion-1
I0919 08:07:42.601966 1 main.go:229] Installing signal handlers
I0919 08:07:42.602036 1 main.go:330] Found network config - Backend type: vxlan
I0919 08:07:42.606970 1 ipmasq.go:51] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I0919 08:07:42.608380 1 ipmasq.go:51] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0919 08:07:42.609579 1 ipmasq.go:51] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.1.0/24 -j RETURN
I0919 08:07:42.611257 1 ipmasq.go:51] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I0919 08:07:42.612595 1 main.go:279] Wrote subnet file to /run/flannel/subnet.env
I0919 08:07:42.612606 1 main.go:284] Finished starting backend.
I0919 08:07:42.612638 1 vxlan_network.go:56] Watching for L3 misses
I0919 08:07:42.612651 1 vxlan_network.go:64] Watching for new subnet leases
$ kubectl -n kube-system logs kube-flannel-ds-674mx install-cni
+ cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf
+ true
+ sleep 3600
+ true
+ sleep 3600
</code></pre>
<hr>
<p>I've added some more services and exposed them with type <strong>NodePort</strong>; this is what I get when scanning ports from the host machine:</p>
<pre><code># nmap 10.0.13.104 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:20 EEST
Nmap scan report for 10.0.13.104
Host is up (0.0014s latency).
Not shown: 49992 closed ports
PORT STATE SERVICE
22/tcp open ssh
6443/tcp open sun-sr-https
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp filtered unknown
31844/tcp filtered unknown
32619/tcp filtered unknown
MAC Address: 08:00:27:90:26:1C (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.96 seconds
# nmap 10.0.13.105 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:20 EEST
Nmap scan report for 10.0.13.105
Host is up (0.00040s latency).
Not shown: 49993 closed ports
PORT STATE SERVICE
22/tcp open ssh
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp open unknown
31844/tcp open unknown
32619/tcp filtered unknown
MAC Address: 08:00:27:F8:E3:71 (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.87 seconds
# nmap 10.0.13.106 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:21 EEST
Nmap scan report for 10.0.13.106
Host is up (0.00059s latency).
Not shown: 49993 closed ports
PORT STATE SERVICE
22/tcp open ssh
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp filtered unknown
31844/tcp filtered unknown
32619/tcp open unknown
MAC Address: 08:00:27:D9:33:32 (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.92 seconds
</code></pre>
<p>If we take the latest service on port <strong>32619</strong>: it exists everywhere, but is open only on the related node; on the others it's filtered.</p>
<h1>tcpdump info on Minion-1</h1>
<p>Connection from Host to Minion-1 with <code>curl 10.0.13.105:30572</code>:</p>
<pre><code># tcpdump -ni enp0s8 tcp or icmp and not port 22 and not host 10.0.13.104
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes
13:11:39.043874 IP 10.0.13.1.41132 > 10.0.13.105.30572: Flags [S], seq 657506957, win 29200, options [mss 1460,sackOK,TS val 504213496 ecr 0,nop,wscale 7], length 0
13:11:39.045218 IP 10.0.13.105 > 10.0.13.1: ICMP time exceeded in-transit, length 68
</code></pre>
<p>on <strong>flannel.1</strong> interface:</p>
<pre><code># tcpdump -ni flannel.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
13:11:49.499148 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499074 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499239 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499074 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499247 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
</code></pre>
<p>... only the <code>ICMP time exceeded in-transit</code> error and SYN packets, so there is no connection between the pod networks, whereas <code>curl 10.0.13.106:30572</code> works.</p>
<h1>Minion-1 interfaces</h1>
<pre><code># ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:35:72:ab brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 77769sec preferred_lft 77769sec
inet6 fe80::772d:2128:6aaa:2355/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:f8:e3:71 brd ff:ff:ff:ff:ff:ff
inet 10.0.13.105/24 brd 10.0.13.255 scope global dynamic enp0s8
valid_lft 1089sec preferred_lft 1089sec
inet6 fe80::1fe0:dba7:110d:d673/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::f04f:5413:2d27:ab55/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:59:53:d7:fd brd ff:ff:ff:ff:ff:ff
inet 10.244.1.2/24 scope global docker0
valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether fa:d3:3e:3e:77:19 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::f8d3:3eff:fe3e:7719/64 scope link
valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
link/ether 0a:58:0a:f4:01:01 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::c4f9:96ff:fed8:8cb6/64 scope link
valid_lft forever preferred_lft forever
13: veth5e2971fe@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether 1e:70:5d:6c:55:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1c70:5dff:fe6c:5533/64 scope link
valid_lft forever preferred_lft forever
14: veth8f004069@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether ca:39:96:59:e6:63 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::c839:96ff:fe59:e663/64 scope link
valid_lft forever preferred_lft forever
15: veth5742dc0d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether c2:48:fa:41:5d:67 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::c048:faff:fe41:5d67/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
| <p>It works either by disabling the firewall or by applying the workaround command from the issue linked below.</p>
<p>I found this open bug in my search. It looks like this is related to Docker >= 1.13 and flannel.</p>
<p>refer: <a href="https://github.com/coreos/flannel/issues/799" rel="nofollow noreferrer">https://github.com/coreos/flannel/issues/799</a></p>
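<p>The workaround most commonly referenced in that issue is to reset the iptables FORWARD policy, which Docker >= 1.13 switches to DROP. Note that this command is taken from the linked issue rather than stated in the original answer, so treat it as an assumption:</p>
<pre><code># run on every node; Docker >= 1.13 sets the default FORWARD policy to DROP
sudo iptables -P FORWARD ACCEPT
</code></pre>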
|
<p>I built a simple NodeJS API, pushed the Docker image to a repo and deployed it to my k8s cluster with Helm install (works perfectly fine).</p>
<p>The pullPolicy is <code>Always</code>.</p>
<p>Now I want to update the source code and deploy the updated version of my app. I bumped the version in all files, built and pushed the new Docker image and tried <code>helm upgrade</code>, but it seems like nothing happened.
With <code>helm list</code> I can see that a new revision was deployed, but the changes to the source code were not.
<code>watch kubectl get pods</code> also shows that no new pods were created, the way you would expect with <code>kubectl apply ...</code></p>
<p>What did I do wrong?</p>
| <p>Helm will roll out changes to Kubernetes objects only if there are changes to roll out. If you use <code>:latest</code>, there is no change to apply to the deployment manifest, so no pods will be rolling-updated. To keep using <code>latest</code>, you need to add something (e.g. a label with a SHA or version) that will change and cause the deployment to be updated by Helm. Also keep in mind that you will usually need <code>imagePullPolicy: Always</code> as well.</p>
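<p>A hedged sketch of such a changing value inside a Helm deployment template; the <code>.Values</code> key and the names below are assumptions, not from the original chart:</p>
<pre><code># any per-release value in the pod template forces a rolling update
spec:
  template:
    metadata:
      labels:
        app: my-api
        git-sha: "{{ .Values.image.sha }}"   # e.g. the commit that was built
    spec:
      containers:
        - name: my-api
          image: "myrepo/my-api:latest"
          imagePullPolicy: Always
</code></pre>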
|
<p>I have a 3 node kubernetes cluster, a master and two nodes on AWS that I created with kubeadm (<a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a>)</p>
<p>I have created some deployments from the master node and I can see that pods are created on the 2 nodes for each of the deployments.
But the issue is that I can't access the pod IP from the master or from the other node. So the pod IP is only accessible on the node where the pod is running.</p>
<p>I have a service of NodePort type, so when the service (pod1:port) hits the other pod (pod2), it hangs and times out.</p>
<p>Thanks.</p>
| <p>It works either by disabling the firewall or by applying the workaround from the issue linked below.</p>
<p>I found this bug in my search. It looks like this is related to Docker >= 1.13 and flannel.</p>
<p>refer: <a href="https://github.com/coreos/flannel/issues/799" rel="nofollow noreferrer">https://github.com/coreos/flannel/issues/799</a></p>
|
<p>I have noticed that containers in a pod can use localhost to talk to each other as advertised. For example one container starts a server socket on localhost:9999 and a second container can connect to that address. This fails if I expose the server container's port. Also it fails if I create a TCP liveness probe on that port. It appears that the liveness probe uses the pod IP address and cannot connect to localhost:9999 unless it is exposed. If both containers use the pod IP, i.e., $HOSTNAME:9999, and the port is exposed then everything works. Does any one have an example that works where each container uses localhost and the TCP probe works?</p>
| <p>Here is an example deployment using TCP liveness probe, TCP readiness probe and networking between containers in a pod with the server containers port exposed:</p>
<h2>test.yml</h2>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: test
spec:
template:
metadata:
labels:
app: test
spec:
containers:
- name: server
image: alpine
command:
- '/bin/sh'
- '-c'
- 'nc -p 8080 -kle echo pong'
livenessProbe:
tcpSocket:
port: 8080
readinessProbe:
tcpSocket:
port: 8080
ports:
- containerPort: 8080
- name: client
image: alpine
command:
- '/bin/sh'
- '-c'
- 'while true; do echo -e | nc localhost 8080; sleep 1; done'
</code></pre>
<p>Creating and verifying the deployment:</p>
<pre><code>> kubectl create -f test.yml
> kubectl get pod -l app=test
NAME READY STATUS RESTARTS AGE
test-620943134-fzm05 2/2 Running 0 1m
> kubectl logs test-620943134-fzm05 client
pong
pong
pong
[β¦]
</code></pre>
|
<p>With Kubernetes on can define storage classes with provisioners. How does one find which provisioners are installed and available in the cluster? </p>
<p>Inspecting the storage classes will reveal which provisioners are already in use, but not whether there are more available.</p>
| <p>A provisioner does not necessarily need to run in the cluster, e.g. the provisioner for an external storage appliance just connects to the cluster api server and watches for new persistent volume requests created with a storage class bound to its provisioner name. This is why as of Kubernetes 1.7 there is no intended universal way to see if a storage classes provisioner is actually available or not.</p>
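<p>To see which provisioner names the existing storage classes reference (which is as close as the cluster itself can tell you), something like the following can be used:</p>
<pre><code># list each storage class together with the provisioner it points to
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
</code></pre>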
|
<p>I have certain questions regarding importing the existing certificates.</p>
<ol>
<li><p>How are certificates used internally in Kubernetes (e.g. between api server and workers, master controller, etc.)?
Is there a CA in Kubernetes? (How) does it generate certificates for internal use?</p></li>
<li><p>What certificates are required at each layer?</p></li>
</ol>
| <p>Certificates in Kubernetes are primarily used to secure communication from and to the API server. Taken from the <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">official Kubernetes documentation</a>:</p>
<blockquote>
<p>Every Kubernetes cluster has a cluster root Certificate Authority
(CA). The CA is generally used by cluster components to validate the
API serverβs certificate, by the API server to validate kubelet client
certificates, etc. To support this, the CA certificate bundle is
distributed to every node in the cluster and is distributed as a
secret attached to default service accounts. Optionally, your
workloads can use this CA to establish trust. Your application can
request a certificate signing using the certificates.k8s.io API using
a protocol that is similar to the ACME draft.</p>
</blockquote>
<p>When creating a cluster with <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a>, the tool at first creates a CA in <code>/etc/kubernetes/pki</code> and signs all following certificates with its private key. The ca is later distributed on all nodes for verification and also found base64 encoded in <code>/etc/kubernetes/admin.conf</code> for verification of the api server via <code>kubectl</code>. </p>
<p>It is possible to use your own CA for cluster creation by placing it and your private key as <code>ca.crt</code> and <code>ca.key</code> in <code>/etc/kubernetes/pki</code> before invoking <code>kubeadm init</code> or any folder later specified with <code>--cert-dir</code>.</p>
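<p>A minimal sketch of bringing your own CA, assuming the certificate and key already exist locally as <code>my-ca.crt</code> and <code>my-ca.key</code> (those file names are placeholders):</p>
<pre><code># place the existing CA where kubeadm expects it, then initialize the cluster
sudo mkdir -p /etc/kubernetes/pki
sudo cp my-ca.crt /etc/kubernetes/pki/ca.crt
sudo cp my-ca.key /etc/kubernetes/pki/ca.key
sudo kubeadm init
</code></pre>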
<p>There are many other ways to install Kubernetes but they all essentially create a CA before any actual Kubernetes code runs or require one to exist beforehand.</p>
|
<p>I have a Kubernetes deployment that deploys a Java application based on the <a href="https://hub.docker.com/r/anapsix/alpine-java/" rel="nofollow noreferrer">anapsix/alpine-java</a> image. There is nothing else running in the container expect for the Java application and the container overhead.</p>
<p>I want to maximise the amount of memory the Java process can use inside the docker container and minimise the amount of ram that will be reserved but never used.</p>
<p><strong>For example I have:</strong></p>
<ol>
<li><strong>Two Kubernetes nodes that have 8 gig of ram each and no swap</strong></li>
<li><strong>A Kubernetes deployment that runs a Java process consuming a maximum of 1 gig of heap to operate optimally</strong></li>
</ol>
<p><em><strong>How can I safely maximise the amount of pods running on the two nodes while never having Kubernetes terminate my PODs because of memory limits?</strong></em></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: my-deployment
spec:
containers:
- name: my-deployment
image: myreg:5000/my-deployment:0.0.1-SNAPSHOT
ports:
- containerPort: 8080
name: http
resources:
requests:
memory: 1024Mi
limits:
memory: 1024Mi
</code></pre>
<p>Java 8 update 131+ has a flag -XX:+UseCGroupMemoryLimitForHeap to use the Docker limits that come from the Kubernetes deployment.</p>
<p><strong>My Docker experiments show me what is happening in Kubernetes</strong></p>
<p><strong>If I run the following in Docker:</strong></p>
<pre><code>docker run -m 1024m anapsix/alpine-java:8_server-jre_unlimited java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -version
</code></pre>
<p><em><strong>I get:</strong></em></p>
<pre><code>VM settings:
Max. Heap Size (Estimated): 228.00M
</code></pre>
<p>This low value is because Java sets -XX:MaxRAMFraction to 4 by default and I get about 1/4 of the ram allocated...</p>
<p><strong>If I run the same command with -XX:MaxRAMFraction=2 in Docker:</strong></p>
<pre><code>docker run -m 1024m anapsix/alpine-java:8_server-jre_unlimited java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -XX:MaxRAMFraction=2 -version
</code></pre>
<p><em><strong>I get:</strong></em></p>
<pre><code>VM settings:
Max. Heap Size (Estimated): 455.50M
</code></pre>
<p>Finally, setting MaxRAMFraction=1 quickly causes Kubernetes to kill my container.</p>
<pre><code>docker run -m 1024m anapsix/alpine-java:8_server-jre_unlimited java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -XX:MaxRAMFraction=1 -version
</code></pre>
<p><em><strong>I get:</strong></em></p>
<pre><code>VM settings:
Max. Heap Size (Estimated): 910.50M
</code></pre>
| <p>The reason Kubernetes kills your pods is the <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="noreferrer">resource limit</a>. It is difficult to calculate because of container overhead and the usual mismatches between decimal and binary prefixes in the specification of memory usage. My solution is to entirely drop the limit and only keep the requirement (which is what your pod will have available in any case if it is scheduled). Rely on the JVM to limit its heap via static specification and let Kubernetes manage how many pods are scheduled on a single node via the resource requirement. </p>
<p>At first you will need to determine the actual memory usage of your container when running with your desired heap size. Run a pod with <code>-Xmx1024m -Xms1024m</code> and connect to the host's docker daemon it's scheduled on. Run <code>docker ps</code> to find your pod and <code>docker stats <container></code> to see its current memory usage, which is the sum of JVM heap, other static JVM usage like direct memory, and your container's overhead (alpine with glibc). This value should only fluctuate within kibibytes because of some network usage that is handled outside the JVM. Add this value as the memory requirement to your pod template. </p>
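<p>A hedged sketch of the resulting pod template, assuming the measured footprint came out around 1200Mi and that the image picks up JVM options from an environment variable (both assumptions):</p>
<pre><code>containers:
- name: my-deployment
  image: myreg:5000/my-deployment:0.0.1-SNAPSHOT
  env:
  - name: JAVA_OPTS                   # assumes the entrypoint reads this variable
    value: "-Xms1024m -Xmx1024m"      # heap is capped by the JVM, not by Kubernetes
  resources:
    requests:
      memory: 1200Mi                  # measured container footprint; no memory limit is set
</code></pre>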
<p>Calculate or estimate how much memory other components on your nodes need to function properly. There will at least be the Kubernetes kubelet, the Linux kernel, its userland, probably an SSH daemon and in your case a docker daemon running on them. You can choose a generous default like 1 Gibibyte excluding the kubelet if you can spare the extra few bytes. Specify <code>--system-reserved=1Gi</code> and <code>--kube-reserved=100Mi</code> in your kubelets flags and restart it. This will add those reserved resources to the Kubernetes schedulers calculations when determining how many pods can run on a node. See the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="noreferrer">official Kubernetes documentation</a> for more information. </p>
<p>This way there will probably be five to seven pods scheduled on a node with eight gigabytes of RAM, depending on the above chosen and measured values. They will be guaranteed the RAM specified in the memory requirement and will not be terminated. Verify the memory usage via <code>kubectl describe node</code> under <code>Allocated resources</code>. As for elegance/flexibility, you just need to adjust the memory requirement and JVM heap size if you want to increase the RAM available to your application.</p>
<p>This approach only works assuming that the pods' memory usage will not explode; if it were not limited by the JVM, a rogue pod might cause eviction, see <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="noreferrer">out of resource handling</a>. </p>
|
<p>Kubernetes supports GPUs as an experimental feature. Does it work in google container engine? Do I need to have some special configuration to enable it? I want to be able to run machine learning workloads, but want to use Python 3 which isn't available in CloudML.</p>
| <p>GPUs on Google Container Engine are now available in Alpha. <a href="https://docs.google.com/forms/d/1JNnoUe1_3xZvAogAi16DwH6AjF2eu08ggED24OGO7Xc/viewform?edit_requested=true" rel="noreferrer">Sign up form</a>.</p>
<p>Beware that <a href="https://cloud.google.com/container-engine/docs/alpha-clusters" rel="noreferrer">alpha cluster limitations</a> apply: they cannot be upgraded, and they will be auto-deleted in 30 days.</p>
<p><em>Disclaimer</em>: I work at GCP.</p>
|
<p>I am trying to reach my k8s master from my workstation. I can access the master from the LAN fine, but not from my workstation. The error message is:</p>
<pre><code>% kubectl --context=employee-context get pods
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.161.233.80, not 114.215.201.87
</code></pre>
<p>What can I do to add 114.215.201.87 to the certificate? Do I need to remove my old cluster ca.crt, recreate it, restart the whole cluster and then re-sign the client certificate? I have deployed my cluster with kubeadm and I am not sure how to do these steps manually.</p>
| <p>One option is to tell <code>kubectl</code> that you don't want the certificate to be validated. Obviously this brings up security issues but I guess you are only testing so here you go:</p>
<pre><code>kubectl --insecure-skip-tls-verify --context=employee-context get pods
</code></pre>
<p>The better option is to fix the certificate. Easiest if you reinitialize the cluster by running <code>kubeadm reset</code> on all nodes including the master and then do</p>
<pre><code>kubeadm init --apiserver-cert-extra-sans=114.215.201.87
</code></pre>
<p>It's also possible to fix that certificate without wiping everything, but that's a bit more tricky. Execute something like this on the master as root:</p>
<pre><code>rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
</code></pre>
|
<p>I am using Deployments to control my pods in my K8S cluster.</p>
<p>My original deployment file looks like :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: websocket-backend-deployment
spec:
replicas: 2
selector:
matchLabels:
name: websocket-backend
template:
metadata:
labels:
name: websocket-backend
spec:
containers:
- name: websocket-backend
image: armdock.se/proj/websocket_backend:3.1.4
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
livenessProbe:
httpGet:
port: 8080
path: /websocket/health
initialDelaySeconds: 300
timeoutSeconds: 30
readinessProbe:
httpGet:
port: 8080
path: /websocket/health
initialDelaySeconds: 25
timeoutSeconds: 5
</code></pre>
<p>This config is working as planned. </p>
<pre><code># kubectl get po | grep websocket
websocket-backend-deployment-4243571618-mreef 1/1 Running 0 31s
websocket-backend-deployment-4243571618-qjo6q 1/1 Running 0 31s
</code></pre>
<p>Now I plan to do a live/rolling update on the image file.
The command that I am using is :</p>
<pre><code>kubectl set image deployment websocket-backend-deployment websocket-backend=armdock.se/proj/websocket_backend:3.1.5
</code></pre>
<p>I am only updating the docker image tag.
Now I'm expecting my pods to remain at 2 after the update. I am getting the 2 new pods with the new version, but there is one pod that still exists carrying the old version.</p>
<pre><code># kubectl get po | grep websocket
websocket-backend-deployment-4243571618-qjo6q 1/1 Running 0 2m
websocket-backend-deployment-93242275-kgcmw 1/1 Running 0 51s
websocket-backend-deployment-93242275-kwmen 1/1 Running 0 51s
</code></pre>
<p>As you can see, 1 pod uses the old tag <code>3.1.4</code></p>
<pre><code># kubectl describe po websocket-backend-deployment-4243571618-qjo6q | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.4
</code></pre>
<p>The rest of the 2 nodes are on the new tag <code>3.1.5</code>.</p>
<pre><code># kubectl describe po websocket-backend-deployment-93242275-kgcmw | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.5
# kubectl describe po websocket-backend-deployment-93242275-kwmen | grep Image:
Image: armdock.se/proj/websocket_backend:3.1.5
</code></pre>
<p>Why does 1 old pod still stay there and not get deleted? Am I missing some config? </p>
<p>When I check the <code>rollout</code> command, it's just stuck on:</p>
<pre><code># kubectl rollout status deployment/websocket-backend-deployment
Waiting for rollout to finish: 1 old replicas are pending termination...
</code></pre>
<p>My K8S version is :</p>
<pre><code># kubectl --version
Kubernetes v1.5.2
</code></pre>
| <p>I would suggest setting <strong>maxSurge</strong> to 0 in the RollingUpdate strategy so that the number of pods stays at the desired count after the rollout. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. </p>
<p>Example: </p>
<pre class="lang-sh prettyprint-override"><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: webserver
spec:
replicas: 2
selector:
matchLabels:
name: webserver
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
template:
metadata:
labels:
name: webserver
spec:
containers:
- name: webserver
image: nginx:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
</code></pre>
|
<p>Hoping someone can help me resolve what appears to be a permissions error. I'm trying to start a 3-node elasticsearch cluster using the official elasticsearch docker image. When the container was started I was getting an "access denied" error from elasticsearch on /usr/share/elasticsearch/data/nodes so I tried adding a command to make elasticsearch the owner of /usr/share/elasticsearch/data...but I get these errors when I include the chown command:</p>
<pre><code>chown: cannot read directory '/usr/share/elasticsearch/data/lost+found': Permission denied
chown: changing ownership of '/usr/share/elasticsearch/data': Operation not permitted
</code></pre>
<p>Here is my statefulset yaml file:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: esnode
spec:
serviceName: elasticsearch-transport
replicas: 3
template:
metadata:
labels:
app: evo-pro-cluster
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: elasticsearch
securityContext:
privileged: true
capabilities:
add:
- IPC_LOCK
- SYS_RESOURCE
command: ["/bin/sh"]
args: ["-c", "chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data"]
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.1
imagePullPolicy: Always
env:
- name: "ES_JAVA_OPTS"
value: "-Xms6g -Xmx6g"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /usr/share/elasticsearch/data
- name: config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
volumes:
- name: config
configMap:
name: elasticsearch-config
volumeClaimTemplates:
- metadata:
name: storage
annotations:
storageClassName: standard
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 110Gi
</code></pre>
| <p>This particular docker image expects the data directory to be writable by uid <code>2000</code>. You can tell Kubernetes to chown (sort of) the mount point for your pod by adding <a href="https://kubernetes.io/docs/api-reference/v1.6/#podsecuritycontext-v1-core" rel="noreferrer"><code>.spec.securityContext.fsGroup</code></a>:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: esnode
spec:
...
securityContext:
fsGroup: 2000
</code></pre>
<p>(and of course you can get rid of the chown hack or the initContainer)</p>
<blockquote>
<p><code>fsGroup</code>: integer: A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume.</p>
</blockquote>
|
<p>I am trying to create a service account with a known, fixed token used by Jenkins to deploy stuff into Kubernetes. I managed to create the token all right with the following yaml: </p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: integration-secret
annotations:
kubernetes.io/service-account.name: integration
type: kubernetes.io/service-account-token
data:
token: YXNkCg== # yes this base64
</code></pre>
<p>Then I attached the secret to the 'integration' user and it's visible:</p>
<pre><code>-> kubectl describe sa integration
Name: integration
Namespace: default
Labels: <none>
Annotations: <none>
Mountable secrets: integration-secret
integration-token-283k9
Tokens: integration-secret
integration-token-283k9
Image pull secrets: <none>
</code></pre>
<p>But the login fails. If I remove the data and data.token, the token gets auto-created and login works. Is there something I'm missing? My goal is to have a fixed token for CI so that I won't have to update it everywhere when creating a project (don't worry, this is just for dev environments). Is it possible, for example, to define a username/password for service accounts for API access?</p>
| <blockquote>
<p>Is it possible for example to define username/password for service accounts for API access?</p>
</blockquote>
<p>No, the tokens must be valid JWTs, signed by the service account token signing key.</p>
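<p>You can verify that an auto-generated token is such a JWT by decoding it; the secret name below is taken from the question's output:</p>
<pre><code># the stored token is base64-encoded inside the secret; the decoded value is a signed JWT
kubectl get secret integration-token-283k9 -o jsonpath='{.data.token}' | base64 -d
</code></pre>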
|
<p>I got Kubernetes Minikube on my laptop (4 cores, 8 GB RAM). I just performed the basic installation steps (got <em>minikube</em> and <em>kubectl</em>, enabled the BIOS virtualization) and I am able to start the cluster:</p>
<pre><code>C:\Users\me>minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
</code></pre>
<p>However, when I try to interact with the cluster, I always get the same error. Sample:</p>
<pre><code>C:\Users\me>kubectl get pods --context=minikube
Unable to connect to the server: dial tcp 192.168.99.100:8443: connectex: A connection attempt failed because the connected party
did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
</code></pre>
<p>I execute <code>minikube ip</code>, ping the resulting IP, and get a response. I also tried to give it more memory (3 GB vs. the standard 2 GB) and nothing changed.</p>
<p>Am I doing something wrong here?</p>
<p>Thanks!</p>
| <p>I think it could be some problem with the cluster; when I run minikube status I get mixed results of the cluster running and the cluster stopped:</p>
<p>First run:</p>
<pre><code>c:\> minikube status
</code></pre>
<blockquote>
<p>minikube: Running</p>
<p>cluster: Stopped</p>
<p>kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100</p>
</blockquote>
<p>Second run:</p>
<blockquote>
<p>minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100</p>
</blockquote>
<p>Third run:</p>
<blockquote>
<p>minikube: Running</p>
<p>cluster: Stopped</p>
<p>kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100</p>
</blockquote>
<p>The service is flapping.</p>
<p>UPDATED:
Connecting to the minikube VM using minikube ssh, I realized the kubeconfig file has the wrong path separators for the certificates generated by minikube's automatic configuration. The path in the kubeconfig file reads <code>\var\lib\localkube\certs\ca.cert</code> and it has to be <code>/var/lib/localkube/certs/ca.cert</code>, and so on...</p>
<p>To update the file I had to copy the content of the original file to my desktop, fix the directory separators, save the corrected file to <code>/var/lib/localkube/kubeconfig</code> and restart the service using:</p>
<pre><code>sudo systemctl restart localkube
</code></pre>
<p>I hope everyone can use minikube with this tip.</p>
|
<p>I am currently using Deployments to manage my pods in my K8S cluster.</p>
<p>Some of my deployments require 2 pods/replicas, some require 3 pods/replicas and some of them require just 1 pod/replica. The issue Im having is the one with one pod/replica.</p>
<p>My YAML file is :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: user-management-backend-deployment
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 2
selector:
matchLabels:
name: user-management-backend
template:
metadata:
labels:
name: user-management-backend
spec:
containers:
- name: user-management-backend
image: proj_csdp/user-management_backend:3.1.8
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
livenessProbe:
httpGet:
port: 8080
path: /user_management/health
initialDelaySeconds: 300
timeoutSeconds: 30
readinessProbe:
httpGet:
port: 8080
path: /user_management/health
initialDelaySeconds: 10
timeoutSeconds: 5
volumeMounts:
- name: nfs
mountPath: "/vault"
volumes:
- name: nfs
nfs:
server: kube-nfs
path: "/kubenfs/vault"
readOnly: true
</code></pre>
<p>I have the old version running fine.</p>
<pre><code># kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-mrrvl 1/1 Running 0 4d
</code></pre>
<p>Now I want to update the image:</p>
<pre><code># kubectl set image deployment user-management-backend-deployment user-management-backend=proj_csdp/user-management_backend:3.2.0
</code></pre>
<p>Now, as per the RollingUpdate design, K8S should bring up the new pod while keeping the old pod working, and only once the new pod is ready to take traffic should the old pod get deleted. But what I see is that the old pod is immediately deleted, the new pod is created, and it then takes time to start taking traffic, meaning that I have to drop traffic.</p>
<pre><code># kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-l93m9 0/1 ContainerCreating 0 1s
# kubectl get po | grep user-management-backend-deployment
user-management-backend-deployment-3264073543-l93m9 1/1 Running 0 33s
</code></pre>
<p>I have used <code>maxSurge: 2</code> & <code>maxUnavailable: 1</code> but this does not seem to be working.</p>
<p>Any ideas why this is not working?</p>
| <p>It appears to be the <code>maxUnavailable: 1</code>; I was able to trivially reproduce your experience setting that value, and trivially achieve the correct experience by setting it to <code>maxUnavailable: 0</code></p>
<p>Here's my "pseudo-proof" of how the scheduler arrived at the behavior you are experiencing:</p>
<p>Because <code>replicas: 1</code>, the desired state for k8s is exactly one Pod in <code>Ready</code>. During a Rolling Update operation, which is the strategy you requested, it will create a new Pod, bringing the total to 2. But you granted k8s permission to leave <em>one Pod</em> in an unavailable state, and you instructed it to keep the <em>desired</em> number of Pods at 1. Thus, it fulfilled all of those constraints: 1 Pod, the desired count, in an unavailable state, permitted by the R-U strategy.</p>
<p>By setting the <code>maxUnavailable</code> to zero, you correctly direct k8s to never let any Pod be unavailable, even if that means surging Pods above the <code>replica</code> count for a short time.</p>
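<p>For reference, a minimal sketch of the strategy block from the question with that change applied:</p>
<pre><code>strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
</code></pre>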
|
<p>I have a kubernetes setup running in google container engine. one of the k8s Service "type: LoadBalancer"... so i guess it created a Google Network Load Balancing. Now part of my billing
"Compute Engine Network Load Balancing" is way higher than my compute engine cost. Is there a way to eliminate "Network Load Balancing" cost item with any other solution in kubernates...please advise.</p>
<p>This question is close to what I'm looking for:</p>
<p><a href="https://stackoverflow.com/questions/44493779/gcp-kube-lego-forwarding-rule-pricing">GCP Kube-Lego forwarding rule pricing</a></p>
<p>...but no answers so far.</p>
| <p>1) Deploy nginx-ingress-controller to kube-cluster:</p>
<pre><code>helm install --name my-lb stable/nginx-ingress --set controller.service.type=NodePort
helm list
kubectl get svc
</code></pre>
<p>This will create "my-lb-nginx-ingress-controller", a custom nginx load balancer used instead of Google's GKE load balancer. It will handle Ingress rule objects in the cluster.
After this, any Ingress rule object created with the annotation "kubernetes.io/ingress.class: nginx" will be enforced by this nginx controller.</p>
<p>2) Create a firewall rule to open the node ports:
Since the nginx controller is deployed with "controller.service.type=NodePort", check the node ports in the "kubectl get svc" output and create a gcloud networking/firewall rule to allow ports "tcp:31181;tcp:31462". Now you can use a browser to reach "<a href="http://node-ip-address:31181" rel="nofollow noreferrer">http://node-ip-address:31181</a>" or "<a href="https://node-ip-address:31462" rel="nofollow noreferrer">https://node-ip-address:31462</a>" and hit the nginx controller.</p>
<p>3) Delete stuff:</p>
<pre><code>helm delete my-lb
helm del --purge my-lb
</code></pre>
<p>I did the above in GKE, and now I have an nginx load balancer instead of Google's cloud load balancer. One limitation I experience is that "<a href="http://node-ip:80" rel="nofollow noreferrer">http://node-ip:80</a>" gets connection refused and I don't know why yet, but access through the node port "<a href="http://node-ip-address:31181" rel="nofollow noreferrer">http://node-ip-address:31181</a>" is working. That is OK for now; I still have to figure out the port 80 access denial.</p>
|
<p>I have a container cluster in Google Container Engine with Stackdriver logging agent enabled. It is correctly pulling stdout logs from my containers. Now I would like to change the fluentd config to specify a log parser so that the logs shown in the GCP Logging view will have the correct severity and component.</p>
<p>Following this <a href="https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/tasks/debug-application-cluster/logging-stackdriver.md" rel="noreferrer" title="Stackdriver logging guide from kubernetes.io">Stackdriver logging guide from kubernetes.io</a>, I have attempted to:</p>
<ol>
<li>Get the fluentd <code>ConfigMap</code> as a yml file</li>
<li>Added a new <code><filter></code> according to my log4js log format</li>
<li>Created a new <code>ConfigMap</code> named <strong>fluentd-cm-2</strong> in <code>kube-system</code> namespace</li>
<li>Edited the <code>DaemonSet</code> for fluentd and set its <code>ConfigMap</code> to <strong>fluentd-cm-2</strong>. I did this using <code>kubectl edit ds</code> instead of <code>kubectl replace -f</code> because the latter failed with an error message: "the object has been modified", even after getting a fresh copy of the <code>DaemonSet</code> yaml.</li>
</ol>
<p>Unexpected result: The <code>DaemonSet</code> is restarted, but its configuration is reverted back to the original <code>ConfigMap</code>, so my changes did not take effect.</p>
<p>I have also tried editing the <code>ConfigMap</code> directly (<code>kubectl edit cm fluentd-gcp-config-v1.1 --namespace kube-system</code>) and saved it, but it was also reverted.</p>
<p>I noticed that the <code>DaemonSet</code> and <code>ConfigMap</code> for fluentd are tagged with <code>addonmanager.kubernetes.io/mode: Reconcile</code>. I would conclude that GKE has overwritten my settings because of this "reconcile" mode.</p>
<p>So, my question is: how can I change the fluentd configuration in a Google Container Engine cluster, when the logging agent was installed by GKE on cluster provisioning?</p>
| <p>Please take a look at the Prerequisites section on the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver#configuring-stackdriver-logging-agents" rel="noreferrer">documentation page you mentioned</a>. It's mentioned there that on GKE you cannot change the default Stackdriver Logging integration. The reason is that GKE maintains this configuration: it updates the agent, watches its health and so on. It's not possible to provide the same level of support for all possible configurations.</p>
<p>However, you can always disable the default integration and deploy your own, patched version of DaemonSet. You can find out how to disable the default integration in the <a href="https://cloud.google.com/container-engine/docs/logging" rel="noreferrer">GKE documentation</a>:</p>
<p><code>gcloud beta container clusters update [CLUSTER-NAME] \
--logging-service=none</code></p>
<p>Note that after you disable the default integration, you have to maintain the new deployment yourself: update the agent, set the resources, watch its health.</p>
|
<p>Kubernetes surfaces an API proxy, which allows querying the internal services via eg: <a href="https://myhost.com/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/" rel="nofollow noreferrer">https://myhost.com/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/</a></p>
<p>This is all well, and good. However, for security & compliance reasons, all of our services expose an HTTPS endpoint. Attempting to access them by going to <a href="https://myhost/api/v1/proxy/namespaces/default/services/myhttpsservice:3000/" rel="nofollow noreferrer">https://myhost/api/v1/proxy/namespaces/default/services/myhttpsservice:3000/</a> results in</p>
<pre><code> Error: 'read tcp 172.20.122.129:48830->100.96.29.113:3000: read: connection reset by peer'
Trying to reach: 'http://100.96.29.113:3000/'
</code></pre>
<p>Because the endpoint, 100.96.29.113:3000 is in fact https. </p>
<p>Is there any way to configure the proxy to apply SSL to specific service endpoints?</p>
<p>(Edit: If this is not currently possible, a relevant github issue link for tracking the feature request is also acceptable answer until it will be)</p>
| <p>As documented at <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls</a> (and pointed out on Slack), you can access services that serve HTTPS by prefixing the service name with "https:".</p>
<p>Using the example from above, the correct URL would be: <a href="https://myhost/api/v1/proxy/namespaces/default/services/https:myhttpsservice:3000/" rel="nofollow noreferrer">https://myhost/api/v1/proxy/namespaces/default/services/https:myhttpsservice:3000/</a></p>
|
<p>We are using Helm Charts for deploying a service in several environments on Kubernetes cluster. Now for each environment there are a list of variables like the database url, docker image tag etc. What is the most obvious and correct way of defining Helm related values.yaml in such case where all the Helm template files remain same for all the environment except for some parameters as stated above. </p>
| <p>One way to do this would be using multiple value files, which helm now allows. Assume you have the following values files:</p>
<p>values1.yaml:</p>
<pre><code>image:
repository: myimage
tag: 1.3
</code></pre>
<p>values2.yaml</p>
<pre><code>image:
pullPolicy: Always
</code></pre>
<p>These can both be used on command line with helm as:</p>
<pre><code>$ helm install -f values1.yaml,values2.yaml <mychart>
</code></pre>
<p>In this case, these values will be merged into</p>
<pre><code>image:
repository: myimage
tag: 1.3
pullPolicy: Always
</code></pre>
<p>You can see the values that will be used by giving the "--dry-run --debug" options to the "helm install" command.</p>
<p>Order is important. If the same value appears in both files, the values from values2.yaml will take precedence, as it was specified last. Each chart also comes with a values file. Those values will be used for anything not specified in your own values file, as if it were first in the list of values files you provided.</p>
<p>In your case, you could specify all the common settings in values1.yaml and override them as necessary with values2.yaml.</p>
|
<p>I searched the documentation but I am unable to find out if I can run a pod in Kubernetes without the scheduler. Any pointers would be helpful.</p>
<p>Update:
I can attach a label to a node and pin the pod to that label, but that would still involve going through the scheduler. Is there a method that uses neither a DaemonSet nor the scheduler?</p>
| <p>The scheduler just sets the <code>spec.nodeName</code> field on the pod. You can set that to a node name yourself if you know which node you want to run your pod on, though you are then responsible for ensuring the node has sufficient resources to run the pod (enough memory, free host ports, etc.), all things the scheduler is normally responsible for checking before it assigns a pod to a node.</p>
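<p>A minimal sketch of a Pod bound directly to a node this way (the node name is just a placeholder for one of your nodes):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: manually-placed
spec:
  nodeName: my-node-1    # bound straight to this node, the scheduler is never involved
  containers:
  - name: app
    image: nginx
</code></pre>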
|
<p>Before posting this question I followed this answer <a href="https://stackoverflow.com/questions/30538210/how-to-mimic-volumes-from-in-kubernetes">How to mimic '--volumes-from' in Kubernetes</a> but it didn't work for me.</p>
<p>I have 2 containers:</p>
<ul>
<li><strong>node</strong>: its image contains all the files related to the app ( inside <code>/var/www</code> ) </li>
<li><strong>nginx</strong>: it needs to access the files inside the <strong>node</strong> image (especially the <code>/clientBuild</code> folder where I have all the assets)</li>
</ul>
<p>What is inside the <strong>node</strong> image:</p>
<pre><code>$ docker run node ls -l
> clientBuild/
> package.json
> ...
</code></pre>
<p>A part of the <code>nginx.prod.conf</code>:</p>
<pre><code>location ~* \.(jpeg|jpg|gif|png|ico|css|js|gz|map|json)$ {
include /etc/nginx/mime.types;
root /usr/local/nginx/html/clientBuild/;
}
</code></pre>
<p>And the the deployment setup:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: pwa-app-production
labels:
app: MyApp
spec:
replicas: 1
template:
metadata:
name: app
labels:
app: MyApp
env: production
spec:
containers:
- name: nginx
image: nginx
command: [nginx, -c, /nginx.prod.conf, -g, 'daemon off;']
resources:
limits:
memory: "500Mi"
cpu: "100m"
imagePullPolicy: Always
volumeMounts:
- mountPath: /usr/local/nginx/html
name: pwa-disk
readOnly: true
ports:
- name: nginx
containerPort: 80
initContainers:
- name: node
image: node
command: [npm, start]
resources:
limits:
memory: "500Mi"
cpu: "100m"
imagePullPolicy: Always
volumeMounts:
- mountPath: /var/www
name: pwa-disk
ports:
- name: app
containerPort: 3000
- name: api
containerPort: 3001
volumes:
- name: pwa-disk
emptyDir: {}
</code></pre>
<p>I first attempt to put both images in the same <code>containers</code> key, but i got: <code>/var/www/package.json not found</code> on <code>npm start</code></p>
<p>Then I moved it inside the <code>initContainers</code> but now I only have a notice that it failed, but it does not tell me why. View logs does not show any details too.</p>
<p>Notice that when I remove volume part, the <code>npm start</code> works.</p>
<p><a href="https://i.stack.imgur.com/Susmc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Susmc.png" alt="enter image description here"></a></p>
| <p>I assume your assets are already packaged inside the image at <code>/var/www</code>. If you mount an <code>emptyDir</code> volume at that path, then everything there gets overridden with the content of the <code>emptyDir</code> volume - which initially is nothing. That means all your assets are deleted through that mount - which is why your node server is most likely failing.</p>
<p>What you want to do is mount the <code>emptyDir</code> volume at some other path, say <code>/data</code>. Then you override your node container's cmd with <code>cp -r /var/www/* /data</code> to copy the assets into your <code>pwa-disk</code> volume. Now you can mount this volume into your nginx container.</p>
<p>I think there is a misunderstanding of how <code>initContainers</code> work. They are meant to terminate. They run <strong>BEFORE</strong> any other container is started - no other container inside your pod is started until your <code>initContainers</code> have successfully terminated. So most likely you do not want to start your node server as an <code>initContainer</code>. I guess your node server is not supposed to terminate, in which case your nginx container will never start up. Instead, you might want to declare your node server together with your nginx inside the <code>containers</code> section. Additionally, you also add your node container with an overridden cmd (<code>cp -r /var/www/* /data</code>) to the <code>initContainers</code> section, to copy the assets to a volume. The whole thing might look something like this:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: pwa-app-production
labels:
app: MyApp
spec:
replicas: 1
template:
metadata:
name: app
labels:
app: MyApp
env: production
spec:
containers:
- name: nginx
image: nginx
command: [nginx, -c, /nginx.prod.conf, -g, 'daemon off;']
resources:
limits:
memory: "500Mi"
cpu: "100m"
imagePullPolicy: Always
volumeMounts:
- mountPath: /usr/local/nginx/html
name: pwa-disk
readOnly: true
ports:
- name: nginx
containerPort: 80
- name: node
image: node
command: [npm, start]
resources:
limits:
memory: "500Mi"
cpu: "100m"
imagePullPolicy: Always
ports:
- name: app
containerPort: 3000
- name: api
containerPort: 3001
initContainers:
- name: assets
image: node
command: [bash, -c]
args: ["cp -r /var/www/* /data"]
imagePullPolicy: Always
volumeMounts:
- mountPath: /data
name: pwa-disk
volumes:
- name: pwa-disk
emptyDir: {}
</code></pre>
|
<p>Let's say an Istio-enabled Service <code>A</code> exposes a port <code>8080</code> which is named <code>http</code>, and as such Istio performs L7 load balancing when accessing it from inside the mesh.</p>
<p>I'd like to know if there is a way to access this <code>8080</code> port from a pod/service <code>B</code> that doesn't have the Istio sidecar. In such a case the traffic would be:</p>
<pre><code>B -> A Envoy -> A
</code></pre>
<p>or</p>
<pre><code>B -> A
</code></pre>
<p>This way, I'm able to access one of <code>A</code>'s ports that is not named <code>http</code> (i.e., only L4 load balancing is in place).</p>
<p>My particular use case is that I have Prometheus (not running in the mesh) with the Prometheus Operator scraping the services running in the mesh directly (Istio Mixer is not involved; the services expose their own business logic metrics). It only works for me if a given service doesn't name its port <code>http</code>.</p>
| <p>If you have auth (mTLS) enabled it doesn't work, by design, as in that case Istio tries to protect all service to service communication.</p>
<p>You can turn auth off, and if that doesn't help, also try the Istio 0.2.4 release candidate (or whichever is the latest at the time you read this, see <a href="https://github.com/istio/istio/releases" rel="nofollow noreferrer">https://github.com/istio/istio/releases</a>) and see if the problem persists; if it does, please file an issue at <a href="https://github.com/istio/issues/issues/new" rel="nofollow noreferrer">https://github.com/istio/issues/issues/new</a></p>
<p>In 0.3 (and possibly earlier) we'll let you have fine grain control over mTLS.</p>
|
<p>Can I use a custom Kubernetes version in which I have made some code modifications? I want to use the <code>--kubernetes-version string</code> flag to point at a customized localkube binary. Is this possible?</p>
<p>Minikube documentation says: </p>
<pre><code>--kubernetes-version string The kubernetes version that the minikube VM will use (ex: v1.2.3)
OR a URI which contains a localkube binary (ex: https://storage.googleapis.com/minikube/k8sReleases/v1.3.0/localkube-linux-amd64) (default "v1.7.5")
</code></pre>
<p>But even when I try that flag with official localkube binaries, it fails:</p>
<pre><code>minikube start --kubernetes-version https://storage.googleapis.com/minikube/k8sReleases/v1.7.0/localkube-linux-amd64 --v 5
Invalid Kubernetes version.
The following Kubernetes versions are available:
- v1.7.5
- v1.7.4
- v1.7.3
- v1.7.2
- v1.7.0
- v1.7.0-rc.1
- v1.7.0-alpha.2
- v1.6.4
- v1.6.3
- v1.6.0
- v1.6.0-rc.1
- v1.6.0-beta.4
- v1.6.0-beta.3
- v1.6.0-beta.2
- v1.6.0-alpha.1
- v1.6.0-alpha.0
- v1.5.3
- v1.5.2
- v1.5.1
- v1.4.5
- v1.4.3
- v1.4.2
- v1.4.1
- v1.4.0
- v1.3.7
- v1.3.6
- v1.3.5
- v1.3.4
- v1.3.3
- v1.3.0
</code></pre>
<p>Many thanks!</p>
| <p>Two options come to mind:</p>
<ul>
<li><p>You can launch minikube with <code>--vm-driver=none</code>, so the binaries are installed in your local filesystem. Then replacing the binaries should not be a difficult process. </p></li>
<li><p>You can create your own minikube iso and then use the <code>--iso-url</code> flag. In order to build the ISO, you can follow this guide <a href="https://github.com/kubernetes/minikube/blob/master/docs/contributors/minikube_iso.md" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/blob/master/docs/contributors/minikube_iso.md</a></p></li>
</ul>
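<p>Roughly, the two options look like this (the ISO URL is only a placeholder for wherever you host your custom build):</p>
<pre><code># Option 1: run the cluster components directly on the host, then swap in your binaries
sudo minikube start --vm-driver=none

# Option 2: boot minikube from a custom ISO you built
minikube start --iso-url=https://example.com/minikube-custom.iso
</code></pre>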
|
<p>I set up a Kubernetes cluster with a master and 2 slaves on 3 bare-metal CentOS 7 server. I used kubeadm for that, following this guide: <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a>
and using Weave Net for the pod network. </p>
<p>For testing I set up 2 default-http-backends with services, to expose the ports:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: default-http-backend
labels:
k8s-app: default-http-backend
spec:
template:
metadata:
labels:
k8s-app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissable as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: default-http-backend-2
labels:
k8s-app: default-http-backend-2
spec:
template:
metadata:
labels:
k8s-app: default-http-backend-2
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend-2
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
labels:
k8s-app: default-http-backend
spec:
ports:
- port: 80
targetPort: 8080
selector:
k8s-app: default-http-backend
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend-2
labels:
k8s-app: default-http-backend-2
spec:
ports:
- port: 80
targetPort: 8080
selector:
k8s-app: default-http-backend-2
</code></pre>
<p>If the 2 pods get deployed on the same node I can curl the port of one pod from the other, but if they are deployed to different nodes, they dont find a route to host: </p>
<pre><code>$~ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.111.59.235 <none> 80/TCP 34m
default-http-backend-2 10.106.29.17 <none> 80/TCP 34m
$~ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE
default-http-backend-2-990549169-dd29z 1/1 Running 0 35m 10.44.0.1 vm0059
default-http-backend-726995137-9994z 1/1 Running 0 35m 10.36.0.1 vm0058
$~ kubectl exec -it default-http-backend-726995137-9994z sh
/ # wget 10.111.59.235:80
Connecting to 10.111.59.235:80 (10.111.59.235:80)
wget: server returned error: HTTP/1.1 404 Not Found
/ # wget 10.106.29.17:80
Connecting to 10.106.29.17:80 (10.106.29.17:80)
wget: can't connect to remote host (10.106.29.17): No route to host
</code></pre>
<p>used versions:</p>
<pre><code>$~ docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64
Go version: go1.7.4
Git commit: 88a4867/1.12.6
Built: Mon Jul 3 16:02:02 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64
Go version: go1.7.4
Git commit: 88a4867/1.12.6
Built: Mon Jul 3 16:02:02 2017
OS/Arch: linux/amd64
$~ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$~ iptables-save
*nat
:PREROUTING ACCEPT [7:420]
:INPUT ACCEPT [7:420]
:OUTPUT ACCEPT [17:1020]
:POSTROUTING ACCEPT [21:1314]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3N4EFB5KN7DZON3G - [0:0]
:KUBE-SEP-5LXBJFBNQIVWZQ4R - [0:0]
:KUBE-SEP-5WQPOVEQM6CWLFNI - [0:0]
:KUBE-SEP-64ZDVBFDSQK7XP5M - [0:0]
:KUBE-SEP-6VF4APMJ4DYGM3KR - [0:0]
:KUBE-SEP-TPSZNIDDKODT2QF2 - [0:0]
:KUBE-SEP-TR5ETKVRYPRDASMW - [0:0]
:KUBE-SEP-VMZRVJ7XGG63C7Q7 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-2BEQYC4GXBICFPF4 - [0:0]
:KUBE-SVC-2J3GLVYDXZLHJ7TU - [0:0]
:KUBE-SVC-2QFLXPI3464HMUTA - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-OWOER5CC7DL5WRNU - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-V76ZVCWXDRE26OHU - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.30.38.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/driveme-service:" -m tcp --dport 31305 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/driveme-service:" -m tcp --dport 31305 -j KUBE-SVC-2BEQYC4GXBICFPF4
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/registry-server:" -m tcp --dport 31048 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/registry-server:" -m tcp --dport 31048 -j KUBE-SVC-2J3GLVYDXZLHJ7TU
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/auth-service:" -m tcp --dport 31722 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/auth-service:" -m tcp --dport 31722 -j KUBE-SVC-V76ZVCWXDRE26OHU
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/api-gateway:" -m tcp --dport 32139 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/api-gateway:" -m tcp --dport 32139 -j KUBE-SVC-OWOER5CC7DL5WRNU
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3N4EFB5KN7DZON3G -s 10.32.0.15/32 -m comment --comment "default/api-gateway:" -j KUBE-MARK-MASQ
-A KUBE-SEP-3N4EFB5KN7DZON3G -p tcp -m comment --comment "default/api-gateway:" -m tcp -j DNAT --to-destination 10.32.0.15:8080
-A KUBE-SEP-5LXBJFBNQIVWZQ4R -s 10.32.0.13/32 -m comment --comment "default/registry-server:" -j KUBE-MARK-MASQ
-A KUBE-SEP-5LXBJFBNQIVWZQ4R -p tcp -m comment --comment "default/registry-server:" -m tcp -j DNAT --to-destination 10.32.0.13:8888
-A KUBE-SEP-5WQPOVEQM6CWLFNI -s 172.16.16.102/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-5WQPOVEQM6CWLFNI -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-5WQPOVEQM6CWLFNI --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 172.16.16.102:6443
-A KUBE-SEP-64ZDVBFDSQK7XP5M -s 10.32.0.12/32 -m comment --comment "default/driveme-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-64ZDVBFDSQK7XP5M -p tcp -m comment --comment "default/driveme-service:" -m tcp -j DNAT --to-destination 10.32.0.12:9595
-A KUBE-SEP-6VF4APMJ4DYGM3KR -s 10.32.0.11/32 -m comment --comment "kube-system/default-http-backend:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6VF4APMJ4DYGM3KR -p tcp -m comment --comment "kube-system/default-http-backend:" -m tcp -j DNAT --to-destination 10.32.0.11:8080
-A KUBE-SEP-TPSZNIDDKODT2QF2 -s 10.32.0.14/32 -m comment --comment "default/auth-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-TPSZNIDDKODT2QF2 -p tcp -m comment --comment "default/auth-service:" -m tcp -j DNAT --to-destination 10.32.0.14:9090
-A KUBE-SEP-TR5ETKVRYPRDASMW -s 10.32.0.10/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-TR5ETKVRYPRDASMW -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.10:53
-A KUBE-SEP-VMZRVJ7XGG63C7Q7 -s 10.32.0.10/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-VMZRVJ7XGG63C7Q7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.10:53
-A KUBE-SERVICES -d 10.104.131.183/32 -p tcp -m comment --comment "kube-system/default-http-backend: cluster IP" -m tcp --dport 80 -j KUBE-SVC-2QFLXPI3464HMUTA
-A KUBE-SERVICES -d 10.96.244.116/32 -p tcp -m comment --comment "default/driveme-service: cluster IP" -m tcp --dport 9595 -j KUBE-SVC-2BEQYC4GXBICFPF4
-A KUBE-SERVICES -d 10.108.120.94/32 -p tcp -m comment --comment "default/registry-server: cluster IP" -m tcp --dport 8888 -j KUBE-SVC-2J3GLVYDXZLHJ7TU
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.104.233/32 -p tcp -m comment --comment "default/auth-service: cluster IP" -m tcp --dport 9090 -j KUBE-SVC-V76ZVCWXDRE26OHU
-A KUBE-SERVICES -d 10.98.19.144/32 -p tcp -m comment --comment "default/api-gateway: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-OWOER5CC7DL5WRNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-2BEQYC4GXBICFPF4 -m comment --comment "default/driveme-service:" -j KUBE-SEP-64ZDVBFDSQK7XP5M
-A KUBE-SVC-2J3GLVYDXZLHJ7TU -m comment --comment "default/registry-server:" -j KUBE-SEP-5LXBJFBNQIVWZQ4R
-A KUBE-SVC-2QFLXPI3464HMUTA -m comment --comment "kube-system/default-http-backend:" -j KUBE-SEP-6VF4APMJ4DYGM3KR
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-TR5ETKVRYPRDASMW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-5WQPOVEQM6CWLFNI --mask 255.255.255.255 --rsource -j KUBE-SEP-5WQPOVEQM6CWLFNI
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-5WQPOVEQM6CWLFNI
-A KUBE-SVC-OWOER5CC7DL5WRNU -m comment --comment "default/api-gateway:" -j KUBE-SEP-3N4EFB5KN7DZON3G
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-VMZRVJ7XGG63C7Q7
-A KUBE-SVC-V76ZVCWXDRE26OHU -m comment --comment "default/auth-service:" -j KUBE-SEP-TPSZNIDDKODT2QF2
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Wed Sep 13 09:29:35 2017
# Generated by iptables-save v1.4.21 on Wed Sep 13 09:29:35 2017
*filter
:INPUT ACCEPT [1386:436876]
:FORWARD ACCEPT [67:11075]
:OUTPUT ACCEPT [1379:439138]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]+@p dst -m comment --comment "DefaultAllow isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-4vtqMI+kx/2]jD%_c0S%thO%V dst -m comment --comment "DefaultAllow isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -m comment --comment "DefaultAllow isolation for namespace: default" -j ACCEPT
COMMIT
# Completed on Wed Sep 13 09:29:35 2017
</code></pre>
<p>The 404 is the expected response from the service.</p>
<p>Does anyone have an idea where this problem could be caused?</p>
<p><strong>Edit:</strong>
Added examples and additional information</p>
| <p>So, I resolved my issue. For anyone finding this post and having the same problem:
in my case all UDP traffic between the nodes was blocked and only TCP was allowed. But DNS is handled via UDP, so UDP traffic also has to be allowed.</p>
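<p>For example, on CentOS 7 with firewalld, opening Weave Net's ports (TCP 6783 plus UDP 6783-6784) on every node would look roughly like this; the exact ports depend on your network plugin:</p>
<pre><code>sudo firewall-cmd --permanent --add-port=6783/tcp
sudo firewall-cmd --permanent --add-port=6783-6784/udp
sudo firewall-cmd --reload
</code></pre>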
|
<p>I am a Kubernetes newbie. I am trying to set up a Kubernetes cluster on AWS using kops. I was successfully able to set up the cluster. However, I am not able to access the Dashboard UI. (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui</a>)</p>
<p>When I access the master node, I see the following error:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "no endpoints available for service \"kubernetes-dashboard\"",
"reason": "ServiceUnavailable",
"code": 503
}
</code></pre>
<p>I see the status of the dashboard as CrashLoopBackOff. (Please note: I have removed the names of the other pods in the following log)</p>
<pre><code>~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kubernetes-dashboard-4167803980-vnx3k 0/1 CrashLoopBackOff 6 6m
$ kubectl logs kubernetes-dashboard-4167803980-vnx3k --namespace=kube-system
2017/09/25 17:50:37 Using in-cluster config to connect to apiserver
2017/09/25 17:50:37 Using service account token for csrf signing
2017/09/25 17:50:37 No request provided. Skipping authorization
2017/09/25 17:50:37 Starting overwatch
2017/09/25 17:50:37 Successful initial request to the apiserver, version: v1.7.2
2017/09/25 17:50:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2017/09/25 17:50:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/09/25 17:50:37 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2017/09/25 17:50:37 Initializing JWE encryption key from synchronized object
2017/09/25 17:50:37 Creating in-cluster Heapster client
2017/09/25 17:50:37 Serving securely on HTTPS port: 8443
2017/09/25 17:50:37 open /certs/dashboard.crt: no such file or directory
</code></pre>
<p>I would sincerely appreciate any help/suggestions to get the dashboard running. Thanks in advance!</p>
| <p>You're using the latest dashboard, which looks like it requires an SSL certificate. Try version 1.6.3; it works without an SSL cert.</p>
<p>I am running this version in my cluster.</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
</code></pre>
<p><strong>Helm command to install dashboard</strong></p>
<pre><code>kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
helm install stable/kubernetes-dashboard --name kubernetes-dashboard --namespace kube-system --debug
helm install stable/heapster --namespace kube-system
</code></pre>
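<p>Alternatively, if you want to stay on the newer dashboard, the <code>open /certs/dashboard.crt: no such file or directory</code> error suggests the certificate secret it expects is missing. A rough sketch, assuming the v1.7+ manifest that mounts a secret named <code>kubernetes-dashboard-certs</code> at <code>/certs</code>:</p>
<pre><code># generate a self-signed certificate and store it in the secret the dashboard mounts
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout dashboard.key -out dashboard.crt -subj "/CN=kubernetes-dashboard"
kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=dashboard.crt --from-file=dashboard.key -n kube-system
# delete the pod so the deployment recreates it with the secret mounted
kubectl delete pod -n kube-system -l k8s-app=kubernetes-dashboard
</code></pre>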
|
<p>I'm a complete beginner with Kubernetes and I'm trying to figure out how to split my monolithic application into different microservices.
Let's say I'm writing my microservices in Flask and each of them exposes some endpoints like:</p>
<p>Micro service 1:</p>
<ul>
<li>/v1/user-accounts</li>
</ul>
<p>Micro service 2:</p>
<ul>
<li>/v1/savings</li>
</ul>
<p>Micro service 3:</p>
<ul>
<li>/v1/auth</li>
</ul>
<p>If all of them were running as blueprints in a monolithic application, they would all be prefixed with the same IP, that is, the IP of the host server my application is running on, like 10.12.234.69, e.g.:</p>
<ul>
<li><a href="http://10.12.234.69:5000/v1/user-accounts" rel="nofollow noreferrer">http://10.12.234.69:5000/v1/user-accounts</a></li>
</ul>
<p>Now, deploying those 3 "blueprints" on 3 different pods/nodes in Kubernetes will change the IP address of each endpoint, giving maybe 10.12.234.69, then 10.12.234.70 or 10.12.234.75.</p>
<p>How can I write an application that keeps the URL reference constant even if the IP address changes?</p>
<ul>
<li>Would a Load Balancer Service do the trick?</li>
<li>Maybe the Service Registry feature of Kubernetes does the "DNS" part for me?</li>
</ul>
<p>I know it may sound like a very obvious question, but I still cannot find any reference/example for this simple problem.</p>
<p>Thanks in advance!</p>
<p><strong>EDIT:</strong> (as follow up to simon answer)</p>
<p>questions: </p>
<ul>
<li><p>given that the Ingress spawns a load balancer and makes all the routes reachable under paths prefixed by the load balancer's IP (<code>http://<ADDRESS>/v1/savings</code>), how can I associate an IP with the load balancer that matches the IP of the pod on which the Flask web server is running?</p></li>
<li><p>in case I add other sub-routes to the same paths, like <code>/v1/savings/get</code> and <code>/v1/savings/get/id/<var_id></code>, should I update all of them in the Ingress HTTP paths in order for them to be reachable via the load balancer?</p></li>
</ul>
| <ol>
<li>A load balancer is what you are looking for.</li>
<li>Kubernetes services will make your pods accessible under a given hostname cluster-internally.</li>
</ol>
<p>If you want to make your services accessible from outside the cluster under a single IP and different paths, you can use a load balancer and Kubernetes HTTP <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingresses</a>. They define under which domain and path a service should be mapped and can be fetched by a load balancer to build its configuration. </p>
<p>Example based on your micro service architecture: </p>
<h2>Mocking applications</h2>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: user-accounts
spec:
template:
metadata:
labels:
app: user-accounts
spec:
containers:
- name: server
image: nginx
ports:
- containerPort: 80
args:
- /bin/bash
- "-c"
- echo 'server { location /v1/user-accounts { return 200 "user-accounts"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: savings
spec:
template:
metadata:
labels:
app: savings
spec:
containers:
- name: server
image: nginx
ports:
- containerPort: 80
command:
- /bin/bash
- "-c"
- echo 'server { location /v1/savings { return 200 "savings"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: auth
spec:
template:
metadata:
labels:
app: auth
spec:
containers:
- name: server
image: nginx
ports:
- containerPort: 80
command:
- /bin/bash
- "-c"
- echo 'server { location /v1/auth { return 200 "auth"; }}' > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'
</code></pre>
<p>These deployments represent your services and just return their name via HTTP under <code>/v1/name</code>. </p>
<h2>Mapping applications to services</h2>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: user-accounts
spec:
type: NodePort
selector:
app: user-accounts
ports:
- protocol: TCP
port: 80
---
kind: Service
apiVersion: v1
metadata:
name: savings
spec:
type: NodePort
selector:
app: savings
ports:
- protocol: TCP
port: 80
---
kind: Service
apiVersion: v1
metadata:
name: auth
spec:
type: NodePort
selector:
app: auth
ports:
- protocol: TCP
port: 80
</code></pre>
<p>These services create an internal IP and a domain resolving to it based on their names, mapping them to the pods found by a given selector. Applications running in the same cluster namespace will be able to reach them under <code>user-accounts</code>, <code>savings</code> and <code>auth</code>. </p>
<h2>Making services reachable via load balancer</h2>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example
spec:
rules:
- http:
paths:
- path: /v1/user-accounts
backend:
serviceName: user-accounts
servicePort: 80
- path: /v1/savings
backend:
serviceName: savings
servicePort: 80
- path: /v1/auth
backend:
serviceName: auth
servicePort: 80
</code></pre>
<p>This Ingress defines under which paths the different services should be reachable. Verify your Ingress via <code>kubectl get ingress</code>:</p>
<pre><code># kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
example * 80 1m
</code></pre>
<p>If you are running on Google Container Engine, there is an <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer#step_3_create_an_ingress_resource" rel="nofollow noreferrer">Ingress controller running in your cluster</a> which will spawn a Google Cloud Load Balancer when you create a new Ingress object. Under the <code>ADDRESS</code> column of the above output, there will be an IP displayed under which you can access your applications:</p>
<pre><code># curl http://<ADDRESS>/v1/user-accounts
user-accounts
# curl http://<ADDRESS>/v1/savings
savings
# curl http://<ADDRESS>/v1/auth
auth
</code></pre>
|
<p>I want to rename my k8s <code>Secrets</code> and want to check if there are unused <code>Secrets</code>. Also I would like to know how many and which <code>containers</code> reference a <code>Secret</code>.</p>
<p>Is there an easier way to do this than searching for the secret names in all deployments?</p>
| <p>Thanks Simon. Based on your answer I created a diff, which shows <code>secrets</code> that are not referenced in the containers env section. <code>Secrets</code> can also be referenced in:</p>
<ul>
<li><code>TLS</code> section of Ingresses</li>
<li><code>Pods</code> <code>Volumes</code> spec, like Simon mentioned</li>
<li><code>ImagePullSecrets</code> for private repositories</li>
<li><code>CRDs</code> Custom Resource Definitions</li>
</ul>
<p>But for me it is enough to find <code>secrets</code> that are not referenced in environment variables:</p>
<pre><code>diff \
<(kubectl get pods -o jsonpath='{.items[*].spec.containers[*].env[*].valueFrom.secretKeyRef.name}' | xargs -n1 | sort | uniq) \
<(kubectl get secrets -o jsonpath='{.items[*].metadata.name}' | xargs -n1 | sort | uniq)
</code></pre>
<p><strong>Update 16.04.2018</strong></p>
<p>I created a more advanced version to find also secrets referenced in <code>volumes</code>, <code>ingress tls</code> and <code>imagePullSecrets</code>. The following snippet will show you all unused <code>secrets</code> for the current namespace.</p>
<p><strong>Caution:</strong> The script does not cover all options where secrets can be referenced (e.g. Custom Resource Definitions).</p>
<p><strong>Update 15.06.2021</strong>: Added secrets from Pod container spec <code>envFrom[*].secretRef.name</code> as secret source</p>
<pre><code>envSecrets=$(kubectl get pods -o jsonpath='{.items[*].spec.containers[*].env[*].valueFrom.secretKeyRef.name}' | xargs -n1)
envSecrets2=$(kubectl get pods -o jsonpath='{.items[*].spec.containers[*].envFrom[*].secretRef.name}' | xargs -n1)
volumeSecrets=$(kubectl get pods -o jsonpath='{.items[*].spec.volumes[*].secret.secretName}' | xargs -n1)
pullSecrets=$(kubectl get pods -o jsonpath='{.items[*].spec.imagePullSecrets[*].name}' | xargs -n1)
tlsSecrets=$(kubectl get ingress -o jsonpath='{.items[*].spec.tls[*].secretName}' | xargs -n1)
diff \
<(echo "$envSecrets\n$envSecrets2\n$volumeSecrets\n$pullSecrets\n$tlsSecrets" | sort | uniq) \
<(kubectl get secrets -o jsonpath='{.items[*].metadata.name}' | xargs -n1 | sort | uniq)
</code></pre>
|
<p>I have a pod with two containers. An application and a database. I have two replicas for the pod.</p>
<p>I want to expose the application port to be accessed outside the cluster and I do not want to expose the database port.</p>
<p>But I want to access the database port using DNS to balance the traffic.</p>
<p>I can create two services one as NodePort and expose the applicatin port and other service as ClusterIP to expose the database port.</p>
<p>I could also connect to localhost from the application server, since the traffic has already been balanced before reaching it, and only expose the app server as a NodePort.</p>
<p>The question is: could I expose the database port as a ClusterIP and the app port as a NodePort with only one service?</p>
<p>Thank you.</p>
| <p>The answer is: no, you cannot.</p>
<p>But you should still know that a Kubernetes Service of type <code>NodePort</code> will also allocate a Cluster IP to which the port will route. So if you wanted to publish both ports, a single Service would be sufficient to reach them internally via the name and externally via node ports.</p>
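<p>For illustration, a single <code>NodePort</code> Service publishing both ports might look like this (port numbers are placeholders); note that both ports then get a node port, so the database port cannot be kept cluster-internal with this approach:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: app    # reachable externally via its allocated node port
    port: 8080
  - name: db     # also gets a node port, and stays reachable internally via the cluster IP
    port: 5432
</code></pre>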
|
<p>With what CA Certificate are the Kubernetes Service Account JWT tokens signed with? Is there a way to get the public key with which kubernetes service accounts are signed in GKE?</p>
| <p>You have no access to that key in GKE.</p>
<p>In general, the Service Account JWT tokens are signed with an RSA key by the controller manager. The key is specified by the <code>--service-account-private-key-file</code> for <code>kube-controller-manager</code>. (The public key is specified by the <code>--service-account-key-file</code> parameter for <code>kube-apiserver</code>.)</p>
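<p>On a self-managed cluster (not on GKE) that wiring looks roughly like the following; the file paths are only illustrative:</p>
<pre><code># the controller manager signs service account tokens with the private key
kube-controller-manager --service-account-private-key-file=/etc/kubernetes/pki/sa.key ...

# the API server verifies them with the matching public key
kube-apiserver --service-account-key-file=/etc/kubernetes/pki/sa.pub ...
</code></pre>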
|
<p>I have a kubernetes deployment in which I am trying to run 5 docker containers inside a single pod on a single node. The containers hang in "Pending" state and are never scheduled. I do not mind running more than 1 pod but I'd like to keep the number of nodes down. I have assumed 1 node with 1 CPU and 1.7G RAM will be enough for the 5 containers and I have attempted to split the workload across them.</p>
<p>Initially I came to the conclusion that I have insufficient resources. I enabled autoscaling of nodes which produced the following (see kubectl describe pod command):</p>
<blockquote>
<p>pod didn't trigger scale-up (it wouldn't fit if a new node is added)</p>
</blockquote>
<p>Anyway, each docker container has a simple command which runs a fairly simple app. Ideally I wouldn't like to have to deal with setting CPU and RAM resource allocations, but even when I set the CPU/mem limits within bounds so they don't add up to more than 1, I still get the following (see kubectl describe po/test-529945953-gh6cl):</p>
<blockquote>
<p>No nodes are available that match all of the following predicates::
Insufficient cpu (1), Insufficient memory (1).</p>
</blockquote>
<p>Below are various commands that show the state. Any help on what I'm doing wrong will be appreciated.</p>
<blockquote>
<p>kubectl get all</p>
</blockquote>
<pre><code>user_s@testing-11111:~/gce$ kubectl get all
NAME READY STATUS RESTARTS AGE
po/test-529945953-gh6cl 0/5 Pending 0 34m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes 10.7.240.1 <none> 443/TCP 19d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/test 1 1 1 0 34m
NAME DESIRED CURRENT READY AGE
rs/test-529945953 1 1 0 34m
user_s@testing-11111:~/gce$
</code></pre>
<blockquote>
<p>kubectl describe po/test-529945953-gh6cl</p>
</blockquote>
<pre><code>user_s@testing-11111:~/gce$ kubectl describe po/test-529945953-gh6cl
Name: test-529945953-gh6cl
Namespace: default
Node: <none>
Labels: app=test
pod-template-hash=529945953
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"test-529945953","uid":"c6e889cb-a2a0-11e7-ac18-42010a9a001a"...
Status: Pending
IP:
Created By: ReplicaSet/test-529945953
Controlled By: ReplicaSet/test-529945953
Containers:
container-test2-tickers:
Image: gcr.io/testing-11111/testology:latest
Port: <none>
Command:
process_cmd
arg1
test2
Limits:
cpu: 150m
memory: 375Mi
Requests:
cpu: 100m
memory: 375Mi
Environment:
DB_HOST: 127.0.0.1:5432
DB_PASSWORD: <set to the key 'password' in secret 'cloudsql-db-credentials'> Optional: false
DB_USER: <set to the key 'username' in secret 'cloudsql-db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b2mxc (ro)
container-kraken-tickers:
Image: gcr.io/testing-11111/testology:latest
Port: <none>
Command:
process_cmd
arg1
arg2
Limits:
cpu: 150m
memory: 375Mi
Requests:
cpu: 100m
memory: 375Mi
Environment:
DB_HOST: 127.0.0.1:5432
DB_PASSWORD: <set to the key 'password' in secret 'cloudsql-db-credentials'> Optional: false
DB_USER: <set to the key 'username' in secret 'cloudsql-db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b2mxc (ro)
container-gdax-tickers:
Image: gcr.io/testing-11111/testology:latest
Port: <none>
Command:
process_cmd
arg1
arg2
Limits:
cpu: 150m
memory: 375Mi
Requests:
cpu: 100m
memory: 375Mi
Environment:
DB_HOST: 127.0.0.1:5432
DB_PASSWORD: <set to the key 'password' in secret 'cloudsql-db-credentials'> Optional: false
DB_USER: <set to the key 'username' in secret 'cloudsql-db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b2mxc (ro)
container-bittrex-tickers:
Image: gcr.io/testing-11111/testology:latest
Port: <none>
Command:
process_cmd
arg1
arg2
Limits:
cpu: 150m
memory: 375Mi
Requests:
cpu: 100m
memory: 375Mi
Environment:
DB_HOST: 127.0.0.1:5432
DB_PASSWORD: <set to the key 'password' in secret 'cloudsql-db-credentials'> Optional: false
DB_USER: <set to the key 'username' in secret 'cloudsql-db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b2mxc (ro)
cloudsql-proxy:
Image: gcr.io/cloudsql-docker/gce-proxy:1.09
Port: <none>
Command:
/cloud_sql_proxy
--dir=/cloudsql
-instances=testing-11111:europe-west2:testology=tcp:5432
-credential_file=/secrets/cloudsql/credentials.json
Limits:
cpu: 150m
memory: 375Mi
Requests:
cpu: 100m
memory: 375Mi
Environment: <none>
Mounts:
/cloudsql from cloudsql (rw)
/etc/ssl/certs from ssl-certs (rw)
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b2mxc (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-instance-credentials
Optional: false
ssl-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
cloudsql:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-b2mxc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-b2mxc
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
27m 17m 44 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (1), Insufficient memory (2).
26m 8s 150 cluster-autoscaler Normal NotTriggerScaleUp pod didn't trigger scale-up (it wouldn't fit if a new node is added)
16m 2s 63 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (1), Insufficient memory (1).
user_s@testing-11111:~/gce$
</code></pre>
<blockquote>
<p>kubectl get nodes</p>
</blockquote>
<pre><code>user_s@testing-11111:~/gce$ kubectl get nodes
NAME STATUS AGE VERSION
gke-test-default-pool-abdf83f7-p4zw Ready 9h v1.6.7
</code></pre>
<blockquote>
<p>kubectl get pods</p>
</blockquote>
<pre><code>user_s@testing-11111:~/gce$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-529945953-gh6cl 0/5 Pending 0 38m
</code></pre>
<blockquote>
<p>kubectl describe nodes</p>
</blockquote>
<pre><code>user_s@testing-11111:~/gce$ kubectl describe nodes
Name: gke-test-default-pool-abdf83f7-p4zw
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/fluentd-ds-ready=true
beta.kubernetes.io/instance-type=g1-small
beta.kubernetes.io/os=linux
cloud.google.com/gke-nodepool=default-pool
failure-domain.beta.kubernetes.io/region=europe-west2
failure-domain.beta.kubernetes.io/zone=europe-west2-c
kubernetes.io/hostname=gke-test-default-pool-abdf83f7-p4zw
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Tue, 26 Sep 2017 02:05:45 +0100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 26 Sep 2017 02:06:05 +0100 Tue, 26 Sep 2017 02:06:05 +0100 RouteCreated RouteController created a route
OutOfDisk False Tue, 26 Sep 2017 11:33:57 +0100 Tue, 26 Sep 2017 02:05:45 +0100 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Tue, 26 Sep 2017 11:33:57 +0100 Tue, 26 Sep 2017 02:05:45 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 26 Sep 2017 11:33:57 +0100 Tue, 26 Sep 2017 02:05:45 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Tue, 26 Sep 2017 11:33:57 +0100 Tue, 26 Sep 2017 02:06:05 +0100 KubeletReady kubelet is posting ready status. AppArmor enabled
KernelDeadlock False Tue, 26 Sep 2017 11:33:12 +0100 Tue, 26 Sep 2017 02:05:45 +0100 KernelHasNoDeadlock kernel has no deadlock
Addresses:
InternalIP: 10.154.0.2
ExternalIP: 35.197.217.1
Hostname: gke-test-default-pool-abdf83f7-p4zw
Capacity:
cpu: 1
memory: 1742968Ki
pods: 110
Allocatable:
cpu: 1
memory: 1742968Ki
pods: 110
System Info:
Machine ID: e6119abf844c564193495c64fd9bd341
System UUID: E6119ABF-844C-5641-9349-5C64FD9BD341
Boot ID: 1c2f2ea0-1f5b-4c90-9e14-d1d9d7b75221
Kernel Version: 4.4.52+
OS Image: Container-Optimized OS from Google
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.11.2
Kubelet Version: v1.6.7
Kube-Proxy Version: v1.6.7
PodCIDR: 10.4.1.0/24
ExternalID: 6073438913956157854
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system fluentd-gcp-v2.0-k565g 100m (10%) 0 (0%) 200Mi (11%) 300Mi (17%)
kube-system heapster-v1.3.0-3440173064-1ztvw 138m (13%) 138m (13%) 301456Ki (17%) 301456Ki (17%)
kube-system kube-dns-1829567597-gdz52 260m (26%) 0 (0%) 110Mi (6%) 170Mi (9%)
kube-system kube-dns-autoscaler-2501648610-7q9dd 20m (2%) 0 (0%) 10Mi (0%) 0 (0%)
kube-system kube-proxy-gke-test-default-pool-abdf83f7-p4zw 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kubernetes-dashboard-490794276-25hmn 100m (10%) 100m (10%) 50Mi (2%) 50Mi (2%)
kube-system l7-default-backend-3574702981-flqck 10m (1%) 10m (1%) 20Mi (1%) 20Mi (1%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
728m (72%) 248m (24%) 700816Ki (40%) 854416Ki (49%)
Events: <none>
</code></pre>
| <p>As you can see in the output of your <code>kubectl describe nodes</code> command under <code>Allocated resources:</code>, there is <code>728m (72%)</code> CPU and <code>700816Ki (40%)</code> memory already requested by Pods running in the <code>kube-system</code> namespace on the node. The sum of the resource requests of your test Pod exceeds both the remaining CPU and the remaining memory available on your node, as you can see under <code>Events</code> of your <code>kubectl describe po/[...]</code> command.</p>
<p>If you want to keep all containers in a single pod, you need to reduce the resource requests of your containers or run them on a node with more CPU and memory. The better solution would be to split your application into multiple pods; this enables distribution over multiple nodes.</p>
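<p>For illustration only, per-container requests in this ballpark would let the five containers fit next to the <code>kube-system</code> Pods on a 1 CPU / 1.7 GB node; the exact numbers depend on what your application really needs:</p>
<pre><code>resources:
  requests:
    cpu: 50m        # 5 x 50m = 250m on top of the ~730m already requested
    memory: 100Mi   # 5 x 100Mi = 500Mi on top of the ~700Mi already requested
  limits:
    cpu: 150m
    memory: 200Mi
</code></pre>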
|
<p>I am following this tutorial: <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">https://cloud.google.com/container-engine/docs/tutorials/http-balancer</a>, but running it inside Minikube with <code>yml</code> files for each steps:</p>
<p><strong>Step 1: Deploy an nginx server</strong></p>
<p>production.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: pwa-app-production
labels:
app: MyApp
spec:
replicas: 1
template:
metadata:
name: app
labels:
app: MyApp
env: production
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- name: nginx
containerPort: 80
</code></pre>
<p>Then:</p>
<pre><code>$ kubectl apply -f production.yml
</code></pre>
<p><strong>Step 2: Expose your nginx deployment as a service internally</strong></p>
<p>service.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
name: pwa-frontend
spec:
type: NodePort
selector:
app: MyApp
ports:
- name: nginx
port: 80
protocol: TCP
</code></pre>
<p>Then:</p>
<pre><code>$ kubectl apply -f service.yml
</code></pre>
<p>Verify the Service was created and a node port was allocated:</p>
<pre><code>$ kubectl get service pwa-frontend
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pwa-frontend 10.0.0.28 <nodes> 80:30781/TCP 26m
</code></pre>
<p><strong>Step 3: Create an Ingress resource</strong></p>
<p>ingress.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: pwa-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
backend:
serviceName: pwa-frontend
servicePort: 80
</code></pre>
<p>Then:</p>
<pre><code>$ kubectl create -f ingress.yml
</code></pre>
<p><strong>Step 4: Visit your application</strong></p>
<p>Find out the external IP address of the load balancer serving your application by running:</p>
<pre><code>$ kubectl describe ing pwa-ingress
Name: pwa-ingress
Namespace: default
Address: 192.168.99.100
Default backend: pwa-frontend:80 (172.17.0.2:80)
Rules:
Host Path Backends
---- ---- --------
* * pwa-frontend:80 (172.17.0.2:80)
Annotations:
rewrite-target: /
</code></pre>
<p>Everything seems to be working well and all the output seems to correspond to the tutorial. But now:</p>
<pre><code>$ curl 192.168.99.100
default backend - 404
</code></pre>
| <p>I am assuming that you deployed the default nginx ingress controller by <code>minikube addons enable ingress</code>. The tutorial you followed is specifically for Google Container Engine, in those clusters there is a different ingress controller deployed which will create Google Cloud Load Balancers and is also capable of exposing plain TCP services. Your nginx ingress controller in minikube is only capable of processing HTTP ingresses like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: pwa-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: pwa-frontend
servicePort: 80
</code></pre>
<p>Use <code>kubectl replace -f ingress.yml</code> after you have modified your file and try your request again. </p>
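<p>For example, assuming the file is still called <code>ingress.yml</code>, something like:</p>
<pre><code>$ kubectl replace -f ingress.yml
$ curl http://$(minikube ip)/
</code></pre>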
|
<p>I have a deployment with 2 containers inside a single pod (<strong>container-test2</strong> and <strong>cloudsql-proxy</strong>).</p>
<p><strong>container-test2</strong> runs a docker image which passes ["my_app", "arg1", "arg2"] as CMD. I would like to run several instances of this container with different argument combinations. I would also like to run them in separate pods so that I can distribute them across nodes. I am not entirely sure how to do this.</p>
<p>I can successfully run the two containers but I don't know how to make container-test2 replicate with different arguments and make each container start inside an individual pod.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: test
spec:
replicas: 1
template:
metadata:
labels:
app: test
spec:
containers:
- image: gcr.io/testing-11111/testology:latest
name: container-test2
command: ["my_app", "arg1", "arg2"]
env:
- name: DB_HOST
# Connect to the SQL proxy over the local network on a fixed port.
# Change the [PORT] to the port number used by your database
# (e.g. 3306).
value: 127.0.0.1:5432
# These secrets are required to start the pod.
# [START cloudsql_secrets]
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
- name: DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
# [END cloudsql_secrets]
resources:
requests:
#memory: "64Mi"
cpu: 0.1
limits:
memory: "375Mi"
cpu: 0.15
# Change [INSTANCE_CONNECTION_NAME] here to include your GCP
# project, the region of your Cloud SQL instance and the name
# of your Cloud SQL instance. The format is
# $PROJECT:$REGION:$INSTANCE
# Insert the port number used by your database.
# [START proxy_container]
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=testing-11111:europe-west2:testology=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
# [END proxy_container]
resources:
requests:
#memory: "64Mi"
cpu: 0.1
limits:
memory: "375Mi"
cpu: 0.15
# [START volumes]
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
# [END volumes]
</code></pre>
<p>EDIT: Solution</p>
<p>I solved it by creating multiple copies of the deployment config into directory "deployments", amending the names and command and then running:</p>
<blockquote>
<p>kubectl create -f deployments/</p>
</blockquote>
| <ol>
<li><p>You cannot make individual replicas run with different arguments; they would not be replicas as in "exact copy". If you want to run your application multiple times with different arguments, you need to use multiple deployments (see the sketch after this list). </p></li>
<li><p>The containers of a replica run in their own Pod, e.g. there should be three Pods existing for a Deployment scaled to three replicas.</p></li>
</ol>
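<p>A rough sketch of what this could look like - two Deployments that differ only in their name and arguments (the second argument set and the <code>variant</code> label are made-up placeholders; the cloudsql-proxy sidecar, env and volumes from your original spec are omitted for brevity):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-variant-a
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        variant: a
    spec:
      containers:
      - image: gcr.io/testing-11111/testology:latest
        name: container-test2
        command: ["my_app", "arg1", "arg2"]
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-variant-b
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        variant: b
    spec:
      containers:
      - image: gcr.io/testing-11111/testology:latest
        name: container-test2
        command: ["my_app", "arg3", "arg4"]
</code></pre>
<p>You can keep such files in one directory and create them all at once with <code>kubectl create -f deployments/</code>.</p>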
|
<p>Say I have the following pod spec. </p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
# Unique key of the Deployment instance
name: deployment-example
spec:
# 3 Pods should exist at all times.
replicas: 3
template:
metadata:
labels:
# Apply this label to pods and default
# the Deployment label selector to this value
app: nginx
spec:
containers:
- name: nginx
# Run this image
image: nginx:1.10
</code></pre>
<p>Here, the name of the container is <code>nginx</code>. Is there a way to get the "nginx" string from within the running container?</p>
<p>I mean, once I exec into the container with something like </p>
<pre><code>kubectl exec -it <pod-name> -c nginx bash
</code></pre>
<p>Is there a programmatic way to get to the given container name in the pod spec ? </p>
<hr>
<p>Note that this is not necessarily the docker container name that gets printed in <code>docker ps</code>. Kubernetes composes a longer name for the spawned docker container. </p>
<hr>
<p>The <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api" rel="noreferrer">downward api</a> looks promising in this regard. However <code>container name</code> is not mentioned in the <code>Capabilities of the Downward API</code> section. </p>
| <p>The container name is not available through the downward API. You can use <a href="http://www.yaml.org/spec/1.2/spec.html#alias//" rel="noreferrer">yaml anchors and aliases (references)</a> instead. Unfortunately they are not scoped, so you will have to come up with unique names for the anchors - it does not matter what they are, as they are <em>not</em> present in the parsed document.</p>
<blockquote>
<p>Subsequent occurrences of a previously serialized node are presented as alias nodes. The first occurrence of the node must be marked by an anchor to allow subsequent occurrences to be presented as alias nodes.</p>
<p>An alias node is denoted by the "*" indicator. The <strong>alias refers to the most recent preceding node having the same anchor</strong>. It is an error for an alias node to use an anchor that does not previously occur in the document. It is not an error to specify an anchor that is not used by any alias node.</p>
<pre><code>First occurrence: &anchor Foo
Second occurrence: *anchor
Override anchor: &anchor Bar
Reuse anchor: *anchor
</code></pre>
</blockquote>
<p>Here is a full working example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: reftest
spec:
containers:
- name: &container1name first
image: nginx:1.10
env:
- name: MY_CONTAINER_NAME
value: *container1name
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: &container2name second
image: nginx:1.10
env:
- name: MY_CONTAINER_NAME
value: *container2name
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
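<p>To verify that the aliases were resolved, you could print the variable from each container, for example:</p>
<pre><code>$ kubectl exec reftest -c first -- printenv MY_CONTAINER_NAME
first
$ kubectl exec reftest -c second -- printenv MY_CONTAINER_NAME
second
</code></pre>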
|
<p>First I start Kubernetes using Flannel with <code>10.244.0.0</code>.</p>
<p>Then I reset all and restart with <code>10.84.0.0</code>.</p>
<p>However, the interface <code>flannel.1</code> still is <code>10.244.1.0</code></p>
<p>That's how I clean up:</p>
<pre><code>kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /run/flannel
rm -rf /etc/cni/
ifconfig cni0 down
brctl delbr cni0
ifconfig flannel.1 down
systemctl start docker
</code></pre>
<p>Am I missing something in the reset? </p>
| <p>Because your ip link still has the old records.</p>
<p>You can see them with:</p>
<pre><code>ip link
</code></pre>
<p>If you want to clean up the records of the old flannel and cni interfaces, please try:</p>
<pre><code>ip link delete cni0
ip link delete flannel.1
</code></pre>
|
<p>I'm currently trying the Jenkins kubernetes plugin below, but have some problem.</p>
<p><a href="https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin" rel="noreferrer">https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin</a></p>
<p>In my case, Jenkins doesn't exist in my kubernetes cluster. This is because I have three kubernetes clusters for dev, staging, and production environments and rather than having three Jenkins service for each env, I want to have one consolidated Jenkins master which operates all three clusters.</p>
<p>Each environment is on an indivisual VPC and Jenkins server is on another VPC, so I setup VPC peering from Jenkins VPC to all other VPCs, and then opened 443 port from Jenkins to k8s master on DEV.</p>
<p>But when I click "Test connection" on "adding new cloud" -> "kubernetes", an error says </p>
<pre><code>javax.servlet.ServletException: io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:796)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:876)
at org.kohsuke.stapler.MetaClass$5.doDispatch(MetaClass.java:233)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:58)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:746)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:876)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:649)
at org.kohsuke.stapler.Stapler.service(Stapler.java:238)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:686)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1494)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:134)
at hudson.util.PluginServletFilter.doFilter(PluginServletFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1482)
at hudson.security.csrf.CrumbFilter.doFilter(CrumbFilter.java:49)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1482)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:84)
at hudson.security.UnwrapSecurityExceptionFilter.doFilter(UnwrapSecurityExceptionFilter.java:51)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at jenkins.security.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:117)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.providers.anonymous.AnonymousProcessingFilter.doFilter(AnonymousProcessingFilter.java:125)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.ui.rememberme.RememberMeProcessingFilter.doFilter(RememberMeProcessingFilter.java:142)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:271)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at jenkins.security.BasicHeaderProcessor.doFilter(BasicHeaderProcessor.java:93)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:249)
at hudson.security.HttpSessionContextIntegrationFilter2.doFilter(HttpSessionContextIntegrationFilter2.java:67)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:76)
at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:171)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1482)
at org.kohsuke.stapler.compression.CompressionFilter.doFilter(CompressionFilter.java:49)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1482)
at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:81)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1482)
at org.kohsuke.stapler.DiagnosticThreadNameFilter.doFilter(DiagnosticThreadNameFilter.java:30)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1474)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:533)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:960)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1021)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at winstone.BoundedExecutorService$1.run(BoundedExecutorService.java:77)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:418)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:58)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$DescriptorImpl.doTestConnection(KubernetesCloud.java:590)
at sun.reflect.GeneratedMethodAccessor1736.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.kohsuke.stapler.Function$InstanceFunction.invoke(Function.java:324)
at org.kohsuke.stapler.Function.bindAndInvoke(Function.java:167)
at org.kohsuke.stapler.Function.bindAndInvokeAndServeResponse(Function.java:100)
at org.kohsuke.stapler.MetaClass$1.doDispatch(MetaClass.java:124)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:58)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:746)
... 63 more
Caused by: java.net.NoRouteToHostException: No route to host
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at com.squareup.okhttp.internal.Platform.connectSocket(Platform.java:120)
at com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:141)
at com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:112)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:230)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:416)
... 74 more
</code></pre>
<p>I want to make sure if this plugin can be used in my case, because on most examples, it seems Jenkins master is also inside a kubernetes cluster as a k8s pod.</p>
| <p>1. Prepare a service account for the kubernetes-plugin in k8s</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: jenkins
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: jenkins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins
subjects:
- kind: ServiceAccount
name: jenkins
</code></pre>
<p>2. Find the corresponding secret that was generated: jenkins-token-xxxxxx</p>
<p>From the k8s dashboard, we can get the "<strong>ca.crt</strong>" string and "<strong>token</strong>" string from the secret.</p>
<p>3. Configure the Jenkins cloud</p>
<p>In the "<strong>Kubernetes server certificate key</strong>" item, fill in the "<strong>ca.crt</strong>" string.</p>
<p>In the "Credentials" item, create a credential of type "<strong>Secret text</strong>" or "<strong>OpenShift OAuth token</strong>" (even if you use plain k8s), fill in the "<strong>token</strong>" string, then use that credential.</p>
<p>"<strong>Test Connection</strong>" should pass now.</p>
|
<p>I created a namespace called <code>qc</code> for the qc environment.</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespace.name | quote }}
</code></pre>
<p><code>kubectl create -f namespace.yaml</code></p>
<p>But I can delete this namespace anytime by running <code>kubectl delete namespace qc</code>.</p>
<p>How can I prevent user-created namespaces from being deleted?</p>
<p>Thank you</p>
| <p>You do not want to disable deletion of Namespaces for your kubernetes-admin user, although it would be possible. If there are other people or services interacting with your cluster, you need to define Users and/or Service Accounts for them and bind Roles or Cluster Roles to them, whitelisting their permissions. Have a look at <a href="https://kubernetes.io/docs/admin/authentication/#users-in-kubernetes" rel="nofollow noreferrer">Users in Kubernetes</a> and <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">Using RBAC Authorization</a> in the official Kubernetes Documentation.</p>
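<p>As a minimal sketch (the user name and resource list are placeholders), a namespaced Role plus RoleBinding gives such a user full control over workloads inside <code>qc</code> while granting no permissions on cluster-scoped objects like Namespaces:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: qc
  name: qc-developer
rules:
- apiGroups: ["", "apps", "extensions"]
  resources: ["pods", "pods/log", "deployments", "replicasets", "services", "configmaps", "secrets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: qc-developer-binding
  namespace: qc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: qc-developer
subjects:
- kind: User
  name: jane          # placeholder user
  apiGroup: rbac.authorization.k8s.io
</code></pre>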
|
<p>I have setup a Kubernetes cluster with flannel network on bare metal. I have a service deployed and running in the cluster. The service would broadcast to discover the other end devices in the same subnet. </p>
<p>The problem is that the client agents which receive the broadcasts are running on resource-constrained hardware. These devices are running in the same subnet as the Master and Worker Nodes. The service deployed in pods (netmask: 10.244.0.0/16) is unable to discover the clients running in the host OS subnet (netmask: 192.168.0.0/24). How can I join the clients to the pods' subnet?</p>
<p>Any help is appreciated.</p>
| <p>I ended up making the host and the pods run in the same subnet by adding <code>hostNetwork: true</code> to the pod configuration. In that case the containers use the host network. Inspired by <a href="https://github.com/kubernetes/kubernetes/issues/19171" rel="nofollow noreferrer">here</a>.</p>
<p>The configuration looks like:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: worker
namespace: default
spec:
replicas: 4
selector:
name: worker
template:
metadata:
labels:
name: worker
spec:
hostNetwork: true
containers:
- image: 10.0.0.1:5000/worker
name: worker
imagePullPolicy: IfNotPresent
</code></pre>
|
<p>I'm trying to use the Jenkins/Kubernetes plugin to orchestrate docker slaves with Jenkins. </p>
<p>I'm using this plugin: <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">https://github.com/jenkinsci/kubernetes-plugin</a></p>
<p>My problem is that all the slaves are offline so the job can't execute:</p>
<p><a href="https://i.stack.imgur.com/2dLZc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2dLZc.png" alt="Slave status"></a></p>
<p><a href="https://i.stack.imgur.com/Go4Sw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Go4Sw.png" alt="enter image description here"></a></p>
<p>I have tried this on my local box using minikube, and on a K8 Cluster hosted by our ops group. I've tried both Jenkins 1.9 and Jenkins 2. I always get the same result. The screenshots are from Jenkins 1.642.4, K8 v1.2.0</p>
<p>Here is my configuration... note that when I click 'test connection' I get a success. Also note I didn't need any credentials (this is the only difference I can see vs the documented example).</p>
<p><a href="https://i.stack.imgur.com/9gcxQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9gcxQ.png" alt="Jenkins System Configuration"></a></p>
<p>The Jenkins log shows the following over and over:</p>
<pre><code> Waiting for slave to connect (11/100): docker-6b55f1b7fafce
Jul 20, 2016 5:01:06 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
Waiting for slave to connect (12/100): docker-6b55f1b7fafce
Jul 20, 2016 5:01:07 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
Waiting for slave to connect (13/100): docker-6b55f1b7fafce
Jul 20, 2016 5:01:08 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call
</code></pre>
<p>When I run <code>kubectl get events</code> I see this:</p>
<pre><code>24s 24s 1 docker-6b3c2ff27dad3 Pod Normal Scheduled {default-scheduler } Successfully assigned docker-6b3c2ff27dad3 to 96.xxx.xx.159
24s 23s 2 docker-6b3c2ff27dad3 Pod Warning MissingClusterDNS {kubelet 96.xxx.xx.159} kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Pulled {kubelet 96.xxx.xx.159} Container image "jenkinsci/jnlp-slave" already present on machine
23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Created {kubelet 96.xxx.xx.159} Created container with docker id 82fcf1bd0328
23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Started {kubelet 96.xxx.xx.159} Started container with docker id 82fcf1bd0328
</code></pre>
<p>Any ideas?</p>
<p>UPDATE: more log info as suggested by csanchez</p>
<pre><code> β docker git:(master) β kubectl get pods --namespace default -o wide
NAME READY STATUS RESTARTS AGE NODE
docker-6bb647254a2a4 1/1 Running 0 1m 96.x.x.159
$ kubectl log docker-6bafbac10b392
Jul 20, 2016 6:45:10 PM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to 96.x.x.159:50000 (retrying:10)
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
</code></pre>
<p>I'll have to look at what this port 50000 is used for??</p>
| <p>I just want to add a bit more explanation to the above answers for newbies.</p>
<p>While exposing the Jenkins UI, you also need to expose the internal port 50000.
Here is a simple service for a Jenkins deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: jenkins
namespace: jenkins
spec:
type: NodePort
ports:
- port: 8080
name: "http"
nodePort: 30000
targetPort: 8080
- port: 50000
name: "slave"
nodePort: 30010
targetPort: 50000
selector:
app: jenkins
</code></pre>
<p>For external access to the Jenkins UI, <code>nodePort</code> is being used in the above configuration. I'm exposing port 8080 to the nodePort 30000 (jenkins UI will now available at node_ip:30000) and exposing pod port 50000 to nodeport 30010.</p>
<p>Once the svc is created:</p>
<pre><code>$ kubectl get svc -n jenkins
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins 10.233.5.94 <nodes> 8080:30000/TCP,50000:30010/TCP 23m
</code></pre>
<p>Now add <code>jenkins_ip:30010</code> as <code>Jenkins Tunnel</code>.</p>
|
<p>I'd like to implement an application for transformation of some data in Kubernetes. The application contains a chain of logical operators.</p>
<p>A --- records R1 ---> B --- records R2 ---> C</p>
<ul>
<li>The operator "A" generates records "R1" and passes them to the
operator "B" (it's not important how the operator generates the
records).</li>
<li>The operator "B" transforms the input records "R1" to the output
records"R2" and passes them to the operator "C" in the push mode.</li>
<li>The operator "C" processes the records "R2" (it's not important how
the operator works).</li>
</ul>
<p>If my transformation is simple I can create the containers "A", "B", and "C" for the operators and put the containers into a pod. I can use the pod as a logical unit to start and stop my application.</p>
<p>But if the cost of my transformation is high I need to scale my application. I'd like to increase the number of instances of the operator "B" and run the transformations in parallel. I'd like to distribute the instances of the operator "B" to several Kubernetes nodes and support the failover for them.</p>
<p>Also I'd like to have a good interface in order to start and stop my application as a regular service.</p>
<p>Can I implement such kind of the application in Kubernetes?</p>
| <p>That really depends on how you pass data between the operators. Are they pushing data or pulling it? Or do they write to some persistent storage? How many requests/records are passing through the system and what is their size?</p>
<p>All in all, IMO you should implement each operator (A/B/C) as a separate Pod/Deployment. Then you can define Kubernetes Services for them so they can call each other and pull or push data if it's an API-based flow. Alternatively, you might put a queue in between (e.g. Kafka or RabbitMQ) and pass messages, or just use a database where you'd store the records in appropriate tables.</p>
<p>Hard to give a more precise answer without better understanding of the final objective.</p>
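<p>As a minimal sketch (names, image and port are placeholders), operator "B" could be its own Deployment that is scaled and rescheduled independently, with a Service in front of it that "A" can push records to:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: operator-b
spec:
  replicas: 3                 # scale B independently of A and C
  template:
    metadata:
      labels:
        app: operator-b
    spec:
      containers:
      - name: operator-b
        image: myregistry/operator-b:latest   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: operator-b
spec:
  selector:
    app: operator-b
  ports:
  - port: 8080
    targetPort: 8080
</code></pre>
<p>Scaling is then a matter of <code>kubectl scale deployment operator-b --replicas=10</code>, and the Deployment controller takes care of failover by recreating failed Pods on other nodes.</p>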
|
<p>I am new to Kubernetes</p>
<p>The goal is to get Kubernetes cluster dashboard working</p>
<p>The Kubernetes cluster was deployed using Kubespray: <a href="https://github.com/kubernetes-incubator/kubespray" rel="noreferrer">github.com/kubernetes-incubator/kubespray</a></p>
<p>Versions:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-15T08:51:21Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>When I do <code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml --validate=false</code> as described <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="noreferrer">here</a></p>
<p>I get:</p>
<pre><code>Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
</code></pre>
<p>When I run <code>kubectl get services --namespace kube-system</code>, I get:</p>
<pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.233.0.3 <none> 53/UDP,53/TCP 10d
kubernetes-dashboard 10.233.28.132 <none> 80/TCP 9d
</code></pre>
<p>When I try to reach the dashboard of the kubernetes cluster, I get <code>Connection refused</code></p>
<p><code>kubectl logs --namespace=kube-system kubernetes-dashboard-4167803980-1dz53</code> output:</p>
<pre><code>2017/09/27 10:54:11 Using in-cluster config to connect to apiserver
2017/09/27 10:54:11 Using service account token for csrf signing
2017/09/27 10:54:11 No request provided. Skipping authorization
2017/09/27 10:54:11 Starting overwatch
2017/09/27 10:54:11 Successful initial request to the apiserver, version: v1.7.3+coreos.0
2017/09/27 10:54:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2017/09/27 10:54:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/09/27 10:54:11 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2017/09/27 10:54:11 Initializing JWE encryption key from synchronized object
2017/09/27 10:54:11 Creating in-cluster Heapster client
2017/09/27 10:54:11 Serving securely on HTTPS port: 8443
2017/09/27 10:54:11 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
</code></pre>
<p>Other outputs:</p>
<p><code>kubectl get pods --namespace=kube-system</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
calico-node-bqckz 1/1 Running 0 12d
calico-node-r9svd 1/1 Running 2 12d
calico-node-w3tps 1/1 Running 0 12d
kube-apiserver-kubetest1 1/1 Running 0 12d
kube-apiserver-kubetest2 1/1 Running 0 12d
kube-controller-manager-kubetest1 1/1 Running 2 12d
kube-controller-manager-kubetest2 1/1 Running 2 12d
kube-dns-3888408129-n0m8d 3/3 Running 0 12d
kube-dns-3888408129-z8xx3 3/3 Running 0 12d
kube-proxy-kubetest1 1/1 Running 0 12d
kube-proxy-kubetest2 1/1 Running 0 12d
kube-proxy-kubetest3 1/1 Running 0 12d
kube-scheduler-kubetest1 1/1 Running 2 12d
kube-scheduler-kubetest2 1/1 Running 2 12d
kubedns-autoscaler-1629318612-sd924 1/1 Running 0 12d
kubernetes-dashboard-4167803980-1dz53 1/1 Running 0 1d
nginx-proxy-kubetest3 1/1 Running 0 12d
</code></pre>
<p><code>kubectl proxy</code>:</p>
<pre><code>Starting to serve on 127.0.0.1:8001panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2692f20]
goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl.(*ProxyServer).ServeOnListener(0x0, 0x3a95a60, 0xc420114110, 0x17, 0xc4208b7c28)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/proxy_server.go:201 +0x70
k8s.io/kubernetes/pkg/kubectl/cmd.RunProxy(0x3aa5ec0, 0xc42074e960, 0x3a7f1e0, 0xc42000c018, 0xc4201d7200, 0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:156 +0x774
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdProxy.func1(0xc4201d7200, 0xc4203586e0, 0x0, 0x2)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:79 +0x4f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4201d7200, 0xc420358500, 0x2, 0x2, 0xc4201d7200, 0xc420358500)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x234
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4202e4240, 0x5000107, 0x0, 0xffffffffffffffff)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4202e4240, 0xc42074e960, 0x3a7f1a0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22
</code></pre>
<p><code>kubectl top nodes</code>:</p>
<pre><code>Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
</code></pre>
<p><code>kubectl get svc --namespace=kube-system</code>:</p>
<pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.233.0.3 <none> 53/UDP,53/TCP 12d
kubernetes-dashboard 10.233.28.132 <none> 80/TCP 11d
</code></pre>
<p><code>curl http://localhost:8001/ui</code>:
<code>curl: (7) Failed to connect to 10.2.3.211 port 8001: Connection refused</code></p>
<p>How can I get the dashboard working? Appreciate your help.</p>
| <p>You may be installing dashboard version 1.7. Try installing version 1.6.3; it is well tested.</p>
<pre><code>kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
</code></pre>
<p>Update 10/2/17:
Can you try this: delete the 1.7 deployment and install version 1.6.3.</p>
<pre><code>kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
</code></pre>
|
<p>Using <a href="https://github.com/Yolean/kubernetes-kafka" rel="noreferrer">kubernetes-kafka</a> as a starting point with minikube.</p>
<p>This uses a StatefulSet and a <a href="https://github.com/Yolean/kubernetes-kafka/blob/master/20dns.yml" rel="noreferrer">headless service</a> for service discovery within the cluster.</p>
<p>The goal is to expose the individual Kafka Brokers externally which are internally addressed as:</p>
<pre><code>kafka-0.broker.kafka.svc.cluster.local:9092
kafka-1.broker.kafka.svc.cluster.local:9092
kafka-2.broker.kafka.svc.cluster.local:9092
</code></pre>
<p>The constraint is that this external service be able to address the brokers specifically.</p>
<p>Whats the right (or one possible) way of going about this? Is it possible to expose a external service per <code>kafka-x.broker.kafka.svc.cluster.local:9092</code>?</p>
| <p>We have solved this in 1.7 by changing the headless service to <code>Type=NodePort</code> and setting the <code>externalTrafficPolicy=Local</code>. This bypasses the internal load balancing of a Service and traffic destined to a specific node on that node port will only work if a Kafka pod is on that node.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: broker
spec:
externalTrafficPolicy: Local
ports:
- nodePort: 30000
port: 30000
protocol: TCP
targetPort: 9092
selector:
app: broker
type: NodePort
</code></pre>
<p>For example, we have two nodes nodeA and nodeB, nodeB is running a kafka pod. nodeA:30000 will not connect but nodeB:30000 will connect to the kafka pod running on nodeB.</p>
<p><a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport" rel="noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport</a></p>
<p>Note this was also available in 1.5 and 1.6 as a beta annotation, more can be found here on feature availability: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip</a></p>
<p>Note also that while this ties a kafka pod to a specific external network identity, it does not guarantee that your storage volume will be tied to that network identity. If you are using the VolumeClaimTemplates in a StatefulSet then your volumes are tied to the pod while kafka expects the volume to be tied to the network identity.</p>
<p>For example, if the kafka-0 pod restarts and kafka-0 comes up on nodeC instead of nodeA, kafka-0's pvc (if using VolumeClaimTemplates) has data that is for nodeA, and the broker running on kafka-0 starts rejecting requests, thinking that it is nodeA and not nodeC. </p>
<p>To fix this, we are looking forward to Local Persistent Volumes but right now we have a single PVC for our kafka StatefulSet and data is stored under <code>$NODENAME</code> on that PVC to tie volume data to a particular node.</p>
<p><a href="https://github.com/kubernetes/features/issues/121" rel="noreferrer">https://github.com/kubernetes/features/issues/121</a>
<a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#local</a></p>
|
<p>I have an nginx pod deployed in my kubernetes cluster to serve static files. In order to set a specific header in different environments I have followed the instructions in the <a href="https://github.com/docker-library/docs/tree/master/nginx#using-environment-variables-in-nginx-configuration" rel="nofollow noreferrer">official nginx docker image docs</a> which uses <code>envsubst</code> to generate the config file from a template before running nginx.</p>
<p>This is my nginx template (nginx.conf.template):</p>
<pre><code>events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
server {
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
root /usr/share/nginx/html;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location ~ \.css {
add_header Content-Type text/css;
}
location ~ \.js {
add_header Content-Type application/x-javascript;
}
location / {
add_header x-myapp-env $MYAPP_ENV;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
server {
listen 80;
server_name _;
return 301 https://$host$request_uri;
}
}
</code></pre>
<p>I use the <a href="https://v1-6.docs.kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">default command override</a> feature of Kubernetes to initially generate the nginx conf file before starting nginx. This is the relevant part of the config:</p>
<pre><code> command: ["/bin/sh"]
args: ["-c", "envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx -g 'daemon off;'" ]
</code></pre>
<p>Kubernetes successfully deploys the pod however when I make a request I get a <code>ERR_TOO_MANY_REDIRECTS</code> error in my browser.</p>
<p>Strangely, when I deploy the container without running the command override using an nginx.conf almost identical to the above (but without the <code>add_header</code> directive) it works fine.</p>
<p>(All SSL certs and files to be served are happily copied onto the container at build time so there should be no issue there)</p>
<p>Any help appreciated. </p>
| <p>I am pretty sure <code>envsubst</code> is biting you by making <code>try_files $uri $uri/ /index.html;</code> into <code>try_files / /index.html;</code> and <code>return 301 https://$host$request_uri;</code> into <code>return 301 https://;</code>. This results in a loop of redirections.</p>
<p>I suggest you run <code>envsubst '$MYAPP_ENV' <template >nginx.conf</code> instead. That will only replace that single variable and not the unintended ones. (Note the escaping around the variable in the sample command!) If later on you need to add variables you can specify them all like <code>envsubst '$VAR1$VAR2$VAR3'</code>.</p>
<p>If you want to replace <em>all</em> environment variables you can use this snippet:</p>
<pre><code>envsubst `declare -x | sed 's/^declare -x \([^=]*\)=.*/$\1/' | tr -d '\n'` <template >nginx.conf
</code></pre>
<p>Also, while it's not asked in the question you can save yourself some trouble by using <code>... && exec nginx -g 'daemon off;'</code>. The <code>exec</code> will replace the running shell (pid 1) with the nginx process instead of forking it. This also means that signals will be received by nginx, etc.</p>
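<p>Applied to the manifest from the question, the override could then look roughly like this (only the one variable is substituted, and <code>exec</code> hands pid 1 over to nginx):</p>
<pre><code>command: ["/bin/sh"]
args: ["-c", "envsubst '$MYAPP_ENV' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'" ]
</code></pre>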
|
<p>I have a Kubernetes 1.7.5 cluster which has somehow gotten into a semi-broken state. Scheduling a new deployment on this cluster partially fails: 1/2 pods starts normally, but the second pod does not start. The events are:</p>
<pre><code>default 2017-09-28 03:57:02 -0400 EDT 2017-09-28 03:57:02 -0400 EDT 1 hello-4059723819-8s35v Pod spec.containers{hello} Normal Pulled kubelet, k8s-agentpool1-18117938-2 Successfully pulled image "myregistry.azurecr.io/mybiz/hello"
default 2017-09-28 03:57:02 -0400 EDT 2017-09-28 03:57:02 -0400 EDT 1 hello-4059723819-8s35v Pod spec.containers{hello} Normal Created kubelet, k8s-agentpool1-18117938-2 Created container
default 2017-09-28 03:57:03 -0400 EDT 2017-09-28 03:57:03 -0400 EDT 1 hello-4059723819-8s35v Pod spec.containers{hello} Normal Started kubelet, k8s-agentpool1-18117938-2 Started container
default 2017-09-28 03:57:13 -0400 EDT 2017-09-28 03:57:01 -0400 EDT 2 hello-4059723819-tj043 Pod Warning FailedSync kubelet, k8s-agentpool1-18117938-3 Error syncing pod
default 2017-09-28 03:57:13 -0400 EDT 2017-09-28 03:57:02 -0400 EDT 2 hello-4059723819-tj043 Pod Normal SandboxChanged kubelet, k8s-agentpool1-18117938-3 Pod sandbox changed, it will be killed and re-created.
default 2017-09-28 03:57:24 -0400 EDT 2017-09-28 03:57:01 -0400 EDT 3 hello-4059723819-tj043 Pod Warning FailedSync kubelet, k8s-agentpool1-18117938-3 Error syncing pod
default 2017-09-28 03:57:25 -0400 EDT 2017-09-28 03:57:02 -0400 EDT 3 hello-4059723819-tj043 Pod Normal SandboxChanged kubelet, k8s-agentpool1-18117938-3 Pod sandbox changed, it will be killed and re-created.
[...]
</code></pre>
<p>The last two log messages just keep repeating themselves.</p>
<p>The dashboard of the failed pod shows:</p>
<p><a href="https://i.stack.imgur.com/vw6BN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vw6BN.png" alt="Dashboard of failed pod"></a></p>
<p>Eventually the dashboard shows the error:</p>
<pre><code>Error: failed to start container "hello": Error response from daemon: {"message":"cannot join network of a non running container: 7e95918c6b546714ae20f12349efcc6b4b5b9c1e84b5505cf907807efd57525c"}
</code></pre>
<p>This cluster is running on Azure using the CNI Azure networking plugin. Everything was working fine up until some time after I enabled the <code>--runtime-config=batch/v2alpha1=true</code> in order to use the <code>CronJob</code> functionality. Now, even after removing that API level, and rebooting the master, the problem still occurs.</p>
<p>The kubelet log on the node shows that an IP address cannot be allocated:</p>
<pre><code>E0928 20:54:01.733682 1750 pod_workers.go:182] Error syncing pod 65127a94-a425-11e7-8d64-000d3af4357e ("hello-4059723819-xx16n_default(65127a94-a425-11e7-8d64-000d3af4357e)"), skipping: failed to "CreatePodSandbox" for "hello-4059723819-xx16n_default(65127a94-a425-11e7-8d64-000d3af4357e)" with CreatePodSandboxError: "CreatePodSandbox for pod \"hello-4059723819-xx16n_default(65127a94-a425-11e7-8d64-000d3af4357e)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"hello-4059723819-xx16n_default\" network: Failed to allocate address: Failed to delegate: Failed to allocate address: No available addresses"
</code></pre>
| <p>This is a bug with Azure CNI not always correctly recycling IP addresses from terminated pods. See this issue: <a href="https://github.com/Azure/azure-container-networking/issues/76" rel="nofollow noreferrer">https://github.com/Azure/azure-container-networking/issues/76</a>.</p>
<p>The reason this occurred after enabling <code>CronJob</code> functionality is that cronjob containers are (usually) short-lived, and are allocated an IP every time they run. If those IPs are not recycled and re-usable by the underlying networking system -- in this case CNI -- they quickly run out.</p>
|
<p>I'm looking for an efficient and easy way to adapt my current Theano model so it can scale for prediction. I'm also looking for a way to easily train lots of models with different parameters.</p>
<p>It's seems that there is two main ways to do it. The first is to use Spark and the second is to use Docker and Kubernetes.</p>
<p>My experience with both is fairly limited, so, I have no idea if there are correct way to solve my problem and what are the differences between each solutions.</p>
| <p>Kubernetes and Spark are two different things:
Kubernetes is a platform (a PaaS) that provides you the infrastructure to run your applications,
while Spark is used to run your algorithms and do distributed computation, but you need a cluster to run Spark on,
and that is where Kubernetes can help you.</p>
<p>How do you build Spark on Kubernetes? You can see the <a href="https://github.com/kubernetes/examples/tree/master/staging/spark" rel="nofollow noreferrer">reference</a>.</p>
<p>Good Luck!</p>
|
<p>When using a self-hosted kubeadm cluster on Ubuntu, I cannot access other pods or the external network from within a k8s pod, but I can do so from regular Docker containers.</p>
<p>I tried with different types of pod network including calico, weave and flannel.</p>
<p>I followed the debugging instructions from <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">here</a> without any success; below are the logs.</p>
<pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes.default'
$ kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
kube-dns-2425271678-9zwtd 3/3 Running 0 12m
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
I0823 16:02:58.407162 6 dns.go:48] version: 1.14.3-4-gee838f6
I0823 16:02:58.408957 6 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0823 16:02:58.409223 6 server.go:113] FLAG: --alsologtostderr="false"
I0823 16:02:58.409248 6 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0823 16:02:58.409288 6 server.go:113] FLAG: --config-map=""
I0823 16:02:58.409301 6 server.go:113] FLAG: --config-map-namespace="kube-system"
I0823 16:02:58.409309 6 server.go:113] FLAG: --config-period="10s"
I0823 16:02:58.409325 6 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0823 16:02:58.409333 6 server.go:113] FLAG: --dns-port="10053"
I0823 16:02:58.409370 6 server.go:113] FLAG: --domain="cluster.local."
I0823 16:02:58.409387 6 server.go:113] FLAG: --federations=""
I0823 16:02:58.409401 6 server.go:113] FLAG: --healthz-port="8081"
I0823 16:02:58.409411 6 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0823 16:02:58.409434 6 server.go:113] FLAG: --kube-master-url=""
I0823 16:02:58.409451 6 server.go:113] FLAG: --kubecfg-file=""
I0823 16:02:58.409458 6 server.go:113] FLAG: --log-backtrace-at=":0"
I0823 16:02:58.409470 6 server.go:113] FLAG: --log-dir=""
I0823 16:02:58.409478 6 server.go:113] FLAG: --log-flush-frequency="5s"
I0823 16:02:58.409489 6 server.go:113] FLAG: --logtostderr="true"
I0823 16:02:58.409496 6 server.go:113] FLAG: --nameservers=""
I0823 16:02:58.409521 6 server.go:113] FLAG: --stderrthreshold="2"
I0823 16:02:58.409533 6 server.go:113] FLAG: --v="2"
I0823 16:02:58.409544 6 server.go:113] FLAG: --version="false"
I0823 16:02:58.409559 6 server.go:113] FLAG: --vmodule=""
I0823 16:02:58.409728 6 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0823 16:02:58.467505 6 server.go:198] Skydns metrics enabled (/metrics:10055)
I0823 16:02:58.467640 6 dns.go:147] Starting endpointsController
I0823 16:02:58.467810 6 dns.go:150] Starting serviceController
I0823 16:02:58.557166 6 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0823 16:02:58.557335 6 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0823 16:02:58.968454 6 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
I0823 16:02:59.468406 6 dns.go:171] Initialized services and endpoints from apiserver
I0823 16:02:59.468698 6 server.go:129] Setting up Healthz Handler (/readiness)
I0823 16:02:59.469064 6 server.go:134] Setting up cache handler (/cache)
I0823 16:02:59.469305 6 server.go:120] Status HTTP port 8081
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
I0823 16:02:59.445525 11 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0823 16:02:59.445741 11 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0823 16:02:59.820424 11 nanny.go:108] dnsmasq[38]: started, version 2.76 cachesize 1000
I0823 16:02:59.820546 11 nanny.go:108] dnsmasq[38]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0823 16:02:59.820596 11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0823 16:02:59.820623 11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0823 16:02:59.820659 11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0823 16:02:59.820736 11 nanny.go:108] dnsmasq[38]: reading /etc/resolv.conf
I0823 16:02:59.820762 11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0823 16:02:59.820788 11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0823 16:02:59.820825 11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0823 16:02:59.820850 11 nanny.go:108] dnsmasq[38]: using nameserver 8.8.8.8#53
I0823 16:02:59.820928 11 nanny.go:108] dnsmasq[38]: read /etc/hosts - 7 addresses
I0823 16:02:59.821193 11 nanny.go:111]
W0823 16:02:59.821212 11 nanny.go:112] Got EOF from stdout
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar
ERROR: logging before flag.Parse: I0823 16:03:00.789793 26 main.go:48] Version v1.14.3-4-gee838f6
ERROR: logging before flag.Parse: I0823 16:03:00.790052 26 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
ERROR: logging before flag.Parse: I0823 16:03:00.790121 26 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: I0823 16:03:00.790419 26 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
</code></pre>
<p>Below is the etc/resolv.conf from the master.</p>
<pre><code>$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.8.8
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T06:43:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Below is the etc/resolv.conf from worker node where the pod is running</p>
<pre><code># Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 8.8.4.4
nameserver 8.8.8.
</code></pre>
<p>Here is the output of sudo iptables -n -L</p>
<pre><code>Chain INPUT (policy ACCEPT)
target prot opt source destination
cali-INPUT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:Cz_u1IQiXIMmKD4c */
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy DROP)
target prot opt source destination
cali-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 /* cali:wUHhoiAYhphO9Mso */
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
WEAVE-NPC all -- 0.0.0.0/0 0.0.0.0/0
NFLOG all -- 0.0.0.0/0 0.0.0.0/0 state NEW nflog-group 86
DROP all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
cali-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:tVnHkvAo15HuiPy0 */
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-SERVICES (2 references)
target prot opt source destination
REJECT tcp -- 0.0.0.0/0 10.96.252.131 /* default/redis-cache-service:redis has no endpoints */ tcp dpt:6379 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 10.96.252.131 /* default/redis-cache-service:cluster has no endpoints */ tcp dpt:16379 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 10.105.180.126 /* default/redis-pubsub-service:redis has no endpoints */ tcp dpt:6379 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 10.105.180.126 /* default/redis-pubsub-service:cluster has no endpoints */ tcp dpt:16379 reject-with icmp-port-unreachable
Chain WEAVE-NPC (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 224.0.0.0/4
WEAVE-NPC-DEFAULT all -- 0.0.0.0/0 0.0.0.0/0 state NEW
WEAVE-NPC-INGRESS all -- 0.0.0.0/0 0.0.0.0/0 state NEW
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ! match-set weave-local-pods dst
Chain WEAVE-NPC-DEFAULT (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 match-set weave-iuZcey(5DeXbzgRFs8Szo]+@p dst
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 match-set weave-4vtqMI+kx/2]jD%_c0S%thO%V dst
Chain WEAVE-NPC-INGRESS (1 references)
target prot opt source destination
Chain cali-FORWARD (1 references)
target prot opt source destination
cali-from-wl-dispatch all -- 0.0.0.0/0 0.0.0.0/0 /* cali:X3vB2lGcBrfkYquC */
cali-to-wl-dispatch all -- 0.0.0.0/0 0.0.0.0/0 /* cali:UtJ9FnhBnFbyQMvU */
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:Tt19HcSdA5YIGSsw */
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:9LzfFCvnpC5_MYXm */
MARK all -- 0.0.0.0/0 0.0.0.0/0 /* cali:7AofLLOqCM5j36rM */ MARK and 0xf1ffffff
cali-from-host-endpoint all -- 0.0.0.0/0 0.0.0.0/0 /* cali:QM1_joSl7tL76Az7 */ mark match 0x0/0x1000000
cali-to-host-endpoint all -- 0.0.0.0/0 0.0.0.0/0 /* cali:C1QSog3bk0AykjAO */
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:DmFiPAmzcisqZcvo */ /* Host endpoint policy accepted packet. */ mark match 0x1000000/0x1000000
Chain cali-INPUT (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:i7okJZpS8VxaJB3n */ mark match 0x1000000/0x1000000
DROP 4 -- 0.0.0.0/0 0.0.0.0/0 /* cali:p8Wwvr6qydjU36AQ */ /* Drop IPIP packets from non-Calico hosts */ ! match-set cali4-all-hosts src
cali-wl-to-host all -- 0.0.0.0/0 0.0.0.0/0 [goto] /* cali:QZT4Ptg57_76nGng */
MARK all -- 0.0.0.0/0 0.0.0.0/0 /* cali:V0Veitpvpl5h1xwi */ MARK and 0xf0ffffff
cali-from-host-endpoint all -- 0.0.0.0/0 0.0.0.0/0 /* cali:3R1g0cpvSoBlKzVr */
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:efXx-pqD4s60WsDL */ /* Host endpoint policy accepted packet. */ mark match 0x1000000/0x1000000
Chain cali-OUTPUT (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:YQSSJIsRcHjFbXaI */ mark match 0x1000000/0x1000000
RETURN all -- 0.0.0.0/0 0.0.0.0/0 /* cali:KRjBsKsBcFBYKCEw */
MARK all -- 0.0.0.0/0 0.0.0.0/0 /* cali:3VKAQBcyUUW5kS_j */ MARK and 0xf0ffffff
cali-to-host-endpoint all -- 0.0.0.0/0 0.0.0.0/0 /* cali:Z1mBCSH1XHM6qq0k */
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:N0jyWt2RfBedKw3L */ /* Host endpoint policy accepted packet. */ mark match 0x1000000/0x1000000
Chain cali-failsafe-in (0 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* cali:wWFQM43tJU7wwnFZ */ multiport dports 22
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 /* cali:LwNV--R8MjeUYacw */ multiport dports 68
Chain cali-failsafe-out (0 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* cali:73bZKoyDfOpFwC2T */ multiport dports 2379
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* cali:QMFuWo6o-d9yOpNm */ multiport dports 2380
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* cali:Kup7QkrsdmfGX0uL */ multiport dports 4001
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 /* cali:xYYr5PEqDf_Pqfkv */ multiport dports 7001
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 /* cali:nbWBvu4OtudVY60Q */ multiport dports 53
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 /* cali:UxFu5cDK5En6dT3Y */ multiport dports 67
Chain cali-from-host-endpoint (2 references)
target prot opt source destination
Chain cali-from-wl-dispatch (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* cali:zTj6P0TIgYvgz-md */ /* Unknown interface */
Chain cali-to-host-endpoint (2 references)
target prot opt source destination
Chain cali-to-wl-dispatch (1 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* cali:7KNphB1nNHw80nIO */ /* Unknown interface */
Chain cali-wl-to-host (1 references)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 /* cali:aEOMPPLgak2S0Lxs */ multiport sports 68 multiport dports 67
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 /* cali:SzR8ejPiuXtFMS8B */ multiport dports 53
cali-from-wl-dispatch all -- 0.0.0.0/0 0.0.0.0/0 /* cali:MEmlbCdco0Fefcrw */
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* cali:LZBoXHDOlr3ok4R3 */ /* Configured DefaultEndpointToHostAction */
</code></pre>
| <p>Maybe your iptables still has old records, because I think you used kubeadm reset or set up a different overlay network before.
Please do the following after kubeadm reset and after removing docker:</p>
<p>Check your network interfaces:</p>
<pre><code>ip link
</code></pre>
<p>and clean up the old ones:</p>
<pre><code>ip link delete cni0
ip link delete flannel.1
</code></pre>
<p>(if there are interfaces from other networks, e.g. weave, please delete them as well)</p>
<p>then clean the iptables rules:</p>
<pre><code>iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X
</code></pre>
<p>then reinstall docker and kubernetes; after that the pod should be able to reach the
external network.</p>
<p>Good Luck!</p>
|
<p>Is it possible to take a pod out of rotation to debug some issue, or to create a pod from the same replication controller without letting it serve live traffic?</p>
| <p>I find the best solution is to take a Pod that was created by the controller and change the labels on it in a way that it is excluded from the Service(s). The controller will spawn another Pod to replace the now missing one.</p>
<pre><code>kubectl label pod mypod app- myotherlabel- debug=true
</code></pre>
|
<p>What I'm trying to do is using Celery with Kubernetes. I'm using Redis as the message broker in a different pod and I have multiple pods for each queue of Celery.</p>
<p>Imagine if I have 3 queues, I would have 3 different pods (i.e workers) that can accept and handle the requests. </p>
<p>Everything is working fine so far, but my question is: what would happen if I clone the pod of one of the queues so that there are two pods for one single queue? </p>
<p>I think the client (i.e. Django) creates a new message using Redis to send to the worker and start the job, but it's not clear to me what would happen because I have two pods listening to the same queue. Does the first pod accept the request and start the job, preventing the other pod from accepting the request? </p>
<p><em>(I tried to search a bit on the documentation of Celery to see if I can find any clues but I couldn't. That's why I'm asking this question)</em></p>
| <p>I guess you are using the basic task type, which employs the 'direct' queue type, not a 'fanout' or 'topic' queue; the latter two behave quite differently and are not discussed here.</p>
<p>When using Redis as the broker transport, celery/kombu use a Redis <code>list</code> object as the storage for a queue (<a href="https://github.com/celery/kombu/blob/master/kombu/transport/redis.py" rel="noreferrer">source</a>), using the <code>LPUSH</code> command to publish messages and <code>BRPOP</code> to consume them.</p>
<p>In short, <code>BRPOP</code> (<a href="https://redis.io/commands/brpop" rel="noreferrer">doc</a>) blocks the connection when there are no elements to pop from the given lists; if a list is not empty, an element is popped from the tail of that list. It is guaranteed that <a href="https://redis.io/commands/blpop#blocking-behavior" rel="noreferrer">this operation is atomic</a>: no two connections can get the same element. </p>
<p>Celery leverages this feature to guarantee <em>at-least-once</em> message delivery. The use of acknowledgments doesn't affect this guarantee.</p>
<p>In your case, there are multiple celery workers across multiple pods, but all of them are connected to the same Redis server, all blocked on the same key and trying to pop an element from the same list object. When a new message arrives, one and only one worker gets it.</p>
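<p>A quick way to see this behaviour outside of celery is with two <code>redis-cli</code> clients blocking on the same list (the queue name 'celery' below is just an illustrative default, not taken from your setup):</p>
<pre><code># terminal 1 and terminal 2: both block on the same list
redis-cli BRPOP celery 0

# terminal 3: publish a single element
redis-cli LPUSH celery '{"task": "example"}'

# only one of the two blocked clients receives the element
</code></pre>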
<hr>
|
<p>I'm working on a way to discover traffic between Kubernetes Services and to monitor it.
Does someone know how I can achieve that?</p>
<p>Where I can for example find this kind of metrics or events ?</p>
<p>Thank you in advance</p>
| <p>If you are using <code>kube-proxy --proxy-mode iptables</code> (which is a default by the time of writing) then you can use <a href="https://github.com/aabc/ipt-netflow" rel="nofollow noreferrer">Netflow iptables module</a>.</p>
<p>Or if you need to debug something ad-hoc then just <code>grep <service_ip> /proc/net/nf_conntrack</code>. Here is an example of a DNS talk we have:</p>
<pre><code># grep '10\.3\.0\.10' /proc/net/nf_conntrack
ipv4 2 udp 17 26 src=192.168.101.1 dst=10.3.0.10 sport=41349 dport=53 src=10.2.38.2 dst=10.2.31.0 sport=53 dport=41349 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 zone=0 use=2
ipv4 2 udp 17 12 src=192.168.101.1 dst=10.3.0.10 sport=57298 dport=53 src=10.2.38.2 dst=10.2.31.0 sport=53 dport=57298 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 zone=0 use=2
ipv4 2 udp 17 102 src=192.168.101.1 dst=10.3.0.10 sport=43260 dport=53 src=10.2.38.2 dst=10.2.31.0 sport=53 dport=43260 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 zone=0 use=2
ipv4 2 udp 17 65 src=192.168.101.1 dst=10.3.0.10 sport=44899 dport=53 src=10.2.38.2 dst=10.2.31.0 sport=53 dport=44899 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 zone=0 use=2
</code></pre>
|
<p>I have installed minikube on a server which I can access from the internet.</p>
<p>I have created a kubernetes service which is available:</p>
<pre><code>>kubectl get service myservice
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myservice 10.0.0.246 <nodes> 80:31988/TCP 14h
</code></pre>
<p>The IP address of minikube is:</p>
<pre><code>>minikube ip
192.168.42.135
</code></pre>
<p>I would like the URL <code>http://myservice.myhost.com</code> (i.e. port 80) to map to the service in minikube.</p>
<p>I have nginx running on the host (totally unrelated to kubernetes). I can set up a virtual host, mapping the URL to <code>192.168.42.135:31988</code> (the node port) and it works fine.</p>
<p>I would like to use an ingress. I've added and enabled ingress. But I am unsure of:</p>
<p>a) what the yaml file should contain</p>
<p>b) how incoming traffic on port 80, from the browser, gets redirected to the ingress and minikube.</p>
<p>c) do I still need to use nginx as a reverse proxy?</p>
<p>d) if so, what address is the ingress-nginx running on (so that I can map traffic to it)?</p>
| <h1>Setup</h1>
<p>First of all, you need a <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx ingress controller</a>. </p>
<p>The nginx instance(s) will listen on the host's ports 80 and 443, and route every HTTP request to the services defined in the ingress configuration, like this.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-service-ingress
annotations:
# by default the controller redirects (301) HTTP to HTTPS,
# the following would make it disabled.
# ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: myservice
servicePort: 80
</code></pre>
<p>Use <code>https://{host-ip}/</code> to visit myservice. The host should be the one where the nginx controller is running.</p>
<h1>Outside</h1>
<p>Normally you don't need another nginx outside the kubernetes cluster.</p>
<p>Minikube is a little different, though: it runs kubernetes in a virtual machine instead of on the host.</p>
<p>We need some port forwarding like host:80 => minikube:80, and running a reverse proxy (like nginx) on the host is an elegant way to do it; see the sketch below.</p>
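<p>A minimal sketch of such a host-side nginx server block, assuming the ingress controller exposes port 80 on the minikube VM at 192.168.42.135 (the IP and hostname are taken from your question):</p>
<pre><code>server {
    listen 80;
    server_name myservice.myhost.com;

    location / {
        # forward everything to the ingress controller inside the minikube VM
        proxy_pass http://192.168.42.135;
        proxy_set_header Host $host;
    }
}
</code></pre>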
<p>It can also be done by <a href="https://www.virtualbox.org/manual/ch06.html#natforward" rel="nofollow noreferrer">setting virtual networking port forward in Virtualbox</a>.</p>
|
<p>Flannel running in a pod is getting the wrong subnet and networking is just not happy. The symptom is that flannel is being assigned /24's from 10.105.0.0/16, while it should be assigning /26's from 10.105.5.128/21. Thanks for any help. </p>
<p>here are the details:</p>
<pre><code>/usr/bin/kubeadm init \
--kubernetes-version v1.7.5 \
--pod-network-cidr 10.105.5.128/21 \
--service-cidr 10.105.5.136/21 \
--token XXXXXXXXXXX
</code></pre>
<p>kube-flannel-rbac.yml is loaded after kube-flannel.yml.
The only modified bits (SubnetLen and Network) from kube-flannel.yml:</p>
<pre><code>{
"Network": "10.105.5.128/21",
"SubnetLen": 26,
"Backend": {
"Type": "vxlan"
}
}
</code></pre>
<p>DNS is set in the systemd file to:</p>
<pre><code>--cluster-dns=10.105.5.136.10
</code></pre>
<p>Using Ubuntu 16.04 LTS and stock kernel</p>
<p>here is the docker daemon.json file:</p>
<pre><code>{
"hosts":[
"fd://",
"0.0.0.0"
],
"ip-masq":false,
"experimental": true,
"registry-mirrors": [
"http://hub.xyz.com"
],
"insecure-registries": [
"http://hub.xyz.com"
],
"tls": true,
"tlsverify": true,
"tlscacert":"/etc/docker/ca.pem",
"tlscert":"/etc/docker/cert.pem",
"tlskey":"/etc/docker/key.pem"
}
</code></pre>
<p>all kubernetes components are 1.7.5, installed from the ubuntu k8s repos</p>
<p>here is the log of the kube-flannel container: </p>
<pre><code>I0926 03:29:10.214198 89 main.go:446] Determining IP address of default interface
I0926 03:29:10.216166 89 main.go:459] Using interface with name eth0 and address 10.105.5.12
I0926 03:29:10.216261 89 main.go:476] Defaulting external address to interface address (10.105.5.12)
I0926 03:29:10.242216 89 kube.go:283] Starting kube subnet manager
I0926 03:29:10.242055 89 kube.go:130] Waiting 10m0s for node controller to sync
I0926 03:29:11.242864 89 kube.go:137] Node controller sync successful
I0926 03:29:11.242957 89 main.go:226] Created subnet manager: Kubernetes Subnet Manager - kube-m2.XXXXX.com
I0926 03:29:11.242969 89 main.go:229] Installing signal handlers
I0926 03:29:11.243984 89 main.go:330] Found network config - Backend type: vxlan
I0926 03:29:11.288902 89 ipmasq.go:51] Adding iptables rule: -s 10.105.0.0/21 -d 10.105.0.0/21 -j RETURN
I0926 03:29:11.306692 89 ipmasq.go:51] Adding iptables rule: -s 10.105.0.0/21 ! -d 224.0.0.0/4 -j MASQUERADE
I0926 03:29:11.314413 89 ipmasq.go:51] Adding iptables rule: ! -s 10.105.0.0/21 -d 10.105.0.0/24 -j RETURN
I0926 03:29:11.323229 89 ipmasq.go:51] Adding iptables rule: ! -s 10.105.0.0/21 -d 10.105.0.0/21 -j MASQUERADE
I0926 03:29:11.329676 89 main.go:279] Wrote subnet file to /run/flannel/subnet.env
I0926 03:29:11.329746 89 main.go:284] Finished starting backend.
I0926 03:29:11.329829 89 vxlan_network.go:56] Watching for L3 misses
I0926 03:29:11.329903 89 vxlan_network.go:64] Watching for new subnet leases
</code></pre>
| <p>Did you do a kubeadm reset before?
If yes, please check the network interfaces with</p>
<pre><code>ip link
</code></pre>
<p>and check whether the flannel network is right.
If not, please use</p>
<pre><code>ip link delete cni0
ip link delete flannel.1
</code></pre>
<p>to clean up your network setting.</p>
<p>Good luck!</p>
|
<p>I set up a kubernetes cluster on a Windows machine with VirtualBox. I have 4 guest CentOS 7 systems running. I set up the cluster using the <a href="https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/</a> guide. While deploying the kubernetes dashboard I got the error </p>
<pre><code>Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (BadRequest): error when creating "kubernetes-dashboard.yaml": ClusterRoleBinding in version "v1beta1" cannot be handled as a ClusterRoleBinding: no kind "ClusterRoleBinding" is registered for version "rbac.authorization.k8s.io/v1beta1"
error validating "kubernetes-dashboard.yaml": error validating data: found invalid field tolerations for v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>Then I executed the command again with -validate=false option. This time I got the below error</p>
<pre><code>Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (BadRequest): error when creating "kubernetes-dashboard.yaml": ClusterRoleBinding in version "v1beta1" cannot be handled as a ClusterRoleBinding: no kind "ClusterRoleBinding" is registered for version "rbac.authorization.k8s.io/v1beta1"
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
</code></pre>
<p>I have seen that lots of people have got the similar error but could not find the solution anywhere. Output of some of the commands</p>
<pre><code>$kubectl get pods -a -o wide --all-namespaces
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.254.25.191
Port: <unset> 80/TCP
Endpoints: <none>
Session Affinity: None
No events.
$kubectl get pods -a -o wide --all-namespaces
No resources found.
$kubectl cluster-info
Kubernetes master is running at http://localhost:8080
$kubectl get nodes
NAME STATUS AGE
centos-minion-1 Ready 2d
centos-minion-2 Ready 2d
centos-minion-3 Ready 2d
</code></pre>
<p>Please let me know if I am missing something</p>
<p>Thanks
Amol</p>
| <p>Check your kubectl version</p>
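<p>A quick way to compare the client and server versions (just a sketch):</p>
<pre><code>kubectl version
</code></pre>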
<p>I also faced the same problem with the latest build, and then installed the old version of dashboard.</p>
<p>kubectl create -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml</a></p>
|
<p>I need small persistent volumes of around 50 Mb for my pods/containers.
I'm using gce persistent disks as persistent storage for google container engine but I can't create disks smaller than 1 GB</p>
<p>I've created the disk:</p>
<pre><code> gcloud compute disks create disk2 --size=1 --zone=us-central1-a
</code></pre>
<p>and created volume and volume claim with kubectl from my yaml files.</p>
<p>What I have trouble understanding is the size in the volume.yaml: whether I put a size bigger than 1Gi or a size smaller than 1Gi, everything works fine, so what is the purpose of the size in a volume config?
Also the describe pv has:</p>
<pre><code>Source:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: disk1
FSType: ext4
Partition: 0
ReadOnly: false
</code></pre>
<p>Can I change the partition here and partition the disk into smaller chunks ? I couldn't find how to declare the partition number in volume.yaml</p>
| <p>Google Cloud Platform doesn't support smaller sizes no matter what you do, unfortunately.</p>
<p>As for k8s side, no you can't partition the disk (a volume is supposed to be directly mountable, so GCE-PD provider gives you single volume back), and even then GCP doesn't support multiple writers to single volume, even if you have necessary support in kernel.</p>
|
<p>I have a docker container (Hadoop installation <a href="https://github.com/kiwenlau/hadoop-cluster-docker" rel="nofollow noreferrer">https://github.com/kiwenlau/hadoop-cluster-docker</a>) that I can run using the <code>sudo docker run -itd -p 50070:50070 -p 8088:8088 --name hadoop-master kiwenlau/hadoop:1.0</code> command without any issue; however, when trying to deploy the same image to kubernetes, the pod fails to start. To create the deployment I'm using the <code>kubectl run hadoop-master --image=kiwenlau/hadoop:1.0 --port=8088 --port=50070</code> command</p>
<p>Here log of describe pod command</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 6m 1 default-scheduler Normal Scheduled Successfully assigned hadoop-master-2828539450-rnwsd to gke-mtd-cluster-default-pool-6b97d4d0-hcbt
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 1560ff87e0e7357c76cec89f5f429e0b9b5fc51523c79e4e2c12df1834d7dd75
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 1560ff87e0e7357c76cec89f5f429e0b9b5fc51523c79e4e2c12df1834d7dd75
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id c939d3336687a33e69d37aa73177e673fd56d766cb499a4235e89d554d233c37
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id c939d3336687a33e69d37aa73177e673fd56d766cb499a4235e89d554d233c37
6m 6m 2 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 10s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 7d1c67686c039e459ee0ea3936eedb4996a5201f6a1fec02ac98d219bb07745f
6m 6m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 7d1c67686c039e459ee0ea3936eedb4996a5201f6a1fec02ac98d219bb07745f
6m 6m 2 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 20s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"
5m 5m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id a8879a2c794b3e62f788ad56e403cb619644e9219b2c092e760ddeba506b2e44
5m 5m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id a8879a2c794b3e62f788ad56e403cb619644e9219b2c092e760ddeba506b2e44
5m 5m 3 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 40s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"
5m 5m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 8907cdf19c51b87cea6e1e611649e874db2c21f47234df54bd9f27515cee0a0e
5m 5m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 8907cdf19c51b87cea6e1e611649e874db2c21f47234df54bd9f27515cee0a0e
5m 3m 7 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"
3m 3m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 294072caea596b47324914a235c1882dbc521cc355644a1e25ebf06f0e04301f
3m 3m 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 294072caea596b47324914a235c1882dbc521cc355644a1e25ebf06f0e04301f
3m 1m 12 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"
6m 50s 7 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Pulled Container image "kiwenlau/hadoop:1.0" already present on machine
50s 50s 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Created Created container with id 7da7508ac864d04d47639b0d2c374a27c3e8a3351e13a2564e57453cf857426d
50s 50s 1 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Normal Started Started container with id 7da7508ac864d04d47639b0d2c374a27c3e8a3351e13a2564e57453cf857426d
6m 0s 31 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt spec.containers{hadoop-master} Warning BackOff Back-off restarting failed container
49s 0s 5 kubelet, gke-mtd-cluster-default-pool-6b97d4d0-hcbt Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hadoop-master" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=hadoop-master pod=hadoop-master-2828539450-rnwsd_default(562dae39-a757-11e7-a5a3-42010a8401c6)"
</code></pre>
<p>kubectl log output:</p>
<pre><code>kubectl logs hadoop-master-2828539450-rnwsd
* Starting OpenBSD Secure Shell server sshd
...done.
</code></pre>
<p>Note that the docker container itself does not start hadoop. In order to start it, I have to connect to the container and start hadoop manually; however, for now I just want to be able to run the container in K8s.</p>
<p>Thanks</p>
| <p>The equivalent command in kubernetes is</p>
<pre><code>kubectl run -it hadoop-master --image=kiwenlau/hadoop:1.0 --port=8088 --port=50070
</code></pre>
<p>Actually it's not about kubernetes, there is something wrong in the Dockerfile.</p>
<p><a href="https://stackoverflow.com/a/28214133/692076">A docker container exits when its main process finishes.</a></p>
<p>With <code>CMD [ "sh", "-c", "service ssh start; bash"]</code>, the SSH service is started in the background, after which the container stops because its main process has finished. </p>
<p>The CMD should be an executable script/program that keeps running in the foreground, like <code>~/start-hadoop.sh</code>, as sketched below.</p>
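<p>A rough sketch of working around this from the kubernetes side, assuming <code>~/start-hadoop.sh</code> exists in the image and returns after starting the daemons:</p>
<pre><code>kubectl run hadoop-master --image=kiwenlau/hadoop:1.0 --port=8088 --port=50070 \
  --command -- sh -c "service ssh start && ~/start-hadoop.sh && tail -f /dev/null"
</code></pre>
<p>Here <code>tail -f /dev/null</code> simply keeps a foreground process alive so the container doesn't exit; fixing the image's CMD in the same way would be the cleaner option.</p>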
<p>FYI, usually we don't need ssh to the container, as <code>docker exec -it some_container bash</code> should be enough.</p>
|
<p>I am trying to run a <a href="https://nixos.org/nixpkgs/manual/#ssec-pkgs-dockerTools-buildImage" rel="nofollow noreferrer">Nix-built</a> Docker image in tarball form. With docker, <code>docker load -i <path></code> followed by a <code>docker run</code> works fine. Now I've uploaded the tarball to Artifactory and am trying to run the image on K8s with something like:</p>
<pre><code>$ kubectl run foo-service --image=<internal Artifactory>/foo-service/foo-service-latest.tar.gz
</code></pre>
<p>However all I see is:</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
foo-service 1 1 1 0 2h
</code></pre>
<p>Is it possible to load an image from a (remote) tarball in K8s? If yes, what is the command to do so?</p>
| <p>There is no way to do that directly in Kubernetes.</p>
<p>You can do <code>docker load</code> and then <code>docker push</code> to a registry (you can host a <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/registry/README.md" rel="nofollow noreferrer">private registry</a> in Kubernetes or use a public one) and after that <code>kubectl run</code>.</p>
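<p>A rough sketch of that flow (the registry host and image name are placeholders, and the tag inside the tarball depends on how the Nix build named the image):</p>
<pre><code># load the tarball into a local docker daemon
docker load -i foo-service-latest.tar.gz

# re-tag it for a registry the cluster can pull from
docker tag foo-service:latest registry.example.com/foo-service:latest
docker push registry.example.com/foo-service:latest

# reference the registry image, not the tarball
kubectl run foo-service --image=registry.example.com/foo-service:latest
</code></pre>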
<p>Minikube also has a registry addon for local development.</p>
|
<p>Kubernetes features quite a few types of volumes, <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="noreferrer">including <code>emptyDir</code></a>:</p>
<blockquote>
<p>An <code>emptyDir</code> volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the pod can all read and write the same files in the <code>emptyDir</code> volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the <code>emptyDir</code> is deleted forever.</p>
<p>...</p>
<p><strong>By default, emptyDir volumes are stored on whatever medium is backing the node.</strong></p>
</blockquote>
<p>Is the <code>emtpyDir</code> actually mounted on the node, and accessible to a container outside the pod, or to the node FS itself?</p>
| <p>Yes it is also accessible on the node. It is bind mounted into the container (sort of). The source directories are under <strong><code>/var/lib/kubelet/pods/PODUID/volumes/kubernetes.io~empty-dir/VOLUMENAME</code></strong></p>
<p>You can find the location on the host like this:</p>
<pre><code>sudo ls -l /var/lib/kubelet/pods/`kubectl get pod -n mynamespace mypod -o 'jsonpath={.metadata.uid}'`/volumes/kubernetes.io~empty-dir
</code></pre>
|
<h1>AWS CloudWatch Logs in Docker</h1>
<p>Setting an AWS CloudWatch Logs driver in <code>docker</code> is done with <code>log-driver=awslogs</code> and <code>log-opt</code>, for example -</p>
<pre><code>#!/bin/bash
docker run \
--log-driver=awslogs \
--log-opt awslogs-region=eu-central-1 \
--log-opt awslogs-group=whatever-group \
--log-opt awslogs-stream=whatever-stream \
--log-opt awslogs-create-group=true \
wernight/funbox \
fortune
</code></pre>
<h1>My Problem</h1>
<p>I would like to use AWS CloudWatch logs in a Kubernetes cluster, where each pod contains a few Docker containers. Each deployment would have a separate Log Group, and each container would have a separate stream. I could not find a way to send the logging parameters to the docker containers via Kubernetes <code>create</code> / <code>apply</code>.</p>
<h1>My Question</h1>
<p><strong>How can I send the <code>log-driver</code> and <code>log-opt</code> parameters to a Docker container in a pod / deployment?</strong></p>
<h1>What have I tried</h1>
<ul>
<li>Setting relevant parameters for the Docker daemon on each machine. It's possible, but this way all the containers on the same machine would share the same stream - therefore irrelevant for my case.</li>
<li>RTFM for <a href="https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_apply/" rel="noreferrer"><code>kubectl apply</code></a></li>
<li>Reading the <a href="https://github.com/kubernetes/kops/blob/master/vendor/github.com/docker/docker/docs/admin/logging/awslogs.md" rel="noreferrer">relevant README in <code>kops</code></a></li>
<li>Read <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="noreferrer"><code>Kubernetes Logging Architecture</code></a></li>
</ul>
| <p>From what I understand, Kubernetes prefers cluster-level logging to the Docker logging driver.</p>
<p>We could use <a href="http://fluentd.org" rel="noreferrer">fluentd</a> to collect, transform, and push container logs to CloudWatch Logs.</p>
<p>All you need is to create a fluentd DaemonSet with ConfigMap and Secret. Files can be found in <a href="https://github.com/zerda/kube-fluentd-cloudwatch" rel="noreferrer">Github</a>. It has been tested with Kubernetes v1.7.5. </p>
<p>The following are some explanations.</p>
<h1>In</h1>
<p>With a DaemonSet, fluentd collects every container's logs from the host folder <code>/var/lib/docker/containers</code>.</p>
<h1>Filter</h1>
<p><a href="https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter" rel="noreferrer">fluent-plugin-kubernetes_metadata_filter</a> plugin load the pod's metadata from Kubernetes API server.</p>
<p>The log record would be like this.</p>
<pre><code>{
"log": "INFO: 2017/10/02 06:44:13.214543 Discovered remote MAC 62:a1:3d:f6:eb:65 at 62:a1:3d:f6:eb:65(kube-235)\n",
"stream": "stderr",
"docker": {
"container_id": "5b15e87886a7ca5f7ebc73a15aa9091c9c0f880ee2974515749e16710367462c"
},
"kubernetes": {
"container_name": "weave",
"namespace_name": "kube-system",
"pod_name": "weave-net-4n4kc",
"pod_id": "ac4bdfc1-9dc0-11e7-8b62-005056b549b6",
"labels": {
"controller-revision-hash": "2720543195",
"name": "weave-net",
"pod-template-generation": "1"
},
"host": "kube-234",
"master_url": "https://10.96.0.1:443/api"
}
}
</code></pre>
<p>Then promote some fields to the top level with the Fluentd <a href="https://github.com/zerda/kube-fluentd-cloudwatch/blob/master/fluentd-configmap.yaml#L27-L28" rel="noreferrer">record_transformer</a> filter plugin, so the record looks like this (a configuration sketch follows the record):</p>
<pre><code>{
"log": "...",
"stream": "stderr",
"docker": {
...
},
"kubernetes": {
...
},
"pod_name": "weave-net-4n4kc",
"container_name": "weave"
}
</code></pre>
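<p>A minimal sketch of such a filter section; the real configuration lives in the ConfigMap linked above, so treat this as an assumption of what it roughly looks like:</p>
<pre><code><filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    pod_name ${record["kubernetes"]["pod_name"]}
    container_name ${record["kubernetes"]["container_name"]}
  </record>
</filter>
</code></pre>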
<h1>Out</h1>
<p><a href="https://github.com/ryotarai/fluent-plugin-cloudwatch-logs" rel="noreferrer">fluent-plugin-cloudwatch-logs</a> plugin send to AWS CloudWatch Logs.</p>
<p>With <code>log_group_name_key</code> and <code>log_stream_name_key</code> configuration, log group and stream name can be any field of the record.</p>
<pre><code><match kubernetes.**>
@type cloudwatch_logs
log_group_name_key pod_name
log_stream_name_key container_name
auto_create_stream true
put_log_events_retry_limit 20
</match>
</code></pre>
|
<p>I am new to kubernetes and google cloud and I need some help.</p>
<p>We have a pod with a single container running in kubernetes on GKE. There are logging messages that the container sends to its stdout and also some logging messages that it writes into a few log files in its storage.</p>
<p>The log messages sent to the container's stdout are picked up by Stackdriver and we can see them there as expected. I want to get the messages written to the log files into Stackdriver as well.
My understanding from what I read here (<a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-sidecar-container-with-the-logging-agent" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-sidecar-container-with-the-logging-agent</a>) is that a solution is to add a sidecar container to the pod, share a persistent volume between the two containers, somehow get the log files into the shared volume, and then make the sidecar container send the content of the shared log files to its own stdout (e.g. by running a tail command in the sidecar container). Those log messages will then be picked up by Stackdriver, as they are on a container's stdout.</p>
<p>The problem is, how can I share the log files of my main container with the sidecar container. I tried to get the log files in the shared volume using a symbolic link (by adding a ln -s command to the first container), but then the sidecar container was not able to see the content of those files (although it was able to see the list of those files, I think because that would be only a shortcut to the storage of the main container, not a real copy of the files).</p>
<p>Another problem is, when I add a command to the main container (the ln -s command using command/args[]) in the template file where my pod is defined, then the default command of the container image will not be run! So I will not see the original logging messages of my main container in stackdriver anymore!</p>
<p>By the way, it seems even adding the sidecar container to the pod itself, disturbs the normal functionality of my main container. I assume this has to do with how I defined my sidecar container where I am probably missing something?</p>
<p>Thanks in advance for any advice!</p>
<p>Samanta</p>
| <p>You don't need to create any symbolic links. It is enough to mount the volume in your main container so that it writes to the mounted volume directly. The page you linked describes exactly that. Try using the following snippet (<a href="https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/concepts/cluster-administration/two-files-counter-pod-streaming-sidecar.yaml" rel="nofollow noreferrer">source</a>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: busybox
args:
- /bin/sh
- -c
- >
i=0;
while true;
do
echo "$i: $(date)" >> /var/log/1.log;
echo "$(date) INFO $i" >> /var/log/2.log;
i=$((i+1));
sleep 1;
done
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-log-1
image: busybox
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-log-2
image: busybox
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
volumeMounts:
- name: varlog
mountPath: /var/log
volumes:
- name: varlog
emptyDir: {}
</code></pre>
<p>Just use your own main container in place of the "count" container. Of course, if your application sends logs to a different directory than /var/log, you need to change the mountPath in the main container accordingly.</p>
|
<p>I want to delete a label from a node or a pod via the kubernetes API; my kubernetes version: 1.24.</p>
<pre><code>kubectl get pod --show-labels | grep all-flow-0fbah
all-flow-0fbah 1/1 Running 2 9d app=all-flow,op=vps1,version=1001
</code></pre>
<p>I use command as below:</p>
<pre><code> curl --request PATCH --header "Content-Type:application/json-patch+json" http://10.93.78.11:8080/api/v1/namespaces/default/pods/all-flow-0fbah --data '{"metadata":{"labels":{"a":"b"}}}'
</code></pre>
<p>But this doesn't work. Return message as below:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server responded with the status code 415 but did not return more information",
"details": {},
"code": 415
}
</code></pre>
<p>Then I change the curl header like this :</p>
<pre><code>curl --request PATCH --header "Content-Type:application/merge-patch+json" http://10.93.78.11:8080/api/v1/namespaces/default/pods/all-flow-0fbah --data '{"meadata":{"labels":{"op":"vps3"}}}'
</code></pre>
<p>That does not delete the label but adds a new label to the pod.
So can anyone tell me how to delete a label for a pod, like with the command:</p>
<pre><code>kubectl label pod all-flow-0fbah key-
</code></pre>
<p>Thanks!</p>
| <p>Was looking for CLI command myself. Here is what worked for me:</p>
<pre><code>kubectl patch pod <podname> --type=json -p='[{"op": "remove", "path": "/metadata/labels/somelabelkey"}]'
</code></pre>
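<p>The same JSON Patch body should also work against the raw API; note that with <code>Content-Type: application/json-patch+json</code> the payload has to be a JSON Patch array (a list of operations), not a plain object. A sketch using the host and pod from your question, removing the <code>op</code> label:</p>
<pre><code>curl --request PATCH --header "Content-Type: application/json-patch+json" \
  http://10.93.78.11:8080/api/v1/namespaces/default/pods/all-flow-0fbah \
  --data '[{"op": "remove", "path": "/metadata/labels/op"}]'
</code></pre>
<p>For a node, the equivalent kubectl form would be <code>kubectl patch node <nodename> --type=json -p='[{"op": "remove", "path": "/metadata/labels/somelabelkey"}]'</code>.</p>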
|
<p>I have a cluster on google cloud container engine with 6 <code>n1-standard-1</code> machines.</p>
<p>I deployed several services and pods on this cluster and sometimes they fail with the only reason being <code>FailedSync</code> and no more explanation; I have no idea why they fail. The virtual machines are not overloaded: only 6% of the CPU is used and less than 1Gi of memory.</p>
<p>Here are some events from the describe command:</p>
<p><a href="https://i.stack.imgur.com/FLFnz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FLFnz.png" alt="events 1"></a>
<a href="https://i.stack.imgur.com/Khsh6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Khsh6.png" alt="enter image description here"></a></p>
<p>Pods filtered by <code>is system object: true</code> have the same problem; some of them have more than 900 restarts in 4 days...</p>
<p><a href="https://i.stack.imgur.com/Mkt6R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mkt6R.png" alt="enter image description here"></a></p>
<p>Maybe I am missing something in my kubernetes configuration, but I have no idea what...</p>
<p>Thanks for your help</p>
| <p>I think the best way to find out the issue is to just ssh to the node and use <code>sudo docker logs $CONTAINER_ID</code> to see what happened to your applications.</p>
<p>You can tell which nodes your applications are deployed to with <code>kubectl describe po $PO_NAME</code> or simply <code>kubectl get po -o wide</code>.</p>
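<p>Since the containers are restarting, it may be enough to look at the logs of the previous (crashed) container instance without sshing at all, for example:</p>
<pre><code>kubectl logs $PO_NAME --previous
kubectl get events --all-namespaces
</code></pre>
<p>This is just a sketch; <code>--previous</code> shows the logs of the last terminated container in the pod, and the events often contain the reason behind a <code>FailedSync</code>.</p>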
|
<p>In this <a href="https://github.com/kubernetes/ingress/blob/master/examples/deployment/nginx/nginx-ingress-controller.yaml" rel="nofollow noreferrer">example</a> on the <a href="https://github.com/kubernetes/ingress" rel="nofollow noreferrer">Kubernetes Ingress git repo</a>, I see that the default-backend service and the Nginx Ingress controller are deployed in the kube-system namespace. But in <a href="https://github.com/kubernetes/ingress/blob/master/examples/static-ip/nginx/nginx-ingress-controller.yaml" rel="nofollow noreferrer">this example regarding static-ip</a>, they don't specify the kube-system namespace.</p>
<p>And for both of those examples, I've found that placing Ingress itself (nginx-ingress.yaml) in the default namespace works.</p>
<p>Should I be putting things in the kube-system namespace? And more generally, what is the significance of the kube-system namespace?</p>
<p><a href="https://stackoverflow.com/questions/43987244/what-is-the-kube-system-namespace-for">This other StackOverflow question</a> is the only other thing I've found talking about the kube-system namespace.</p>
| <p><code>kube-system</code> is just a namespace like any other. It is usually created by default though, and most people put cluster-related components in it. Note, however, that e.g. you will find the <code>kubernetes.default</code> service in the default namespace anyway.</p>
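<p>A quick way to see what your cluster keeps there (just a sketch):</p>
<pre><code>kubectl get all --namespace=kube-system
</code></pre>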
|
<p>I have deployed our kubernetes cluster in AWS using the kube-up scripts and ec2 instances. Can someone help me in figuring out how to upgrade this cluster to 1.5.8 or to the latest kubernetes release.</p>
| <p>The way I gained confidence about the kind of upgrade you are describing was by setting up a Vagrant cluster of 1.5 apiservers and nodes against the etcd:2 servers that were used at the time of 1.5, and then practicing upgrading them to understand the moving parts and the ways it can go foul.</p>
<p>Your use of <code>kube-up</code> is about the most manual(?) mechanism I know of, so you're starting from a mild disadvantage and thus need all the practice you can get.</p>
|
<p><a href="https://i.stack.imgur.com/Ioq1w.png" rel="nofollow noreferrer">Docker daemon in minikube</a></p>
<p>When I do </p>
<pre><code>docker version
</code></pre>
<p>I have </p>
<pre><code>Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.23)
</code></pre>
<p>I want to use <code>export DOCKER_API_VERSION=1.23</code> to downgrade docker.
But it doesn't work when I use minikube and use <code>eval $(minikube docker-env)</code>. The client version is always 1.23</p>
<p>Please see the image about the comparison before and after using minikube docker daemon. The DOCKER_API_VERSION is always 1.23. But the client version is not 1.23.</p>
| <p>I cannot reproduce the issue with the newest minikube image and the latest docker installation. I think that the cause is that you are using an old version of minikube. Could you check that?</p>
<pre><code> 2017-10-04 17:17:34 β ip-192-168-200-196 in ~
β β eval $(minikube docker-env)
2017-10-04 17:17:36 β ip-192-168-200-196 in ~
β β docker version
Client:
Version: 17.09.0-ce
API version: 1.24 (downgraded from 1.23)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:09 2017
OS/Arch: darwin/amd64
Server:
Version: 1.12.6
API version: 1.24 (minimum version )
Go version: go1.6.4
Git commit: 78d1802
Built: Wed Jan 11 00:23:16 2017
OS/Arch: linux/amd64
Experimental: false
2017-10-04 17:17:41 β ip-192-168-200-196 in ~
β β minikube version
minikube version: v0.22.2
</code></pre>
|
<p>I'm attempting to create a Kubernetes CronJob to run an application every minute. </p>
<p>A prerequisite is that I need to get my application code onto the container that runs within the CronJob. I figure that the best way to do so is to use a persistent volume, a pvclaim, and then defining the volume and mounting it to the container. I've done this successfully with containers running within a Pod, but it appears to be impossible within a CronJob? Here's my attempted configuration:</p>
<pre><code>apiVersion: batch/v2alpha1
kind: CronJob
metadata:
name: update_db
spec:
volumes:
- name: application-code
persistentVolumeClaim:
claimName: application-code-pv-claim
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: update-fingerprints
image: python:3.6.2-slim
command: ["/bin/bash"]
args: ["-c", "python /client/test.py"]
restartPolicy: OnFailure
</code></pre>
<p>The corresponding error:</p>
<blockquote>
<p>error: error validating "cron-applications.yaml": error validating
data: found invalid field volumes for v2alpha1.CronJobSpec; if you
choose to ignore these errors, turn validation off with
--validate=false</p>
</blockquote>
<p>I can't find any resources that show that this is possible. So, if not possible, how does one solve the problem of getting application code into a running CronJob?</p>
| <p>A <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#cronjob-v1beta1-batch" rel="noreferrer">CronJob</a> uses a PodTemplate like everything else based on Pods, and can use Volumes. You placed your Volume specification directly in the CronJobSpec instead of the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#podspec-v1-core" rel="noreferrer">PodSpec</a>; use it like this:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: update-db
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: update-fingerprints
image: python:3.6.2-slim
command: ["/bin/bash"]
args: ["-c", "python /client/test.py"]
volumeMounts:
- name: application-code
mountPath: /where/ever
restartPolicy: OnFailure
volumes:
- name: application-code
persistentVolumeClaim:
claimName: application-code-pv-claim
</code></pre>
|
<p>I'm trying to dig into Rancher, and was wondering if having Rancher plugged in with Kubernetes has any additional benefits over Cattle which is Rancher's in home orchestration framework. So far, I haven't been able to figure out why someone would opt for Rancher with Kubernetes. Does it only help ease out the initial setup of Kubernetes? How do these options differ from a stand alone setup of Kubernetes ?</p>
| <p>There is now a very good answer to this. Rancher just moved 100% into Kubernetes by announcing Rancher 2.0: <a href="http://rancher.com/announcing-rancher-2-0/" rel="noreferrer">http://rancher.com/announcing-rancher-2-0/</a>. It does not use Cattle anymore.</p>
|
<p>I am creating a long-lived jumpbox to run inside of my kubernetes cluster. It uses an EBS volume for the home folder, holds important copies of my code, and gives me fast access for routine behavior. The problem is that I can't use GNU <code>screen</code> to create similarly long-lived sessions.</p>
<p>Here's my Dockerfile:</p>
<pre><code>FROM ubuntu:zesty
ENV KUBECTL_VERSION=v1.7.6
RUN apt-get update && \
apt-get install -y \
htop vim sysstat \
build-essential make \
ruby ruby-dev rake \
postgresql-client libpq-dev \
curl wget \
python python-pip && \
pip install awscli && \
gem install --no-rdoc --no-ri bundler
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$KUBECTL_VERSION/bin/linux/amd64/kubectl && \
chmod a+x kubectl && \
mv kubectl /usr/local/bin/kubectl
ADD dotfiles /root-dotfiles
ADD code /root-code
ADD docker-entrypoint.sh /docker-entrypoint.sh
</code></pre>
<p>And here's how I deploy it:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: doit
purpse: jumpbox
name: doit
namespace: default
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
app: doit
purpose: jumpbox
template:
metadata:
labels:
app: doit
purpose: jumpbox
spec:
containers:
- image: 123.dkr.ecr.eu-central-1.amazonaws.com/doit:latest
imagePullPolicy: Always
name: doit
command: ["sleep", "infinity"]
workingDir: /root
env:
- name: TERM
value: xterm
volumeMounts:
- mountPath: /root
subPath: root-homedir
name: doit-home
volumes:
- name: doit-home
persistentVolumeClaim:
claimName: doit-home
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 1
</code></pre>
<p>But when I <code>kubectl exec</code> in to the container and try creating a screen:</p>
<pre><code>root@doit-2561276907-kl2h6:~# screen -S asdf
Cannot open your terminal '' - please check.
</code></pre>
<p>I can work around this by doing:</p>
<pre><code>root@doit-2561276907-kl2h6:~# script /dev/null
Script started, file is /dev/null
# bash
root@doit-2561276907-kl2h6:~# screen -S asdf
# now inside of the screen
</code></pre>
<p>Also, here's how I connect to the Pod:</p>
<pre><code>function doit {
doit_pods=$(kubectl get pods -l 'app==doit' -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
kubectl exec -it $doit_pods bash
}
</code></pre>
<p>So under the hood I am doing <code>kubectl exec -it</code>.</p>
<p>But I don't want all the users of this jumpbox instance to have to run script and switch back to bash. How can I create Pod which is already configured properly for <code>screen</code>?</p>
| <p>I tried this with the centos image and it works; the ubuntu image is checked below.</p>
<pre><code>kubectl run -it screentest --image=centos -- bash
kubectl exec -it screentest-cbd49447f-286wq -- bash
yum -y install screen
screen
</code></pre>
<p>Tested this in ubuntu also, it works for me.</p>
<pre><code>kubectl run -it ubuntuest --image=ubuntu -- bash
apt-get update -qq && apt-get install screen -y
screen
</code></pre>
<p>I also tried with <code>kubectl exec -it ubuntuest-78df75fbb-9sk6f -- bash</code> and it works.</p>
|
<p>I am getting "partner is not a Perforce client/server" when using ingress to route the service, but I am able to directly query the perforce server in the Kubernetes cluster. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-notls
namespace: default
annotations:
kubernetes.io/ingress.class: "gce"
spec:
rules:
- host: perforce.domain.com
http:
paths:
- path: /*
backend:
serviceName: p4-server
servicePort: 80
</code></pre>
<p>p4 service </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: p4-server
spec:
type: NodePort
ports:
- port: 80
targetPort: 1666
nodePort: 30166
name: p4-server
selector:
run: p4-server
</code></pre>
<p>if I am in the cluster: </p>
<pre><code>$ p4 -p p4-server:80 info
User name: root
Client name: platform-3101934619-wtxs5
Client host: platform-3101934619-wtxs5
Client unknown.
Current directory: /
Peer address: 10.4.0.218:49924
Client address: 10.4.0.218
Server address: p4-server-1400441787-fcmd9:1666
Server root: /codelingo
Server date: 2017/10/04 02:19:17 +0000 UTC
Server uptime: 380:53:52
Server version: P4D/LINUX26X86_64/2017.1/1511680 (2017/05/05)
Server license: none
Case Handling: sensitive
</code></pre>
<p>p4 logs:</p>
<pre><code>Perforce server info:
2017/10/04 02:19:17 pid 23038 root@platform-3101934619-wtxs5 10.4.0.218 [p4/2017.1/LINUX26X86_64/1511680] 'user-info'
</code></pre>
<p>Failed attempt via ingress:</p>
<pre><code>$ p4 -p perforce.domain.com:80 info
(hangs)
</code></pre>
<p>p4 logs:</p>
<pre><code>Perforce server error:
Date 2017/10/04 02:18:30:
Pid 23012
Connection from 10.4.0.1:38622 broken.
RpcTransport: partner is not a Perforce client/server.
RpcTransport: partner is not a Perforce client/server.
RpcTransport: partner is not a Perforce client/server.
</code></pre>
| <blockquote>
<p>Peer address: 10.4.0.218:49924</p>
</blockquote>
<p>looks suspiciously like a bi-directional protocol, meaning that client and server expect to have unfettered access to one another, ala (non-passive mode) ftp</p>
<pre><code>http:
  paths:
  - path: /*
</code></pre>
<p>I don't believe that <code>http:</code> stanza is an accurate statement, as I doubt super, super seriously that Perforce speaks http between the client and the server. There are ongoing discussions around teaching Ingress about TCP, but for the time being I think you've gotten most of the way to where you want to go by already having a <code>NodePort</code> for :1666</p>
<p>Create a GCE <strong>tcp</strong> load balancer (which effectively is just a firewall to keep the wild Internet away from your cluster) and point its 1666 to port 30166 on every Node in your cluster. It's unclear if anything further needs to happen around Perforce, but from the "establishing tcp/ip connectivity between outsiders and your in-cluster P4" point of view, I think that would do it</p>
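<p>One way to get such a TCP load balancer on GKE is to let Kubernetes create it for you with a <code>LoadBalancer</code> Service instead of the Ingress; a sketch, reusing the <code>run: p4-server</code> selector from your existing service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: p4-server-lb
spec:
  type: LoadBalancer
  ports:
  - name: p4
    port: 1666
    targetPort: 1666
  selector:
    run: p4-server
</code></pre>
<p>Clients would then use <code>p4 -p <load-balancer-ip>:1666 info</code> instead of going through the HTTP ingress.</p>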
|
<p>I would like to know, what's the best practice to handle configs for a Java App in Kubernetes?
I have seen some examples with System variables in the yaml file. Or is there a "better" approach with java property files?</p>
<p>Is there also a way to let different instances of a pod use different configurations?</p>
<p>Thanks in advance!
Mananana</p>
| <p>There is no canonical solution; it depends on the case.</p>
<p>If you have one or a few parameters, passing them as environment variables would be OK.</p>
<p>If you have many parameters (let's say you have different sets of parameters for the production and test environments), it is more convenient to gather them into a property file and pass the property file name at application startup. That can be done either via an environment variable or simply as an argument to the startup command.</p>
<p>Property files (or profiles) are also well supported by the popular frameworks, like Spring Boot, which let you pass a profile during startup. In that case the Kubernetes yaml could look something like this:</p>
<pre><code>env:
- name: SPRING_PROFILES_ACTIVE
value: production
command: [ "bash", "-c", "java -jar App.jar"]
</code></pre>
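<p>If you prefer keeping a property file outside the image, a ConfigMap mounted as a file is a common pattern. A rough sketch (all names and properties here are placeholders), which also lets different deployments of the same image point at different ConfigMaps:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  application.properties: |
    spring.profiles.active=production
    greeting.message=hello
---
# fragment of the Deployment's pod template spec
spec:
  containers:
  - name: myapp
    image: myapp:latest
    command: ["bash", "-c", "java -jar App.jar --spring.config.location=/config/application.properties"]
    volumeMounts:
    - name: config
      mountPath: /config
  volumes:
  - name: config
    configMap:
      name: myapp-config
</code></pre>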
|
<p>Let's say you need to run a custom app listening on a fixed port on every worker node, like a monitoring agent. Here's my POC for the case:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: monitor
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
app: monitor-nginx
spec:
# nodeSelector:
# app: node-monitor-nginx
containers:
- name: node-monitor-nginx-container
image: nginx:alpine
ports:
- containerPort: 80
hostPort: 31179
protocol: TCP
</code></pre>
<p>Let's say that my agent reports node status via an nginx pod, so you can get the data on TCP 31179 on every node.</p>
<p>Why is the pod not listening on that port on the worker nodes?</p>
<pre><code>root@ip-10-0-1-109:~# telnet 10.0.1.109 31179
Trying 10.0.1.109...
telnet: Unable to connect to remote host: Connection refused
</code></pre>
| <p>There is an issue with hostPort when a CNI plugin is used; you can find an informative discussion in this <a href="https://github.com/kubernetes/kubernetes/issues/23920" rel="nofollow noreferrer">GitHub issue</a>.</p>
<p>Other than that, you might also look into <code>hostNetwork: true</code> as a workaround, as sketched below.</p>
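<p>A rough sketch of your DaemonSet's pod spec with that workaround; with <code>hostNetwork: true</code> the container binds directly in the node's network namespace, so <code>hostPort</code> goes away and the agent is reachable on whatever port the process itself listens on (80 for this nginx image) rather than 31179:</p>
<pre><code>    spec:
      hostNetwork: true
      containers:
      - name: node-monitor-nginx-container
        image: nginx:alpine
        ports:
        - containerPort: 80
          protocol: TCP
</code></pre>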
|
<p>I have a Go struct for which I want to generate an OpenAPI schema automatically. Once I have an OpenAPI definition of that struct, I want to generate a JSONSchema from it, so that I can validate the input data that comes in and is going to be parsed into those structs.</p>
<p>The struct looks like the following:</p>
<pre><code>// mySpec: io.myapp.MinimalPod
type MinimalPod struct {
Name string `json:"name"`
// k8s: io.k8s.kubernetes.pkg.api.v1.PodSpec
v1.PodSpec
}
</code></pre>
<p>The above struct is clearly an augmentation of the Kubernetes <code>PodSpec</code>.</p>

<p>The approach I have used is to <a href="https://github.com/kedgeproject/json-schema-generator/blob/30c91750ee456480c7021ff1c30df455a22856ae/main.go#L32" rel="nofollow noreferrer">generate the <code>definition</code> part</a> for my struct <code>MinimalPod</code>; the definition for <code>PodSpec</code> comes from the <a href="https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json" rel="nofollow noreferrer">upstream OpenAPI spec</a> of Kubernetes. <code>PodSpec</code> has the key <code>io.k8s.kubernetes.pkg.api.v1.PodSpec</code> in the <a href="https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json" rel="nofollow noreferrer">upstream OpenAPI spec</a>, and that definition is <a href="https://github.com/kedgeproject/json-schema-generator/blob/30c91750ee456480c7021ff1c30df455a22856ae/parsego.go#L152" rel="nofollow noreferrer">injected from there into my Properties</a>. In my code that parses the above struct I have templates for what to do if a <a href="https://github.com/kedgeproject/json-schema-generator/blob/30c91750ee456480c7021ff1c30df455a22856ae/parsego.go#L289" rel="nofollow noreferrer">struct field is a <code>string</code></a>. </p>

<p>If the field has a comment that <a href="https://github.com/kedgeproject/json-schema-generator/blob/30c91750ee456480c7021ff1c30df455a22856ae/parsego.go#L405" rel="nofollow noreferrer">starts with <code>k8s: ...</code></a>, the next part is the Kubernetes object's <em>OpenAPI definition key</em>. In our case the <em>OpenAPI definition key</em> is <code>io.k8s.kubernetes.pkg.api.v1.PodSpec</code>, so I retrieve that field's definition from the upstream OpenAPI definition and embed it into the definition of my struct.</p>
<p>The OpenAPI definition generated for this struct is injected into the Kubernetes OpenAPI schema's definitions under the key <code>io.myapp.MinimalPod</code>. Now I can use the tool <a href="https://github.com/garethr/openapi2jsonschema" rel="nofollow noreferrer"><code>openapi2jsonschema</code></a> to generate a JSONSchema from it, which produces a JSONSchema file named <code>MinimalPod.json</code>.</p>

<p>The <code>jsonschema</code> tool and the file <code>MinimalPod.json</code> can then be used to validate the input given to my tool's parser, to check that all fields were provided correctly.</p>

<p>Is this the right approach, or is there a tool/library to which I can feed Go structs and get an OpenAPI schema back? It would be fine if it cannot identify where to inject the Kubernetes OpenAPI schema from; even automatic parsing of Go structs into an OpenAPI definition would be much appreciated.</p>
<hr>
<h2>Update 1</h2>
<p>After following @mehdy 's instructions, this is what I have tried:</p>
<p>I have used this import path <code>github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1</code> to import the <code>PodSpec</code> definition instead of <code>k8s.io/api/core/v1</code> and code looks like this:</p>
<pre><code>package foomodel
import "github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1"
// MinimalPod is a minimal pod.
// +k8s:openapi-gen=true
type MinimalPod struct {
Name string `json:"name"`
v1.PodSpec
}
</code></pre>
<p>Now when I generate the same with flag <code>-i</code> changed from <code>k8s.io/api/core/v1</code> to <code>github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1</code></p>
<pre><code>$ go run example/openapi-gen/main.go -i k8s.io/kube-openapi/example/model,github.com/kedgeproject/kedge/vendor/k8s.io/client-go/pkg/api/v1 -h example/foomodel/header.txt -p k8s.io/kube-openapi/example/foomodel
</code></pre>
<p>This is what is generated:</p>
<pre><code>$ cat openapi_generated.go
// +build !ignore_autogenerated
/*
======
Some random text
======
*/
// This file was autogenerated by openapi-gen. Do not edit it manually!
package foomodel
import (
spec "github.com/go-openapi/spec"
common "k8s.io/kube-openapi/pkg/common"
)
func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
return map[string]common.OpenAPIDefinition{
"k8s.io/kube-openapi/example/model.Container": {
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Container defines a single application container that you want to run within a pod.",
Properties: map[string]spec.Schema{
"health": {
SchemaProps: spec.SchemaProps{
Description: "One common definitions for 'livenessProbe' and 'readinessProbe' this allows to have only one place to define both probes (if they are the same) Periodic probe of container liveness and readiness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes",
Ref: ref("k8s.io/client-go/pkg/api/v1.Probe"),
},
},
"Container": {
SchemaProps: spec.SchemaProps{
Ref: ref("k8s.io/client-go/pkg/api/v1.Container"),
},
},
},
Required: []string{"Container"},
},
},
Dependencies: []string{
"k8s.io/client-go/pkg/api/v1.Container", "k8s.io/client-go/pkg/api/v1.Probe"},
},
}
}
</code></pre>
<p>This is all of the configuration that gets generated, whereas when I switch back to <code>"k8s.io/api/core/v1"</code> the auto-generated config code is more than 8k lines. What am I missing here?</p>

<p>Here the definitions of <code>k8s.io/client-go/pkg/api/v1.Container</code> and <code>k8s.io/client-go/pkg/api/v1.Probe</code> are missing, while when I use <code>k8s.io/api/core/v1</code> as the import everything is generated.</p>
<p><strong>Note</strong>: To generate above steps, please <code>git clone https://github.com/kedgeproject/kedge</code> in <code>GOPATH</code>.</p>
| <p>You can use <a href="https://github.com/kubernetes/kube-openapi" rel="nofollow noreferrer">kube-openapi</a> package for this. I am going to add a sample to the repo but I've tested this simple model:</p>
<pre><code>// Car is a simple car model.
// +k8s:openapi-gen=true
type Car struct {
Color string
Capacity int
// +k8s:openapi-gen=false
HiddenFeature string
}
</code></pre>
<p>Assuming you created this file in the <code>example/model</code> package, run:</p>
<pre><code>go run example/openapi-gen/main.go -h example/model/header.txt -i k8s.io/kube-openapi/example/model -p k8s.io/kube-openapi/example/model
</code></pre>
<p>(you also need to add a header.txt file). You should see a new file called openapi_generated.go created in the example/model folder. This is an intermediate generated file that has your OpenAPI model in it:</p>
<pre><code>func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
return map[string]common.OpenAPIDefinition{
"k8s.io/kube-openapi/example/model.Car": {
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Description: "Car is a simple car model.",
Properties: map[string]spec.Schema{
"Color": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"Capacity": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
Format: "int32",
},
},
},
Required: []string{"Color", "Capacity"},
},
},
Dependencies: []string{},
},
}
}
</code></pre>
<p>From there you should be able to call the generated method, get the model for your Type and get its Schema.</p>
<p>With some go get magic and a small change to the command line, I was able to generate the definition for your model. Here is what you should change in your code:</p>
<pre><code>package model
import "k8s.io/api/core/v1"
// MinimalPod is a minimal pod.
// +k8s:openapi-gen=true
type MinimalPod struct {
Name string `json:"name"`
v1.PodSpec
}
</code></pre>
<p>and then change the run command a little to include PodSpec in the generation:</p>
<pre><code>go run example/openapi-gen/main.go -h example/model/header.txt -i k8s.io/kube-openapi/example/model,k8s.io/api/core/v1 -p k8s.io/kube-openapi/example/model
</code></pre>
<p>Here is what I got: <a href="https://gist.github.com/mbohlool/e399ac2458d12e48cc13081289efc55a" rel="nofollow noreferrer">https://gist.github.com/mbohlool/e399ac2458d12e48cc13081289efc55a</a></p>
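<p>In case it helps with the JSONSchema step, here is a rough, untested sketch of how the generated definitions could be dumped as JSON afterwards, which is the piece you would then feed to <code>openapi2jsonschema</code> or a validator. The import path, the map key and the simple <code>#/definitions/...</code> ref callback are assumptions you may need to adjust for your layout:</p>

<pre><code>package main

import (
	"encoding/json"
	"fmt"

	"github.com/go-openapi/spec"
	foomodel "k8s.io/kube-openapi/example/foomodel" // package containing openapi_generated.go
)

func main() {
	// Resolve every referenced type to a plain "#/definitions/<name>" pointer.
	defs := foomodel.GetOpenAPIDefinitions(func(name string) spec.Ref {
		return spec.MustCreateRef("#/definitions/" + name)
	})

	// Map keys follow "<package path>.<type name>"; adjust to your package.
	def := defs["k8s.io/kube-openapi/example/foomodel.MinimalPod"]

	out, err := json.MarshalIndent(def.Schema, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
</code></pre>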
|
<p>I have a kubernetes service running on Azure. After the deployment and service are created, the service publishes an External-IP address and I am able to access the service on that IP:Port.</p>
<p>However, I want to access the service through a regular domain name. I know that the kubernetes cluster running on Azure has its own DNS, but how can I figure out what the service's DNS name is?</p>

<p>I am running multiple services, and they refer to one another using the <_ServiceName>.<_Namespace>.svc.cluster.local naming convention, but if I attempt to access the Service using <_ServiceName>.<_Namespace>.svc.<_kubernetesDNS>.<_location>.azureapp.com, it doesn't work.</p>
<p>Any help would be greatly appreciated.</p>
| <p>In Azure, you can use the "Public IP addresses" resource and associate the public IP that is being used by your service with a DNS name label under the default Azure DNS namespace (<code>label.location.cloudapp.azure.com</code>),
e.g. <code>demo-k8s-service.centralindia.cloudapp.azure.com</code>.</p>

<p>Note: the DNS name label must be unique within the region.</p>

<p>Otherwise, try creating an Azure DNS zone to use your own domain.</p>
<p><a href="https://i.stack.imgur.com/FxWJy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FxWJy.png" alt="associating a public Ip to default dns record provided by azure "></a></p>
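<p>If you prefer the CLI over the portal, something along these lines should attach the DNS label; the resource group and public IP names below are placeholders, and for a managed cluster the public IP usually lives in the automatically created infrastructure resource group:</p>

<pre><code># Attach a DNS name label to the public IP used by the service's load balancer
az network public-ip update \
  --resource-group myResourceGroup \
  --name myServicePublicIP \
  --dns-name demo-k8s-service

# The service is then reachable at
# demo-k8s-service.<location>.cloudapp.azure.com:<service-port>
</code></pre>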
|
| <p>I am trying to automate kubernetes worker nodes using the official <a href="https://github.com/kubernetes-incubator/client-python" rel="noreferrer">kubernetes python-client</a>. I am currently looking for a way to <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="noreferrer">safely move all the running applications to other nodes</a>. We can do so using "kubectl drain", but I did not find a way to simulate that functionality using the Python client. Does this library support drain functionality yet?</p>
| <p>I found the answer. The Python client does have support for draining a node, but it is not a single command. The "kubectl drain" operation utilizes the <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api" rel="noreferrer">Eviction API</a> to safely remove all the workloads running on a node. The python-client has a function <a href="https://github.com/kubernetes-incubator/client-python/blob/master/kubernetes/docs/CoreV1Api.md#create_namespaced_pod_eviction" rel="noreferrer">create_namespaced_pod_eviction</a> that safely evicts a single named pod in a namespace; calling it for every pod scheduled on the node reproduces the drain behaviour. However, "safely" depends on the <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="noreferrer">Pod Disruption Budgets (PDB)</a> that you have defined for the apps running on that node.</p>
<p>I am posting this answer hoping that someone might find it useful :)</p>
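<p>For anyone looking for a concrete starting point, here is a rough sketch of the idea (cordon the node, then evict each pod on it). The node name is a placeholder, the eviction body class may be named <code>V1Eviction</code> in other client versions, and unlike <code>kubectl drain</code> this sketch makes no attempt to skip DaemonSet-managed or mirror pods:</p>

<pre><code>from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

node_name = "worker-1"  # placeholder

# 1. Cordon the node so no new pods get scheduled on it (like `kubectl cordon`).
core.patch_node(node_name, {"spec": {"unschedulable": True}})

# 2. Evict every pod currently running on the node; the Eviction API
#    respects PodDisruptionBudgets, unlike a plain delete.
pods = core.list_pod_for_all_namespaces(
    field_selector="spec.nodeName={}".format(node_name))
for pod in pods.items:
    eviction = client.V1beta1Eviction(
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name,
            namespace=pod.metadata.namespace))
    core.create_namespaced_pod_eviction(
        name=pod.metadata.name,
        namespace=pod.metadata.namespace,
        body=eviction)
</code></pre>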
|