prompt | response
---|---|
<p>I am trying to use kubelet to start the Kubernetes API server as a static pod, but it failed with the following errors:</p>
<pre><code>I0523 11:13:41.192680 9248 remote_runtime.go:41] Connecting to runtime service /var/run/dockershim.sock
I0523 11:13:41.196764 9248 kuberuntime_manager.go:171] Container runtime docker initialized, version: 1.12.3, apiVersion: 1.24.0
E0523 11:13:41.199242 9248 kubelet.go:1165] Image garbage collection failed: unable to find data for container /
E0523 11:13:41.199405 9248 event.go:208] Unable to write event: 'Post https://127.0.0.1:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: getsockopt: connection refused' (may retry after sleeping)
I0523 11:13:41.199529 9248 server.go:869] Started kubelet v1.6.4
I0523 11:13:41.199711 9248 server.go:127] Starting to listen on 0.0.0.0:10250
I0523 11:13:41.200017 9248 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
I0523 11:13:41.203018 9248 server.go:294] Adding debug handlers to kubelet server.
E0523 11:13:41.207486 9248 kubelet.go:1661] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0523 11:13:41.207554 9248 kubelet.go:1669] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
E0523 11:13:41.214231 9248 kubelet.go:1246] Failed to start ContainerManager failed to initialise top level QOS containers: root container /kubepods doesn't exist
</code></pre>
<p>The full log is here: <a href="https://travis-ci.org/reachlin/k8s0/jobs/235187507" rel="nofollow noreferrer">https://travis-ci.org/reachlin/k8s0/jobs/235187507</a></p>
<p>This is the api server deployment yml: <a href="https://github.com/reachlin/k8s0/blob/master/roles/k8s/templates/apiserver.yml.j2" rel="nofollow noreferrer">https://github.com/reachlin/k8s0/blob/master/roles/k8s/templates/apiserver.yml.j2</a></p>
<p>Later, I found that the error that actually matters is:</p>
<pre><code>Failed to start ContainerManager failed to initialise top level QOS containers: root container /kubepods doesn't exist
</code></pre>
| <p>After some research, I found the solution here: <a href="https://github.com/kubernetes/kubernetes/issues/43704" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/43704</a></p>
<p>It is fixed by adding these two parameters to kubelet:</p>
<pre><code> --cgroups-per-qos=false
--enforce-node-allocatable=""
</code></pre>
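<p>Where exactly these flags go depends on how kubelet is launched. A minimal sketch, assuming a systemd-managed kubelet whose unit expands a <code>$KUBELET_EXTRA_ARGS</code> variable (the drop-in path and variable name are assumptions, not taken from the question):</p>
<pre><code># /etc/systemd/system/kubelet.service.d/90-qos.conf  (hypothetical drop-in)
[Service]
Environment="KUBELET_EXTRA_ARGS=--cgroups-per-qos=false --enforce-node-allocatable="
</code></pre>
<p>followed by <code>systemctl daemon-reload && systemctl restart kubelet</code>. If kubelet is started directly from a script, the two flags can simply be appended to its command line.</p>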
|
<p>I initialized a k8s cluster master with kubeadm, but I am very confused. The version of kubeadm:</p>
<pre><code># ./kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>When I run <code>kubeadm init</code>, it tells me I must start kubelet.service:</p>
<pre><code># ./kubeadm init
Running pre-flight checks
preflight check errors:
kubelet service is not active, please run 'systemctl start kubelet.service'
</code></pre>
<p>And then, when I retry this command after <code>systemctl restart kubelet.service</code>, it tells me <code>Port 10250 in use</code>:</p>
<pre><code># systemctl restart kubelet.service
# ./kubeadm init
Running pre-flight checks
preflight check errors:
Port 10250 is in use
/var/lib/kubelet is not empty
</code></pre>
<p>Is there any way to run kubelet without that port, or can I change the port of kubelet?</p>
| <p>The reason it says the port is in use is that you already ran <code>kubeadm init</code> once and it has already changed a number of things. </p>
<p>Run <code>kubeadm reset</code> first to undo all of the changes from the first time you ran it.</p>
<p>Then run <code>systemctl restart kubelet</code></p>
<p>Finally, when you run <code>kubeadm init</code> you should no longer get the error.</p>
<p>Any time kubeadm does something that's not right or otherwise fails, it needs to be reset to work properly again.</p>
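<p>A quick sketch of the full sequence (assuming a systemd-managed kubelet, as in the question):</p>
<pre><code>kubeadm reset
systemctl restart kubelet.service
kubeadm init
</code></pre>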
|
<p>I'm trying to make use of an init container to prepare some files before the main container starts up. In the init container I'd like to mount a <code>hostPath</code> volume so that I can prepare some files there for the main container.</p>
<p>My cluster is running a pre-1.6 version of Kubernetes, so I'm using the <code>meta.annotation</code> syntax:</p>
<pre><code>pod.beta.kubernetes.io/init-containers: '[
{
"name": "init-myservice",
"image": "busybox",
"command": ["sh", "-c", "mkdir /tmp/jack/ && touch cd /tmp/jack && touch a b c"],
"volumeMounts": [{
"mountPath": "/tmp/jack",
"name": "confdir"
}]
}
]'
</code></pre>
<p>But it doesn't seem to work. The addition of <code>volumeMounts</code> causes the container <code>init-myservice</code> to go into CrashLoop. Without it the pod gets created successfully, but it doesn't achieve what I want.</p>
<p>Is it not possible to mount a volume in an init container in <1.5?
What about 1.6+?</p>
| <p>You don't need a <code>hostPath</code> volume to share data generated by an init container with the other containers of the Pod. You can use <code>emptyDir</code> to achieve the same result. The benefit of using <code>emptyDir</code> is that you don't need to do anything on the host, and it will work on any kind of cluster, even if you don't have access to the nodes of that cluster.</p>
<p>Another set of problems with using <code>hostPath</code> is setting proper permissions on that folder on the host; also, if you are using an SELinux-enabled distro, you have to set up the right context on that directory.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: init
  labels:
    app: init
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
      {
        "name": "download",
        "image": "axeclbr/git",
        "command": [
          "git",
          "clone",
          "https://github.com/mdn/beginner-html-site-scripted",
          "/var/lib/data"
        ],
        "volumeMounts": [
          {
            "mountPath": "/var/lib/data",
            "name": "git"
          }
        ]
      }
    ]'
spec:
  containers:
  - name: run
    image: docker.io/centos/httpd
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /var/www/html
      name: git
  volumes:
  - emptyDir: {}
    name: git
</code></pre>
<p>Check out the above example, where the init container and the regular container of the pod share the same volume named <code>git</code>, and the type of that volume is <code>emptyDir</code>. I just want the init container to pull the data every time this pod comes up, and then it is served from the <code>httpd</code> container of the pod.</p>
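<p>For 1.6+, the same pod can be written with the native <code>spec.initContainers</code> field instead of the beta annotation. A minimal sketch, reusing the image, command and <code>emptyDir</code> volume from the example above:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: init
spec:
  initContainers:
  - name: download
    image: axeclbr/git
    command: ["git", "clone", "https://github.com/mdn/beginner-html-site-scripted", "/var/lib/data"]
    volumeMounts:
    - mountPath: /var/lib/data
      name: git
  containers:
  - name: run
    image: docker.io/centos/httpd
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /var/www/html
      name: git
  volumes:
  - emptyDir: {}
    name: git
</code></pre>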
<p>HTH.</p>
|
<p>I am using Rook (rook.io) on Kubernetes with CoreOS to dynamically create Persistent Volumes. </p>
<p>So I create a PersistentVolumeClaim (<code>kubectl create -f postgres-pvc.yaml</code>) and apply a patch to set persistentVolumeReclaimPolicy to Retain. I run <code>kubectl get pv</code> and I can see a dynamically created PersistentVolume, and it is Bound. Now when I delete the PersistentVolumeClaim, the status goes to Released. </p>
<p>I have stored some precious data in that persistentvolume. Is there a way I can reuse that persistentvolume that has gone into Released status? </p>
<p>thanks
-sonam</p>
| <p>If you have precious data that you want to use in another PostgreSQL pod, maybe StatefulSets are what you are looking for, as they allow:</p>
<blockquote>
<p>Stable, persistent storage [...] across Pod (re)schedulings.</p>
</blockquote>
<p>Therefore, I would advise you to deploy your PostgreSQL database as a StatefulSet; a sketch follows. You would need to check that your already existing volume is bound. </p>
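<p>A minimal sketch of such a StatefulSet (names, image and size are placeholders, not from the question; the <code>volumeClaimTemplates</code> section gives each replica its own PVC that survives pod rescheduling, but it does not by itself re-bind an already Released volume):</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>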
<hr>
<p>[1] <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/</a></p>
|
<p>All of a sudden, I cannot deploy some images which could be deployed before. I got the following pod status:</p>
<pre><code>[root@webdev2 origin]# oc get pods
NAME                      READY     STATUS             RESTARTS   AGE
arix-3-yjq9w              0/1       ImagePullBackOff   0          10m
docker-registry-2-vqstm   1/1       Running            0          2d
router-1-kvjxq            1/1       Running            0          2d
</code></pre>
<p>The application just won't start. The pod is not trying to run the container. From the Event page, I have got <code>Back-off pulling image "172.30.84.25:5000/default/arix@sha256:d326</code>. I have verified that I can pull the image with the tag with <code>docker pull</code>.</p>
<p>I have also checked the log of the last container. It was closed for some reason. I think the pod should at least try to restart it.</p>
<p>I have run out of ideas to debug the issues. What can I check more?</p>
| <p>You can use the '<em><strong>describe pod</strong></em>' syntax</p>
<p><strong>For OpenShift use:</strong></p>
<pre><code>oc describe pod <pod-id>
</code></pre>
<p><strong>For vanilla Kubernetes:</strong></p>
<pre><code>kubectl describe pod <pod-id>
</code></pre>
<p>Examine the events of the output.
In my case it shows <code>Back-off pulling image unreachableserver/nginx:1.14.22222</code></p>
<p>In this case the image <code>unreachableserver/nginx:1.14.22222</code> cannot be pulled from the Internet, because there is no Docker registry/user named unreachableserver and the image <code>nginx:1.14.22222</code> does not exist.</p>
<p><strong>NB: If you do not see any events of interest and the pod has been in the 'ImagePullBackOff' status for a while (seems like more than 60 minutes), you need to delete the pod and look at the events from the new pod.</strong></p>
<p><strong>For OpenShift use:</strong></p>
<pre><code>oc delete pod <pod-id>
oc get pods
oc get pod <new-pod-id>
</code></pre>
<p><strong>For vanilla Kubernetes:</strong></p>
<pre><code>kubectl delete pod <pod-id>
kubectl get pods
kubectl get pod <new-pod-id>
</code></pre>
<p>Sample output:</p>
<pre><code>Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  32s                default-scheduler  Successfully assigned rk/nginx-deployment-6c879b5f64-2xrmt to aks-agentpool-x
Normal   Pulling    17s (x2 over 30s)  kubelet            Pulling image "unreachableserver/nginx:1.14.22222"
Warning  Failed     16s (x2 over 29s)  kubelet            Failed to pull image "unreachableserver/nginx:1.14.22222": rpc error: code = Unknown desc = Error response from daemon: pull access denied for unreachableserver/nginx, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning  Failed     16s (x2 over 29s)  kubelet            Error: ErrImagePull
Normal   BackOff    5s (x2 over 28s)   kubelet            Back-off pulling image "unreachableserver/nginx:1.14.22222"
Warning  Failed     5s (x2 over 28s)   kubelet            Error: ImagePullBackOff
</code></pre>
<p><strong>Additional debugging steps</strong></p>
<ol>
<li>Try to pull the Docker image and tag manually on your computer</li>
<li>Identify the node by doing a 'kubectl/oc get pods -o wide'</li>
<li>SSH into the node (if you can) that cannot pull the Docker image</li>
<li>Check that the node can resolve the DNS of the Docker registry by performing a ping</li>
<li>Try to pull the Docker image manually on the node</li>
<li>If you are using a private registry, check that your <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">secret</a> exists and the secret is correct (see the example after this list). Your secret should also be in the same namespace. Thanks <a href="https://stackoverflow.com/users/2677943/swenzel">swenzel</a></li>
<li>Some registries have firewalls that limit IP address access. The firewall may block the pull</li>
<li>Some CIs create deployments with temporary Docker secrets, so the secret expires after a few days (you are asking for production failures...)</li>
</ol>
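<p>As a concrete illustration of the private-registry check in item 6, the secret is typically created and referenced roughly like this (server, credentials, names and namespace are placeholders):</p>
<pre><code>kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email> \
  --namespace=<namespace>
</code></pre>
<p>and then the pod spec (or the default service account) must reference it via <code>imagePullSecrets</code> with the name <code>my-registry-secret</code>.</p>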
|
<p>I'm following a kubernetes tutorial. I got halfway through but now I want to start over. </p>
<p>I tried <code>kubectl delete pods --all --force</code> but when I do <code>kubectl get pods</code> it still shows my deployments, and the age is reset. How do I get back to a pristine state to restart the tutorial?</p>
| <p>First, delete the deployments, so that the <code>ReplicaSet</code> won't recreate the pods:</p>
<pre><code>kubectl delete deployments --all --force --grace-period=0
</code></pre>
<p>Then, use the <code>--grace-period</code> flag:</p>
<pre><code>kubectl delete pods <pod> --grace-period=0 --force
</code></pre>
<p>From the documentation:</p>
<blockquote>
<p>--grace-period int Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. (default -1)</p>
</blockquote>
|
<p>My Kubernetes version is :</p>
<pre><code># kubectl --version
Kubernetes v1.4.0
</code></pre>
<p>I am planning to use Prometheus to monitor my Kube cluster. For this, I need to annotate the metrics URL.</p>
<p>My current metrics URL is like :</p>
<pre><code>http://172.16.33.7:8080/metrics
</code></pre>
<p>But I want it to be like :</p>
<pre><code>http://172.16.33.7:8080/websocket/metrics
</code></pre>
<p>First I tried to do this manually:</p>
<pre><code>kubectl annotate pods websocket-backend-controller-db83999c5b534b277b82badf6c152cb9m1 prometheus.io/path=/websocket/metrics
kubectl annotate pods websocket-backend-controller-db83999c5b534b277b82badf6c152cb9m1 prometheus.io/scrape='true'
kubectl annotate pods websocket-backend-controller-db83999c5b534b277b82badf6c152cb9m1 prometheus.io/port='8080'
</code></pre>
<p>All these commands work perfectly fine and I am able to see the annotations.</p>
<pre><code>{
  "metadata": {
    "name": "websocket-backend-controller-v1krf",
    "generateName": "websocket-backend-controller-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/websocket-backend-controller-v1krf",
    "uid": "e323994b-4081-11e7-8bd0-0050569b6f44",
    "resourceVersion": "27534379",
    "creationTimestamp": "2017-05-24T13:07:06Z",
    "labels": {
      "name": "websocket-backend"
    },
    "annotations": {
      "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"websocket-backend-controller\",\"uid\":\"e321f1a8-4081-11e7-8bd0-0050569b6f44\",\"apiVersion\":\"v1\",\"resourceVersion\":\"27531840\"}}\n",
      "prometheus.io/path": "/websocket/metrics",
      "prometheus.io/port": "8080",
      "prometheus.io/scrape": "true"
    }
</code></pre>
<p>But since I want this configuration to remain permanent, I am setting the following annotations in my services files.</p>
<pre><code># cat websocket-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: websocket-service
  labels:
    baseApi: websocket
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /websocket/metrics
    prometheus.io/port: '8080'
spec:
  selector:
    name: websocket-backend
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30800
    protocol: TCP
  type: NodePort
  clusterIP: 10.100.10.45
</code></pre>
<p>I restarted my websocket service and the corresponding pods but these configs do not seem to be taking effect.</p>
<pre><code>kubectl create -f websocket-service.yaml
kubectl create -f ../controllers/websocket-replication-controller.yaml
</code></pre>
<p>The result does not show the annotations configured.</p>
<pre><code>{
  "metadata": {
    "name": "websocket-backend-controller-v1krf",
    "generateName": "websocket-backend-controller-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/websocket-backend-controller-v1krf",
    "uid": "e323994b-4081-11e7-8bd0-0050569b6f44",
    "resourceVersion": "27531879",
    "creationTimestamp": "2017-05-24T13:07:06Z",
    "labels": {
      "name": "websocket-backend"
    },
    "annotations": {
      "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"websocket-backend-controller\",\"uid\":\"e321f1a8-4081-11e7-8bd0-0050569b6f44\",\"apiVersion\":\"v1\",\"resourceVersion\":\"27531840\"}}\n"
    }
</code></pre>
<p>All I am doing is setting the same configuration through the Service manifest rather than on the command line, but it does not seem to be working.</p>
| <p>If you annotate the Service, it doesn't have any effect on the matched pods. Your pods are managed either by a ReplicationController or by a ReplicaSet / Deployment. In that case, annotate those resources to make the annotations reach the pods. In the case of Deployments, you must use the template section, like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: deployment-example
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  # Keep record of 2 revisions for rollback
  revisionHistoryLimit: 2
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/path: /websocket/metrics
        prometheus.io/port: '8080'
</code></pre>
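<p>Equivalently, the annotations can be patched onto an existing Deployment's pod template without editing the manifest (the deployment name below is a placeholder); for a Deployment this triggers a rolling update, so the new pods carry the annotations:</p>
<pre><code>kubectl patch deployment deployment-example -p '{"spec":{"template":{"metadata":{"annotations":{"prometheus.io/scrape":"true","prometheus.io/path":"/websocket/metrics","prometheus.io/port":"8080"}}}}}'
</code></pre>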
|
<p>I have configured the Kubernetes plugin to spin up slaves.
However I am having problems with access-control.
Getting an error when the master tries to spin up new pods (slaves) </p>
<blockquote>
<p>Unexpected exception encountered while provisioning agent Kubernetes Pod Template
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: <a href="https://kubernetes.default/api/v1/namespaces/npd-test/pods" rel="nofollow noreferrer">https://kubernetes.default/api/v1/namespaces/npd-test/pods</a>. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:315)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:266)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:237)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:230)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:208)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:643)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:300)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:636)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback.call(KubernetesCloud.java:581)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)</p>
</blockquote>
<p>I have checked the access of the default service account located at <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> and tried to create a pod at <code>https://kubernetes.default/api/v1/namespaces/npd-test/pods</code> using the token, and it works.</p>
<p>Not sure why the plugin is complaining that the service account does not have access.</p>
<p>I have tried configuring the Kubernetes plugin with None credentials and a Kubernetes Service Account Credential (no way to specify account), but neither works. </p>
| <p>It is odd that the service account worked for you normally but didn't work in Jenkins. In my setup, I had to add a <code>RoleBinding</code> to give the service account the <code>edit</code> role (my namespace is actually <code>jenkins</code> but I changed it here to match your namespace).</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: npd-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: default
  namespace: npd-test
</code></pre>
<p>After I did that, I configured the Kubernetes Cloud plugin like this and it works for me.</p>
<pre><code>Kubernetes URL: https://kubernetes.default.svc.cluster.local
Kubernetes server certificate key:
Disable https certificate check: off
Kubernetes Namespace: npd-test
Credentials: - none -
</code></pre>
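<p>After applying the binding, the access can be verified from the command line (the file name is a placeholder; <code>kubectl auth can-i</code> with <code>--as</code> assumes your own user is allowed to impersonate service accounts):</p>
<pre><code>kubectl apply -f jenkins-rolebinding.yaml
kubectl auth can-i create pods -n npd-test --as=system:serviceaccount:npd-test:default
</code></pre>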
|
<p>I have the following Deployment... </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: socket-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: socket-server
    spec:
      containers:
      - name: socket-server
        image: gcr.io/project-haswell-recon/socket-server:production-production-2
        env:
        - name: PORT
          value: 80
        ports:
        - containerPort: 80
</code></pre>
<p>But I get the following error when I run <code>kubectl create -f ./scripts/deployment.yml --namespace production</code> </p>
<p><code>Error from server (BadRequest): error when creating "./scripts/deployment.yml": Deployment in version "v1beta1" cannot be handled as a Deployment: [pos 321]: json: expect char '"' but got char '8'</code></p>
<p>I pretty much copied and pasted this deployment from a previous working deployment and altered a few details, so I'm at a loss as to what this could be. </p>
| <p>The problem is with the number <code>80</code>. Here it is within an <code>EnvVar</code> context, so it has to be of type <code>string</code> and not <code>int</code>, as shown below.</p>
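<p>In other words, quoting the value in the manifest above should fix it:</p>
<pre><code>env:
- name: PORT
  value: "80"
ports:
- containerPort: 80   # containerPort is an integer field, so this stays unquoted
</code></pre>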
|
<p>I'm following a kubernetes tutorial. I got halfway through but now I want to start over. </p>
<p>I tried <code>kubectl delete pods --all --force</code> but when I do <code>kubectl get pods</code> it still shows my deployments, and the age is reset. How do I get back to a pristine state to restart the tutorial?</p>
| <p>You need to delete the deployments, not the pods. Otherwise the ReplicaSet from the Deployment will just recreate the pods (self-healing capability); see the sketch below.</p>
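<p>A minimal sketch of the clean-up, assuming the tutorial's resources live in the current namespace:</p>
<pre><code>kubectl delete deployments --all
kubectl delete services --all    # only if the tutorial created services; the default "kubernetes" service is recreated automatically
kubectl get pods                 # should eventually report no resources
</code></pre>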
|
<p>Trying to understand how sticky sessions should be configured when working with a Service of type=LoadBalancer in AWS.
My backend is 2 pods running a Tomcat app.
I see that the Service creates the AWS LB as well, and I set the right cookie value in the AWS LB configuration, but when accessing the system I keep switching between my pods/Tomcat instances.</p>
<p>My service configuration </p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  labels:
    app: app1
  name: AWSELB
  namespace: local
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: app1
</code></pre>
<p>Are there any additional settings that are missing?
Thank you
Jack</p>
| <p>It's not supported. Please see <a href="https://github.com/kubernetes/kubernetes/issues/2867" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/2867</a> which has the gory details.</p>
|
<p>I'm trying to develop a sample application using spring cloud and minikube which consist of 3 spring boot applications.</p>
<p>The first two are two different application (servers) which have the same endpoint but different functionality, and the third one is a client used to integrates the two other applications into one single exposed api.</p>
<p>I managed to deploy all three applications in minikube and managed to develop the full stack and make them communicate between each other, but now I want to go a step further and make the discovery of the two servers automatically, without hard coding the service names.</p>
<p>I deployed the two servers in minikube using the same label and would like to find something so that the client is able to find the services related to the two server apps automatically. This will allow expanding the application easily, so that when I add a new server to the stack the client will find it and expose it without need of any change.</p>
<p>Using Netflix Eureka this can be easily achieved by using something like </p>
<pre><code>discoveryClient.getInstances("service-name").forEach((ServiceInstance s)
</code></pre>
<p>But I do not want to add an extra Eureka server to the list of microservices since we are going to use Kubernetes.</p>
<p>Is there any library which gives this functionality for kubernetes?</p>
| <p>You can use:</p>
<p>CLI: <code>kubectl get services --selector=YOUR-LABEL-NAME</code>.</p>
<p>API: <code>GET /api/v1/namespaces/{namespace}/services</code> with the <code>labelSelector</code> parameter; see the <a href="https://kubernetes.io/docs/api-reference/v1.6/#list-161" rel="nofollow noreferrer">API docs</a>.</p>
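<p>For example, through <code>kubectl proxy</code> the same label-filtered list can be fetched over HTTP (label key/value and namespace below are placeholders):</p>
<pre><code>kubectl proxy --port=8001 &
curl "http://localhost:8001/api/v1/namespaces/default/services?labelSelector=YOUR-LABEL-NAME%3Dyour-value"
</code></pre>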
<p>However, be careful with dynamic service discovery inside services. As derSteve mentions in the comments below, the best practice for dependencies between services is to deploy them as logical bundles called <code>deployments</code>, for which it is not necessary to perform discovery of previously unknown services. See this <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">link</a>. </p>
|
<p>I need a way to get service cluster ip range (as CIDR) that works accross all Kubernetes clusters.</p>
<p>I tried the following, which works fine for clusters created with kubeadm as it greps arguments of apiserver pod:</p>
<pre><code>$ kubectl cluster-info dump | grep service-cluster-ip-range
"--service-cluster-ip-range=10.96.0.0/12",
</code></pre>
<p>This does not work on all Kubernetes clusters, e.g. on GKE (gcloud)</p>
<p>So the question is, what is the best way to get service ip range programatically?</p>
| <p>I don't think there is a way to access such information through the K8s API; there is an open issue to address the lack of this functionality: <a href="https://github.com/kubernetes/kubernetes/issues/25533" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/25533</a> . If you have access to the etcd of the k8s cluster in question, then there is a key with information about the service CIDR range: <code>/registry/ranges/serviceips</code> . You can get the value by using etcdctl (assuming you have the proper permissions): <code>etcdctl --endpoints=<your etcd> --<any authentication flags> get "/registry/ranges/serviceips" --prefix=true</code>.</p>
|
<p>I have kubernetes cluster with weave CNI plugin consisting of 3 nodes: </p>
<ul>
<li>1 master node (virtual machine)</li>
<li>2 worker bare-metal nodes (4-core Xeon with hyperthreading - 8 logical cores)</li>
</ul>
<p>The trouble is that <code>top</code> shows that kubelet has 60-100% CPU usage on the first worker.
In <code>journalctl -u kubelet</code> I see a lot of messages (hundreds every minute):</p>
<pre><code>May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.075243 3843 docker_sandbox.go:205] Failed to stop sandbox "011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640": Error response from daemon: {"message":"No such container: 011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640"}
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.075360 3843 remote_runtime.go:109] StopPodSandbox "011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-p6kwb_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.075380 3843 kuberuntime_gc.go:138] Failed to stop sandbox "011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-p6kwb_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 011cf10cf46dbc6bf2e11d1cb562af478eee21eba0c40521bf7af51ee5399640
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.076549 3843 docker_sandbox.go:205] Failed to stop sandbox "0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf": Error response from daemon: {"message":"No such container: 0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf"}
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.076654 3843 remote_runtime.go:109] StopPodSandbox "0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-6g8jq_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.076676 3843 kuberuntime_gc.go:138] Failed to stop sandbox "0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-6g8jq_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 0125de37634ef7f3aa852c999cfb5849750167b1e3d63293a085ceca416e4ebf
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.079585 3843 docker_sandbox.go:205] Failed to stop sandbox "014135ede46ee45c176528da02782a38ded36bd10566f864c147ccb66a617772": Error response from daemon: {"message":"No such container: 014135ede46ee45c176528da02782a38ded36bd10566f864c147ccb66a617772"}
May 19 09:57:38 kube-worker1 bash[3843]: E0519 09:57:38.079805 3843 remote_runtime.go:109] StopPodSandbox "014135ede46ee45c176528da02782a38ded36bd10566f864c147ccb66a617772" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "cron-task-2533948c46c1-r30cw_namespace" network: CNI failed to retrieve network namespace path: Error: No such container: 014135ede46ee45c176528da02782a38ded36bd10566f864c147ccb66a617772
</code></pre>
<p>This happened after some broken Cronetes tasks that failed during creation. I removed all the pods with <code>--force</code>, but kubelet still tries to remove them. I also restarted kubelet on that worker, with no result. How can I tell kubelet to forget them?</p>
<p>Version info </p>
<pre><code>Kubernetes v1.6.1
Docker version 1.12.0, build 8eab29e
Linux kube-worker1 4.4.0-72-generic #93-Ubuntu SMP
</code></pre>
<p>Container manifest (without metadata)</p>
<pre><code>job:
  apiVersion: batch/v1
  kind: Job
  spec:
    template:
      spec:
        containers:
        - name: cron-task
          image: docker.company.ru/image:v2.3.2
          command: ["rake", "db:refresh_views"]
          env:
          - name: RAILS_ENV
            value: namespace
          - name: CONFIG_PATH
            value: /config
          volumeMounts:
          - name: config
            mountPath: /config
        volumes:
        - name: config
          configMap:
            name: task-conf
        restartPolicy: Never
</code></pre>
<p>Also, I didn't find any mention of this part of the pod's name (2533948c46c1) in the cluster's etcd.</p>
| <p>Finally I found the solution.<br>
Kubelet stores information about all pods running on it in</p>
<pre><code>/var/lib/dockershim/sandbox
</code></pre>
<p>So when I ran <code>ls</code> in that folder, I found files for all the missing pods. I then deleted these files, the log messages disappeared, and CPU usage returned to its normal value (even without a kubelet restart).</p>
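<p>Roughly the clean-up this amounts to (same path as above; double-check which sandbox files belong to the stale pods before deleting anything):</p>
<pre><code># on the affected worker node
ls /var/lib/dockershim/sandbox/
# remove only the entries that belong to the already-deleted pods
sudo rm /var/lib/dockershim/sandbox/<sandbox-id>
</code></pre>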
|
<p>Our scenarios:</p>
<p>We use ceph rbd to store some machine learning training datasets; the workflow is as below:</p>
<p>Create a ceph-rbd PVC pvc-training-data with AccessMode: ReadWriteOnce.<br>
Create a write job with 1 pod that mounts pvc-training-data and writes the training data into it.<br>
After writing the training data into pvc-training-data, the container exits, the pvc-training-data PVC is unmounted by k8s, and the write job is done.<br>
Create a read job with n pods where <code>n >= 1</code> to mount pvc-training-data with <code>readOnly: true</code> and consume the training data.
btw: we use k8s 1.6.1</p>
<p>So far the workflow works well for our use scenarios, but I have some questions about PVC AccessMode and ceph rbd with AccessMode: ReadWriteOnce.</p>
<blockquote>
<ol>
<li><p>How to understand AccessModes: ReadOnlyMany, ReadWriteOnce, ReadWriteMany?
I think the usage scope is ReadOnlyMany < ReadWriteOnce < ReadWriteMany, so if I apply a PVC with AccessMode: ReadWriteOnce, is it
OK to use it as an AccessMode: ReadOnlyMany PVC? Am I right?</p></li>
<li><p>ceph rbd is a block device; each container (on a different host) that mounts the same ceph rbd device will have its own filesystem, so the only allowed AccessModes are ReadOnlyMany or ReadWriteOnce. Should we impose restrictions on ReadWriteOnce usage in the k8s code?</p></li>
</ol>
<blockquote>
<ol>
<li>If a ReadWriteOnce PVC is mounted by a Pod with readOnly: false, then no other Pod can mount this device until it is unmounted.</li>
<li>If a ReadWriteOnce PVC is mounted by a Pod with readOnly: true, it can only be mounted by other Pods as long as they also set readOnly: true.</li>
<li>There are no restrictions on containers within the same Pod, as they share the same filesystem from the host ${KUBELET_ROOT}/plugins/{xx}/</li>
</ol>
</blockquote>
</blockquote>
| <p>Regarding your first question:</p>
<blockquote>
<ol>
<li>How to understand AccessModes: ReadOnlyMany, ReadWriteOnce,
ReadWriteMany? I think the usage scope is ReadOnlyMany < ReadWriteOnce
< ReadWriteMany, so if I apply a PVC with AccessMode: ReadWriteOnce, it
is ok I use it as AccessMode: ReadOnlyMany PVC, am I right?</li>
</ol>
</blockquote>
<p>The <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">docs</a> clearly state: </p>
<pre><code>Important! A volume can only be mounted using one access mode at a
time, even if it supports many.
</code></pre>
<p>Your second question was not clear to me, but I think it may not be valid considering the answer to the first one.</p>
|
<p>If I run Kubernetes on a cluster of Ubuntu machines, how does NFS work inside Kubernetes when it is mounted on each of the ubuntu nodes?</p>
<p>My use case is for databases and RabbitMQ to utilize the storage available on the nodes the pods are running on.</p>
<p>Do I mount that NFS as a regular volume when deploying, or should I use NFS directly from a PersistentVolume and that way not mount NFS on the Ubuntu nodes? How does the NFS setup distinguish the running instances; are the volume claims separate per pod/container?</p>
| <p>For you to use <code>NFS</code> with Kubernetes you have to create a <code>PV</code> and then utilize it via <code>PVC</code>.</p>
<p>Your <code>PV</code>s will decide which <code>NFS</code> node they are backed by, since that is where you mention the server address. Look at the sample example below.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  nfs:
    path: /tmp
    server: 172.17.0.2
</code></pre>
<p>Now if you want some pods to use storage from a specific <code>PV</code>, you can add a field in the <code>PVC</code> called <code>volumeName</code>, which basically asks the <code>PVC</code> to be bound to that <code>PV</code>; a sketch follows.</p>
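<p>A sketch of a claim pinned to that specific PV via <code>volumeName</code> (the claim name is a placeholder; the size, access mode and storage class must be compatible with the PV above):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 5Gi
  volumeName: pv0003
</code></pre>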
<p>The way people generally do things is to set up dedicated nodes for storage, since they don't want to lose the data. Keeping data on the worker nodes might be risky: if a node goes down, you lose all the data on that node, unless it is backed up somewhere.</p>
<p>Read more about the Persistent Volumes <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">here</a>.</p>
|
<p>I'm trying to develop a sample application using spring cloud and minikube which consist of 3 spring boot applications.</p>
<p>The first two are two different application (servers) which have the same endpoint but different functionality, and the third one is a client used to integrates the two other applications into one single exposed api.</p>
<p>I managed to deploy all three applications in minikube and managed to develop the full stack and make them communicate between each other, but now I want to go a step further and make the discovery of the two servers automatically, without hard coding the service names.</p>
<p>I deployed the two servers in minikube using the same label and would like to find something so that the client is able to find the services related to the two server apps automatically. This will allow expanding the application easily, so that when I add a new server to the stack the client will find it and expose it without need of any change.</p>
<p>Using Netflix Eureka this can be easily achieved by using something like </p>
<pre><code>discoveryClient.getInstances("service-name").forEach((ServiceInstance s)
</code></pre>
<p>But I do not want to add an extra Eureka server to the list of microservices since we are going to use Kubernetes.</p>
<p>Is there any library which gives this functionality for kubernetes?</p>
| <p>I found the fabric8 library, which helped me achieve this.
I still don't know if this is the correct answer, but it works :D</p>
<p><a href="https://github.com/fabric8io/kubernetes-client/tree/master/kubernetes-client" rel="nofollow noreferrer">https://github.com/fabric8io/kubernetes-client/tree/master/kubernetes-client</a></p>
<pre><code>@RequestMapping("/")
private String getResponse() {
    String ret = "hello from Client L0L!!!\n";
    //Config config = new ConfigBuilder().withMasterUrl("https://mymaster.com").build();
    //KubernetesClient client = new DefaultKubernetesClient(config);
    KubernetesClient client = new DefaultKubernetesClient();
    ServiceList services = client.services().withLabel("APIService").list();
    Service server = null;
    log.warn("---------------------------------------------->");
    for (Service s : services.getItems()) {
        log.warn(s.getMetadata().getName());
        log.warn(s.toString());
        if (s.getMetadata().getLabels().containsKey("ServiceType") && s.getMetadata().getLabels().get("ServiceType").equals("server"))
            server = s;
    }
    log.warn("---------------------------------------------->");
    String s = "";
    if (server != null) {
        RestTemplate t = new RestTemplate();
        String url = "http://" + server.getMetadata().getName() + ":" + server.getSpec().getPorts().get(0).getPort() + "/";
        log.warn("Contacting server service on: " + url);
        s = t.getForObject(url, String.class);
        log.warn("Response: " + s);
    } else {
        log.warn("Didn't find service with label ServiceType=server!!!");
    }
    return ret + " - " + s;
}
</code></pre>
<p>I created the two services and added the two labels used in the code.</p>
|
<p>After migrating the image type from container-vm to cos for the nodes of a GKE cluster, it seems no longer possible to mount a NFS volume for a pod.</p>
<p>The problem seems to be missing NFS client libraries, as a mount command from command line fails on all COS versions I tried (cos-stable-58-9334-62-0, cos-beta-59-9460-20-0, cos-dev-60-9540-0-0).</p>
<pre><code>sudo mount -t nfs mynfsserver:/myshare /mnt
</code></pre>
<p>fails with</p>
<pre><code>mount: wrong fs type, bad option, bad superblock on mynfsserver:/myshare,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
</code></pre>
<p>But this contradicts the supported volume types listed here:
<a href="https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support" rel="nofollow noreferrer">https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support</a></p>
<p>Mounting a NFS volume in a pod works in a pool with image-type <code>container-vm</code> but not with <code>cos</code>.</p>
<p>With cos I get following messages with <code>kubectl describe pod</code>:</p>
<pre><code>MountVolume.SetUp failed for volume "kubernetes.io/nfs/b6e6cf44-41e7-11e7-8b00-42010a840079-nfs-mandant1" (spec.Name: "nfs-mandant1") pod "b6e6cf44-41e7-11e7-8b00-42010a840079" (UID: "b6e6cf44-41e7-11e7-8b00-42010a840079") with: mount failed: exit status 1
Mounting command: /home/kubernetes/containerized_mounter/mounter
Mounting arguments: singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1 nfs []
Output: Mount failed: Mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1]
Output: mount.nfs: Failed to resolve server singlefs-1-vm: Temporary failure in name resolution
</code></pre>
| <p>Martin, are you setting up the mounts manually (executing mount yourself), or are you letting kubernetes do it on your behalf via a pod referencing an NFS volume?</p>
<p>The former will not work. The latter will; see the example below. As you've discovered, COS does not ship with NFS client libraries, so GKE gets around this by setting up a chroot (at /home/kubernetes/containerized_mounter/rootfs) with the required binaries and calling mount inside that.</p>
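<p>A minimal example of the latter, i.e. letting Kubernetes perform the NFS mount via the pod spec (server and path are taken from the question's mount command; the image is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs
      mountPath: /mnt/share
  volumes:
  - name: nfs
    nfs:
      server: mynfsserver
      path: /myshare
</code></pre>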
|
<p>I am running Kubernetes on minikube. I am behind a proxy, so I have set the env variables (HTTP_PROXY & NO_PROXY) for Docker in /etc/systemd/system/docker.service.d/http-proxy.conf.
I was able to do docker pull, but when I run the example below</p>
<pre><code>kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
</code></pre>
<p>the pod never starts and I get the error</p>
<p><code>desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\"</code></p>
<p><code>docker pull gcr.io/google_containers/echoserver:1.4</code> works fine</p>
| <p>I ran into the same problem and am sharing what I learned after making a couple of wrong turns. This is with minikube v0.19.0. If you have an older version you might want to update.</p>
<p>Remember, there are two things we need to accomplish:</p>
<ol>
<li>Make sure kubectl does not go through the proxy when connecting to minikube on your desktop.</li>
<li>Make sure that the docker daemon in minikube <em>does</em> go through the proxy when it needs to connect to image repositories.</li>
</ol>
<p>First, make sure your proxy settings are correct in your environment. Here is an example from my .bashrc:</p>
<pre><code>export {http,https,ftp}_proxy=http://${MY_PROXY_HOST}:${MY_PROXY_PORT}
export {HTTP,HTTPS,FTP}_PROXY=${http_proxy}
export no_proxy="localhost,127.0.0.1,localaddress,.your.domain.com,192.168.99.100"
export NO_PROXY=${no_proxy}
</code></pre>
<p>A couple things to note:</p>
<ol>
<li>I set both lower and upper case. Sometimes this matters.</li>
<li>192.168.99.100 is from <code>minikube ip</code>. You can add it after your cluster is started.</li>
</ol>
<p>OK, so that should take care of kubectl working correctly. Now we have the next issue, which is making sure that the Docker daemon in minikube is configured with your proxy settings. You do this, as mentioned by PMat, like this:</p>
<pre><code>$ minikube delete
$ minikube start --docker-env HTTP_PROXY=${http_proxy} --docker-env HTTPS_PROXY=${https_proxy} --docker-env NO_PROXY=192.168.99.0/24
</code></pre>
<p>To verify that these settings have taken effect, do this:</p>
<pre><code>$ minikube ssh -- systemctl show docker --property=Environment --no-pager
</code></pre>
<p>You should see the proxy environment variables listed.</p>
<p>Why do the <code>minikube delete</code>? Because without it, the start won't update the Docker environment if you had previously created a cluster (say, without the proxy information). Maybe this is why PMat did not have success passing --docker-env to start (or maybe it was on an older version of minikube).</p>
|
<p>I am incrementing a Datadog counter in python:</p>
<pre><code>from datadog import initialize
from datadog import ThreadStats

# The initialization below is assumed; it is omitted in the original snippet.
initialize(api_key='YOUR_API_KEY')
stats = ThreadStats()
stats.start()

stats.increment('api.request_count', tags=['environment:' + environment])
</code></pre>
<p>I have set the metric type to "count" and the unit to "requests per none" in the metadata for the metric.</p>
<p>The code runs in a docker container on a kubernetes node in a Container Engine in Google Cloud... I have docker-dd-agent (<a href="https://github.com/DataDog/docker-dd-agent" rel="nofollow noreferrer">https://github.com/DataDog/docker-dd-agent</a>) running on each node.</p>
<p>I can move the container to any node and it logs around 200 requests per minute. But as soon as I scale it up and launch a second container, it only logs around 100 requests per minute. If I scale down to one container again, it spikes to 200 rpm again: </p>
<p><a href="https://i.stack.imgur.com/9dXzg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9dXzg.png" alt="enter image description here"></a></p>
<p>What could be causing the requests to drop or get overwritten from other pods?</p>
| <p>Why not use dogstatsd instead of threadstats? If you're already running the dd-agent on your node in a way that's reachable by your containers, you can use the <code>datadog.statsd.increment()</code> method instead to send the metric over statsd to the agent, and from there it would get forwarded to your datadog account. </p>
<p>Dogstatsd has the benefit of being more straightforward and somewhat easier to troubleshoot, at least with debug-level logging. Threadstats sometimes has the benefit of not requiring a dd-agent, but it does very little (if any) error logging, which makes it difficult to troubleshoot cases like these.</p>
<p>If you went the dogstatsd route, you'd use the following code:</p>
<pre><code>from datadog import initialize
from datadog import statsd
statsd.increment('api.request_count', tags=['environment:' + environment])
</code></pre>
<p>And from there you'd find your metric metadata with the "rate" type and with an interval of "10", and you could use the "as_count" function to translate the values to counts.</p>
|
<p>The following sections show the errors, the configuration, and the Kubernetes and etcd versions.</p>
<pre><code>[root@xt3 kubernetes]# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES ; done
etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled)
Active: active (running) since Fri 2016-03-25 11:11:25 CST; 58ms ago
Main PID: 6382 (etcd)
CGroup: /system.slice/etcd.service
└─6382 /usr/bin/etcd
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: data dir = /var/lib/etcd/default.etcd
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: member dir = /var/lib/etcd/default.etcd/member
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: heartbeat = 100ms
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: election = 1000ms
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: snapshot count = 10000
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: advertise client URLs = http://localhost:2379,http://localhost:4001
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: loaded cluster information from store: default=http://localhost:2380,default=http://localhost:7001
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 etcdserver: restart member ce2a822cea30bfca in cluster 7e27652122e8b2ae at commit index 10686
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 raft: ce2a822cea30bfca became follower at term 8
Mar 25 11:11:25 xt3 etcd[6382]: 2016/03/25 11:11:25 raft: newRaft ce2a822cea30bfca [peers: [ce2a822cea30bfca], term: 8, commit: 10686, applied: 10001, lastindex: 10686, lastterm: 8]
Job for kube-apiserver.service failed. See 'systemctl status kube-apiserver.service' and 'journalctl -xn' for details.
kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2016-03-25 11:11:35 CST; 58ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 6401 (code=exited, status=255)
Mar 25 11:11:35 xt3 systemd[1]: Failed to start Kubernetes API Server.
Mar 25 11:11:35 xt3 systemd[1]: Unit kube-apiserver.service entered failed state.
kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled)
Active: active (running) since Fri 2016-03-25 11:11:35 CST; 73ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 6437 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─6437 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.954951 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers: dia... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955075 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes: dial tcp 127.... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955159 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955222 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes: dial tcp 127.... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955248 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: ge... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955331 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims: dia... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955379 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: ge... connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955430 6437 resource_quota_controller.go:62] Synchronization error: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: getsockopt: connection refused (&url....or)(0xc8204f2000)})
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955576 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:35 xt3 kube-controller-manager[6437]: E0325 11:11:35.955670 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?fieldSelector=meta... connection refused
Hint: Some lines were ellipsized, use -l to show in full.
kube-scheduler.service - Kubernetes Scheduler Plugin
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled)
Active: active (running) since Fri 2016-03-25 11:11:36 CST; 71ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 6466 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─6466 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
Mar 25 11:11:36 xt3 systemd[1]: Started Kubernetes Scheduler Plugin.
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031318 6466 reflector.go:180] pkg/scheduler/factory/factory.go:194: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers: dial tcp 127.0.0.1:...: connection refused
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031421 6466 reflector.go:180] pkg/scheduler/factory/factory.go:189: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031564 6466 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D: dial tcp 127....: connection refused
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031644 6466 reflector.go:180] pkg/scheduler/factory/factory.go:184: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse: dial tcp 127...: connection refused
Mar 25 11:11:36 xt3 kube-scheduler[6466]: E0325 11:11:36.031677 6466 reflector.go:180] pkg/scheduler/factory/factory.go:177: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D: dial tcp 127.0.0.1:8080:...: connection refused
Hint: Some lines were ellipsized, use -l to show in full.
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
</code></pre>
<p>The error details are the following.</p>
<pre><code>[root@xt3 kubernetes]# journalctl -xn
-- Logs begin at Sat 2016-03-19 15:30:07 CST, end at Fri 2016-03-25 11:11:42 CST. --
Mar 25 11:11:41 xt3 kube-controller-manager[6437]: E0325 11:11:41.958470 6437 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?fieldSelector=metadata.name%3Ddefault: d
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034315 6466 reflector.go:180] /usr/lib/golang/src/runtime/asm_amd64.s:1696: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D: dial tcp 127.0.0.1:8080: getsockopt:
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034325 6466 reflector.go:180] pkg/scheduler/factory/factory.go:184: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse: dial tcp 127.0.0.1:8080: getsockopt
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034324 6466 reflector.go:180] pkg/scheduler/factory/factory.go:189: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034413 6466 reflector.go:180] pkg/scheduler/factory/factory.go:194: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers: dial tcp 127.0.0.1:8080: getsockopt: conne
Mar 25 11:11:42 xt3 kube-scheduler[6466]: E0325 11:11:42.034434 6466 reflector.go:180] pkg/scheduler/factory/factory.go:177: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D: dial tcp 127.0.0.1:8080: getsockopt: connection
Mar 25 11:11:42 xt3 kube-apiserver[6487]: E0325 11:11:42.206743 6487 reflector.go:180] pkg/admission/namespace/lifecycle/admission.go:95: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: getsockopt: connection refus
Mar 25 11:11:42 xt3 kube-apiserver[6487]: E0325 11:11:42.206767 6487 reflector.go:180] pkg/admission/limitranger/admission.go:102: Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:42 xt3 kube-apiserver[6487]: E0325 11:11:42.206816 6487 reflector.go:180] pkg/admission/namespace/exists/admission.go:89: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: getsockopt: connection refused
Mar 25 11:11:42 xt3 kube-apiserver[6487]: E0325 11:11:42.206831 6487 reflector.go:180] pkg/admission/resourcequota/admission.go:59: Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: getsockopt: connection ref
[root@xt3 kubernetes]#
</code></pre>
<p>The configurations are the following:</p>
<pre><code>[root@xt3 kubernetes]# pwd
/etc/kubernetes
[root@xt3 kubernetes]# cat config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# cat apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# ls
apiserver apiserver.rpmsave config config.rpmsave controller-manager kubelet proxy scheduler
[root@xt3 kubernetes]# cat controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# cat kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
# Add your own!
KUBELET_ARGS=""
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# cat proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
[root@xt3 kubernetes]#
[root@xt3 kubernetes]# cat scheduler
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS=""
</code></pre>
<p>The versions of the kubernetes and etcd:</p>
<pre><code>[root@xt3 kubernetes]# rpm -qa | grep kuber
kubernetes-node-1.1.0-0.4.git2bfa9a1.el7.x86_64
</code></pre>
<p>I do all the configurations as the kubernetes sites told.(<a href="http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="nofollow">http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/</a>)</p>
<pre><code> kubernetes-client-1.1.0-0.4.git2bfa9a1.el7.x86_64
kubernetes-1.1.0-0.4.git2bfa9a1.el7.x86_64
kubernetes-master-1.1.0-0.4.git2bfa9a1.el7.x86_64
[root@xt3 kubernetes]# rpm -qa | grep etcd
etcd-2.0.9-1.el7.x86_64
</code></pre>
<p>I look forward to your answers. Thanks very much.</p>
| <p>I had a very similar issue; in my case I fixed it by changing <code>KUBE_API_PORT=</code> to another port available on my system, e.g. <code>KUBE_API_PORT=9090</code>.</p>
<p>Try <code>curl 127.0.0.1:8080</code> to figure out whether that port is already being used by another service.</p>
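<p>If curl shows something is already answering on that port, you can also check which process owns it (a quick sketch; use whichever of these tools your distro has):</p>
<pre><code># either of these should show the process bound to 8080
sudo ss -tlnp | grep 8080
sudo lsof -i :8080
</code></pre>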
|
<p>I have a minikube cluster running locally (v0.17.1), with two deployments: one is a Redis instance and one is a custom app that is trying to connect to the Redis instance. My configuration is more or less copy/pasted from the official docs and the <a href="https://github.com/kubernetes/kubernetes/blob/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml" rel="nofollow noreferrer">Kubernetes guestbook example</a>. </p>
<p>Service definition and deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: poller-redis
labels:
app: poller-redis
tier: backend
role: database
target: poller
spec:
selector:
app: poller
tier: backend
role: service
ports:
- port: 6379
targetPort: 6379
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: poller-redis
spec:
replicas: 1
template:
metadata:
labels:
app: poller-redis
tier: backend
role: database
target: poller
spec:
containers:
- name: poller-redis
image: gcr.io/jmen-1266/jmen-redis:a67b5f4bfd8ea8441ed66a8fcb6596f276017a1c
ports:
- containerPort: 6379
env:
- name: GET_HOSTS_FROM
value: dns
imagePullSecrets:
- name: gcr-json-key
</code></pre>
<p>App deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: poller
spec:
replicas: 1
template:
metadata:
labels:
app: poller
tier: backend
role: service
spec:
containers:
- name: poller
image: gcr.io/jmen-1266/poller:a96a452292e894e46339309cc024cac67647cc25
imagePullPolicy: Always
imagePullSecrets:
- name: gcr-json-key
</code></pre>
<p>Relevant (I hope) Kubernetes info:</p>
<pre><code>$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 24d
poller-redis 10.0.0.137 <none> 6379/TCP 20d
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
poller 1 1 1 1 12d
poller-redis 1 1 1 1 4d
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 24d
poller-redis 172.17.0.7:6379 20d
</code></pre>
<p>Inside the <code>poller</code> pod (custom app), I get environment variables created for Redis:</p>
<pre><code># env | grep REDIS
POLLER_REDIS_SERVICE_HOST=10.0.0.137
POLLER_REDIS_SERVICE_PORT=6379
POLLER_REDIS_PORT=tcp://10.0.0.137:6379
POLLER_REDIS_PORT_6379_TCP_ADDR=10.0.0.137
POLLER_REDIS_PORT_6379_TCP_PORT=6379
POLLER_REDIS_PORT_6379_TCP_PROTO=tcp
POLLER_REDIS_PORT_6379_TCP=tcp://10.0.0.137:6379
</code></pre>
<p>However, if I try to connect to that port, I cannot. Doing something like:</p>
<pre><code>nc -vz poller-redis 6379
</code></pre>
<p>fails.</p>
<p>What I have noticed is that I cannot access the Redis service via its ClusterIP but I can via the IP of the pod running Redis.</p>
<p>Any ideas, please?</p>
| <p>Figured this out in the end, it looks like I misunderstood how the service selectors work in Kubernetes. </p>
<p>I have posted that my service definition is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: poller-redis
labels:
app: poller-redis
tier: backend
role: database
target: poller
spec:
selector:
app: poller
tier: backend
role: service
ports:
- port: 6379
targetPort: 6379
</code></pre>
<p>The problem is that <code>spec.selector</code> must match the labels on the pods backing the service (the labels set in the Redis deployment's pod template), not the labels of the pods that consume it; my original selector was pointing at the app pods instead of the Redis pods. In my case the correct pod labels happen to be the same as the Service's own <code>metadata.labels</code>. Now my service definition looks like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: poller-redis
labels:
app: poller-redis
tier: backend
role: database
target: poller
spec:
selector:
app: poller-redis
tier: backend
role: database
target: poller
ports:
- port: 6379
targetPort: 6379
</code></pre>
<p>I also now use straight up DNS lookup (i.e. <code>ping poller-redis</code>) rather than trying to connect to <code>localhost:6379</code> from my target pods.</p>
|
<p>I am deploying a topology of five Docker containers into Kubernetes; one of the containers is a RabbitMQ container X, and every other container has a link to it. After setting up a link between Y and X (rabbit), Y gets a new DNS record in its hosts file so it can reach that container by name. Now that I am migrating to Kubernetes, is it possible to reach (ping) another container by name? Am I missing something?</p>
| <p>You probably want to create a <code>Service</code> object for your deployment of RabbitMQ. It will be accessible by name in the DNS from within the cluster.</p>
<blockquote>
<p>If you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service"</p>
</blockquote>
<p>For more information see the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">docs</a>.</p>
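<p>As a rough sketch (the service name and the <code>app: rabbitmq</code> label are just assumptions about your setup), a Service in front of the RabbitMQ deployment could look like this; other pods in the same namespace can then reach it simply as <code>rabbitmq</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq        # must match the labels on the RabbitMQ pods
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
</code></pre>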
|
<p>Is it possible to get a list of pods that have status <code>Running</code> from <code>kubectl</code>?</p>
<p>Using an external command it would be:</p>
<p><code>kubectl get pods | grep Running</code></p>
<p>Can I ask this from <code>kubectl</code> directly instead of string matching with grep or awk?</p>
| <p>You can use a Go template:</p>
<pre><code>kubectl get pods --all-namespaces -o go-template --template '{{range .items}}{{if eq (.status.phase) ("Running")}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'
</code></pre>
<p>Of course <code>{{.metadata.name}}</code> can be replaced or extended with any information you need.</p>
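<p>Newer versions of <code>kubectl</code> also support a field selector for this, which avoids the template entirely (assuming your client and cluster versions are recent enough to support it):</p>
<pre><code>kubectl get pods --all-namespaces --field-selector=status.phase=Running
</code></pre>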
|
<p>I want to set up connections from a kubernetes cluster (created via <code>az acs create</code> with mostly default settings) to an Azure Postgresql instance, and I'd like to know what source-IP range to enter in postgres HBA (this is the thing Azure calls a <code>firewall-rule</code> under <code>az postgres server</code>).</p>
<p>The thing is, although I can see from the console errors (when using <code>psql</code> to test) what the current IP is that the cluster requests come from</p>
<pre><code>FATAL: no pg_hba.conf entry for host "x.x.x.x" [...]
</code></pre>
<p>... I just don't see this IP address anywhere in the cluster properties - and anyway, it would seem a very fragile configuration to just whitelist this one IP address without knowing how it's assigned.</p>
<p>(In the Azure Portal, I do see one "Public IP" associated with the cluster master, but that's not the same as the IP seen by postgres, and, I assume, mainly for ingress.)</p>
<p>So ideally, does ACS let me control the outbound IP addresses for the cluster? And if not, can I figure out programmatically what IP or range of IPs to allow?</p>
| <p>It should be the external IP for the node that the pod is scheduled on, e.g. on container engine:</p>
<pre><code>$ kubectl get no -o wide
NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION
gke-cluster-1-node-1 Ready 58d v1.5.4 <example node IP> Container-Optimized OS from Google 4.4.21+
$ ssh gke-cluster-1-node-1
$ curl icanhazip.com
<example node IP>
$ kubectl get po -o wide | grep node-1
example-pod-1 1/1 Running 0 11d <pod IP> gke-cluster-1-node-1
$ kubectl exec -it example-pod-1 curl icanhazip.com
<example node IP>
</code></pre>
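<p>If you just want to collect the external IPs of all nodes (for example to whitelist the whole set of possible source addresses), a jsonpath query like this should work:</p>
<pre><code>kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
</code></pre>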
|
<p>I have installed fresh Kubernetes 1.6.2 master on a single host and now trying to start Flannel using <a href="https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml</a></p>
<p>The pod does not come up: </p>
<pre><code>$ kubectl get pods kube-flannel-ds-l6gn4 --namespace kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-l6gn4 1/2 CrashLoopBackOff 36 2h
$ kubectl logs kube-flannel-ds-l6gn4 --namespace kube-system kube-flannel
E0427 15:35:52.232093 1 main.go:127] Failed to create
SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-l6gn4': the server does not allow access to the requested resource (get pods kube-flannel-ds-l6gn4)
</code></pre>
<p>I've also tried this using the default serviceaccount, but it won't come up.</p>
| <p>Note that to install Kubernetes with flannel you need to specify the <code>--pod-network-cidr</code> flag. See <a href="https://kubernetes.io/docs/admin/kubeadm#kubeadm-init" rel="noreferrer">kubeadm init section</a></p>
<p>Example</p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>Then, as mentioned by Jordan, on some environments you need to install <a href="https://en.wikipedia.org/wiki/Role-based_access_control" rel="noreferrer">RBAC</a>:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
</code></pre>
<p>If you are still having issues check that </p>
<ol>
<li><p>Make sure your cni plugin binaries are in place in /opt/cni/bin. You should see corresponding binaries for each CNI add-on</p></li>
<li><p>Make sure the CNI configuration file for the network add-on is in place under <code>/etc/cni/net.d</code>:</p>
<pre><code>[root@node1]# ls /etc/cni/net.d
10-flannel.conf
</code></pre></li>
<li><p>Run ifconfig to check docker, flannel bridge and virtual interfaces are up</p></li>
</ol>
<p>as mentioned here on GitHub:
<a href="https://github.com/kubernetes/kubernetes/issues/36575#issuecomment-264622923" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/36575#issuecomment-264622923</a></p>
<p>I have written a <a href="https://ronanquillevere.github.io/2017/05/16/kubernetes-ovh.html#.WSwwdxJ96V4" rel="noreferrer">complete blog post on the topic</a>, in case it helps.</p>
|
<p>I'm trying to deploy an application in a GKE 1.6.2 cluster running ContainerOS but the instructions on the website / k8s are not accurate anymore.</p>
<p>The error that I'm getting is:</p>
<pre><code>Error from server (Forbidden): User "[email protected]"
cannot get deployments.extensions in the namespace "gopher-slack-bot".:
"No policy matched.\nRequired \"container.deployments.get\" permission."
(get deployments.extensions gopher-slack-bot)
</code></pre>
<p>The repository for the application is <a href="https://github.com/gopheracademy/gopher" rel="nofollow noreferrer">available here</a>.</p>
<p>Thank you.</p>
| <p>I ran into a few breaking changes in the past when using the gcloud tool to authenticate kubectl to a cluster, so I ended up figuring out how to authenticate kubectl to a specific namespace independently of GKE. Here's what works for me:</p>
<p>On CircleCI:</p>
<pre><code>setup_kubectl() {
echo "$KUBE_CA_PEM" | base64 --decode > kube_ca.pem
kubectl config set-cluster default-cluster --server=$KUBE_URL --certificate-authority="$(pwd)/kube_ca.pem"
kubectl config set-credentials default-admin --token=$KUBE_TOKEN
kubectl config set-context default-system --cluster=default-cluster --user=default-admin --namespace default
kubectl config use-context default-system
}
</code></pre>
<p>And here's how I get each of those env vars from kubectl.</p>
<pre><code>kubectl get serviceaccounts $namespace -o json
</code></pre>
<p>The service account will contain the name of its secret. In my case, with the default namespace, it's </p>
<pre><code>"secrets": [
{
"name": "default-token-655ls"
}
]
</code></pre>
<p>Using the name, I get the contents of the secret</p>
<pre><code>kubectl get secrets $secret_name -o json
</code></pre>
<p>The secret will contain <code>ca.crt</code> and <code>token</code> fields, which match the <code>$KUBE_CA_PEM</code> and <code>$KUBE_TOKEN</code> in the shell script above. </p>
<p>Finally, use <code>kubectl cluster-info</code> to get the <code>$KUBE_URL</code> value. </p>
<p>Once you run <code>setup_kubectl</code> on CI, your <code>kubectl</code> utility will be authenticated to the namespace you're deploying to. </p>
|
<p>I have a project running Java in a docker image on Kubernetes. Logs are automatically ingested by the fluentd agent and end up in Stackdriver.</p>
<p>However, the format of the logs is wrong: multiline logs get split into separate log lines in Stackdriver, and all logs show the "INFO" log level, even though they are really warnings or errors.</p>
<p>I have been searching for information on how to configure logback to output the correct format for this to work properly, but I can find no such guide in the google Stackdriver or GKE documentation.</p>
<p>My guess is that I should be outputting JSON of some form, but where do I find information on the format, or even a guide on how to properly set up this pipeline.</p>
<p>Thanks!</p>
| <p>This answer contained most of the information I needed: <a href="https://stackoverflow.com/a/39779646">https://stackoverflow.com/a/39779646</a></p>
<p>I have adapted the answer to fit my exact question, and to fix some weird imports and code that seems to have been deprecated.</p>
<p>logback.xml:</p>
<pre><code><configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="my.package.logging.GCPCloudLoggingJSONLayout">
<pattern>%-4relative [%thread] %-5level %logger{35} - %msg</pattern>
</layout>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="STDOUT"/>
</root>
</configuration>
</code></pre>
<p>GCPCloudLoggingJSONLayout:</p>
<pre><code>import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.PatternLayout;
import ch.qos.logback.classic.spi.ILoggingEvent;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;
import static ch.qos.logback.classic.Level.DEBUG_INT;
import static ch.qos.logback.classic.Level.ERROR_INT;
import static ch.qos.logback.classic.Level.INFO_INT;
import static ch.qos.logback.classic.Level.TRACE_INT;
import static ch.qos.logback.classic.Level.WARN_INT;
/**
* GKE fluentd ingestion detective work:
* https://cloud.google.com/error-reporting/docs/formatting-error-messages#json_representation
* http://google-cloud-python.readthedocs.io/en/latest/logging-handlers-container-engine.html
* http://google-cloud-python.readthedocs.io/en/latest/_modules/google/cloud/logging/handlers/container_engine.html#ContainerEngineHandler.format
* https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/logging/google/cloud/logging/handlers/_helpers.py
* https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
*/
public class GCPCloudLoggingJSONLayout extends PatternLayout {
private static final ObjectMapper objectMapper = new ObjectMapper();
@Override
public String doLayout(ILoggingEvent event) {
String formattedMessage = super.doLayout(event);
return doLayoutInternal(formattedMessage, event);
}
/**
* For testing without having to deal wth the complexity of super.doLayout()
* Uses formattedMessage instead of event.getMessage()
*/
private String doLayoutInternal(String formattedMessage, ILoggingEvent event) {
GCPCloudLoggingEvent gcpLogEvent =
new GCPCloudLoggingEvent(formattedMessage, convertTimestampToGCPLogTimestamp(event.getTimeStamp()),
mapLevelToGCPLevel(event.getLevel()), event.getThreadName());
try {
// Add a newline so that each JSON log entry is on its own line.
// Note that it is also important that the JSON log entry does not span multiple lines.
return objectMapper.writeValueAsString(gcpLogEvent) + "\n";
} catch (JsonProcessingException e) {
return "";
}
}
private static GCPCloudLoggingEvent.GCPCloudLoggingTimestamp convertTimestampToGCPLogTimestamp(
long millisSinceEpoch) {
int nanos =
((int) (millisSinceEpoch % 1000)) * 1_000_000; // strip out just the milliseconds and convert to nanoseconds
long seconds = millisSinceEpoch / 1000L; // remove the milliseconds
return new GCPCloudLoggingEvent.GCPCloudLoggingTimestamp(seconds, nanos);
}
private static String mapLevelToGCPLevel(Level level) {
switch (level.toInt()) {
case TRACE_INT:
return "TRACE";
case DEBUG_INT:
return "DEBUG";
case INFO_INT:
return "INFO";
case WARN_INT:
return "WARN";
case ERROR_INT:
return "ERROR";
default:
return null; /* This should map to no level in GCP Cloud Logging */
}
}
/* Must be public for Jackson JSON conversion */
public static class GCPCloudLoggingEvent {
private String message;
private GCPCloudLoggingTimestamp timestamp;
private String thread;
private String severity;
public GCPCloudLoggingEvent(String message, GCPCloudLoggingTimestamp timestamp, String severity,
String thread) {
super();
this.message = message;
this.timestamp = timestamp;
this.thread = thread;
this.severity = severity;
}
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
public GCPCloudLoggingTimestamp getTimestamp() {
return timestamp;
}
public void setTimestamp(GCPCloudLoggingTimestamp timestamp) {
this.timestamp = timestamp;
}
public String getThread() {
return thread;
}
public void setThread(String thread) {
this.thread = thread;
}
public String getSeverity() {
return severity;
}
public void setSeverity(String severity) {
this.severity = severity;
}
/* Must be public for JSON marshalling logic */
public static class GCPCloudLoggingTimestamp {
private long seconds;
private int nanos;
public GCPCloudLoggingTimestamp(long seconds, int nanos) {
super();
this.seconds = seconds;
this.nanos = nanos;
}
public long getSeconds() {
return seconds;
}
public void setSeconds(long seconds) {
this.seconds = seconds;
}
public int getNanos() {
return nanos;
}
public void setNanos(int nanos) {
this.nanos = nanos;
}
}
}
@Override
public Map<String, String> getDefaultConverterMap() {
return PatternLayout.defaultConverterMap;
}
}
</code></pre>
<p>As I said earlier, the code was originally from another answer; I have just cleaned it up slightly to fit my use case better.</p>
|
<p>I am following the Kubernetes tutorials and am using Minikube as my Kubernetes environment on my MacBook. All of the steps in the tutorial work well, with the exception of getting Ingress working (tutorial for Ingress that I am following is at: <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="noreferrer">https://cloud.google.com/container-engine/docs/tutorials/http-balancer</a>). I am getting a "301 Moved Permanently" error when accessing via Ingress.</p>
<p>My environment:</p>
<ul>
<li>MacBook (macOS Sierra, version: 10.12.5 (16F73))</li>
<li>xhyve driver recommended for Minikube</li>
<li>minikube version: v0.19.0</li>
</ul>
<p>I am using the default Ingress controller (nginx for minikube) and have
successfully enabled ingress:</p>
<pre><code>minikube addons enable ingress
</code></pre>
<p>I then followed the steps in the tutorial:</p>
<p>Step 1: Deploy an nginx server (SUCCESSFUL)</p>
<pre><code>kubectl run nginx --image=nginx --port=80
</code></pre>
<p>Step 2a: Expose your nginx deployment as a service internally (SUCCESSFUL)</p>
<pre><code>kubectl expose deployment nginx --target-port=80 --type=NodePort
</code></pre>
<p>Step 2b: Verify that the service is available:</p>
<pre><code>kubectl get service nginx
</code></pre>
<p>Output:</p>
<pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx 10.0.0.170 <nodes> 80:31635/TCP 7s
</code></pre>
<p>From the above, I know that the service is created properly...</p>
<p>Step 3: Create an Ingress resource (SUCCESSFUL)</p>
<p>Ingress config YAML (basic-ingress.yaml):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: basic-ingress
spec:
backend:
serviceName: nginx
servicePort: 80
</code></pre>
<p>And now applying the YAML:</p>
<pre><code>kubectl apply -f basic-ingress.yaml
</code></pre>
<p>Step 4a: Verify the Ingress</p>
<pre><code>kubectl get ingress basic-ingress
</code></pre>
<p>Output:</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
basic-ingress * 192.168.64.37 80 16s
</code></pre>
<p>Step 4b: Visit the Application (<strong>* UNSUCCESSFUL *</strong>)</p>
<p>The tutorial states that I should "point my browser to the external IP address of your application and see the web page titled “Welcome to nginx!”.</p>
<p>When I point my browser to the site (<a href="http://192.168.64.37" rel="noreferrer">http://192.168.64.37</a>) it tries to redirect to https, which gives a "secure connection failed" error (from Firefox, with a similar error from Chrome).</p>
<p>However, when I curl the site I get the "301" error:</p>
<pre><code>curl 192.168.64.37
</code></pre>
<p>Output:</p>
<pre><code><html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.11.12</center>
</body>
</html>
</code></pre>
<p>I have been trying to debug this with no luck so far; however, I have provided further information below that may be useful in diagnosing the issue:</p>
<p>Full Ingress Description:</p>
<pre><code>kubectl describe ingress
</code></pre>
<p>Output:</p>
<pre><code>Name: basic-ingress
Namespace: default
Address: 192.168.64.38
Default backend: nginx:80 (172.17.0.3:80)
Rules:
Host Path Backends
---- ---- --------
* * nginx:80 (172.17.0.3:80)
Annotations:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 3m 1 {ingress-controller } Normal CREATE Ingress default/basic-ingress
3m 3m 1 {ingress-controller } Normal UPDATE Ingress default/basic-ingress
</code></pre>
<p>The following are the nginx Ingress pod logs:</p>
<pre><code>2017-05-26T16:08:27.142309346Z I0526 16:08:27.142156 1 launch.go:101] &{NGINX 0.9.0-beta.4 git-72bb2222 [email protected]:ixdy/kubernetes-ingress.git}
2017-05-26T16:08:27.142345769Z I0526 16:08:27.142218 1 launch.go:104] Watching for ingress class: nginx
2017-05-26T16:08:27.142350322Z I0526 16:08:27.141160 1 nginx.go:180] starting NGINX process...
2017-05-26T16:08:27.142834005Z I0526 16:08:27.142764 1 launch.go:257] Creating API server client for https://10.0.0.1:443
2017-05-26T16:08:27.166946862Z I0526 16:08:27.166808 1 launch.go:120] validated kube-system/default-http-backend as the default backend
2017-05-26T16:08:27.174640373Z I0526 16:08:27.174527 1 controller.go:1184] starting Ingress controller
2017-05-26T16:08:27.175954273Z I0526 16:08:27.175092 1 leaderelection.go:203] attempting to acquire leader lease...
2017-05-26T16:08:27.183187824Z I0526 16:08:27.183085 1 leaderelection.go:213] successfully acquired lease kube-system/ingress-controller-leader-nginx
2017-05-26T16:08:28.175881543Z W0526 16:08:28.175472 1 backend_ssl.go:42] deferring sync till endpoints controller has synced
2017-05-26T16:08:28.179906454Z W0526 16:08:28.179769 1 queue.go:94] requeuing kube-system/default-http-backend, err deferring sync till endpoints controller has synced
2017-05-26T16:08:31.207329775Z I0526 16:08:31.206860 1 event.go:217] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"basic-ingress", UID:"8fd367b9-422d-11e7-9dd4-d68827e778d4", APIVersion:"extensions", ResourceVersion:"278", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/basic-ingress
2017-05-26T16:08:37.353651374Z I0526 16:08:37.353525 1 metrics.go:34] changing prometheus collector from to default
2017-05-26T16:08:37.416440774Z I0526 16:08:37.416333 1 controller.go:421] ingress backend successfully reloaded...
2017-05-26T16:08:57.183350506Z I0526 16:08:57.183046 1 status.go:302] updating Ingress default/basic-ingress status to [{192.168.64.38 }]
2017-05-26T16:08:57.186454653Z I0526 16:08:57.186366 1 event.go:217] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"basic-ingress", UID:"8fd367b9-422d-11e7-9dd4-d68827e778d4", APIVersion:"extensions", ResourceVersion:"327", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/basic-ingress
2017-05-26T16:08:57.471160018Z W0526 16:08:57.471017 1 queue.go:94] requeuing kube-system/ingress-controller-leader-nginx, err
2017-05-26T16:08:57.471182113Z -------------------------------------------------------------------------------
2017-05-26T16:08:57.471185648Z Error: exit status 1
2017-05-26T16:08:57.471188375Z nginx: the configuration file /tmp/nginx-cfg585054790 syntax is ok
2017-05-26T16:08:57.47119123Z 2017/05/26 16:08:57 [emerg] 164#164: no "events" section in configuration
2017-05-26T16:08:57.471194521Z nginx: [emerg] no "events" section in configuration
2017-05-26T16:08:57.471197512Z nginx: configuration file /tmp/nginx-cfg585054790 test failed
2017-05-26T16:08:57.471200655Z
2017-05-26T16:08:57.471203144Z -------------------------------------------------------------------------------
2017-05-26T16:09:37.260238379Z E0526 16:09:37.260068 1 controller.go:417] unexpected failure restarting the backend:
2017-05-26T16:09:37.260266173Z 2017/05/26 16:09:37 [emerg] 182#182: no "events" section in configuration
2017-05-26T16:09:37.260271749Z nginx: [emerg] no "events" section in configuration
2017-05-26T16:09:37.260276045Z W0526 16:09:37.260095 1 queue.go:94] requeuing kube-system/kube-dns, err exit status 1
2017-05-26T16:09:47.20646199Z I0526 16:09:47.206349 1 controller.go:421] ingress backend successfully reloaded...
2017-05-26T16:10:22.518854138Z 192.168.64.1 - [192.168.64.1] - - [26/May/2017:16:10:22 +0000] "GET / HTTP/1.1" 404 21 "-" "curl/7.51.0" 77 0.001 [upstream-default-backend] 172.17.0.4:8080 21 0.001 404
2017-05-26T16:10:30.797507654Z 192.168.64.1 - [192.168.64.1] - - [26/May/2017:16:10:30 +0000] "GET / HTTP/1.1" 404 21 "-" "curl/7.51.0" 77 0.000 [upstream-default-backend] 172.17.0.4:8080 21 0.000 404
</code></pre>
<p>One last note... When I added a "host:" tag in the basic-ingress.yaml file, and mapped the hostname used in the basic-ingress.yaml files to an IP address (192.168.64.37) in /etc/hosts, then I was able to make it work. Not sure why Minikube would require a "host" tag where the near-identical tutorial runs on GCE without the host tag.</p>
| <p>After some experimentation, it appears that the nginx ingress addon tries to redirect HTTP requests to HTTPS (port 443). If TLS is not configured (as in my case), this shows up as the "301 Moved Permanently" response. To address this, I set the <code>ingress.kubernetes.io/ssl-redirect</code> annotation to <code>"false"</code>, which fixed the issue:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: basic-ingress
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
backend:
serviceName: nginx
servicePort: 80
</code></pre>
|
<p>I'm trying to get the name of the container image from within the container.</p>
<p>I tried to look around the Downward APIs, but it seems that it's not possible:<br />
<a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</a></p>
<p>For now my workaround is:</p>
<pre><code> env:
- name: IMAGE
value: "myimage:latest"
</code></pre>
<p>Is there maybe a better way?</p>
| <p>Your solution is probably OK, assuming you build your containers with CI/CD. Please see <a href="https://stackoverflow.com/questions/30690186/how-do-i-access-the-kubernetes-api-from-within-a-pod-container">this answer</a> about how to access the kubernetes API from within a cluster as an alternative. Here's the link to <a href="https://kubernetes.io/docs/api-reference/v1.6/#pod-v1-core" rel="nofollow noreferrer">POD API docs</a> which should contain the image information.</p>
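<p>For completeness, a minimal sketch of querying the API from inside the pod for its own spec (this assumes the default service account token is mounted and that you inject the pod name via the Downward API as <code>POD_NAME</code>); the image name is in <code>.spec.containers[0].image</code> of the response:</p>
<pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# ask the API server for this pod's spec
curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/$NS/pods/$POD_NAME
</code></pre>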
|
<p>We are using kubernetes plugin for Jenkins to construct a special CI system. We want to achieve that:</p>
<ol>
<li><p>For a given build job(named <em>job_A</em>), it will be built more than one time;</p></li>
<li><p>We want this job <em>job_A</em> to be bound to a specific jenkins-slave (named <em>pod_A</em>), and <em>pod_A</em> should only provide service for <em>job_A</em>.</p></li>
<li><p>After the <em>job_A</em> finished the build, <em>pod_A</em> should keep alive for the period of time <em>time_T</em>;</p></li>
<li><p>During the <em>time_T</em>, the pod_A should not build other jobs even through it is idle.</p></li>
<li><p>After another time interval which is less than <em>time_T</em>(the <em>pod_A</em> is still alive), we want to re-build the <em>job_A</em>, it will be send to <em>pod_A</em> and be build there.</p></li>
</ol>
<p>Is there any way to achieve this?</p>
| <p>In the PodTemplate definition there is an idle-time option, <em>Time in minutes to retain slave when idle</em> (your <code>time_T</code>), that you can set to a large value.</p>
<p>Then set the label of the pod template to a unique value and use the same label in your job. That way the pod will only take builds of that job.</p>
|
<p>I <a href="https://stackoverflow.com/questions/44233242/kubernetes-cluster-and-phoenix-automate-mix-ecto-migrate/44233465#44233465">learned recently</a> that Kubernetes has a feature called <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">Init Containers</a>. Awesome, because I can use this feature to wait for my postgres service and create/migrate the database before my web application service runs.</p>
<p>However, it appears that Init Containers can only be configured in a Pod yaml file. Is there a way I can do this via a Deployment yaml file? Or do I have to choose?</p>
| <p>To avoid confusion, I'll answer your specific question. I agree with oswin that you may want to consider another method.</p>
<p>Yes, you can use init containers with a Deployment. This is an example using the old annotation style (pre 1.6), but it should still work:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: 'nginx'
spec:
replicas: 1
selector:
matchLabels:
app: 'nginx'
template:
metadata:
labels:
app: 'nginx'
annotations:
pod.beta.kubernetes.io/init-containers: '[
{
"name": "install",
"image": "busybox",
"imagePullPolicy": "IfNotPresent",
"command": ["wget", "-O", "/application/index.html", "http://kubernetes.io/index.html"],
"volumeMounts": [
{
"name": "application",
"mountPath": "/application"
}
]
}
]'
spec:
volumes:
- name: 'application'
emptyDir: {}
containers:
- name: webserver
image: 'nginx'
ports:
- name: http
containerPort: 80
volumeMounts:
- name: 'application'
mountPath: '/application'
</code></pre>
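<p>On Kubernetes 1.6 and later you can also use the typed <code>initContainers</code> field inside the pod spec instead of the annotation; a minimal sketch of the same deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      initContainers:
      - name: install
        image: busybox
        command: ['wget', '-O', '/application/index.html', 'http://kubernetes.io/index.html']
        volumeMounts:
        - name: application
          mountPath: /application
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: application
          mountPath: /application
      volumes:
      - name: application
        emptyDir: {}
</code></pre>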
|
<p>I am running Grafana as a pod inside my Kubernetes cluster. Once Grafana is initialized, it creates a DB on localhost and saves all data there. This means that whenever the pod is destroyed and recreated, the whole DB is reinitialized and I lose all previous data.</p>
<p>The Grafana DB config inside the pod is:</p>
<pre><code>#################################### Database ####################################
[database]
# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
;password =
</code></pre>
<p>In order to get rid of this problem, I have to create an external DB and point my Grafana at that DB instance every time I create the Grafana pod. My current default implementation to create the Grafana pod is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: lb-grafana-service
spec:
ports:
- port: 4545
targetPort: 4545
protocol: TCP
clusterIP: 10.100.10.100
----
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: 'true'
labels:
app: grafana
name: grafana
name: grafana
spec:
ports:
- name: scrape
port: 4545
nodePort: 30999
protocol: TCP
type: NodePort
selector:
app: grafana
----
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: grafana
spec:
replicas: 1
selector:
matchLabels:
app: grafana
template:
metadata:
name: grafana
labels:
app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:develop
env:
- name: Prometheus_SERVICE_URL
value: http://172.29.219.105:30901
- name: GF_SECURITY_ADMIN_PASSWORD
value: "grafana"
- name: GF_SERVER_HTTP_PORT
value: "4545"
ports:
- containerPort: 9101
volumeMounts:
- mountPath: /var
name: grafana-storage
volumes:
- name: grafana-storage
emptyDir: {}
</code></pre>
<p>So what I want to do is overwrite the <code>/etc/grafana/grafana.ini</code> file before the Grafana pod comes online, or just rewrite the current file with new values. I have no idea how to do that right now. A little guidance would be much appreciated.</p>
| <p>In general, you could use <code>ConfigMaps</code> like the comment said.</p>
<p>The Grafana image itself allows all configuration parameters to be set via environment variables. This is only mentioned in the GitHub <a href="https://github.com/grafana/grafana-docker/blob/5a6ae51eb9f15c2448234003eb5857069e269511/README.md" rel="nofollow noreferrer">readme</a>.</p>
<p>This way you could set the environment variables with Kubernetes, like:</p>
<pre><code> spec:
template:
spec:
containers:
- name: grafana
image: grafana/grafana:4.1.1
env:
- name: "GF_SERVER_ROOT_URL"
value: "http://grafana.{{.clusterDomain}}"
- name: "GF_DATABASE_TYPE"
value: "{{.gfDatabaseType}}"
- name: "GF_DATABASE_HOST"
value: "{{.gfDatabaseHost}}"
- name: "GF_DATABASE_NAME"
value: "{{.gfDatabaseName}}"
- name: "GF_DATABASE_USER"
value: "{{.gfDatabaseUser}}"
- name: "GF_DATABASE_PASSWORD"
value: "{{.gfDatabasePassword}}"
- name: "GF_DATABASE_SSL_MODE"
value: "disable"
- name: "GF_AUTH_ANONYMOUS_ENABLED"
value: "true"
</code></pre>
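<p>If you would rather overwrite <code>/etc/grafana/grafana.ini</code> directly, a minimal sketch of the ConfigMap approach mentioned above could look like this (the ConfigMap name and database values are placeholders for your own):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [database]
    type = mysql
    host = my-external-db:3306
    name = grafana
    user = grafana
    password = changeme
</code></pre>
<p>and then mount it over the default file in the Grafana container:</p>
<pre><code>      containers:
      - name: grafana
        image: grafana/grafana:develop
        volumeMounts:
        - name: grafana-config
          mountPath: /etc/grafana/grafana.ini
          subPath: grafana.ini
      volumes:
      - name: grafana-config
        configMap:
          name: grafana-ini
</code></pre>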
|
<p>I'm pushing my Phoenix app to a Kubernetes cluster for review. I use GitLab to create a service for the web server and another service for a temporary postgres pod.</p>
<p>What I would like to do is automate <code>mix ecto.create</code> and <code>mix ecto.migrate</code>. However, there is a timing issue - there's a short period of time when the postgres server is not ready yet.</p>
<p>I could poll the postgres service in my deployment script before creating the web application service. But is this the most practical method?</p>
| <p>Kubernetes has something called <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init containers</a> which may help you.</p>
<p>From the documentation, here is an example of an app container which waits for a dB container.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: init-myservice
image: busybox
command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
- name: init-mydb
image: busybox
command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
</code></pre>
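<p>If you want to wait for Postgres itself to accept connections rather than just for its DNS record, one option is to base the init container on the postgres image and use <code>pg_isready</code> (a sketch, assuming your database Service is called <code>mydb</code>):</p>
<pre><code>  initContainers:
  - name: wait-for-postgres
    image: postgres:9.6
    command: ['sh', '-c', 'until pg_isready -h mydb -p 5432; do echo waiting for postgres; sleep 2; done']
</code></pre>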
|
<p>I have read the Load Balancing page at <a href="https://github.com/grpc/grpc/blob/master/doc/load-balancing.md" rel="noreferrer">https://github.com/grpc/grpc/blob/master/doc/load-balancing.md</a> to start off, but am still confused as to the right approach to load balancing between backend gRPC instances. We are deploying multiple gRPC 'microservice' instances and want our other gRPC clients to be routed between them. We are deploying these as pods in Kubernetes (actually Google Container Engine).</p>
<p>Can anyone explain the 'recommended' approach to load balancing gRPC client requests between the gRPC servers? It seems that clients need to be aware of the endpoints - is it not possible to take advantage of the inbuilt LoadBalancer in Container Engine to help?</p>
| <p>I can't speak to Kubernetes, but regarding gRPC load balancing, there are basically two approaches:</p>
<ol>
<li>For simple usecases, you can enable round robin over the list of addresses returned for a given name (ie, the list of IPs returned for service.foo.com). The way to do this is language-dependent. For C++, you'd use <a href="http://www.grpc.io/grpc/cpp/classgrpc_1_1_channel_arguments.html#a860058f6b9fa340bb2075ae113e423a6" rel="noreferrer"><code>grpc::ChannelArguments::SetLoadBalancingPolicyName</code></a> with "round_robin" as the argument (in the future it'd also be possible to select via "<a href="https://github.com/grpc/grpc/blob/master/doc/service_config.md" rel="noreferrer">service configuration</a>", but the design for how to encode that config in DNS records hasn't been finalized yet).</li>
<li>Use the grpclb protocol. This is suitable for more complex deployments. This feature requires the <a href="https://c-ares.haxx.se/" rel="noreferrer">c-ares DNS resolver</a>, which <a href="https://github.com/grpc/grpc/pull/11237" rel="noreferrer">#11237</a> introduces (this PR is very close to being merged). This is the piece that's missing for making grpclb work in open source. In particular:
<ul>
<li>Have a look at <a href="https://github.com/grpc/proposal/blob/master/A5-grpclb-in-dns.md" rel="noreferrer">this document</a>. It goes over the DNS configuration changes needed to control which addresses are marked as balancers. It's currently a "proposal", to be promoted to a doc shortly. It can be taken quite authoritatively, it's what <a href="https://github.com/grpc/grpc/pull/11237" rel="noreferrer">#11237</a> is implementing for balancer discovery.</li>
<li>Write a regular gRPC server (in any language) implementing <a href="https://github.com/grpc/grpc/blob/master/src/proto/grpc/lb/v1/load_balancer.proto" rel="noreferrer">the load balancer protocol</a>. This is the server to be marked in your DNS records as a balancer (as described in <a href="https://github.com/grpc/proposal/blob/master/A5-grpclb-in-dns.md" rel="noreferrer">the aforementioned document</a>), with which the client's grpclb will talk to to obtain the list of backend addresses (what's called <a href="https://github.com/grpc/grpc/blob/master/src/proto/grpc/lb/v1/load_balancer.proto#L120" rel="noreferrer">server_lists</a>). It's up to you to make the logic inside this balancer as simple or as complex as you want.</li>
<li>The client would use the DNS name of the <em>balancer</em> when creating a channel. Note also that your balancer DNS name may point to several addresses. If one or more of them are marked as balancers, grpclb will be used. Which balancer will be picked up if there's more than one? The first one the client connects to.</li>
</ul></li>
</ol>
<p>Let me know if you have any questions.</p>
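<p>For option 1 on Kubernetes specifically, note that a normal ClusterIP Service gives clients a single virtual IP, so DNS-based round robin over backends generally needs a headless Service (one with <code>clusterIP: None</code>), which returns the individual pod IPs for the name. A minimal sketch (the service name, labels and port are assumptions about your setup):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
spec:
  clusterIP: None        # headless: DNS returns the pod IPs instead of a single VIP
  selector:
    app: my-grpc-server
  ports:
  - port: 50051
    targetPort: 50051
</code></pre>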
|
<p>I have started pods with command</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl run busybox \
--image=busybox \
--restart=Never \
--tty \
-i \
--generator=run-pod/v1
</code></pre>
<p>Something went wrong, and now I can't delete this <code>Pod</code>.</p>
<p>I tried using the methods described below but the <code>Pod</code> keeps being recreated.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl delete pods busybox-na3tm
pod "busybox-na3tm" deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox-vlzh3 0/1 ContainerCreating 0 14s
$ kubectl delete pod busybox-vlzh3 --grace-period=0
$ kubectl delete pods --all
pod "busybox-131cq" deleted
pod "busybox-136x9" deleted
pod "busybox-13f8a" deleted
pod "busybox-13svg" deleted
pod "busybox-1465m" deleted
pod "busybox-14uz1" deleted
pod "busybox-15raj" deleted
pod "busybox-160to" deleted
pod "busybox-16191" deleted
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox-c9rnx 0/1 RunContainerError 0 23s
</code></pre>
| <p>You need to delete the deployment, which should in turn delete the pods and the replica sets <a href="https://github.com/kubernetes/kubernetes/issues/24137" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/24137</a></p>
<p>To list all deployments:</p>
<pre><code>kubectl get deployments --all-namespaces
</code></pre>
<p>Then to delete the deployment:</p>
<pre><code>kubectl delete -n NAMESPACE deployment DEPLOYMENT
</code></pre>
<p>Where NAMESPACE is the namespace it's in, and DEPLOYMENT is the name of the deployment. If NAMESPACE is <code>default</code>, leave off the <code>-n</code> option altogether.</p>
<p>In some cases it could also be running due to a job or daemonset.
Check the following and run their appropriate delete command.</p>
<pre><code>kubectl get jobs
kubectl get daemonsets.app --all-namespaces
kubectl get daemonsets.extensions --all-namespaces
</code></pre>
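<p>If none of those own the pod, it may also be managed by a bare ReplicaSet or ReplicationController; a quick way to check:</p>
<pre><code>kubectl get replicasets,replicationcontrollers --all-namespaces
</code></pre>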
|
<p>My pod has more than one log file, e.g. a <code>php-fpm</code> + <code>nginx</code> stack. How can I collect the log files?</p>
<p>I know nginx does it with a symlink, but this cannot deal with more than two log files.</p>
<p>I want to mount a host volume into the pods, but how do I set a different folder for every pod? Is there any way to mount a host folder named after the <code>podname</code> to the pod folder <code>/logs</code>?</p>
<p><a href="https://github.com/raychaser/powerstrip-logfiles" rel="nofollow noreferrer">raychaser</a> gave a way to collect folders to <code>/var/log/containers/</code>, but I don't think it works in Kubernetes.</p>
| <p>Finally, I found a way to collect the logs. </p>
<ol>
<li>Make sure all apps write all log files to a folder, e.g.
<code>/med/log</code>.</li>
<li>Make sure all apps start the job with a script, e.g.
<code>entrypoint.sh</code>.</li>
<li>Create a folder named after <code>$HOSTNAME</code> in the mounted volume, e.g.
<code>/log/$HOSTNAME</code>.</li>
<li>Link the folder: <code>ln -sf /log/$HOSTNAME /med/log</code>.</li>
</ol>
<p>In k8s, you need to mount the host log folder <code>/var/log/k8s/</code> to <code>/log</code>. And the log dir looks like:</p>
<pre><code>/var/log/k8s/
|-- app1-${container_id}/
| |-- access.log
| |-- error.log
|-- app2-${container_id}/
|-- access.log
|-- error.log
</code></pre>
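<p>A minimal sketch of that host mount in the pod spec (paths as above; the image name is a placeholder):</p>
<pre><code>    spec:
      containers:
      - name: app
        image: my-app:latest
        volumeMounts:
        - name: host-logs
          mountPath: /log
      volumes:
      - name: host-logs
        hostPath:
          path: /var/log/k8s
</code></pre>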
|
<p>I'm trying to create a redis cluster on K8s. I need a sidecar container to create the cluster after the required number of redis containers are online.</p>
<p>I've got 2 containers, <code>redis</code> and a sidecar. I'm running them in a <code>statefulset</code> with 6 replicas. I need the sidecar container to run just once for each replica then terminate. It's doing that, but K8s keeps rerunning the sidecar. </p>
<p>I've tried setting a <code>restartPolicy</code> at the container level, but it's invalid. It seems K8s only supports this at the pod level. I can't use this though because I want the <code>redis</code> container to be restarted, just not the sidecar.</p>
<p>Is there anything like a <code>post-init container</code>? My sidecar needs to run <strong>after</strong> the main <code>redis</code> container to make it join the cluster. So an <code>init container</code> is no use.</p>
<p>What's the best way to solve this with K8s 1.6?</p>
| <p>I advise you to use Kubernetes Jobs: </p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/</a></p>
<p>This kind of Job will keep running until it is completed once. In this job you could try detecting if all the required nodes are available in order to form the cluster. </p>
|
<p>I have created some Persistent Volumes myself. Is it possible to make the <code>volumeClaimTemplates</code> in my Stateful Set select those? It seems that it will always provision new persistent volumes, rather than use the existing ones.</p>
<p><em>Kubernetes: 1.5</em></p>
| <p>You need to use the <code>selector</code> option. If you label your PVs accordingly, the claims will bind to the volumes you previously created. </p>
<p>Example From: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims</a></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
storageClassName: slow
selector:
matchLabels:
release: "stable"
matchExpressions:
- {key: environment, operator: In, values: [dev]}
</code></pre>
<p>Even though the example shows a PersistentVolumeClaim, the same spec fields apply to your volumeClaimTemplates.</p>
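<p>Translated to a StatefulSet, a minimal sketch could look like this (the label values are whatever you put on your pre-created PVs):</p>
<pre><code>  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 8Gi
      selector:
        matchLabels:
          release: stable
</code></pre>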
|
<p>K8 Version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I tried to launch the Spinnaker pods (<a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple/svcs" rel="noreferrer">yaml files here</a>). I chose <code>Flannel</code> (<code>kubectl apply -f kube-flannel.yml</code>) while installing K8s. The pods are not starting; they are stuck in "ContainerCreating" status. When I <code>kubectl describe</code> a pod, it shows <code>NetworkPlugin cni failed to set up pod</code>:</p>
<pre><code>veeru@ubuntu:/opt/spinnaker/experimental/kubernetes/simple$ kubectl describe pod data-redis-master-v000-38j80 --namespace=spinnaker
Name: data-redis-master-v000-38j80
Namespace: spinnaker
Node: ubuntu/192.168.6.136
Start Time: Thu, 01 Jun 2017 02:54:14 -0700
Labels: load-balancer-data-redis-server=true
replication-controller=data-redis-master-v000
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"spinnaker","name":"data-redis-master-v000","uid":"43d4a44c-46b0-11e7-b0e1-000c29b...
Status: Pending
IP:
Controllers: ReplicaSet/data-redis-master-v000
Containers:
redis-master:
Container ID:
Image: gcr.io/kubernetes-spinnaker/redis-cluster:v2
Image ID:
Port: 6379/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
Requests:
cpu: 100m
Environment:
MASTER: true
Mounts:
/redis-master-data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-71p4q (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-71p4q:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-71p4q
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
45m 45m 1 default-scheduler Normal Scheduled Successfully assigned data-redis-master-v000-38j80 to ubuntu
43m 43m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 8265d80732e7b73ebf8f1493d40403021064b61436c4c559b41330e7592fd47f"
43m 43m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: b972862d763e621e026728073deb9a304748c4ec4522982db0a168663ab59d36
42m 42m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 72b39083a3a81c0da1d4b7fa65b5d6450b62a3562a05452c27b185bc33197327"
41m 41m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d315511bfa9f6f09d7ef4cd277bde44e4885291ea566e3089460356c1ed34413"
40m 40m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: a03d776d2d7c5c4ae9c1ec31681b0b6e40759326a452916cff0e60c4d4e2c954"
40m 40m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: acf30a4aacda0c53bdbb8bc2d416704720bd1b623c43874052b4029f15950052"
39m 39m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: ea49f5f9428d585be7138f4ebce54f713eef549b16104a3d7aa728175b6ebc2a"
38m 38m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: ec2483435b4b22576c9bd7bffac5d67d53893c189c0cf26aca1ae6af79d09914"
38m 1m 39 kubelet, ubuntu Warning FailedSync (events with common reason combined)
45m 1s 448 kubelet, ubuntu Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
45m 0s 412 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "data-redis-master-v000-38j80_spinnaker(447d302c-46b0-11e7-b0e1-000c29b1270f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"data-redis-master-v000-38j80_spinnaker(447d302c-46b0-11e7-b0e1-000c29b1270f)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"data-redis-master-v000-38j80_spinnaker\" network: open /run/flannel/subnet.env: no such file or directory"
</code></pre>
<p>How can I resolve above issue? </p>
<p><strong>UPDATE-1</strong></p>
<p>I have reinitialized K8s with <code>kubeadm init --pod-network-cidr=10.244.0.0/16</code> and deployed a sample <a href="http://containertutorials.com/get_started_kubernetes/k8s_example.html" rel="noreferrer">nginx pod</a>. I am still getting the same error:</p>
<pre><code>-----------------OUTPUT REMOVED-------------------------------
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 3m 1 default-scheduler Normal Scheduled Successfully assigned nginx-622qj to ubuntu
1m 1m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "0728fece-46fe-11e7-ae5d-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 38250afd765f0108aeff6e31bbe5a642a60db99b97cbbf15711f810cbe8f3829"
24s 24s 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "0728fece-46fe-11e7-ae5d-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 3bebcef02cb5f6645a65dcf06b2730144080f9d3c4fb18267feca5c5ce21031c"
2m 9s 33 kubelet, ubuntu Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
3m 7s 32 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "nginx-622qj_default(0728fece-46fe-11e7-ae5d-000c29b1270f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"nginx-622qj_default(0728fece-46fe-11e7-ae5d-000c29b1270f)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"nginx-622qj_default\" network: open /run/flannel/subnet.env: no such file or directory"
</code></pre>
| <p>Your error message shows that the flannel <code>subnet.env</code> file is missing, so you need to fix the flannel configuration first. What version of Kubernetes are you using?</p>
<blockquote>
<p>network: open /run/flannel/subnet.env: no such file or directory"</p>
</blockquote>
<p>If you're using Kubernetes 1.6 or above, you can use the YAML manifests below to configure the flannel container process.</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
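<p>Once the DaemonSet is applied, you can roughly verify that flannel is up and has written the subnet file the kubelet was complaining about:</p>
<pre><code># the flannel pods should be Running on every node
kubectl get pods --namespace kube-system -o wide | grep flannel

# on each node, this file should now exist
cat /run/flannel/subnet.env
</code></pre>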
|
<p>Use case:</p>
<p>I want to be able to re-run a job from where the first job left off. I am using Helm to deploy into Kubernetes.</p>
<p>I have the idea of saving the state of the first job in a ConfigMap. The ConfigMap yaml defining the ConfigMap is packaged up with the job and both are deployed at the same time with Helm. </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: NameOfMyConfigMap
data:
someKey: someValue
MY_STATE: state <---- See below as to whether this should be included or not
</code></pre>
<p>The job is run with an ENV variable set from the ConfigMap:</p>
<pre><code>env:
- name: MY_STATE
valueFrom:
configMapKeyRef:
name: NameOfMyConfigMap
key: MY_STATE
</code></pre>
<p>The job runs a script that looks to see if $MY_STATE is set and if it is not set then the job is being run for the first time, otherwise the job closes down the already running first job, saves the first job's state into the MY_STATE ConfigMap variable and launches the job again using the saved state.</p>
<p>If I don't declare the MY_STATE key in the initial ConfigMap definition then the first run of the job will fail, as the ENV definition above cannot find the ConfigMap variable.</p>
<p>If I do declare the value (<code>MY_STATE: ""</code>) in the ConfigMap definition, then the first deployment will work. However, if I re-deploy the job with <code>helm upgrade</code> then does the value I enter in the definition not overwrite an existing value in the existing ConfigMap?</p>
<p>What is the best method of storing state in between runs of the same job?</p>
| <p>Have you tried using volumes? In this case it should not be overwritten when using helm upgrade.</p>
<p>Could an example like this work? (From
<a href="https://groups.google.com/forum/#!msg/kubernetes-users/v2806ezEdPk/1geJCO8-AQAJ" rel="nofollow noreferrer">https://groups.google.com/forum/#!msg/kubernetes-users/v2806ezEdPk/1geJCO8-AQAJ</a>)</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: keystore-configmap-job
spec:
template:
metadata:
name: keystore-configmap
spec:
containers:
- name: keystore
image: ubuntu
volumeMounts:
- name: keystore-configmap-volume
mountPath: /config-base64
command: [ "sh", "-c", "cat /config-base64/keystore.jks | base64 --decode | sha256sum" ]
restartPolicy: Never
volumes:
- name: keystore-configmap-volume
configMap:
name: keystore-configmap
</code></pre>
|
<p>I'm running Kubernetes 1.6.2 with RBAC enabled. I've created a user <code>kube-admin</code> that has the following Cluster Role binding</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: k8s-admin
subjects:
- kind: User
name: kube-admin
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>When I attempt to <code>kubectl exec</code> into a running pod I get the following error.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n kube-system exec -it kubernetes-dashboard-2396447444-1t9jk -- /bin/bash
error: unable to upgrade connection: Forbidden (user=system:anonymous, verb=create, resource=nodes, subresource=proxy)
</code></pre>
<p>My guess is I'm missing a <code>ClusterRoleBinding</code> ref, which role am I missing?</p>
| <p>The connection between kubectl and the api is fine, and is being authorized correctly.</p>
<p>To satisfy an exec request, the apiserver contacts the kubelet running the pod, and that connection is what is being forbidden.</p>
<p>Your kubelet is configured to authenticate/authorize requests, and the apiserver is not providing authentication information recognized by the kubelet.</p>
<p>The way the apiserver authenticates to the kubelet is with a client certificate and key, configured with the <code>--kubelet-client-certificate=... --kubelet-client-key=...</code> flags provided to the API server.</p>
<p>See <a href="https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#overview" rel="noreferrer">https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#overview</a> for more information. </p>
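<p>As a rough sketch of where those flags live (the flag names are the real apiserver flags mentioned above, but the file paths are placeholders), the relevant excerpt of a kube-apiserver static pod manifest could look like this; the kubelet side then needs its <code>--client-ca-file</code> to contain the CA that signed this client certificate:</p>
<pre><code># Excerpt from a kube-apiserver static pod manifest (paths are placeholders)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Client credentials the apiserver presents when it connects to each kubelet,
    # e.g. for "kubectl exec" and "kubectl logs" requests.
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    # ... other apiserver flags ...
</code></pre>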
|
<p><a href="https://traefik.io/" rel="noreferrer">Traefik</a> is a reverse HTTP proxy with several supported backends, Kubernetes included. How does Istio compare?</p>
| <p>It's something of an apples-to-oranges comparison. </p>
<p>Edge proxies like Traefik or Nginx are best compared to <a href="https://lyft.github.io/envoy/" rel="noreferrer">Envoy</a> - the proxy that Istio leverages. An Envoy proxy is installed automatically by Istio adjacent to every pod.</p>
<p>Istio provides several higher-level capabilities beyond Envoy, including routing, ACLs, service discovery, and access policy <em>across a set</em> of services. In effect, it stitches a set of Envoy-enabled services together. This design pattern is often called a <em>service mesh</em>.</p>
<p>Istio is also currently limited to Kubernetes deployments in a single cluster, though work is in place to remove these restrictions in time.</p>
|
<p>How can I deserialize a Kubernetes YAML file into an Go struct? I took a look into the <code>kubectl</code> code, but somehow I get an error for every YAML file:</p>
<pre><code>no kind "Deployment" is registered for version "apps/v1beta1"
</code></pre>
<p>This is an MWE:</p>
<pre><code>package main
import (
"fmt"
"k8s.io/client-go/pkg/api"
)
var service = `
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: nginx
ports:
- containerPort: 80
`
func main() {
decode := api.Codecs.UniversalDecoder().Decode
//decode := api.Codecs.UniversalDeserializer().Decode
obj, _, err := decode([]byte(service), nil, nil)
if err != nil {
panic(err)
}
fmt.Printf("%#v\n", obj)
}
</code></pre>
<p>I am using client version <code>2.0.0</code>. The <code>glide.yaml</code> looks like this:</p>
<pre><code>package: test/stackoverflow
import:
- package: k8s.io/client-go
version: ^2.0.0
</code></pre>
<p>These are the references to <code>kubectl</code>:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/43ac38e29e6ecf83e78bc7c5d9f804310b051c95/pkg/kubectl/cmd/apply.go#L637" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/43ac38e29e6ecf83e78bc7c5d9f804310b051c95/pkg/kubectl/cmd/apply.go#L637</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/blob/43ac38e29e6ecf83e78bc7c5d9f804310b051c95/pkg/kubectl/cmd/util/factory_client_access.go#L205-L213" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/43ac38e29e6ecf83e78bc7c5d9f804310b051c95/pkg/kubectl/cmd/util/factory_client_access.go#L205-L213</a></li>
</ul>
<p>Unfortunately, the <a href="https://godoc.org/k8s.io/client-go" rel="noreferrer">docs</a> are very confusing to me, so I have no idea how to tackle this problem.</p>
<p><strong>Edit:</strong></p>
<p>This problem also exists with other resource types:</p>
<ul>
<li><code>no kind "Service" is registered for version "v1"</code></li>
</ul>
| <p>You need to import <code>_ "k8s.io/client-go/pkg/apis/extensions/install"</code> otherwise the schema is empty, see also <a href="https://godoc.org/k8s.io/client-go/pkg/apis/extensions/install" rel="noreferrer">docs</a>.</p>
<p>The complete working example is:</p>
<pre><code>$ go get -u github.com/golang/dep/cmd/dep
$ dep init
$ go run main.go
</code></pre>
<p>With the following <code>main.go</code>:</p>
<pre><code>package main
import (
"fmt"
"k8s.io/client-go/pkg/api"
_ "k8s.io/client-go/pkg/api/install"
_ "k8s.io/client-go/pkg/apis/extensions/install"
)
var deployment = `
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-nginx
spec:
replicas: 2
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: nginx
ports:
- containerPort: 80
`
func main() {
// decode := api.Codecs.UniversalDecoder().Decode
decode := api.Codecs.UniversalDeserializer().Decode
obj, _, err := decode([]byte(deployment), nil, nil)
if err != nil {
fmt.Printf("%#v", err)
}
fmt.Printf("%#v\n", obj)
}
</code></pre>
<p>Note that I also imported <code>_ "k8s.io/client-go/pkg/api/install"</code> for you so that you can use objects in <code>v1</code> such as pods or services.</p>
<p>EDIT: Kudos to my colleague <a href="https://twitter.com/the1stein" rel="noreferrer">Stefan Schimanski</a> who proposed the initial solution.</p>
|
<p>I have been trying to find a way to define a service in one namespace that links to a Pod running in another namespace. I know that containers in a Pod running in <code>namespaceA</code> can access <code>serviceX</code> defined in <code>namespaceB</code> by referencing it in the cluster DNS as <code>serviceX.namespaceB.svc.cluster.local</code>, but I would rather not have the code inside the container need to know about the location of <code>serviceX</code>. That is, I want the code to just lookup <code>serviceX</code> and then be able to access it.</p>
<p>The <a href="http://kubernetes.io/docs/user-guide/services/" rel="noreferrer">Kubernetes documentation</a> suggests that this is possible. It says that one of the reasons that you would define a service without a selector is that <strong>You want to point your service to a service in another Namespace or on another cluster</strong>.</p>
<p>That suggests to me that I should:</p>
<ol>
<li>Define a <code>serviceX</code> service in <code>namespaceA</code>, without a selector (since the POD I want to select isn't in <code>namespaceA</code>).</li>
<li>Define a service (which I also called <code>serviceX</code>) in <code>namespaceB</code>, and then</li>
<li>Define an Endpoints object in <code>namespaceA</code> to point to <code>serviceX</code> in <code>namespaceB</code>.</li>
</ol>
<p>It is this third step that I have not been able to accomplish.</p>
<p>First, I tried defining the Endpoints object this way:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Endpoints
apiVersion: v1
metadata:
name: serviceX
namespace: namespaceA
subsets:
- addresses:
- targetRef:
kind: Service
namespace: namespaceB
name: serviceX
apiVersion: v1
ports:
- name: http
port: 3000
</code></pre>
<p>That seemed the logical approach, and <em>obviously</em> what the <code>targetRef</code> was for. But, this led to an error saying that the <code>ip</code> field in the <code>addresses</code> array was mandatory. So, my next try was to assign a fixed ClusterIP address to <code>serviceX</code> in <code>namespaceB</code>, and put that in the IP field (note that the <code>service_cluster_ip_range</code> is configured as <code>192.168.0.0/16</code>, and <code>192.168.1.1</code> was assigned as the ClusterIP for <code>serviceX</code> in <code>namespaceB</code>; <code>serviceX</code> in <code>namespaceA</code> was auto assigned a different ClusterIP on the <code>192.168.0.0/16</code> subnet):</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Endpoints
apiVersion: v1
metadata:
name: serviceX
namespace: namespaceA
subsets:
- addresses:
- ip: 192.168.1.1
targetRef:
kind: Service
namespace: namespaceB
name: serviceX
apiVersion: v1
ports:
- name: http
port: 3000
</code></pre>
<p>That was accepted, but accesses to <code>serviceX</code> in <code>namespaceA</code> did not get forwarded to the Pod in <code>namespaceB</code> - they timed out. Looking at the iptables setup, it looks like it would have had to do NAT pre-routing twice to accomplish that.</p>
<p>The only thing I did find that worked - but is not a satisfactory solution - is to lookup the actual IP address of the Pod providing <code>serviceX</code> in <code>namespaceB</code> and put that address in the Endpoints object in <code>namespaceA</code>. That isn't satisfactory, of course, because the Pod IP address may change over time. That's the problem service IPs are there to solve.</p>
<p>So, is there a way to meet what seems to be the promise of the documentation that I can point a service in one namespace to a <em>service</em> running in a different namespace?</p>
<p>A commenter questioned why you would want to do this - here is a use case that makes sense to me, at least:</p>
<p>Say you have a multi-tenant system, which also includes a common data-access function that can be shared between tenants. Now imagine that there are different flavors of this data-access function with common APIs, but different performance characteristics. Some tenants get access to one of them, other tenants have access to another one.</p>
<p>Each tenant's pods run in their own namespaces, but each one needs to access one of these common data-access services, which will necessarily be in another namespace (since it is accessed by multiple tenants). But, you wouldn't want the tenant to have to change their code if their subscription changes to access the higher-performing service.</p>
<p>A potential solution (the cleanest one I can think of, if only it worked) is to include a service definition in each tenant's namespace for the data-access service, with each one configured for the appropriate endpoint. This service definition would be configured to point to the proper data-access service each tenant is entitled to use.</p>
| <p>I stumbled over the same issue and found a nice solution which does not need any static ip configuration:</p>
<p>You can access a service via it's <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="noreferrer">DNS name</a> (as mentioned by you): <em>servicename.namespace.svc.cluster.local</em></p>
<p>You can use that DNS name to reference it in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="noreferrer">another namespace via a local service</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
name: service-y
namespace: namespace-a
spec:
type: ExternalName
externalName: service-y.namespace-b.svc.cluster.local
ports:
- port: 80
</code></pre>
|
<p>Is it possible to change an existing PVC to ReadWriteMany from ReadWriteOnce without recreating it? I have an existing database pod and the file system has become read only for some reason. I wanted to connect another pod to the PVC to look at it.</p>
| <p>You can do so without changing the PVC. The <em>Once</em> in <code>ReadWriteOnce</code> is not about the number of Pods accessing the PVC but about the number of Nodes accessing it.</p>
<p>Create another Pod mounting the same PVC and set <code>nodeSelector</code> (use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#interlude-built-in-node-labels" rel="nofollow noreferrer">hostname</a>) so it schedules on the same node where the PVC is mounted now.</p>
<p>Alternatively SSH into the node that has the PVC mounted already for the Pod. <code>kubectl describe ...</code> gives you the id you are looking for if you have many mounts.</p>
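<p>A minimal sketch of such a helper Pod, assuming the names are placeholders and you substitute the node reported by <code>kubectl get pod <db-pod> -o wide</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  # Pin the Pod to the node where the volume is already attached.
  nodeSelector:
    kubernetes.io/hostname: <node-running-the-db-pod>
  containers:
  - name: inspector
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: <existing-pvc-name>
</code></pre>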
|
<p>In Kubernetes, let’s say we have three pods, which are physically hosted on Node X, Y and Z. When I expose them as a service using ‘kubectl expose’, are all nodes in the cluster (in addition to X, Y and Z) configured the same way? Specifically, kube-proxy in each node within the cluster watches the apiserver, builds a bunch of iptables rules and references the portal IP (chosen by apiserver), and inserts those rules to the node which it lives on?</p>
<p>I assume the reason it has to be done on all nodes is that the cluster has no idea from which node the client would come from to hit the portal IP?</p>
| <p>You are correct. The portal network (aka service network, cluster network) has no network interface but is a collection of iptables rules managed by kube-proxy. Each node needs to have these rules as a pod on any of them could connect any portal IP (aka service IP, cluster IP).</p>
<p>Read more here:
<a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies</a></p>
|
<p>I am trying to execute this rolling update example in v1.6.2 cluster. my kubectl command giving this error message.</p>
<pre><code>Error from server: json: cannot unmarshal string into Go value of type map[string]interface {}
</code></pre>
<p>Here is the YMAL file from this page: <a href="https://www.mirantis.com/blog/scaling-kubernetes-daemonsets/" rel="noreferrer">https://www.mirantis.com/blog/scaling-kubernetes-daemonsets/</a></p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: frontend
spec:
updateStrategy: RollingUpdate
maxUnavailable: 1
minReadySeconds: 0
template:
metadata:
labels:
app: frontend-webserver
spec:
nodeSelector:
app: frontend-node
containers:
- name: webserver
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>How to resolve this error?</p>
<p>Thanks
SR</p>
| <p>The <code>updateStrategy</code> appears to be incorrect:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: frontend
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
template:
metadata:
labels:
app: frontend-webserver
spec:
nodeSelector:
app: frontend-node
containers:
- name: webserver
image: nginx
ports:
- containerPort: 80
</code></pre>
|
<p>GKE seems to create a cluster using one availability zone for the master although it provides an option to deploy nodes to multiple availability zones. I am concerned that if master AZ goes down, I cannot manage my cluster anymore. I understand my apps will continue to run but it is a big concern that I cannot scale up my service or deploy a new version of my apps, etc.</p>
<p>Is my understanding of "GKE cluster is vulnerable to master zone going down" correct? If not, can you please explain how? If it is correct, what are my options to make it highly available so that it can tolerate one availability zone going down?</p>
| <p>The GKE master today is not highly available and if a zone goes down, your cluster's Kubernetes API will go down with it. However you should note that GKE master is managed service with a 99.5% SLA. <a href="https://cloud.google.com/container-engine/sla" rel="nofollow noreferrer">https://cloud.google.com/container-engine/sla</a> In the future, GKE may offer high-availability solutions for the master (API server).</p>
<p>Your understanding is correct that if the Kubernetes master/API becomes unavailable for a brief amount of time, it does not impact your deployed workloads (e.g. websites or other services) running on the cluster. But you will not be able to scale up/down things.</p>
<p>As a user, you cannot do anything to make the master highly available today.</p>
<p>However, I would say 99.5% is a pretty good uptime. It corresponds to 7 minutes a day (<a href="https://uptime.is/99.5" rel="nofollow noreferrer">https://uptime.is/99.5</a>) and if you are not managing your cluster 24/7, you are likely to see issues every now and then. If you are using automation, you should probably have some retry logic.</p>
|
<p>Does Kubernetes support connection draining?</p>
<p>For example, my deployment rolls out a new version of my web app container.
In connection draining mode Kubernetes should spin up a new container from the new image and route all new traffic coming to my service to this new instance. The old instance should remain alive long enough to send a response for existing connections.</p>
| <p>Kubernetes <strong>does</strong> support connection draining, but how it happens is controlled by the Pods, and is called <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="noreferrer">graceful termination</a>.</p>
<h2>Graceful Termination</h2>
<p>Let's take an example of a set of Pods serving traffic through a Service. This is a simplified example, the full details can be found in the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="noreferrer">documentation</a>.</p>
<ol>
<li>The system (or a user) notifies the API that the Pod needs to stop.</li>
<li>The Pod is set to the <code>Terminating</code> state. This removes it from a Service serving traffic. Existing connections are maintained, but new connections should stop as soon as the load balancers recognize the change.</li>
<li>The system sends SIGTERM to all containers in the Pod.</li>
<li>The system waits <code>terminationGracePeriodSeconds</code> (default 30s), or until the Pod completes on it's own.</li>
<li>If containers in the Pod are still running, they are sent SIGKILL and terminated immediately. At this point the Pod is forcefully terminated if it is still running.</li>
</ol>
<p>This not only covers the simple termination case, but the exact same process is used in rolling update deployments, each Pod is terminated in the exact same way and is given the opportunity to clean up.</p>
<h2>Using Graceful Termination For Connection Draining</h2>
<p><strong>If you do not handle SIGTERM in your app, your Pods will immediately terminate</strong>, since the default action of SIGTERM is to terminate the process immediately, and the grace period is not used since the Pod exits on its own.</p>
<p>If you need "connection draining", this is the basic way you would implement it in Kubernetes:</p>
<ol>
<li>Handle the SIGTERM signal, and clean up your connections in a way that your application decides. This may simply be "do nothing" to allow in-flight connections to clear out. Long running connections may be terminated in a way that is (more) friendly to client applications.</li>
<li>Set the <code>terminationGracePeriodSeconds</code> long enough for your Pod to clean up after itself.</li>
</ol>
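<p>A minimal sketch of the Pod-side part of this (the grace period, image and preStop command are placeholder values; your application still has to handle SIGTERM itself):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  # Time allowed between SIGTERM and SIGKILL.
  terminationGracePeriodSeconds: 60
  containers:
  - name: webapp
    image: my-registry/webapp:1.0
    lifecycle:
      preStop:
        exec:
          # Optional hook that runs before SIGTERM is sent,
          # e.g. to give load balancers time to stop sending new traffic.
          command: ["/bin/sh", "-c", "sleep 5"]
</code></pre>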
|
<p>My app consists of two containers: the app itself and a database. I'm planning to wrap the app into a chart, thus paving a way for easy reproducible deployment.</p>
<p>Apart from setting/reading environment envs (which helm+kubernetes seems to handle really well), part of app's configuration is:</p>
<blockquote>
<p>making sure the database is pre-filled with special auxiliary data (e.g. admin user exists, some user role names required to create new users are there, etc.).</p>
</blockquote>
<p>I like the idea of having readable yaml files hold the entire configuration in a human readable format. However at a glance it doesn't seem that <code>helm</code> in any way would help with <em>this</em> (DB records) kind of configuration.</p>
<p>That being said, <strong>what is the best place</strong> to put code/configuration ensuring that DB contains certain auxiliary records? A config yaml file? An container init script, written in bash?</p>
| <p>You are right: neither Kubernetes nor Helm can help with preparing your pre-filled database records/schema.</p>
<p>You should probably have your application initialize those pre-filled data. If you don't want to put this logic into your application, you can ship an initialization script and configure an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> with Kubernetes.</p>
<p>Kubernetes makes sure every time your application container is restarted, the init container runs first. In the init container, you can execute a bash/python/... script that makes sure the records you want are there.</p>
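<p>As a hedged sketch of that approach (the image names, script path, ConfigMap name and database host are all assumptions, and it presumes Kubernetes 1.6+ where <code>spec.initContainers</code> is available), the Pod template could look roughly like this:</p>
<pre><code># Pod template excerpt: the init container seeds the DB before the app starts.
spec:
  initContainers:
  - name: seed-db
    image: my-registry/db-client:latest
    command: ["/bin/sh", "-c", "/scripts/seed.sh db:5432"]
    volumeMounts:
    - name: seed-scripts
      mountPath: /scripts
  containers:
  - name: app
    image: my-registry/app:latest
  volumes:
  - name: seed-scripts
    configMap:
      name: db-seed-scripts
      defaultMode: 0755
</code></pre>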
|
<p>I have a <code>configMap</code> created from file:</p>
<pre><code>kubectl create configmap ssportal-apache-conf --from-file=ssportal.conf=ssportal.conf
</code></pre>
<p>and then I need to mount this file into the deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ssportal
spec:
replicas: 2
template:
metadata:
labels:
app: ssportal
spec:
containers:
- name: ssportal
image: eu.gcr.io/my-project/ssportal:0.0.0
ports:
- containerPort: 80
volumeMounts:
- name: apache2-config-volume
mountPath: /etc/apache2/
volumes:
- name: apache2-config-volume
configMap:
name: ssportal-apache-conf
items:
- key: ssportal.conf
path: sites-enabled/ssportal.conf
</code></pre>
<p>But this effectively removes the existing <code>/etc/apache2/</code> directory from the container and replaces it with one an only file <code>/etc/apache2/sites-enabled/ssportal.conf</code>.</p>
<p>Is it possible to overlay only one file over the existing config directory?</p>
| <p>Okay, it's a bit tricky. The final working YAML spec is</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ssportal
spec:
replicas: 2
template:
metadata:
labels:
app: ssportal
spec:
containers:
- name: ssportal
image: eu.gcr.io/my-project/ssportal:0.0.0
command: ["sleep","120d"]
ports:
- containerPort: 80
volumeMounts:
- name: test
mountPath: /etc/apache2/conf-enabled/test.conf
subPath: test.conf
volumes:
- name: test
configMap:
name: sstest
</code></pre>
<p>and <code>configMap</code> creation steps:</p>
<pre><code>echo "# comment" > test.conf
kubectl create configmap sstest --from-file=test.conf=test.conf
</code></pre>
|
<p>Is there a way to prevent readiness probe from execution once container has successfully started? I suppose that liveness probe should be enough to monitor container health.</p>
| <p>The readiness and liveness probes serve slightly different purposes.</p>
<p>The readiness probe controls whether the pod IP is included in the list of endpoints for a service, and so also whether the pod is a target for a route when the service is exposed via an external URL.</p>
<p>The liveness probe determines whether a pod is still running normally or whether it should be restarted.</p>
<p>Technically an application could still be running fine, but is perhaps backlogged, and so you want to use the readiness probe to temporarily remove it from the set of endpoints for a service to avoid further requests being routed its way and simply being blocked in the request queue for that specific pod when another pod could handle it.</p>
<p>So I personally would agree the duplication seems strange, but it is that way so the different situations can be distinguished.</p>
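<p>To make the distinction concrete, a container can declare both probes side by side; in this sketch the paths, port and timings are placeholders:</p>
<pre><code>containers:
- name: webapp
  image: my-registry/webapp:1.0
  # Liveness: if this keeps failing, the container is restarted.
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 10
  # Readiness: while this fails, the Pod is removed from Service endpoints
  # (and routes), but it is not restarted.
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
</code></pre>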
|
<p>Using the latest Kubernetes version in GCP (<code>1.6.4</code>), I have the following <code>Ingress</code> definition:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myproject
namespace: default
annotations:
ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: "gce"
spec:
rules:
- host: staging.myproject.io
http:
paths:
- path: /poller
backend:
serviceName: poller
servicePort: 8080
</code></pre>
<p>Here is my service and deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: poller
labels:
app: poller
tier: backend
role: service
spec:
type: NodePort
selector:
app: poller
tier: backend
role: service
ports:
- port: 8080
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: poller
spec:
replicas: 1
template:
metadata:
labels:
app: poller
tier: backend
role: service
spec:
containers:
- name: poller
image: gcr.io/myproject-1364/poller:latest
imagePullPolicy: Always
env:
- name: SPRING_PROFILES_ACTIVE
value: staging
- name: GET_HOSTS_FROM
value: dns
ports:
- containerPort: 8080
</code></pre>
<p>In my <code>/etc/hosts</code> I have a line like:</p>
<pre><code>35.190.37.148 staging.myproject.io
</code></pre>
<p>However, I get <code>default backend - 404</code> when curling any endpoint on <code>staging.myproject.io</code>:</p>
<pre><code>$ curl staging.myproject.io/poller/cache/status
default backend - 404
</code></pre>
<p>I have the exact same configuration working locally inside Minikube, with the only difference being the domain (<code>dev.myproject.io</code>), and that works like a charm.</p>
<p>I have read and tried pretty much everything that I could find, including stuff from <a href="https://github.com/kubernetes/ingress/tree/master/controllers/nginx#running-multiple-ingress-controllers" rel="noreferrer">here</a> and <a href="https://github.com/kubernetes/ingress/blob/master/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc" rel="noreferrer">here</a> and <a href="https://github.com/kubernetes/ingress/issues/349" rel="noreferrer">here</a>, but maybe I'm just missing something... any ideas?</p>
| <p>It does take 5-10 minutes for an Ingress to actually become usable in GKE. In the meanwhile, you can see responses with status codes 404, 502 and 500.</p>
<p>There is an ingress tutorial here: <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="noreferrer">https://cloud.google.com/container-engine/docs/tutorials/http-balancer</a> I recommend following it. Based on what you pasted, I can say the following:</p>
<ul>
<li>You use service.Type=NodePort, which is correct.</li>
<li>I am not sure about the <code>ingress.kubernetes.io/rewrite-target</code> annotation, maybe that's the issue.</li>
<li>Make sure your application responds <code>200 OK</code> to <code>GET /</code> request. </li>
<li>Also I realize you <code>curl http://<ip>/</code> but your Ingress spec only handles <code>/poller</code> endpoint. So it's normal you get <code>default backend - 404</code> response while querying <code>/</code>. You didn't configure any backends for <code>/</code> path in your Ingress spec.</li>
</ul>
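<p>If you do want requests outside <code>/poller</code> (including <code>/</code>) to reach your service rather than the default backend, one option, sketched below on top of your existing spec, is to set a default backend on the Ingress itself; whether that is desirable is an application decision:</p>
<pre><code>spec:
  # Catch-all backend for any request that matches no rule/path.
  backend:
    serviceName: poller
    servicePort: 8080
  rules:
  - host: staging.myproject.io
    http:
      paths:
      - path: /poller
        backend:
          serviceName: poller
          servicePort: 8080
</code></pre>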
|
<p>Is it possible to use configMap values for port values like containerPort or targetPort?</p>
<p>Here's the possible example how it could work:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: auth
spec:
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: {{someImage}}
ports:
- name: CONTAINER_PORT
containerPort:
valueFrom:
configMapKeyRef:
name: auth-config
key: PORT
env:
- name: PORT
valueFrom:
configMapKeyRef:
name: auth-config
key: PORT
</code></pre>
| <p>No, it is not possible for the <code>ports</code> section.</p>
<p>You can use <code>env</code> keys in container's commands and args. Find more here: <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/expansion.md" rel="noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/expansion.md</a></p>
<p>Usually most docker images have static port numbers encoded in the image with <code>EXPOSE</code> keyword, so having a dynamically configurable port is not a best practice from configuration standpoint. Try sticking to fixed port numbers as you can always remap them while exposing the port on Service.</p>
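<p>For illustration, a hedged sketch of using the <code>PORT</code> variable (already populated from the ConfigMap in your example) inside the container's args via <code>$(VAR)</code> expansion; the binary name and flag are placeholders:</p>
<pre><code>containers:
- name: auth
  image: my-registry/auth:latest
  env:
  - name: PORT
    valueFrom:
      configMapKeyRef:
        name: auth-config
        key: PORT
  command: ["/bin/auth-server"]
  # $(PORT) is expanded by Kubernetes before the container starts.
  args: ["--listen-port=$(PORT)"]
  ports:
  - containerPort: 8080   # still a fixed number in the manifest
</code></pre>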
|
<p>I am trying to build an HTTPs proxy server in front of another service in Kubernetes, using either an NginX proxy LoadBalancer server, or Ingress. Either way, I need a certificate and key so that my external requests get authenticated.</p>
<p>I'm looking at <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">how to manage tls in a cluster</a>, and I've noticed that the certificate used to connect to the container cluster is the same one as is mounted at <code>/var/run/secrets/kubernetes.io/serviceaccount/ca.crt</code> on a running pod.</p>
<p>So I'm thinking that my node cluster already has a registered certificate, all I need is the key, throw it into a secret and mount that into my proxy server. But I can't find how.</p>
<p>Is it this simple? How would I do that? Or do I need to create a new certificate, sign it etc etc? Would I then need to replace the current certificate?</p>
| <p>If you want an external request to get into your K8s cluster then this is the job of an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress" rel="noreferrer">ingress controller</a>, or configuring the service with a loadbalancer, if your cloud provider supports it.</p>
<p>The certificate discussed in <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="noreferrer">your reference</a> is really meant to be used for intra-cluster communications, as it says:</p>
<blockquote>
<p>Every Kubernetes cluster has a cluster root Certificate Authority (CA). The CA is generally used by cluster components to validate the API server’s certificate, by the API server to validate kubelet client certificates, etc.</p>
</blockquote>
<p>If you go for an ingress approach then <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="noreferrer">here is the doc for tls</a>. At the bottom a list of <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#alternatives" rel="noreferrer">alternatives</a>, such as the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer" rel="noreferrer">load balancer</a> approach.</p>
<p>I guess you could use the internal certificate externally if you are able to get all your external clients to trust it. Personally I'd probably use <a href="https://github.com/jetstack/kube-lego" rel="noreferrer">kube-lego</a>, which automates getting certificates from <a href="https://letsencrypt.org/" rel="noreferrer">Let's Encrypt</a>, since most browsers trust this CA now.</p>
<p>Hope this helps</p>
|
<p>We have an HTTP <code>livenessProbe</code> setup which returns a 500 if service is unhealthy, and prints out what the problem is.<br />
Is there any way to view the output that was returned by <code>livenessProbe</code>?</p>
<p>I could log it in the application, but maybe it's possible to view from Kubernetes?</p>
<p>Currently, the only thing that I see doing <code>pod describe</code>:</p>
<pre><code>Killing container with id docker://12568746c312e6646fd6ecdb2123db448be0bc6808629b1a63ced8b7298be444:pod "test-3893895584-7f4cr_test(524091bd-49d8-11e7-bd00-42010a840224)" container "test" is unhealthy, it will be killed and re-created.
</code></pre>
<p>Running on GKE</p>
| <p>Unfortunately, there does not seem to be a way to access the HTTP response body of a failed HTTP probe.</p>
<p>To confirm this suspicion, let's have a look at the <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/probe/http/http.go#L65" rel="noreferrer">HTTP Prober's source code</a>, which is run within the Kubelet daemon:</p>
<pre><code>func DoHTTPProbe(url *url.URL, headers http.Header, client HTTPGetInterface) (probe.Result, string, error) {
// ...
body := string(b)
if res.StatusCode >= http.StatusOK && res.StatusCode < http.StatusBadRequest {
glog.V(4).Infof("Probe succeeded for %s, Response: %v", url.String(), *res)
return probe.Success, body, nil
}
glog.V(4).Infof("Probe failed for %s with request headers %v, response body: %v", url.String(), headers, body)
return probe.Failure, fmt.Sprintf("HTTP probe failed with statuscode: %d", res.StatusCode), nil
}
</code></pre>
<p>As you can see, the Kubelet daemon will log the HTTP response body of a failed probe in its own log, but even then only if verbosity was set to 4 or higher. Beyond logging the response in its own log, it is not passed back from the <code>DoHTTPProbe</code> method and will not be processed any further by the Kubelet.</p>
<p>As already noted by yourself, I'd think your safest bet would be to log the data you need from within your application itself.</p>
|
<p>The CentOS Atomic Host is shipped without the kubernetes-master package built into the image. Instead, you need to run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files. Do you have any good tutorials on how to form a kubernetes cluster of atomic hosts? the tutorials and the documentations I have seen so far was done on fedora atomic and centOS 7. </p>
| <p>You can try Kubernetes the Hard Way on Github:</p>
<p><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way</a></p>
|
<p>I am trying to setup an Ingress on Kubernetes on Google Container Engine and am getting quota exceeded errors (see abbrieviated output below).</p>
<pre><code>Name: my-ingress
Address:
Default backend: default-http-backend:80 (10.0.2.2:8080)
Rules:
Host Path Backends
---- ---- --------
*
service1 service1:7010 (<none>)
service2 service2:6884 (<none>)
Annotations:
ssl-redirect: false
Events:
FirstSeen LastSeen Count From Type Reason Message
--------- -------- ----- ---- -------- ------ -------
21s 21s 1 loadbalancer-controller Normal ADD reference/reference-ingress
13s 3s 10 loadbalancer-controller Warning GCE :Quota googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 5.0, quotaExceeded
</code></pre>
<p>I know how to increase my quotas, but my question is more specific: <em>how can I tell which "backends" are being consumed that are contributing the usage of the quota?</em> (I will then want to see if I may be able to turn them off if needed).</p>
| <p>According to <a href="https://cloud.google.com/compute/docs/load-balancing/http/backend-service" rel="nofollow noreferrer">this page</a>:</p>
<pre><code>gcloud compute backend-services list
</code></pre>
<p>will list all your backend services across all clusters.
In my case, it lists 6, which matches the usage reported by:</p>
<pre><code>gcloud compute project-info describe --project PROJECT_NAME
</code></pre>
|
<p>I'm trying to create a service account with either no secrets or just secret I specify and the kubelet always seems to be attaching the default secret no matter what.</p>
<h3>Service Account definition</h3>
<pre><code>apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
name: test
secrets:
- name: default-token-4pbsm
</code></pre>
<p><strong>Submit</strong></p>
<pre><code>$ kubectl create -f service-account.yaml
serviceaccount "test" created
</code></pre>
<p><strong>Get</strong></p>
<pre><code>$ kubectl get -o=yaml serviceaccount test
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
creationTimestamp: 2017-05-30T12:25:30Z
name: test
namespace: default
resourceVersion: "31414"
selfLink: /api/v1/namespaces/default/serviceaccounts/test
uid: 122b0643-4533-11e7-81c6-42010a8a005b
secrets:
- name: default-token-4pbsm
- name: test-token-5g3wb
</code></pre>
<p>As you can see above the <code>test-token-5g3wb</code> was automatically created & attached to the service account without me specifying it.</p>
<p>As far as I understand the <code>automountServiceAccountToken</code> only affects mounting of those secrets to a pod which was launched via that service account. (?)</p>
<p>Is there any way I can avoid that default secret being ever created and attached?</p>
<h3>Versions</h3>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T20:41:24Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>Your understanding of <code>automountServiceAccountToken</code> is right: it only affects Pods launched with that service account.</p>
<p>The automatic token addition is done by the Token controller. Even if you edit the service account to delete the token, it will be added again.</p>
<blockquote>
<p>You must pass a service account private key file to the token controller in the controller-manager by using the <strong><code>--service-account-private-key-file</code></strong> option. The private key will be used to sign generated service account tokens. Similarly, you must pass the corresponding public key to the kube-apiserver using the <strong><code>--service-account-key-file</code></strong> option. The public key will be used to verify the tokens during authentication.</p>
</blockquote>
<p>The above is taken from the k8s <a href="https://kubernetes.io/docs/admin/service-accounts-admin/#token-controller" rel="nofollow noreferrer">docs</a>. In principle you could try not passing those flags so tokens are never generated, but I'm not sure how to do that cleanly, and I wouldn't recommend it.</p>
<p>You might also find <a href="https://kubernetes.io/docs/admin/authentication/#service-account-tokens" rel="nofollow noreferrer">this doc</a> helpful.</p>
|
<p>I have more than one Kubernetes context. When I change contexts, I have been using <code>kill -9</code> to kill the port-forward in order to redo the <code>pachtctl port-forward &</code> command. I wonder if this is the right way of doing it.</p>
<p>In more detail:</p>
<p>I start off being in a Kubernetes context, we'll call it context_x. I then want to change context to my local context, called minikube. I also want to see my repos for this minikube context, but when I use <code>pachctl list-repo</code>, it still shows context_x's Pachyderm repos. When I do <code>pachctl port-forward</code>, I then get an error message about the address being already in use. So I have to ps -a, then kill -9 on those port forward processes, and then do pachctl port-forward command again.</p>
<p>An example of what I've been doing:</p>
<pre><code>$ kubectl config use-context minikube
$ pachctl list-repo #doesn't show minikube context's repos
$ pachctl port-forward &
...several error messages along the lines of:
Unable to create listener: Error listen tcp4 127.0.0.1:30650: bind: address already in use
$ ps -a | grep forward
33964 ttys002 0:00.51 kubectl port-forward dash-12345678-abcde 38080:8080
33965 ttys002 0:00.51 kubectl port-forward dash-12345679-abcde 38081:8081
37245 ttys002 0:00.12 pachctl port-forward &
37260 ttys002 0:00.20 kubectl port-forward pachd-4212312322-abcde 30650:650
$ kill -9 37260
$ pachctl port-forward & #works as expected now
</code></pre>
<p>Also, kill -9 on the <code>pachctl port-forward</code> process 37245 doesn't work, it seems like I have to kill -9 on the <code>kubectl port-forward</code></p>
| <p>You can specify a different port using the <code>-p</code> flag, as mentioned in the <a href="http://docs.pachyderm.io/en/latest/pachctl/pachctl_port-forward.html" rel="nofollow noreferrer">docs</a>. Is there a reason not to do that?</p>
<p>Also, starting the process in the background and then sending it a <code>SIGKILL</code> means its resources are not released properly, so when you try to connect again you may see errors because the same port cannot be bound. Try running it without the <code>&</code> at the end.</p>
<p>Then, whenever you change the context, all you need to do is <code>CTRL + C</code> and start it again; this releases the ports properly so they can be acquired again.</p>
|
<p>In my kubernetes cluster, each node is a virtualbox vm with two NICs, eth0 for NAT and eth1 for Host-Local communicating. </p>
<p><code>kubectl get pod --all-namespaces -o wide</code> shows</p>
<p><a href="https://i.stack.imgur.com/GIa22.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GIa22.png" alt="enter image description here"></a></p>
<p>We can see k8s-3 and k8s-4 reports correct IPs while k8s-2 doesn't. </p>
<p>I've tried to add <code>--bind-address=192.168.99.202</code> in <code>k8s-2</code>'s <code>kube-proxy.yaml</code> but it just don't work. </p>
<p>Does anyone have any advice?</p>
| <p><a href="https://github.com/kubernetes/kubernetes/issues/44702" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/44702</a></p>
<p>Problem solved by passing <code>--node-ip=</code> to kubelet.</p>
|
<p>I've used this command to deploy a Kubernetes cluster in Azure:</p>
<pre><code>az acs create -n acs-cluster -g acsrg1 -d applink789 --generate-ssh-keys
</code></pre>
<p>Everything is working- I can connect to the cluster with <code>kubectl</code>. Now I want to define an SSH step in a Continuous Delivery pipeline. The documentation indicates that this command created a public/private key pair. Where is the private key stored? I've looked in the .ssh, .kube, and .azure folders in my home directory but I can't tell if any of those values are the private key.</p>
| <p>Figured it out- the documentation says the keys will be generated <em>if they are missing.</em> If the id_rsa keypair is present in the <code>.ssh</code> hidden directory, it is used. Connected with Putty using the <code>azureuser</code> default account.</p>
|
<p>I am trying to follow docs to setup a one node Kubernetes cluster with Centos 7.</p>
<p>kubeadm init will return no error but kubectl get nodes will return:</p>
<pre><code>NAME STATUS AGE VERSION
[MY_IP] NotReady 22s v1.6.4
</code></pre>
<p>system log repeats:</p>
<pre><code>Jun 6 16:21:48 localhost kubelet: W0606 16:21:48.064388 11520 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 6 16:21:48 localhost kubelet: E0606 16:21:48.064537 11520 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
</code></pre>
<p>I can only find info about this in Kubernetes github logs but they talk about a bug and I haven't found a workaround. Thanks</p>
| <p>You can run these commands:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
|
<p>I've set up and deployed a Kubernetes stateful set containing three CockroachDB pods, <a href="https://www.cockroachlabs.com/docs/orchestrate-cockroachdb-with-kubernetes.html" rel="noreferrer">as per docs</a>. My ultimate objective is to query the database without requiring use of kubectl. My intermediate objective is to query the database without actually shelling into the database pod.</p>
<p>I forwarded a port from a pod to my local machine, and attempted to connect:</p>
<pre><code>$ kubectl port-forward cockroachdb-0 26257
Forwarding from 127.0.0.1:26257 -> 26257
Forwarding from [::1]:26257 -> 26257
# later, after attempting to connect:
Handling connection for 26257
E0607 16:32:20.047098 80112 portforward.go:329] an error occurred forwarding 26257 -> 26257: error forwarding port 26257 to pod cockroachdb-0_mc-red, uid : exit status 1: 2017/06/07 04:32:19 socat[40115] E connect(5, AF=2 127.0.0.1:26257, 16): Connection refused
$ cockroach node ls --insecure --host localhost --port 26257
Error: unable to connect or connection lost.
Please check the address and credentials such as certificates (if attempting to
communicate with a secure cluster).
rpc error: code = Internal desc = transport is closing
Failed running "node"
</code></pre>
<p>Anyone manage to accomplish this?</p>
| <p>From inside the Kubernetes cluster, you can talk to the database by connecting the <code>cockroachdb-public</code> DNS name. In <a href="https://www.cockroachlabs.com/docs/orchestrate-cockroachdb-with-kubernetes.html#step-4-use-the-built-in-sql-client" rel="noreferrer">the docs</a>, that corresponds to the example command:</p>
<pre><code>kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never -- sql --insecure --host=cockroachdb-public
</code></pre>
<p>While that command is using the CockroachDB image, any Postgres client driver you use should be able to connect to <code>cockroachdb-public</code> when running within the Kubernetes cluster.</p>
<p>Connecting to the database from outside of the Kubernetes cluster will require exposing the <code>cockroachdb-public</code> service. The details will depend somewhat on how your Kubernetes cluster was deployed, so I'd recommend checking out their docs on that:
<a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#exposing-the-service" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#exposing-the-service</a></p>
<p>And in case you're curious, the reason forwarding port 26257 isn't working for you is because port forwarding from a pod only works if the process in the pod is listening on localhost, but the CockroachDB process in the statefulset configuration is set up to listen on the pod's hostname (as configured via the <code>--host</code> flag).</p>
|
<p>I'm planning to wrap our application (which consists of multiple microservices) into a chart.</p>
<p>Right now, for each microservice, we store secrets and configuration values hardcoded directly in our <code>deployment.yaml</code> files, in <code>...containers[].env</code>. All of our <code>yaml</code> files are stored in git repo.</p>
<p>I have noticed that some popular charts use <code>ConfigMap</code> (<a href="https://github.com/kubernetes/charts/blob/master/stable/grafana/templates/configmap.yaml" rel="nofollow noreferrer">1</a>, <a href="https://github.com/kubernetes/charts/blob/master/stable/mariadb/templates/configmap.yaml" rel="nofollow noreferrer">2</a>) and <code>Secret</code> (<a href="https://github.com/kubernetes/charts/blob/master/stable/jenkins/templates/secret.yaml" rel="nofollow noreferrer">1</a>, <a href="https://github.com/kubernetes/charts/blob/master/stable/drupal/templates/secrets.yaml" rel="nofollow noreferrer">2</a>) Kubernetes objects to store config values and secrets respectively.</p>
<p><strong>What are some advantages, be it ergonomics and/or security gains, of using <code>ConfigMap</code> and <code>Secret</code> objects</strong>?</p>
<p>I could already make templates out of all <code>yaml</code> files we have, making all hardcoded values configurable and thus resolvable during helm's template compilation time.</p>
<p>However, since Kubernetes provides specialized objects to store configuration & secrets, I want to justify adding <code>configmap.yaml</code> and <code>secrets.yaml</code> template files, as well adding references to them from existing <code>deployment.yaml</code> files.</p>
| <p>Configmaps are very generic configuration files. They can consist of a list of key value pairs but they can also be generic files. For example you can store a nginx configuration file <code>nginx.conf</code> in a configmap and load it in the proper location for the nginx daemon to read it.</p>
<p>Secrets are supposed to be used for storing sensitive data. Unfortunately, right now secrets are not encrypted; they are just base64 encoded. So while this helps you remove hardcoded non-encrypted values from your manifests, it does not help with encryption at all. This should get better in <a href="https://github.com/kubernetes/features/issues/92" rel="nofollow noreferrer">v1.7</a>.</p>
<p>You can set environment variables in your deployment manifest to point to specific values in <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">secrets</a> or <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#using-configmap-defined-environment-variables-in-pod-commands" rel="nofollow noreferrer">configmaps</a>. Both are also easily generated with <code>kubectl</code> for example:</p>
<ul>
<li><code>kubectl create secret generic foobar --from-literal=password=foobar</code></li>
<li><code>kubectl create configmap foobar --from-file=foobar.conf</code></li>
</ul>
<p>Helm charts best practice is to use both, see <a href="https://github.com/kubernetes/charts/tree/master/stable/mariadb/templates" rel="nofollow noreferrer">mariadb chart</a>.</p>
<p>Personally, when I need to load a file in a Pod I use a configmap, and when I deal with a sensitive env variable I use a secret, keeping in mind that it is not encrypted.</p>
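<p>As a sketch of how a Pod could consume the two objects created by the <code>kubectl</code> commands above (the env variable name, mount path and image are arbitrary):</p>
<pre><code>containers:
- name: app
  image: my-registry/app:latest
  env:
  # Reads the "password" key from the secret named foobar.
  - name: APP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: foobar
        key: password
  volumeMounts:
  # Exposes foobar.conf from the configmap as a file under /etc/app.
  - name: config
    mountPath: /etc/app
volumes:
- name: config
  configMap:
    name: foobar
</code></pre>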
|
<p>I have a Backend service which is deployed to Kubernetes.
Whenever I want to deploy I build my docker image, push it to the google docker registry with a tag (e.g. <code>1.1.0</code>) and update my deployment yaml.</p>
<p>However, updating this file and creating another commit is a PITA, especially because I have a production and a staging environment (actually 2 namespaces).
I recently found out about ConfigMaps in Kubernetes.</p>
<p>So I would like to know if it is possible to store a value in a ConfigMap with the image tag and use it? I haven't found a way so far.</p>
<p>Are there any good alternatives so I don't have to store the information about the current release in git? What's the best practice here? Using <code>latest</code> tag I guess isn't.</p>
<p>I want to let my CI do the deployment whenever I push to master or development (I use gitlab CI), so any approach that is easy to do on the command line without a lot of <code>sed</code> would be appreciated.</p>
| <p>A straightforward solution is to trigger a rolling update of your deployment by using <code>kubectl set image</code>:</p>
<pre><code>kubectl set image deployment/foobar <container_name>=<new_image:new_tag>
</code></pre>
<p>and you can use your git commit id as image tag.</p>
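<p>A hedged sketch of wiring this into GitLab CI (the job name, image names and deployment/container names are assumptions; it presumes your runner can run <code>docker</code> and <code>kubectl</code> against the cluster, and the commit-SHA variable name may differ by GitLab version):</p>
<pre><code># .gitlab-ci.yml excerpt
deploy_production:
  stage: deploy
  only:
    - master
  script:
    - docker build -t eu.gcr.io/my-project/backend:$CI_COMMIT_SHA .
    - docker push eu.gcr.io/my-project/backend:$CI_COMMIT_SHA
    # Rolling update without committing a new tag to git:
    - kubectl set image deployment/backend backend=eu.gcr.io/my-project/backend:$CI_COMMIT_SHA
</code></pre>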
|
<p>How would we create a directory inside the kubernetes persistent volume to mount to use in the container as subPath ? eg: mysql directory should be created while claiming the persistent volume </p>
| <p>I would probably put an init container into my podspec that simply mounts the volume, runs a <code>mkdir -p</code> to create the directory, and then exits. You could also do this in the target container itself with some kind of script.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
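<p>A minimal sketch of that idea (claim name, image and paths are placeholders; the MySQL container would also need its usual environment variables):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  initContainers:
  # Creates the subdirectory on the volume before the main container starts.
  - name: create-dirs
    image: busybox
    command: ["sh", "-c", "mkdir -p /data/mysql"]
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
      subPath: mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
</code></pre>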
|
<p>I'm trying HPA: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a></p>
<p>PV:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: api-orientdb-pv
labels:
app: api-orientdb
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: api-orientdb-{{ .Values.cluster.name | default "testing" }}
fsType: ext4
</code></pre>
<p>PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: api-orientdb-pv-claim
labels:
app: api
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector:
matchLabels:
app: api-orientdb
storageClassName: ""
</code></pre>
<p>HPA:</p>
<pre><code>Name: api-orientdb-deployment
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 08 Jun 2017 10:37:06 +0700
Reference: Deployment/api-orientdb-deployment
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 17% (8m) / 10%
Min replicas: 1
Max replicas: 2
Events: <none>
</code></pre>
<p>and new pod has been created:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
api-orientdb-deployment-2506639415-n8nbt 1/1 Running 0 7h
api-orientdb-deployment-2506639415-x8nvm 1/1 Running 0 6h
</code></pre>
<p>As you can see, I'm using <code>gcePersistentDisk</code> which does not support <code>ReadWriteMany</code> access mode.</p>
<p>The newly created pod also mounts the volume in <code>rw</code> mode:</p>
<pre><code>Name: api-orientdb-deployment-2506639415-x8nvm
Containers:
Mounts:
/orientdb/databases from api-orientdb-persistent-storage (rw)
Volumes:
api-orientdb-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: api-orientdb-pv-claim
ReadOnly: false
</code></pre>
<p>Question: How does it work in this case? Is there a way to configure the main pod (<code>n8nbt</code>) to use a PV with <code>ReadWriteOnce</code> access mode, while all other scaled pods (<code>x8nvm</code>) use <code>ReadOnlyMany</code>? How can this be done automatically?</p>
<p>The only way I can think of is to create another PVC that mounts the same disk but with different <code>accessModes</code>, but then the question becomes: how do I configure the newly scaled pod to use that PVC?</p>
<hr>
<p><strong>Fri Jun 9 11:29:34 ICT 2017</strong></p>
<p>I found something: there is nothing ensuring that the newly scaled pod will run on the same node as the first pod. So, if the volume plugin does not support <code>ReadWriteMany</code> and the scaled pod is run on another node, it will fail to mount:</p>
<blockquote>
<p>Failed to attach volume "api-orientdb-pv" on node
"gke-testing-default-pool-7711f782-4p6f" with: googleapi: Error 400:
The disk resource
'projects/xx/zones/us-central1-a/disks/api-orientdb-testing' is
already being used by
'projects/xx/zones/us-central1-a/instances/gke-testing-default-pool-7711f782-h7xv'</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes</a></p>
<blockquote>
<p>Important! A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.</p>
</blockquote>
<p>If so, is the only way to ensure that the HPA works to use a volume plugin that supports the <code>ReadWriteMany</code> access mode?</p>
<hr>
<p><strong>Fri Jun 9 14:28:30 ICT 2017</strong></p>
<blockquote>
<p>If you want only one Pod to be able to write then create two Deployments. One with replicas: 1 and the other one that has the autoscaler attached (and has readOnly: true in it)</p>
</blockquote>
<p>OK.</p>
<blockquote>
<p>Do note that a GCE PD can only be mounted by a single node if any of the Pods are accessing it readWrite.</p>
</blockquote>
<p>Then I have to use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">label selectors</a> to ensure that all pods end up on the same node, right?</p>
<blockquote>
<p>Your question is not clear to me</p>
</blockquote>
<p>Let me explain: in the case of autoscaling, assume that by using label selectors I can ensure that the newly scaled pod ends up on the same node. Since the volume is mounted as <code>rw</code>, does having 2 pods mount the volume as <code>rw</code> break the GCE PD?</p>
<blockquote>
<p>First of all, generally, if you have a Deployment with replicas: 1 you won't have 2 Pod running at the same time (most of the time!!)</p>
</blockquote>
<p>I know.</p>
<blockquote>
<p>On the other hand if a PVC specifies ReadWriteOnce then after the first Pod is scheduled any other Pods will need to be scheduled on the same node or not be scheduled at all (most common case: there aren't enough resources on the Node)</p>
</blockquote>
<p>In case of HPA, it does not. Please see my above updates for more details.</p>
<blockquote>
<p>If for any reason you do have 2 Pods accessing the same mount readWrite then it's completely up to the application what will happen and is not kubernetes specific</p>
</blockquote>
<p>The main thing that confuses me is:</p>
<blockquote>
<p>ReadWriteOnce – the volume can be mounted as read-write by a single node</p>
</blockquote>
<p>OK, node, not pod. But in case of autoscaling, if 2 pods are running on the same node, and both mount the volume as <code>rw</code>, does GCE PD support it? If so, how does it work?</p>
| <p>It's working as intended. The <em>Once</em> in <code>ReadWriteOnce</code> refers to the number of Nodes that can use the PVC and not the number of Pods (HPA or no HPA).</p>
<p>If you want only one Pod to be able to write then create two Deployments. One with <code>replicas: 1</code> and the other one that has the autoscaler attached (and has <code>readOnly: true</code> in it). Do note that a GCE PD can only be mounted by a single node if any of the Pods are accessing it readWrite.</p>
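<p>A minimal sketch of that split, reusing the claim name and mount path from the question (the Deployment names and image are illustrative, and, as noted above, with a GCE PD all of these Pods still have to land on the node the disk is attached to):</p>
<pre><code># writer: exactly one replica, mounts the claim read-write
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-orientdb-writer
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api-orientdb
        role: writer
    spec:
      containers:
      - name: orientdb
        image: orientdb:2.2
        volumeMounts:
        - name: databases
          mountPath: /orientdb/databases
      volumes:
      - name: databases
        persistentVolumeClaim:
          claimName: api-orientdb-pv-claim
---
# readers: the HPA is attached to this Deployment, claim is mounted read-only
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api-orientdb-reader
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api-orientdb
        role: reader
    spec:
      containers:
      - name: orientdb
        image: orientdb:2.2
        volumeMounts:
        - name: databases
          mountPath: /orientdb/databases
          readOnly: true
      volumes:
      - name: databases
        persistentVolumeClaim:
          claimName: api-orientdb-pv-claim
          readOnly: true
</code></pre>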
|
<p>I set up a kubernetes cluster in AWS using KOPS; now I want to set up an NGINX ingress controller and terminate TLS with an AWS managed certificate. The topology, as I understand it, is: the AWS ELB faces the internet and terminates TLS, then forwards unencrypted traffic to the ingress service, which does the dispatching.</p>
<p>I've deployed ingress controller from <a href="https://github.com/kubernetes/ingress/tree/master/examples/aws/nginx" rel="noreferrer">https://github.com/kubernetes/ingress/tree/master/examples/aws/nginx</a></p>
<p>Except I used annotations as described on top of <a href="https://github.com/kubernetes/ingress/issues/71" rel="noreferrer">https://github.com/kubernetes/ingress/issues/71</a> to add the certificate.</p>
<p>I added the record to Route53 and opened my browser to the https address, and I get a 400 response from NGINX with the message "The plain HTTP request was sent to HTTPS port".</p>
<p>What am I doing wrong?</p>
<p>This is my ingress resource:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: dispatcher
namespace: test
spec:
rules:
- host: REDACTED
http:
paths:
- backend:
serviceName: REDACTED
servicePort: 80
path: /api/v0
</code></pre>
| <p>I managed to get this done largely using the ingress here: <a href="https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx</a> except for the ingress service I added <code>service.beta.kubernetes.io/aws-load-balancer-ssl-cert</code> annotation pointing to my certificate ARN and set <code>targetPort</code> of both the ports to 80</p>
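<p>A rough sketch of what that Service can end up looking like; the name, selector and certificate ARN are placeholders:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  annotations:
    # placeholder ARN - point this at your ACM/IAM certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 80   # the ELB terminates TLS, nginx receives plain HTTP on 80
</code></pre>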
|
<p>I have a golang service that implements a WebSocket client using gorilla that is exposed to a Google Container Engine (GKE)/k8s cluster via a NodePort (30002 in this case).</p>
<p>I've got a manually created load balancer (i.e. NOT a k8s ingress/load balancer) with HTTP/HTTPS frontends (i.e. 80/443) that forward traffic to nodes in my GKE/k8s cluster on port 30002.</p>
<p>I can get my JavaScript WebSocket implementation in the browser (Chrome 58.0.3029.110 on OSX) to connect, upgrade and send / receive messages.</p>
<p>I log ping/pongs in the golang WebSocket client and all looks good until 30s in. 30s after connection my golang WebSocket client gets an EOF / close 1006 (abnormal closure) and my JavaScript code gets a close event. As far as I can tell, neither my Golang nor my JavaScript code is initiating the WebSocket closure.</p>
<p>I don't particularly care about session affinity in this case AFAIK, but I have tried both IP and cookie based affinity in the load balancer with long lived cookies.</p>
<p>Additionally, this exact same set of k8s deployment/pod/service specs and golang service code works great on my KOPS based k8s cluster on AWS through AWS' ELBs.</p>
<p>Any ideas where the 30s forced closures might be coming from? Could that be a k8s default cluster setting specific to GKE or something on the GCE load balancer?</p>
<p>Thanks for reading!</p>
<p>-- UPDATE --</p>
<p>There is a backend configuration timeout setting on the load balancer which is for "How long to wait for the backend service to respond before considering it a failed request".</p>
<p>The WebSocket is not unresponsive. It is sending ping/pong and other messages right up until getting killed which I can verify by console.log's in the browser and logs in the golang service.</p>
<p>That said, if I bump the load balancer backend timeout setting to 30000 seconds, things "work".</p>
<p>It doesn't feel like a real fix, though, because the load balancer will continue to inappropriately feed traffic to services that are <em>actually</em> unresponsive, never mind if the WebSocket <em>does</em> become unresponsive.</p>
<p>I've isolated the high timeout setting to a specific backend setting using a path map, but hoping to come up with a real fix to the problem.</p>
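<p>For reference, this is roughly how I bumped the timeout on that backend (the backend service name is a placeholder, and depending on your gcloud version the <code>--global</code> flag may or may not be required):</p>
<pre><code># find the backend service name first
gcloud compute backend-services list

# raise the response timeout on it
gcloud compute backend-services update my-websocket-backend \
    --global \
    --timeout=30000
</code></pre>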
| <p>I think this may be Working as Intended. Google just updated the documentation today (about an hour ago).</p>
<p><a href="https://cloud.google.com/compute/docs/load-balancing/http/#websocket_proxy_support" rel="nofollow noreferrer">LB Proxy Support docs</a></p>
<p><a href="https://cloud.google.com/compute/docs/load-balancing/http/backend-service#backend_service_components" rel="nofollow noreferrer">Backend Service Components docs</a></p>
<p>Cheers,</p>
<p>Matt</p>
|
<p>I'm trying to connect to a container through the Kubernetes WebSocket API, from a container running within Kubernetes, without any success.</p>
<p>Install <code>wscat</code>:</p>
<pre><code>apt-get update
apt-get install -y npm
ln -s /usr/bin/nodejs /usr/bin/node
npm install -g n
n stable
npm install -g wscat
</code></pre>
<p>Exec on Kubernetes API:</p>
<pre><code>wscat -c "wss://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/my-pod-1623018646-kvc4b/exec?container=aws&stdin=1&stdout=1&stderr=1&tty=1&command=bash" \
--ca /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-H "Authorization: Bearer $(</var/run/secrets/kubernetes.io/serviceaccount/token)"
error: Error: unexpected server response (400)
</code></pre>
<p>Do you know what I'm doing wrong?</p>
<p>Note that the following works:</p>
<pre><code>curl https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/my-pod-1623018646-kvc4b \
--cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-H "Authorization: Bearer $(</var/run/secrets/kubernetes.io/serviceaccount/token)"
</code></pre>
<p>Apparently some people are able to connect: <a href="https://stackoverflow.com/a/43841572/599728">https://stackoverflow.com/a/43841572/599728</a></p>
<p>Cheers</p>
| <p>I just found out that the container name was wrong:<br>
<code>?container=aws</code>: there was no container named <code>aws</code> in this pod.</p>
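<p>For anyone hitting the same 400, a quick way to double-check which container names a pod actually has before building the exec URL (pod name taken from the question):</p>
<pre><code>kubectl get pod my-pod-1623018646-kvc4b \
    -o jsonpath='{.spec.containers[*].name}'
</code></pre>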
|
<p>I have a pod/service running an application that consumes etcd as a synchronization system and datastore. I want to run etcd within the pod, such that all of the replicas form a coherent cluster. In other words, so the application in replica #1 can write "foo" to <code>localhost:4001/v2/keys/my_key</code> and then replica #2 can then read <code>localhost:4001/v2/keys/my_key</code> and get "foo" as a result.</p>
<p>It's not clear how this can be done, since pod replicas are not individually addressable. I could in theory create an "etcd" service exposing the cluster ports, but any requests would round-robin to all the replicas so the individual etcd nodes would not be able to find each other.</p>
<p>Am I approaching this problem the correct way?</p>
| <p>You can deploy etcd on Kubernetes using an operator: a Deployment (from <code>extensions/v1beta1</code>) running the <a href="https://quay.io/repository/coreos/etcd-operator" rel="nofollow noreferrer"><code>quay.io/coreos/etcd-operator</code></a> image.</p>
<p>An example deployment with a cluster size of 3 looks like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: etcd-operator
spec:
replicas: 1
template:
metadata:
name: etcd-operator
labels:
app: etcd
component: operator
spec:
containers:
- name: etcd-operator
image: quay.io/coreos/etcd-operator:v0.3.0
env:
- name: MY_POD_NAMESPACE
valueFrom: { fieldRef: { fieldPath: metadata.namespace } }
- name: MY_POD_NAME
valueFrom: { fieldRef: { fieldPath: metadata.name } }
---
apiVersion: etcd.coreos.com/v1beta1
kind: Cluster
metadata:
name: etcd-cluster
labels:
app: etcd
component: cluster
spec:
size: 3
version: "3.1.8"
</code></pre>
<p>Please be aware of the <strong>beta status</strong> of this project. However according to the maintainers the operator is <strong>now stable</strong>. I have deployed the configuration above successfully but I didn't run any of this in production.</p>
<p>The operator code is available <a href="https://github.com/coreos/etcd-operator" rel="nofollow noreferrer">on github</a>. You can find additional documentation there.</p>
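<p>As a rough usage sketch (the file name is a placeholder, and the exact names of the resources the operator creates can vary by operator version):</p>
<pre><code># apply the operator Deployment and the Cluster resource above
kubectl apply -f etcd-cluster.yaml

# watch the operator bring up the three etcd members
kubectl get pods -w

# the operator also creates Services for the cluster; point clients at the
# client Service (port 2379) it creates for the "etcd-cluster" resource
kubectl get services
</code></pre>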
|
<p>Running <code>kubectl logs</code> shows me the stderr/stdout of one Kubernetes container. </p>
<p>How can I get the aggregated stderr/stdout of a set of pods, preferably those created by a certain replication controller?</p>
| <p>You can use labels</p>
<pre><code>kubectl logs -l app=elasticsearch
</code></pre>
<p>And you'd probably want to specify <code>--all-containers --ignore-errors</code> in order to:</p>
<ul>
<li>Include logs from pods with multiple containers</li>
<li>Continue to next pod on fatal error (e.g. logs could not be retrieved)</li>
</ul>
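<p>Putting the above together, with the label from the example:</p>
<pre><code>kubectl logs -l app=elasticsearch --all-containers --ignore-errors
</code></pre>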
|
<p>Among the Kubernetes, OpenShift, Docker Swarm and Docker DataCenter deployment tools, which ones have automatic rollback in case a failure happens?</p>
| <p>All of them have a sort of rollback mechanism built in, with commands available for you to control it. However, these are not fully automated and you might have to do it manually. For Kubernetes, here is the related github <a href="https://github.com/kubernetes/kubernetes/issues/23211" rel="nofollow noreferrer">issue</a> and another <a href="https://github.com/docker/swarmkit/issues/1085" rel="nofollow noreferrer">issue</a> for Swarm. Docker DataCenter uses Swarm already.</p>
<p>Rollback documentations I was able to find:</p>
<ul>
<li>Kubernetes: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment" rel="nofollow noreferrer">Rolling Back a Deployment</a></li>
<li>Swarm: <a href="https://docs.docker.com/engine/swarm/services/#roll-back-to-the-previous-version-of-a-service" rel="nofollow noreferrer">Roll back to the previous version of a service</a> (features <code>--update-delay</code>) </li>
<li>OpenShift: <a href="https://docs.openshift.com/enterprise/3.1/dev_guide/deployments.html#rolling-back-a-deployment" rel="nofollow noreferrer">Rolling Back a Deployment</a></li>
</ul>
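<p>For Kubernetes specifically, the (manual) rollback itself is a single command once a bad rollout is noticed; the deployment name below is a placeholder:</p>
<pre><code># show the revision history of a deployment
kubectl rollout history deployment/my-app

# roll back to the previous revision
kubectl rollout undo deployment/my-app

# or roll back to a specific revision
kubectl rollout undo deployment/my-app --to-revision=2
</code></pre>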
|
<p>Scenario: I have a container image that needs to run with <code>net.core.somaxconn</code> > default_value. I am using Kubernetes to deploy and run it in GCE.</p>
<p>The nodes (VMs) in my cluster are configured with the correct <code>net.core.somaxconn</code> value. Now the challenge is to start the docker container with the flag <code>--sysctl=net.core.somaxconn=4096</code> from kubernetes. I cannot seem to find the proper documentation to achieve this.</p>
<p>Am I missing something obvious? </p>
<p><strong>Solution 1</strong>: use <a href="https://stackoverflow.com/questions/43032406/gke-cant-disable-transparent-huge-pages-permission-denied/43081893#43081893">this answer</a> as a template to see how to configure the whole node to that sysctl value; you can use something like <code>echo 4096 >/proc/sys/net/core/somaxconn</code>. Thereafter you can put a label on the nodes that use a VM with the needed sysctl configuration and use nodeSelector in the Pod spec to force scheduling to those nodes. (This only works with <em>non</em> namespaced settings; net.core.somaxconn appears to be namespaced. I would like to leave this solution here as it might help others.)</p>
<p><strong>Solution 2</strong>: again, starting from the <a href="https://stackoverflow.com/questions/43032406/gke-cant-disable-transparent-huge-pages-permission-denied/43081893#43081893">same answer</a> you can add <code>--experimental-allowed-unsafe-sysctls=net.core.somaxconn</code> to the <code>kubelet</code> command line (this only works with <em>namespaced</em> settings; net.core.somaxconn is namespaced). Then you can simply do something like (<a href="https://kubernetes.io/docs/concepts/cluster-administration/sysctl-cluster/#enabling-unsafe-sysctls" rel="nofollow noreferrer">source</a>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: sysctl-example
annotations:
security.alpha.kubernetes.io/sysctls: net.core.somaxconn=4096
</code></pre>
<p>I hope this helps..</p>
|
<p>I am running a web site with Kubernetes on Google Cloud. At the moment, everything is working well - through http. But I need https. I have several services and one of them is exposed to the outside world; let's call it web. As far as I know, this is the only service that needs to be modified. I tried creating a static IP and a TCP/SSL load balancer ssl-LB in the Networking section of GCP and using that LB in web.yaml, which I created. Creating the service gets stuck with:</p>
<pre><code>Error creating load balancer (will retry): Failed to create load
balancer for service default/web: requested ip <IP> is
neither static nor assigned to LB
aff3a4e1f487f11e787cc42010a84016(default/web): <nil>
</code></pre>
<p>According to GCP, however, my IP is static. I cannot find the hashed LB anywhere, and it should be assigned to ssl-LB anyway. How do I assign this properly?</p>
<p><strong>More details:</strong></p>
<p>Here are the contents of web.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web
labels:
...
spec:
type: LoadBalancer
loadBalancerIP: <RESERVED STATIC IP>
ports:
- port: 443
targetPort: 7770
selector:
...
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web
spec:
replicas: 1
template:
metadata:
labels:
...
spec:
containers:
- name: web
image: gcr.io/<PROJECT>/<IMAGE NAME>
ports:
- containerPort: 7770
</code></pre>
| <p>Since you have not mentioned this already, I'm just assuming you're using Google Container Engine (GKE) for your Kubernetes setup.</p>
<p>In the service resource manifest, if you set the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer" rel="noreferrer"><code>Type</code> to <code>LoadBalancer</code></a>, Kubernetes on GKE automatically sets up Network load balancing (L4 Load balancer) using GCE. You will have to terminate connections in your pod using your own custom server or something like <code>nginx</code>/<code>apache</code>.</p>
<p>If your goal is to set up a (HTTP/HTTPS) L7 load balancer (which looks to be the case), it will be simpler and easier to use the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer"><code>Ingress</code> resource in Kubernetes</a> (starting with <code>v1.1</code>). GKE automatically sets up a GCE HTTP/HTTPS L7 load balancing with this setup.</p>
<p>You will be able to add your TLS certificates which will get provisioned on the GCE load balancer automatically by GKE.</p>
<p>This setup has the following advantages:</p>
<ol>
<li>Specify services per URL path and port (it uses <a href="https://cloud.google.com/compute/docs/load-balancing/http/url-map" rel="noreferrer"><code>URL Maps</code></a> from GCE to configure this).</li>
<li>Set up and terminate SSL/TLS on the GCE load balancer (it uses <a href="https://cloud.google.com/compute/docs/load-balancing/http/target-proxies" rel="noreferrer"><code>Target proxies</code></a> from GCE to configure this).</li>
<li>GKE will automatically also configure the GCE <a href="https://cloud.google.com/compute/docs/load-balancing/health-checks" rel="noreferrer"><code>health checks</code></a> for your services.</li>
</ol>
<p>Your responsibility will be to handle the backend service logic to handle requests in your pods.</p>
<p>More info available on the <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="noreferrer">GKE page about setting up HTTP load balancing</a>.</p>
<p>Remember that when using GKE, it automatically uses the available GCE load balancer support for both the use cases described above and you will not need to manually set up <a href="https://cloud.google.com/compute/docs/load-balancing/" rel="noreferrer">GCE load balancing</a>.</p>
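<p>As a minimal sketch of the Ingress side of that setup (the secret and service names are placeholders; on GKE the backing Service should be of type <code>NodePort</code>):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
  - secretName: web-tls   # e.g. kubectl create secret tls web-tls --cert=cert.pem --key=key.pem
  backend:
    serviceName: web      # NodePort Service in front of your pods
    servicePort: 80
</code></pre>
<p>GKE then provisions the HTTPS L7 load balancer and attaches the certificate from the secret for you.</p>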
|
<p>I have created a pod with two containers. I know that different containers in a pod share the same network namespace (i.e., the same IP and port space) and can also share a storage volume between them via configmaps. My question is: do the containers in a pod also share the same filesystem? For instance, in my case I have one container 'C1' that generates a dynamic file every 10 min in /var/targets.yml, and I want the other container 'C2' to read this file and perform its own independent action.</p>
<p>Is there a way to do this, maybe some workaround via configmaps? Or do I have to access the file via networking, since each container has its own IP (which may not be a good idea when it comes to pod restarts)? Any suggestions or references, please?</p>
| <p>You can use an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> for this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: gcr.io/google_containers/test-webserver
name: generating-container
volumeMounts:
- mountPath: /cache
name: cache-volume
- image: gcr.io/google_containers/test-webserver
name: consuming-container
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
</code></pre>
<p>But be aware that the data isn't persistent across container recreations.</p>
|
<p>I have been working on a simple Node.js application that SETs and GETs a key from etcd using Istio to connect the two services together. I have tried a few variations but keep seeing the same error returned.</p>
<blockquote>
<p>nodeAppTesting failed(etcd-operator) ->{"errors":[{"server":"<a href="http://etcd-operator:2379" rel="nofollow noreferrer">http://etcd-operator:2379</a>","httperror":null,"httpstatus":503,"httpbody":"upstream connect error or disconnect/reset before headers","response":{"statusCode":503,"body":"upstream connect error or disconnect/reset before headers","headers":{"content-length":"57","content-type":"text/plain","date":"Thu, 08 Jun 2017 17:17:04 GMT","server":"envoy","x-envoy-upstream-service-time":"5"},"request":{"uri":{"protocol":"http:","slashes":true,"auth":null,"host":"etcd-operator:2379","port":"2379","hostname":"etcd-operator","hash":null,"search":null,"query":null,"pathname":"/v2/keys/testKey","path":"/v2/keys/testKey","href":"<a href="http://etcd-operator:2379/v2/keys/testKey" rel="nofollow noreferrer">http://etcd-operator:2379/v2/keys/testKey</a>"},"method":"GET","headers":{"accept":"application/json"}}},"timestamp":"2017-06-08T17:17:04.544Z"}],"retries":0}</p>
</blockquote>
<p>Looking at the proxy logs, I can see that client and server proxies are involved in the communication (and this is verified I think in seeing envoy in the server header).</p>
<p>Attaching the Node.js app and the deployment.yaml.
<strong>server.js</strong></p>
<pre><code>var http = require('http');
var Etcd = require('node-etcd');
var fs = require('fs');
var httpClient = require('request');
var handleRequest = function(request, response) {
var scheme = "http";
var ipAddress = "etcd-operator"
var port = "2379";
var connectionAddress = scheme +"://" + ipAddress +":" + port;
console.log('Received request for URL: ' + request.url + " connecting to " + connectionAddress);
var etcd = new Etcd([connectionAddress] /*, options */);
etcd.set("testKey" , "foo");
etcd.get("testKey", function(err, res){
if(!err){
response.writeHead(200);
response.write("nodeAppTesting("+ ipAddress+") ->"+ JSON.stringify(res) ) ;
response.end();
}else{
response.writeHead(500);
response.write("nodeAppTesting failed("+ ipAddress+") ->"+ JSON.stringify(err) ) ;
console.log("Encountered error during runtime", JSON.stringify(err));
response.end();
}
});
}
var www = http.createServer(handleRequest);
www.listen(8080);
console.log("App up and running on port 8080")
</code></pre>
<p><strong>deployment.yaml</strong> </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: etcd-node
labels:
app: etcd-node
spec:
ports:
- port: 8080
name: http
selector:
app: etcd-node
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: etcd-node-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: etcd-node
spec:
containers:
- name: etcd-node
image: todkap/etcd-node:v1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
---
##################################################################################################
# etcd service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: etcd-operator
labels:
app: etcd-operator
spec:
ports:
- port: 2379
targetPort: 2379
name: http
selector:
app: etcd-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: etcd-operator
spec:
replicas: 1
template:
metadata:
labels:
name: etcd-operator
app: etcd-operator
version: v1
spec:
containers:
- name: etcd-operator
image: quay.io/coreos/etcd-operator:v0.2.6
imagePullPolicy: IfNotPresent
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
ports:
- containerPort: 2379
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gateway2
annotations:
kubernetes.io/ingress.class: "istio"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: etcd-node
servicePort: 8080
- http:
paths:
- path: /
backend:
serviceName: etcd-operator
servicePort: 2379
- path: /v2/keys/*
backend:
serviceName: etcd-operator
servicePort: 2379
---
</code></pre>
| <p>I was able to resolve the issues reported here. I will be publishing a recipe demonstrating the flow sometime this week. For now, we can consider this closed. In the future, I will move to the forums or post on issues. Be on the lookout for the article (I will update this post with the link when available).
Thanks for the help, guidance and suggestions.</p>
<p>The main issues were consistency in referencing the etcd service, consistency in referencing my node app as a Deployment, Service and Ingress, and then finally exposing the NodePort.</p>
<p><strong>update</strong></p>
<p><a href="https://medium.com/p/istio-is-not-just-for-microservices-4ed199322bf4" rel="nofollow noreferrer">Published a article demonstrating the working flow.</a> </p>
|
<p>I am getting the following error when trying to access the Kubernetes dashboard at the URL listed in the cluster info output of:</p>
<pre><code>kubectl cluster-info
</code></pre>
<p>Also pops up in incognito mode in Chrome:</p>
<blockquote>
<p>User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched.\nUnknown user \"system:anonymous\""</p>
</blockquote>
| <p>I was able to access it with a local proxy by running:</p>
<pre><code>kubectl proxy
</code></pre>
<p>And then navigating to <a href="http://127.0.0.1:8001/ui" rel="nofollow noreferrer">http://127.0.0.1:8001/ui</a> (<a href="http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/workload?namespace=default" rel="nofollow noreferrer">http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/workload?namespace=default</a>).</p>
<p>Might be related to an upgrade to version 1.6.</p>
|
<p>I defined my deployment resources</p>
<pre><code> resources:
limits:
cpu: 900m
memory: 2500Mi
</code></pre>
<p>now on <a href="http://localhost:8001/api" rel="nofollow noreferrer">http://localhost:8001/api</a> <a href="https://i.stack.imgur.com/riZLA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/riZLA.png" alt="enter image description here"></a></p>
<p>how can I get the <strong>max</strong> usage of memory and CPU (in order to understand usage and define resources well)?</p>
| <p>Usually, you will need to implement some monitoring solution for your K8s cluster to store historical metrics. </p>
<p>If your Kubernetes deployment runs on GKE, you can use <a href="https://cloudplatform.googleblog.com/2015/12/monitoring-Container-Engine-with-Google-Cloud-Monitoring.html" rel="nofollow noreferrer">Stackdriver</a>
for that, and if you have opted for <a href="https://cloud.google.com/stackdriver/pricing" rel="nofollow noreferrer">Stackdriver Premium</a>, you will see your historical metrics there. If it's your own Kubernetes deployment, <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus/Grafana</a> is a popular choice.</p>
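<p>For a quick point-in-time snapshot (not historical maxima), you can also query the metrics pipeline directly, assuming Heapster is running in the cluster:</p>
<pre><code># current CPU/memory usage per node
kubectl top node

# current CPU/memory usage per pod in a namespace
kubectl top pod --namespace=default
</code></pre>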
|
<p>Is it possible to run "kubeadm init" without Internet access?</p>
<p>When executing kubeadm init on isolated networks where the host is not allowed to make external connections, it fails on download of a stable version check of some sort, as it tries to retrieve <a href="https://storage.googleapis.com/kubernetes-release/release/stable-1.6.txt" rel="nofollow noreferrer">https://storage.googleapis.com/kubernetes-release/release/stable-1.6.txt</a> .</p>
<pre><code># kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
unable to get URL "https://storage.googleapis.com/kubernetesrelease/release/stable-1.6.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.6.txt: dial tcp 216.58.204.80:443: i/o timeout
</code></pre>
<p>Why is this check needed? The contents of that URL currently seem to be "v1.6.4", which is the version that is installed:</p>
<pre><code># kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>This seems to be a behavior introduced after 1.6.0. I have looked at documentation, flags, configuration options but have not found a way to execute kubeadm init without this (not even with --skip-preflight-checks).</p>
| <p>Resolved by using the following command:</p>
<pre><code>kubeadm init --kubernetes-version=v1.6.4
</code></pre>
<p>(Note the "v" in the version number.)</p>
|
<p>I'm using Rancher over Kubernetes to create our test/dev environment. First of all, it's a great tool and I'm amazed at how it simplifies the management of such environments.</p>
<p>That said, I have an issue (which probably comes from my lack of knowledge of Rancher). I am trying to automate the deployment via Jenkins, and as we will have several stacks in our test environment, I want to dynamically update the load balancer instances from Jenkins with the Rancher CLI to add routes for each new environment.</p>
<p>At the moment, I am just trying to run this command:</p>
<pre><code>rancher --url http://myrancher_server:8080 --access-key <key> --secret-key <secret> --env dev-test stack create kubernetes-ingress-lbs -r loadbalancer-rancher-service.yml
</code></pre>
<p>My docker-compose.yml file is like the following :</p>
<pre><code>version: '2'
services:
frontend:
image: 172.19.51.97:5000/frontend
dev-test-lb:
image: rancher/load-balancer-service
ports:
- 82: 8086
links:
- fronted:frontend
</code></pre>
<p>My rancher compose file is like this:</p>
<pre><code>version: '2'
services:
dev-test-lb:
scale: 4
lb_config:
port_rules:
- source_port: 82
path: /products
target_port: 8086
service: products
- source_port: 82
path: /
target_port: 4201
service: frontend
health_check:
port: 42
interval: 2000
unhealthy_threshold: 3
healthy_threshold: 2
response_timeout: 2000
</code></pre>
<p>Now when I execute this I have the following response :</p>
<pre><code>Bad response statusCode [422]. Status [422 status code 422]. Body: [code=NotUnique, fieldName=name, baseType=error] from [http://myrancher_server:8080/v2-beta/projects/1a21/stacks]
</code></pre>
<p>Obviously I can't edit an existing stack with a service that already exists. Do you know whether it's best practice to do it like that? I checked the man page, and I only see the "create" action on "rancher stack", so I'm wondering if we can update?</p>
<p>My rancher server is v1.5.10 and all my rancher agents and Kubernetes drivers are up-to-date.</p>
<p>Thanks a lot for your help fellows :)</p>
| <p>OK, just for information: I found that this is possible via the Rancher REST API.</p>
<p>Check the following link : <a href="http://docs.rancher.com/rancher/v1.2/en/api/v2-beta/api-resources/service/" rel="nofollow noreferrer">http://docs.rancher.com/rancher/v1.2/en/api/v2-beta/api-resources/service/</a></p>
<p>I didn't find that at first because my Googling was all about the Rancher CLI. But as the CLI is still beta, we can't do the same things with it as via the REST API.</p>
<p>Basically, just send an update resource query:</p>
<p>PUT rancherserver/v2-beta/projects/1a12/services/</p>
<pre><code>{
"description": "Loadbalancer for our test env",
"lbConfig": {
"portRules": [
{
"hostname": "",
"protocol": "http",
"source_port": "80",
"targetPort": "4200",
"path": "/"
}
]
},
"name": "kubernetes-ingress-lbs"
}
</code></pre>
|
<p><strong>1.</strong> Followed -> <a href="https://kubernetes.io/docs/getting-started-guides/ubuntu/manual/" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/ubuntu/manual/</a></p>
<p>After cloning as mentioned in the doc (<code>git clone --depth 1 https://github.com/kubernetes/kubernetes.git</code>), I could not find the file <code>cluster/ubuntu/config-default.sh</code> to configure the cluster.</p>
<p>OK, I left it at the default and tried to run <code>KUBERNETES_PROVIDER=ubuntu ./kube-up.sh</code>, but there is no <code>verify-kube-binaries.sh</code> file:</p>
<pre><code>root@ultron:/home/veeru# KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
... Starting cluster using provider: ubuntu
... calling verify-prereqs
Skeleton Provider: verify-prereqs not implemented
... calling verify-kube-binaries
./kube-up.sh: line 44: verify-kube-binaries: command not found
</code></pre>
<p>Outdated Documentation?</p>
<p><strong>2.</strong> From the official <a href="https://github.com/kubernetes/kubernetes" rel="noreferrer">git repo</a>, I downloaded version 1.6.4 (<code>Branch</code>-><code>Tag</code>-><code>v1.6.4</code>).
After configuring <code>cluster/ubuntu/config-default.sh</code> I ran <code>KUBERNETES_PROVIDER=ubuntu ./kube-up.sh</code> in the <code>cluster</code> directory. But some of the links are outdated!</p>
<p><a href="https://i.stack.imgur.com/yNgIx.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/yNgIx.jpg" alt="error"></a></p>
<p><strong>3.</strong> Finally I tried on <code>Ubuntu 16</code> with <code>kubeadm</code>: <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/kubeadm/</a></p>
<p>The <code>kubeadm init</code> command completed successfully, but when I try <code>kubectl cluster-info</code>, it shows <code>The connection to the server localhost:8080 was refused</code>.</p>
<p>Any help? (Mainly I want to install Kubernetes on Ubuntu 14.)</p>
<p><strong>UPDATE 1</strong></p>
<p>Point 3(K8 on Ubuntu 16 with <code>kubeadm</code>) is resolved by running</p>
<pre><code> sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
</code></pre>
| <p>I have had some fun with this :-)</p>
<p>So, Kubernetes 1.6.4 on Ubuntu 14.04 (Trusty):</p>
<ul>
<li>have <code>nsenter</code> built & installed (<code>nsenter</code> is a hard <code>kubelet</code> dependency and is not present in 14.04)</li>
<li>patch up the <code>kubelet</code> and <code>kubeadm</code> packages to remove the systemd dependency (and repace it with an <code>upstart</code> script)</li>
<li>start <code>kubelet</code> manually during <code>kubeadm init</code> (because <code>kubeadm</code> only supports the systemd-style init system)</li>
</ul>
<p>I've created a proof-of-concept script for the above. It's available at:
<a href="https://gist.github.com/lenartj/0b264cb70e6cb50dfdef37084f892554#file-trusty-kubernetes-sh" rel="noreferrer">https://gist.github.com/lenartj/0b264cb70e6cb50dfdef37084f892554#file-trusty-kubernetes-sh</a></p>
<p>You can follow the official guide <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="noreferrer">installing kubeadm</a> and <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">creating the cluster</a>. Just skip the <code>kubeadm</code> and <code>kubelet</code> installation steps and use the script above instead.</p>
<p>There is a demo at: <a href="https://asciinema.org/a/124160" rel="noreferrer">https://asciinema.org/a/124160</a></p>
<p>The steps are:</p>
<ol>
<li>Install docker: <code>curl -sSL https://get.docker.com/ | sh</code></li>
<li>Install apt-transport-https: <code>apt-get update && apt-get install -y apt-transport-https</code></li>
<li>Add kubernetes repository key: <code>curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -</code></li>
<li>Add kubernetes-<em>xenial</em> repository: <code>echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >/etc/apt/sources.list.d/kubernetes.list</code></li>
<li>Install kubectl, kubernetes-cni and dependencies of kubelet: <code>apt-get install -y kubectl kubernetes-cni binutils ebtables socat</code></li>
<li>Run the script to create the kubelet and kubeadm packages and the nsenter binary: <code>curl -sSL https://gist.github.com/lenartj/0b264cb70e6cb50dfdef37084f892554#file-trusty-kubernetes-sh | bash</code></li>
<li>Install the nsenter binary: <code>cp -v /tmp/tmp.xxxxx/nsenter /usr/local/bin</code></li>
<li>Install kubelet and kubeadm packages: <code>dpkg -i /tmp/tmp.xxxxx/*-patched</code></li>
<li>Initialize the master: <code>kubeadm init</code></li>
<li>Start the kubelet (while kubeadm is waiting for the control plane): <code>service kubelet start</code></li>
<li>Configure kubectl admin credentials: <code>cp /etc/kubernetes/admin.conf $HOME/; chown $(id -u):$(id -g) $HOME/admin.conf; export KUBECONFIG=$HOME/admin.conf</code></li>
<li>Install Pod networking (weave-net): <code>kubectl apply -f https://git.io/weave-kube-1.6</code></li>
</ol>
<p>It's not a nice & clean solution but it works.</p>
|
<p>I currently have a service that looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: httpd
spec:
ports:
- port: 80
targetPort: 80
name: http
protocol: TCP
- port: 443
targetPort: 443
name: https
protocol: TCP
selector:
app: httpd
externalIPs:
- 10.128.0.2 # VM's internal IP
</code></pre>
<p>I can receive traffic fine from the external IP bound to the VM, but all of the requests are received by the HTTP server with the source IP <code>10.104.0.1</code>, which is most definitely an internal IP – even when I connect to the VM's external IP from outside the cluster.</p>
<p>How can I get the real source IP for the request without having to set up a load balancer or ingress?</p>
| <p>This is not simple to achieve -- because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that's backing your Service.</p>
<p>There are some beta annotations that you can use to get around this, specifically <code>service.beta.kubernetes.io/external-traffic: OnlyLocal</code>.</p>
<p>More info in the docs, here: <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer</a></p>
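<p>For reference, the annotation goes into the Service metadata. A rough sketch, assuming you switch the Service to a <code>NodePort</code>/<code>LoadBalancer</code> type (which is exactly the caveat discussed next):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    # only route to pods on the node that received the traffic,
    # which preserves the client source IP
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>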
<p>But this does not meet your additional requirement of not requiring a LoadBalancer. Can you expand upon why you don't want to involve a LoadBalancer?</p>
|
<p>My understanding of this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">doc page</a> is that I can configure service accounts for Pods and hopefully also Deployments, so I can access the k8s API in Kubernetes 1.6+. In order not to alter or use the default one, I want to create a service account and mount its certificate into the pods of a deployment.</p>
<p>How do I achieve something similar like in this example for a deployment?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
serviceAccountName: build-robot
automountServiceAccountToken: false
</code></pre>
| <p>As you will need to specify 'podSpec' in Deployment as well, you should be able to configure the service account in the same way. Something like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-deployment
spec:
template:
# Below is the podSpec.
metadata:
name: ...
spec:
serviceAccountName: build-robot
automountServiceAccountToken: false
...
</code></pre>
|
<p>I intend to use Kubernetes and Ingress for load balancing. I'm trying to learn how to set up Flask, uWSGI and Nginx.
I see this tutorial that has all three installed in the same container, and I'm wondering whether I should use it or not.
<a href="https://ianlondon.github.io/blog/deploy-flask-docker-nginx/" rel="noreferrer">https://ianlondon.github.io/blog/deploy-flask-docker-nginx/</a></p>
<p>I'm guessing the benefit of having them as separate containers and separate pods is that they can then all scale individually?</p>
<p>But also, should Flask and uwsgi even be in separate containers? (or Flask and Gunicorn, since uwsgi seems to be very similar to Gunicorn) </p>
| <p>Flask is a web framework, any application written with it needs a WSGI server to host it. Although you could use the Flask builtin developer server, you shouldn't as that isn't suitable for production systems. You therefore need to use a WSGI server such as uWSGI, gunicorn or mod_wsgi (mod_wsgi-express). Since the web application is hosted by the WSGI server, it can only be in the same container, but there isn't a separate process for Flask, it runs in the web server process.</p>
<p>Whether you need a separate web server such as nginx then depends. In the case of mod_wsgi you don't as it uses the Apache web server and so draws direct benefits from that. When using mod_wsgi-express it also is already setup to run in an optimal base configuration and how it does that avoids the need to have a separate front facing web server like people often do with nginx when using uWSGI or gunicorn.</p>
<p>For containerised systems, where the platform already provides a routing layer for load balancing, as is the case for ingress in Kubernetes, using nginx in the mix could just add extra complexity you don't need and could reduce performance. This is because you either have to run nginx in the same container, or create a separate container in the same pod and use shared <code>emptyDir</code> volume type to allow them to communicate via a UNIX socket still. If you don't use a UNIX socket, and use INET socket, or run nginx in a completely different pod, then it is sort of pointless as you are introducing an additional hop for traffic which is going to be more expensive than having it closely bound using a UNIX socket. The uWSGI server doesn't perform as well when accepting requests over INET when coupled with nginx, and having nginx in a separate pod, potentially on different host, can make that worse.</p>
<p>Part of the reason for using nginx in front is that it can protect you from slow clients due to request buffering, as well as other potential issues. When using ingress though, you already have a haproxy or nginx front end load balancer that can to a degree protect you from that. So it is really going to depend on what you are doing as to whether there is a point in introducing an additional nginx proxy in the mix. It can be simpler to just put gunicorn or uWSGI directly behind the load balancer.</p>
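<p>For illustration, the "no extra nginx" variant is just the WSGI server as the container entry point; a minimal sketch assuming the Flask app object is called <code>app</code> in <code>app.py</code>:</p>
<pre><code># the container runs gunicorn hosting the Flask app directly;
# the ingress/load balancer talks to port 8000, no nginx in between
gunicorn --bind 0.0.0.0:8000 --workers 4 app:app
</code></pre>
<p>The Pod then just exposes that port (<code>containerPort: 8000</code>) and the ingress routes straight to it.</p>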
<p>Suggestions are as follows.</p>
<ul>
<li><p>Also look at mod_wsgi-express. It was specifically developed with containerised systems in mind to make it easier, and can be a better choice than uWSGI and gunicorn.</p></li>
<li><p>Test different WSGI servers and configurations with your actual application with real world traffic profiles, not benchmarks which just overload it. This is important as the dynamics of a Kubernetes based system, along with how its routing may be implemented, means it all could behave a lot differently to more traditional systems you may be used to.</p></li>
</ul>
|
<p>The CentOS Atomic Host is shipped without the kubernetes-master package built into the image. Instead, you need to run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files. Do you have any good tutorials on how to form a kubernetes cluster of atomic hosts? The tutorials and documentation I have seen so far were done on Fedora Atomic and CentOS 7.</p>
| <p>You should be able to get a Kubernetes cluster working using the <a href="https://github.com/kubernetes/contrib/tree/master/ansible" rel="nofollow noreferrer">old contrib/ansible playbooks</a>. While somewhat outdated, they've been tested to work. At this point the Atomic team is working towards <a href="http://www.projectatomic.io/blog/2017/05/testing-system-containerized-kubeadm/" rel="nofollow noreferrer">enabling kubeadm-based installs</a>.</p>
|
<p>I am trying to run microservice applications with kubernetes. I have rabbitmq, elasticsearch and eureka discovery service running on kubernetes. Other than that, I have three microservice applications. When I run two of them, it is fine; however, when I run the third one they all begin restarting over and over again for no apparent reason.</p>
<p>One of my config files:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hrm
labels:
app: suite
spec:
type: NodePort
ports:
- port: 8086
nodePort: 30001
selector:
app: suite
tier: hrm-core
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hrm
spec:
replicas: 1
template:
metadata:
labels:
app: suite
tier: hrm-core
spec:
containers:
- image: privaterepo/hrm-core
name: hrm
ports:
- containerPort: 8086
imagePullSecrets:
- name: regsecret
</code></pre>
<p>Result from kubectl describe pod hrm:</p>
<pre><code> State: Running
Started: Mon, 12 Jun 2017 12:08:28 +0300
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 12 Jun 2017 12:07:05 +0300
Ready: True
Restart Count: 5
18m 18m 1 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hrm" with CrashLoopBackOff: "Back-off 10s restarting failed container=hrm pod=hrm-3288407936-cwvgz_default(915fb55c-4f4a-11e7-9240-080027ccf1c3)"
</code></pre>
<p>kubectl get pods:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
discserv-189146465-s599x 1/1 Running 0 2d
esearch-3913228203-9sm72 1/1 Running 0 2d
hrm-3288407936-cwvgz 1/1 Running 6 46m
parabot-1262887100-6098j 1/1 Running 9 2d
rabbitmq-279796448-9qls3 1/1 Running 0 2d
suite-ui-1725964700-clvbd 1/1 Running 3 2d
</code></pre>
<p>kubectl version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"2017-04-07T20:43:50Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>minikube version:</p>
<pre><code>minikube version: v0.18.0
</code></pre>
<p>When I look at the pod logs, there is no error. It seems like it starts without any problem. What could be the problem here?</p>
<p>Edit: output of <code>kubectl get events</code>:</p>
<pre><code>19m 19m 1 discserv-189146465-lk3sm Pod Normal SandboxChanged kubelet, minikube Pod sandbox changed, it will be killed and re-created.
19m 19m 1 discserv-189146465-lk3sm Pod spec.containers{discserv} Normal Pulling kubelet, minikube pulling image "private repo"
19m 19m 1 discserv-189146465-lk3sm Pod spec.containers{discserv} Normal Pulled kubelet, minikube Successfully pulled image "private repo"
19m 19m 1 discserv-189146465-lk3sm Pod spec.containers{discserv} Normal Created kubelet, minikube Created container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67
19m 19m 1 discserv-189146465-lk3sm Pod spec.containers{discserv} Normal Started kubelet, minikube Started container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67
19m 19m 1 esearch-3913228203-6l3t7 Pod Normal SandboxChanged kubelet, minikube Pod sandbox changed, it will be killed and re-created.
19m 19m 1 esearch-3913228203-6l3t7 Pod spec.containers{esearch} Normal Pulled kubelet, minikube Container image "elasticsearch:2.4" already present on machine
19m 19m 1 esearch-3913228203-6l3t7 Pod spec.containers{esearch} Normal Created kubelet, minikube Created container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60
19m 19m 1 esearch-3913228203-6l3t7 Pod spec.containers{esearch} Normal Started kubelet, minikube Started container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60
18m 18m 1 hrm-3288407936-d2vhh Pod Normal Scheduled default-scheduler Successfully assigned hrm-3288407936-d2vhh to minikube
18m 18m 1 hrm-3288407936-d2vhh Pod spec.containers{hrm} Normal Pulling kubelet, minikube pulling image "private repo"
18m 18m 1 hrm-3288407936-d2vhh Pod spec.containers{hrm} Normal Pulled kubelet, minikube Successfully pulled image "private repo"
18m 18m 1 hrm-3288407936-d2vhh Pod spec.containers{hrm} Normal Created kubelet, minikube Created container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e
18m 18m 1 hrm-3288407936-d2vhh Pod spec.containers{hrm} Normal Started kubelet, minikube Started container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e
18m 18m 1 hrm-3288407936 ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: hrm-3288407936-d2vhh
18m 18m 1 hrm Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set hrm-3288407936 to 1
19m 19m 1 minikube Node Normal RegisteredNode controllermanager Node minikube event: Registered Node minikube in NodeController
19m 19m 1 minikube Node Normal Starting kubelet, minikube Starting kubelet.
19m 19m 1 minikube Node Warning ImageGCFailed kubelet, minikube unable to find data for container /
19m 19m 1 minikube Node Normal NodeAllocatableEnforced kubelet, minikube Updated Node Allocatable limit across pods
19m 19m 1 minikube Node Normal NodeHasSufficientDisk kubelet, minikube Node minikube status is now: NodeHasSufficientDisk
19m 19m 1 minikube Node Normal NodeHasSufficientMemory kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
19m 19m 1 minikube Node Normal NodeHasNoDiskPressure kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
19m 19m 1 minikube Node Warning Rebooted kubelet, minikube Node minikube has been rebooted, boot id: f66e28f9-62b3-4066-9e18-33b152fa1300
19m 19m 1 minikube Node Normal NodeNotReady kubelet, minikube Node minikube status is now: NodeNotReady
19m 19m 1 minikube Node Normal Starting kube-proxy, minikube Starting kube-proxy.
19m 19m 1 minikube Node Normal NodeReady kubelet, minikube Node minikube status is now: NodeReady
8m 8m 1 minikube Node Warning SystemOOM kubelet, minikube System OOM encountered
18m 18m 1 parabot-1262887100-r84kf Pod Normal Scheduled default-scheduler Successfully assigned parabot-1262887100-r84kf to minikube
8m 18m 2 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Pulling kubelet, minikube pulling image "private repo"
8m 18m 2 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Pulled kubelet, minikube Successfully pulled image "private repo"
18m 18m 1 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Created kubelet, minikube Created container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045
18m 18m 1 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Started kubelet, minikube Started container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045
8m 8m 1 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Created kubelet, minikube Created container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b
8m 8m 1 parabot-1262887100-r84kf Pod spec.containers{parabot} Normal Started kubelet, minikube Started container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b
18m 18m 1 parabot-1262887100 ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: parabot-1262887100-r84kf
18m 18m 1 parabot Deployment Normal ScalingReplicaSet deployment-controller Scaled up replica set parabot-1262887100 to 1
19m 19m 1 rabbitmq-279796448-pcqqh Pod Normal SandboxChanged kubelet, minikube Pod sandbox changed, it will be killed and re-created.
19m 19m 1 rabbitmq-279796448-pcqqh Pod spec.containers{rabbitmq} Normal Pulling kubelet, minikube pulling image "rabbitmq"
19m 19m 1 rabbitmq-279796448-pcqqh Pod spec.containers{rabbitmq} Normal Pulled kubelet, minikube Successfully pulled image "rabbitmq"
19m 19m 1 rabbitmq-279796448-pcqqh Pod spec.containers{rabbitmq} Normal Created kubelet, minikube Created container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50
19m 19m 1 rabbitmq-279796448-pcqqh Pod spec.containers{rabbitmq} Normal Started kubelet, minikube Started container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50
19m 19m 1 suite-ui-1725964700-ssshn Pod Normal SandboxChanged kubelet, minikube Pod sandbox changed, it will be killed and re-created.
19m 19m 1 suite-ui-1725964700-ssshn Pod spec.containers{suite-ui} Normal Pulling kubelet, minikube pulling image "private repo"
19m 19m 1 suite-ui-1725964700-ssshn Pod spec.containers{suite-ui} Normal Pulled kubelet, minikube Successfully pulled image "private repo"
19m 19m 1 suite-ui-1725964700-ssshn Pod spec.containers{suite-ui} Normal Created kubelet, minikube Created container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a
19m 19m 1 suite-ui-1725964700-ssshn Pod spec.containers{suite-ui} Normal Started kubelet, minikube Started container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a
</code></pre>
| <p>Check <code>kubectl logs</code> for any obvious errors. In this case, as suspected, it looks like an insufficient resources problem (or a service that has a resource leak).
If possible, try increasing the resources to see if it helps.</p>
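<p>A rough sketch of what that looks like in the Deployment from the question; the numbers are placeholders that need to be tuned to what hrm-core actually needs and to what the minikube VM can offer:</p>
<pre><code>    spec:
      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
</code></pre>
<p>With requests set, the scheduler will refuse to place more pods than the node can actually hold, instead of letting them fight over memory and get OOM-killed.</p>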
|
<p>I tried <code>kubectl exec</code> on a k8s 1.6.4 RBAC-enabled cluster and the error returned was: <code>error: unable to upgrade connection: Unauthorized</code>. <code>docker exec</code> on the same container succeeds. Otherwise, <code>kubectl</code> is working. <code>kubectl</code> tunnels through an SSH connection but I don't think this is the issue.</p>
<p>kubelet authn is enabled but not authz. The <a href="https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization" rel="noreferrer">docs</a> say that authz is AlwaysAllow by default, so I have left it this way.</p>
<p>I have a feeling that it is similar to <a href="https://stackoverflow.com/questions/44312745/kubernetes-rbac-unable-to-upgrade-connection-forbidden-user-systemanonymous">this issue</a>. But the error message is a tad different.</p>
<p>Thanks in advance!</p>
<p>Verbose logs for the <code>kubectl exec</code> command:</p>
<pre><code>I0614 16:50:11.003677 64104 round_trippers.go:398] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" https://localhost:6443/api/v1/namespaces/monitoring/pods/alertmanager-main-0/exec?command=%2Fbin%2Fls&container=alertmanager&container=alertmanager&stderr=true&stdout=true
I0614 16:50:11.003705 64104 round_trippers.go:398] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.6.4 (darwin/amd64) kubernetes/d6f4332" https://localhost:6443/api/v1/namespaces/monitoring/pods/alertmanager-main-0/exec?command=%2Fbin%2Fls&container=alertmanager&container=alertmanager&stderr=true&stdout=true
I0614 16:50:11.169474 64104 round_trippers.go:417] POST https://localhost:6443/api/v1/namespaces/monitoring/pods/alertmanager-main-0/exec?command=%2Fbin%2Fls&container=alertmanager&container=alertmanager&stderr=true&stdout=true 401 Unauthorized in 165 milliseconds
I0614 16:50:11.169493 64104 round_trippers.go:423] Response Headers:
I0614 16:50:11.169497 64104 round_trippers.go:426] Date: Wed, 14 Jun 2017 08:50:11 GMT
I0614 16:50:11.169500 64104 round_trippers.go:426] Content-Length: 12
I0614 16:50:11.169502 64104 round_trippers.go:426] Content-Type: text/plain; charset=utf-8
I0614 16:50:11.169506 64104 round_trippers.go:417] POST https://localhost:6443/api/v1/namespaces/monitoring/pods/alertmanager-main-0/exec?command=%2Fbin%2Fls&container=alertmanager&container=alertmanager&stderr=true&stdout=true 401 Unauthorized in 165 milliseconds
I0614 16:50:11.169509 64104 round_trippers.go:423] Response Headers:
I0614 16:50:11.169512 64104 round_trippers.go:426] Date: Wed, 14 Jun 2017 08:50:11 GMT
I0614 16:50:11.169545 64104 round_trippers.go:426] Content-Length: 12
I0614 16:50:11.169548 64104 round_trippers.go:426] Content-Type: text/plain; charset=utf-8
F0614 16:50:11.169635 64104 helpers.go:119] error: unable to upgrade connection: Unauthorized
</code></pre>
| <p>This is an RTFM moment... The solution was basically to follow all the steps on <a href="https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization" rel="noreferrer">this page</a> for authn, authz, or both.</p>
<p>I had omitted <code>--kubelet-client-certificate</code> and <code>--kubelet-client-key</code> which resulted in the error. Without these flags, <code>kube-apiserver</code> will fail to authenticate with kubelet when you do a <code>kubectl exec</code>.</p>
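<p>For reference, a sketch of the two flags on the API server command line; the certificate paths are placeholders and must point to a client cert/key pair that the kubelets trust:</p>
<pre><code>kube-apiserver \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key \
  ...   # remaining flags unchanged
</code></pre>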
<p>My original attempt to configure authn was by reading the docs for the kubelet daemon (ie. not the one above). Hence the grave omission.</p>
|
<p>Currently, under Kubernetes 1.5.3, kube-apiserver.log and kube-controller-manager.log are generated by adding '1>>/var/log/kube-apiserver.log 2>&1' in the /etc/kubernetes/kube-apiserver.yaml file.
When I upgrade the Kubernetes version to 1.6.3, this no longer works: there is no log file created under /var/log. How can I get the Kubernetes log files?
Thanks much.</p>
| <p>For kubernetes1.6+, try the following options</p>
<h3><code>kube-apiserver</code></h3>
<pre><code>--audit-log-path=/var/log/kubernetes/kube-apiserver.log --logtostderr=false
</code></pre>
<p>and restart <code>kube-apiserver</code>, you can find all logs for <code>kube-apiserver</code> in file <code>/var/log/kubernetes/kube-apiserver.log</code>.</p>
<h3><code>kube-controller-manager</code></h3>
<pre><code>--log-dir=/var/log/kubernetes --logtostderr=false
</code></pre>
<p>then restart <code>kube-controller-manager</code>, you will find:</p>
<ul>
<li><code>ERROR</code> logs in <code>/var/log/kubernetes/kube-controller-manager.ERROR</code>;</li>
<li><code>FATAL</code> logs in <code>/var/log/kubernetes/kube-controller-manager.FATAL</code>;</li>
<li><code>INFO</code> logs in <code>/var/log/kubernetes/kube-controller-manager.INFO</code>;</li>
<li><code>WARNING</code> logs in <code>/var/log/kubernetes/kube-controller-manager.WARNING</code>;</li>
</ul>
<h3><code>kube-scheduler</code></h3>
<pre><code> --log-dir=/var/log/kubernetes --logtostderr=false
</code></pre>
<p>then restart <code>kube-scheduler</code>, you will find:</p>
<ul>
<li><code>ERROR</code> logs in <code>/var/log/kubernetes/kube-scheduler.ERROR</code>;</li>
<li><code>FATAL</code> logs in <code>/var/log/kubernetes/kube-scheduler.FATAL</code>;</li>
<li><code>INFO</code> logs in <code>/var/log/kubernetes/kube-scheduler.INFO</code>;</li>
<li><code>WARNING</code> logs in <code>/var/log/kubernetes/kube-scheduler.WARNING</code>;</li>
</ul>
<h3><code>kubelet</code></h3>
<pre><code>--log-dir=/var/log/kubernetes --logtostderr=false
</code></pre>
<p>then restart <code>kubelet</code>, you will find:</p>
<ul>
<li><code>ERROR</code> logs in <code>/var/log/kubernetes/kubelet.ERROR</code>;</li>
<li><code>FATAL</code> logs in <code>/var/log/kubernetes/kubelet.FATAL</code>;</li>
<li><code>INFO</code> logs in <code>/var/log/kubernetes/kubelet.INFO</code>;</li>
<li><code>WARNING</code> logs in <code>/var/log/kubernetes/kubelet.WARNING</code>;</li>
</ul>
<h3><code>kube-proxy</code></h3>
<pre><code>--log-dir=/var/log/kubernetes --logtostderr=false
</code></pre>
<p>then restart <code>kube-proxy</code>, you will find:</p>
<ul>
<li><code>ERROR</code> logs in <code>/var/log/kubernetes/kube-proxy.ERROR</code>;</li>
<li><code>FATAL</code> logs in <code>/var/log/kubernetes/kube-proxy.FATAL</code>;</li>
<li><code>INFO</code> logs in <code>/var/log/kubernetes/kube-proxy.INFO</code>;</li>
<li><code>WARNING</code> logs in <code>/var/log/kubernetes/kube-proxy.WARNING</code>;</li>
</ul>
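<p>If these components run as static pods (as with the <code>/etc/kubernetes/kube-apiserver.yaml</code> manifest in the question), the flags go into the container's <code>command</code> list, and you will want to mount the log directory from the host so the files survive container restarts. A rough sketch for the apiserver manifest (the image tag and the other flags are placeholders for whatever you already have):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.3   # placeholder image
    command:
    - kube-apiserver
    # ... your existing flags ...
    - --audit-log-path=/var/log/kubernetes/kube-apiserver.log
    - --logtostderr=false
    volumeMounts:
    - name: var-log-kubernetes
      mountPath: /var/log/kubernetes
  volumes:
  - name: var-log-kubernetes
    hostPath:
      path: /var/log/kubernetes
</code></pre>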
|
<p>I am looking to list all the containers in a pod in a script that gathers logs after running a test. <code>kubectl describe pods -l k8s-app=kube-dns</code> returns a lot of info, but I am just looking for a return like:</p>
<pre><code>etcd
kube2sky
skydns
</code></pre>
<p>I don't see a simple way to format the describe output. Is there another command? (and I guess worst case there is always parsing the output of describe).</p>
| <h1>Answer</h1>
<pre><code>kubectl get pods POD_NAME_HERE -o jsonpath='{.spec.containers[*].name}'
</code></pre>
<h1>Explanation</h1>
<p>This gets the JSON object representing the pod. It then uses kubectl's <a href="https://kubernetes.io/docs/user-guide/jsonpath/" rel="noreferrer">JSONpath</a> to extract the name of each container from the pod.</p>
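<p>To match the original use case (every container in every pod carrying a label, one name per line), the same idea works with a selector and a <code>range</code> expression, for example:</p>
<pre><code>kubectl get pods -l k8s-app=kube-dns -o jsonpath='{range .items[*].spec.containers[*]}{.name}{"\n"}{end}'
</code></pre>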
|
<p>I've been running a Kubernetes cluster for a while now, but I haven't been able to keep it stable.
My cluster consists of four nodes, two masters and two workers. All nodes run on the same physical server, which in turn runs VMware vSphere 6.5. Each node runs CoreOS stable (1353.7.0), and I'm running Kubernetes/Hyperkube v1.6.4, using Calico for networking. I've followed the steps in <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="nofollow noreferrer">this</a> guide.</p>
<p>What happens is that for a few hours/days, the cluster will run without a hitch. Then, all of a sudden (for no discernible reason as far as I can tell) all my pods go to status "Pending" and stay that way. Any hosted services are then no longer reachable.
After a while (usually 5 to 10 minutes), it seems to restore itself, after which it starts recreating all my pods, and trying (but failing) to shut down all my running pods. Some of the newly created pods come up, but will initially have no connection to the internet. </p>
<p>For a couple of weeks now I've had this issue intermittently, and it's been preventing me from using Kubernetes in production. I'd really like to figure out what's been causing this!</p>
<p>Weirdly enough, when I try to diagnose the problem by inspecting the logs,
I've noticed that on both of my worker nodes, the journald logs have become corrupted! On the master nodes, the log is still readable, but not very informative. </p>
<p>Even when running, kubelet is constantly emitting errors in its logs. On all the nodes, this is what's posted about once a minute:</p>
<pre><code>May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.012890 24228 cni.go:275] Error deleting network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.014762 24228 remote_runtime.go:109] StopPodSandbox "3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "logstash-s3498_default" network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.014818 24228 kuberuntime_gc.go:138] Failed to stop sandbox "3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "logstash-s3498_default" network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:38:07 kube-master1 kubelet-wrapper[24228]: I0526 09:38:07.422341 24228 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9a378211-3597-11e7-a7ec-000c2958a0d7-default-token-0p3gf" (spec.Name: "default-token-0p3gf") pod "9a378211-3597-11e7-a7ec-000c2958a0d7" (UID: "9a378211-3597-11e7-a7ec-000c2958a0d7").
May 26 09:38:14 kube-master1 kubelet-wrapper[24228]: W0526 09:38:14.037553 24228 docker_sandbox.go:263] NetworkPlugin cni failed on the status hook for pod "logstash-s3498_default": Unexpected command output nsenter: cannot open : No such file or directory
May 26 09:38:14 kube-master1 kubelet-wrapper[24228]: with error: exit status 1
</code></pre>
<p>I've googled this error, encountered <a href="https://github.com/kubernetes/kubernetes/issues/42735" rel="nofollow noreferrer">this</a> issue, but that has been closed and people indicate that using v1.6.0 or later should resolve it, but it definitely hasn't in my case...</p>
<p>Can anybody point me in the right direction?!</p>
<p>Thanks!</p>
| <p>Seen this as well. The problem seems to go away if you downgrade CoreOS to an older version with Docker 1.12.3.</p>
<p>Docker is a nightmare with regressions in every single version they release :(</p>
|
<p>In some hosting environments/configurations, the network traffic between pods (applications) may traverse the public Internet. As a result, I'd like to secure the communication between the pods.</p>
<p>For example, I have the following structure:</p>
<p>Service_A - edge service in my product and provides access to my API to external users via public IP.</p>
<p>Service_B and Service_C - microservices that have ClusterIP(s).</p>
<p>As I understand it, I can secure user <-> Service_A traffic by using an Ingress controller with an SSL certificate.</p>
<p>But how should I secure Service_A<->Service_B communication? Create additional ingress services to wrap microservices? Are there any best practices for such cases?</p>
<p>One detail: microservices use gRPC for communication.</p>
<p>Thanks</p>
| <p>A simple, generic solution that I like is to run a reverse-proxy (such as nginx) in each pod. All of your app containers will listen on localhost or unix sockets, and the ssl proxy will terminate external HTTPS connections. This makes it easy to audit your SSL config across all your apps, since every connection is terminated by the same nginx config.</p>
<p>Certificate distribution is the primary challenge with this approach. For external services, you can use LetsEncrypt to generate certs. For internal services, you'll need a private CA that is trusted by your ssl-proxy. You can mount the CA cert in a config-map at runtime. You'd then generate a cert per app or per-pod, and mount that as a Secret consumed in the ssl-proxy container.</p>
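<p>A rough sketch of what that sidecar pattern could look like for one of the internal services (the names, image, and mount paths here are assumptions, not a prescribed layout):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: service-b
spec:
  containers:
  - name: app
    image: registry.example.com/service-b:latest   # assumed image; listens on 127.0.0.1:8080 only
  - name: ssl-proxy
    image: nginx:1.13
    ports:
    - containerPort: 443
    volumeMounts:
    - name: tls
      mountPath: /etc/nginx/tls
      readOnly: true
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: tls
    secret:
      secretName: service-b-tls        # cert/key signed by your private CA
  - name: nginx-conf
    configMap:
      name: service-b-nginx-conf       # terminates TLS on :443 and proxies to 127.0.0.1:8080
</code></pre>
<p>Service_B's ClusterIP Service would then target port 443 of the proxy container instead of the app port.</p>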
<p>If this sounds like too much work, you might want to look at <a href="https://github.com/istio/istio" rel="noreferrer">https://github.com/istio/istio</a>, which aims to automate the cluster CA role, and the provision of per-pod certificates.</p>
|
<p>I'm trying to simulate a DaemonSet in Kubernetes using a Deployment/RC/ReplicaSet.</p>
<p><strong>What I want to achieve:</strong></p>
<p>Just as the DaemonSet kind deploys a pod on each node, I want to deploy a pod on every node, but without using the DaemonSet kind.</p>
<p>Is there any way to do it? I can't find a proper way to do that.</p>
| <p>You can do that by using a Deployment/ReplicaSet in Kubernetes with a <strong>hostPort</strong>.<br/><br/>
Assuming you have 4 nodes in the Kubernetes cluster, you can create a Deployment or ReplicaSet with a hostPort and replicas equal to the number of nodes in the cluster. <br/><br/>
For example, if you want to run an nginx pod on every node of a 4-node cluster, map a hostPort to the container port in the Deployment/ReplicaSet definition. The Kubernetes scheduler cannot place more than one pod claiming the same hostPort on the same host, so with replicas equal to the node count each node ends up with exactly one pod scheduled.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-hello
labels:
tier: frontend
app: nginx-hello
spec:
replicas: 4
template:
metadata:
labels:
tier: frontend
app: nginx-hello
spec:
containers:
- name: nginx-hello
image: nginxdemos/hello
ports:
- containerPort: 80
hostPort: 8088
</code></pre>
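<p>After applying this you can confirm the spread with a quick check (label value taken from the example above):</p>
<pre><code>kubectl get pods -l app=nginx-hello -o wide
</code></pre>
<p>Each replica should land on a different node. Note that if a node goes down, its replacement pod stays Pending, since every remaining node already occupies hostPort 8088.</p>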
|
<p>What is installed for minikube:</p>
<pre><code>$ ls -al /usr/local/bin/
-rwxr-xr-x 1 root root 26406912 Jun 14 12:05 docker-machine
-rwxrwxr-x 1 me libvirtd 11889064 Jun 14 12:07 docker-machine-driver-kvm
-rwxrwxr-x 1 me me 70232912 Jun 14 11:58 kubectl
-rwxrwxr-x 1 me me 82512696 Jun 14 11:57 minikube
</code></pre>
<p>Trying to start cluster by minikube</p>
<pre><code>$ minikube start --vm-driver=kvm
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
E0614 12:07:39.515994 14655 start.go:127] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: virError(Code=8, Domain=44, Message='invalid argument: could not find capabilities for domaintype=kvm ').
Retrying.
E0614 12:07:39.517076 14655 start.go:133] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: virError(Code=8, Domain=44, Message='invalid argument: could not find capabilities for domaintype=kvm ')
</code></pre>
<p>I am new to kubernetes. Any idea how to fix it? Thanks</p>
<p><strong>UPDATE</strong></p>
<pre><code>sudo /usr/sbin/kvm-ok
INFO: /dev/kvm does not exist
HINT: sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
INFO: KVM (vmx) is disabled by your BIOS
HINT: Enter your BIOS setup and enable Virtualization Technology (VT),
and then hard poweroff/poweron your system
KVM acceleration can NOT be used
$ dmesg | grep kvm
[ 2.114855] kvm: disabled by bios
[ 2.327746] kvm: disabled by bios
[ 120.423249] kvm: disabled by bios
[ 222.250977] kvm: disabled by bios
</code></pre>
| <p>My update is close to the solution. The solution is to enable virtualization in the BIOS.</p>
<p>1, Power on your PC and enter the BIOS setup.</p>
<p>2, Go to the <code>security</code> section and enable virtualization.</p>
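<p>After the hard power-cycle you can verify that KVM is usable before retrying minikube (<code>kvm-ok</code> comes from the <code>cpu-checker</code> package; the module name assumes an Intel CPU):</p>
<pre><code>sudo modprobe kvm_intel       # load the KVM module
sudo /usr/sbin/kvm-ok         # should now report "KVM acceleration can be used"
lsmod | grep kvm              # kvm_intel and kvm should be listed
</code></pre>
<p>Then <code>minikube start --vm-driver=kvm</code> should get past the libvirt "could not find capabilities for domaintype=kvm" error.</p>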
|
<p>I am trying to configure my Kubernetes cluster to use a local NFS server for persistent volumes.</p>
<p>I set up the PersistentVolume as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: hq-storage-u4
namespace: my-ns
spec:
capacity:
storage: 10Ti
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
path: /data/u4
server: 10.30.136.79
readOnly: false
</code></pre>
<p>The PV looks OK in kubectl</p>
<pre><code>$ kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
hq-storage-u4 10Ti RWX Retain Released my-ns/pv-50g 49m
</code></pre>
<p>I then try to create the PersistentVolumeClaim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-50gb
namespace: my-ns
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
</code></pre>
<p>Kubectl shows the pvc status is Pending</p>
<pre><code>$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
pvc-50gb Pending 16m
</code></pre>
<p>When I try to add the volume to a deployment, I get the error:</p>
<pre><code>[SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "pvc-50gb", which is unexpected.]
</code></pre>
<p>How do I get the PVC to a working state?</p>
| <p>It turned out that I needed to put the IP (and the path as well) in quotes. After fixing that, the PVC goes to status Bound, and the pod can mount it correctly.</p>
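<p>In other words, the working PersistentVolume ends up looking roughly like this (same fields as in the question, just quoted):</p>
<pre><code>  nfs:
    path: "/data/u4"
    server: "10.30.136.79"
    readOnly: false
</code></pre>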
|
<p>I am reading <a href="http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-6/" rel="nofollow noreferrer">this blog</a> and tried to run the <a href="https://github.com/phatak-dev/kubernetes-spark" rel="nofollow noreferrer">code</a>. If <code>sleep infinity</code> is removed, the pod will be stuck in CrashLoopBackOff:</p>
<pre><code>$ kubectl get po
NAME READY STATUS RESTARTS AGE
spark-master-715509916-zggtc 0/1 CrashLoopBackOff 5 3m
spark-worker-3468022965-xb5mw 0/1 Completed 5 3m
</code></pre>
<p>Can anyone explain this?</p>
| <p>The reason the pod goes into the <code>CrashLoopBackOff</code> state is that Kubernetes treats the command executed by the container as its main process and expects it to keep running. Presumably the <code>start-master.sh</code> script runs and then exits, which Kubernetes interprets as the container dying, so it restarts it. You need to execute a command which will not exit in order to keep the pod alive. In this case the <code>sleep infinity</code> is included to simulate a long running process. You could also achieve this with something like:</p>
<p><code>'./start-master.sh ; /bin/bash'</code></p>
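<p>In the container spec that could look something like this (image name and script path are assumptions):</p>
<pre><code>containers:
- name: spark-master
  image: spark-master:latest
  command: ["/bin/sh", "-c"]
  args: ["./start-master.sh && sleep infinity"]
</code></pre>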
|
<p>I need to access a service outside of my GKE cluster from within it. This service restricts access by IP, allowing just one IP address. So I have to set up a NAT or something like that, but I'm not really sure that setting up an external gateway/NAT on my GKE cluster is the right solution. Can you help me, please?</p>
| <p>You can achieve this by configuring a <a href="https://cloud.google.com/solutions/connecting-securely#natgateway" rel="nofollow noreferrer">NAT Gateway</a>.</p>
<p>Here's a guide: <a href="https://github.com/johnlabarge/gke-nat-example" rel="nofollow noreferrer">https://github.com/johnlabarge/gke-nat-example</a></p>
<p>The key steps to note are that you'll need to recreate your GKE cluster to apply a network tag to the nodes, and then use that tag in your GCP Route. (You cannot just apply the route to all nodes, as it would then be applied to your NAT Gateway instance(s) as well).</p>
<p>The other point to note (perhaps obviously) is that you cannot route all traffic through the NAT Gateway unless you also route all incoming traffic through it. I use it just for outbound traffic to a specific set of IPs which need a stable source address.</p>
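<p>The route step from that guide boils down to something like this (the instance, tag, and zone names here are assumptions; the linked example uses its own naming):</p>
<pre><code>gcloud compute routes create gke-nat-route \
  --destination-range=0.0.0.0/0 \
  --next-hop-instance=nat-gateway \
  --next-hop-instance-zone=us-central1-a \
  --tags=gke-nat-node \
  --priority=800
</code></pre>
<p>Only instances carrying the <code>gke-nat-node</code> tag (your GKE nodes, via the tag applied at cluster creation) pick up this route, which is why recreating the cluster with the tag matters.</p>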
|
<p>I have an etcd cluster running on CoreOS. The leader was set to the first member I started (as expected), but I started the wrong one first, so the leader isn't who it's supposed to be.</p>
<p>How do I change the role of a leader to a different member?</p>
| <p>I misunderstood the role of the etcd cluster leader. I was setting up a Kubernetes cluster and thought the master node should be the etcd cluster leader. But as @heimbert mentioned, the point of leader election is that you don't have to worry about who the leader is.</p>
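<p>If you do want to see which member currently holds the leader role, you can check it from any node. With the etcd v2 CLI that CoreOS ships, something like:</p>
<pre><code>etcdctl member list    # each member line includes isLeader=true/false
</code></pre>
<p>(With the v3 API, <code>ETCDCTL_API=3 etcdctl endpoint status -w table</code> shows an IS LEADER column instead.)</p>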
|