<p>I have a deployment with 2 replicas of nginx with an openconnect VPN proxy container (each pod has only one container).</p>
<p>They start without any problems and everything works, but once the connection crashes and my liveness probe fails, the nginx container is restarted and ends up in CrashLoopBackOff, because the openconnect and nginx restarts fail with:</p>
<p>nginx:</p>
<pre><code>host not found in upstream "example.server.org" in /etc/nginx/nginx.conf:11
</code></pre>
<p>openconnect: </p>
<pre><code>getaddrinfo failed for host 'vpn.server.com': Temporary failure in name resolution
</code></pre>
<p>It seems like /etc/resolv.conf is edited by openconnect, and on the pod restart it stays the same (although it is not part of a persistent volume). I believe the whole container should be run from a clean docker image, where /etc/resolv.conf is not modified, right?</p>
<p>The only way to fix the CrashLoopBackOff is to delete the pod, and the deployment's replication controller then runs a new pod that works.</p>
<p><strong>How is creating a new pod different from the container in the pod being restarted by the liveness probe with restartPolicy: Always? Is the container restarted from a clean image?</strong></p>
| <p><code>restartPolicy</code> applies to all Containers in the Pod, not the pod itself. Pods usually only get re-created when someone <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime" rel="nofollow noreferrer">explicitly deletes</a> them.</p>
<p>I think this explains why the restarted container with the bad resolv.conf fails but a new pod works.</p>
<p>A "restarted container" is just that, it is not spawned new from the downloaded docker image. It is like killing a process and starting it - the file system for the new process is the same one the old process was updating. But a new pod will create a new container with a local file system view identical to the one packaged in the downloaded docker image - fresh start.</p>
|
<p>If some nodes go down in a cluster and their status becomes <code>NotReady</code>, can I make that trigger some action?</p>
<p>I tried to use <a href="https://gist.github.com/irake99/2062ea54bef89a072b67461cf955b079" rel="nofollow noreferrer">client-go</a> and curl to watch <code>/api/v1/watch/nodes</code> and get notified when node status has <code>any</code> change. How do I get notified <strong>only</strong> when a node's status changes from <code>Ready</code> to <code>NotReady</code>?</p>
| <p>As of now, there is no way to query for specific updates. There are two ways you can implement this:</p>
<ol>
<li><p>The NodeStatus has a field called <code>lastTransitionTime</code> which records a change in a field. In your application, you can check every X seconds and then compare the last checked time with <code>lastTransitionTime</code> to determine if there is a change (a polling sketch is shown at the end of this answer). If your application restarts, you will have to do a one-time check at application startup.</p>
<pre><code>{
"type": "Ready",
"status": "True",
"lastHeartbeatTime": "2017-12-27T09:52:19Z",
"lastTransitionTime": "2017-12-26T14:55:49Z",
"reason": "KubeletReady",
"message": "kubelet is posting ready status"
}
</code></pre></li>
<li><p>Your app written in client-go can maintain a cache and compare the value in cache to the value from watch events. In this case as well, you will have to think of ways to not miss events when your watcher service is down. You can either store that state somewhere or use some other mechanism but it depends on criticality of your use case.</p></li>
</ol>
<p>The closest example I can find is in libcalico-go, where any changes in network policies are watched for and the related changes are written to a Calico etcd so that agents on the nodes can implement firewall rules accordingly. Events are handled starting on the linked line; of course you will have to navigate the stack further! <a href="https://github.com/projectcalico/libcalico-go/blob/master/lib/backend/watchersyncer/watchercache.go#L93" rel="nofollow noreferrer">Code</a></p>
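<p>To make the first (polling) approach concrete, here is a minimal shell sketch, not production code; the node name and interval are placeholders, and the "action" is just an <code>echo</code>:</p>
<pre><code># Poll the Ready condition and react when lastTransitionTime changes to a non-Ready state
NODE="my-node"   # placeholder node name
LAST=""
while true; do
  STATUS=$(kubectl get node "$NODE" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
  TRANSITION=$(kubectl get node "$NODE" -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}')
  if [ -n "$LAST" ] && [ "$TRANSITION" != "$LAST" ] && [ "$STATUS" != "True" ]; then
    echo "Node $NODE became NotReady at $TRANSITION"   # trigger your action here
  fi
  LAST="$TRANSITION"
  sleep 30
done
</code></pre>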
|
<p>Using nginx <code>nginx-ingress-controller:0.9.0</code>, below is the permanent state of the google cloud load balancer :</p>
<p><a href="https://i.stack.imgur.com/zT3mj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zT3mj.png" alt="enter image description here"></a></p>
<p>Basically, the single healthy node is the one running the nginx-ingress-controller pods. Besides not looking good on this screen, everything works super fine. Thing is, I'm wondering why such a bad notice appears on the LB.</p>
<p>Here's <a href="https://gist.github.com/benbonnet/14f0c3efc7a2edd7b50351e825e39353" rel="nofollow noreferrer">the service/deployment</a> used</p>
<p>I'm just getting a little lost over how things work; I hope to get some experienced feedback on how to do things right (I mean, getting green lights on all nodes), or to double-check whether that's a drawback of not using the 'official' gcloud L7 load balancer.</p>
| <p>Your <code>Service</code> is using the <code>service.beta.kubernetes.io/external-traffic: OnlyLocal</code> annotation. This configures it so that traffic arriving at the NodePort for that service will never go to a <code>Pod</code> on another node. Since your <code>Deployment</code> only has 1 replica, the only node that will receive traffic is the one where the 1 Pod is running.</p>
<p>If you scale your <code>Deployment</code> to 2 replicas, 2 nodes will be healthy, etc.</p>
<p>Using that annotation is a recommended configuration so that you are not introducing additional network hops.</p>
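<p>For reference, a rough sketch of the two pieces involved (the names are placeholders for your actual service and deployment): the annotation on the <code>Service</code>, and scaling the controller so that more nodes pass the health check:</p>
<pre><code># Service metadata (sketch) - keeps traffic on the node that runs the pod
metadata:
  name: nginx-ingress-lb
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
</code></pre>
<pre><code># Scale the controller so that a second node also reports healthy
kubectl scale deployment nginx-ingress-controller --replicas=2
</code></pre>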
|
<p>I'm trying to figure out where a program run in a container stores its logs. But I don't have SSH access to the machine which deployed the container, only <code>kubectl</code>. If I had SSH access, I'd do something like this:</p>
<pre><code>ssh machine-running-docker 'docker diff \
$(kubectl describe pod pod-name | \
grep 'Container ID' | sed -E s#^[^/]+//(.+)$#\1#)'
</code></pre>
<p>(The regex may be imprecise, just to give the idea).</p>
| <p>Well, for starters, an app in a container should not store its logs in files inside the container. That said, it is sometimes hard to avoid when you work with 3rd party apps not configured for logging to stdout / some logging service.</p>
<p>Good old <code>find</code> to the rescue - just <code>kubectl exec</code> into the pod/container and <code>find / -mmin -1</code> will give you all files modified in the last minute. That should narrow the list enough for you (assuming the container has already lived for a few minutes).</p>
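<p>For example (pod and container names are placeholders):</p>
<pre><code># List files modified inside the container during the last minute
kubectl exec -it my-pod -c my-container -- find / -mmin -1 2>/dev/null
</code></pre>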
|
<p>I'm trying to deploy Lagom microservices on Kubernetes by following the Chirper Lagom example. According to the provided guide <a href="https://developer.lightbend.com/guides/lagom-kubernetes-k8s-deploy-microservices/" rel="nofollow noreferrer">link</a>, I configured a Kubernetes cluster by installing Minikube, and everything is good.</p>
<p>But when I tried to build Chirper Docker images by using fabric8’s docker-maven-plugin, I got this error:</p>
<blockquote>
<p>[ERROR] Failed to execute goal
io.fabric8:docker-maven-plugin:0.20.1:build (default-cli) on project
friend-impl: Execution default-cli of goal
io.fabric8:docker-maven-plugin:0.20.1:build failed: No
given, no DOCKER_HOST environment variable, no read/writable
'/var/run/docker.sock' or '//./pipe/docker_engine' and no external
provider like Docker machine configured -> [Help 1]</p>
</blockquote>
<p>Is there anyone that can help me to understand that error? Thanks.</p>
| <p>[Updating based on discussions in comments]</p>
<p>The issue here is that the Docker CLI is not able to reach the Docker Engine. Since you are using Minikube, you can point the Docker CLI to the Docker engine inside Minikube. That will ensure that images are built inside the Minikube VM and also run there subsequently. You can run the command:</p>
<pre><code>eval $(minikube docker-env)
</code></pre>
<p>This command will set the parameters required by the Docker CLI to communicate with the Docker engine in Minikube, and it should then be able to run the command which is failing for you!</p>
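<p>A typical flow would then look something like this (the Maven goal comes from the fabric8 docker-maven-plugin; adjust it to your build):</p>
<pre><code># Point the Docker CLI at Minikube's Docker engine for this shell session
eval $(minikube docker-env)
# Build the images; they end up inside the Minikube VM
mvn clean package docker:build
</code></pre>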
|
<p>I'm certain I'm missing something obvious. I have looked through the documentation for ScheduledJobs / CronJobs on Kubernetes, but I cannot find a way to do the following on a schedule:</p>
<ol>
<li>Connect to an existing Pod</li>
<li>Execute a script</li>
<li>Disconnect</li>
</ol>
<p>I have alternative methods of doing this, but they don't feel right. </p>
<ol>
<li><p>Schedule a cron task for: kubectl exec -it $(kubectl get pods --selector=some-selector | head -1) /path/to/script</p></li>
<li><p>Create one deployment that has a "Cron Pod" which also houses the application, and many "Non Cron Pods" which are just the application. The Cron Pod would use a different image (one with cron tasks scheduled).</p></li>
</ol>
<p>I would prefer to use the Kubernetes ScheduledJobs if possible to prevent the same Job running multiple times at once and also because it strikes me as the more appropriate way of doing it.</p>
<p>Is there a way to do this by ScheduledJobs / CronJobs?</p>
<p><a href="http://kubernetes.io/docs/user-guide/cron-jobs/" rel="noreferrer">http://kubernetes.io/docs/user-guide/cron-jobs/</a></p>
| <p>As far as I'm aware there is no "official" way to do this the way you want, and that is I believe by design. Pods are supposed to be ephemeral and horizontally scalable, and Jobs are designed to exit. Having a cron job "attach" to an existing pod doesn't fit that model. The Scheduler would have no idea if the job completed.</p>
<p>Instead, a Job can bring up an instance of your application specifically for running the Job and then take it down once the Job is complete. To do this you can use the same Image for the Job as for your Deployment but use a different "Entrypoint" by setting <code>command:</code>.</p>
<p>If the job needs access to data created by your application then that data will need to be persisted outside the application/Pod. You could do this a few ways, but the obvious ways would be a database or a persistent volume.
For example, using a database would look something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: APP
spec:
template:
metadata:
labels:
name: THIS
app: THAT
spec:
containers:
- image: APP:IMAGE
name: APP
command:
- app-start
env:
- name: DB_HOST
value: "127.0.0.1"
- name: DB_DATABASE
value: "app_db"
</code></pre>
<p>And a job that connects to the same database, but with a different "Entrypoint" :</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: APP-JOB
spec:
template:
metadata:
name: APP-JOB
labels:
app: THAT
spec:
containers:
- image: APP:IMAGE
name: APP-JOB
command:
- app-job
env:
- name: DB_HOST
value: "127.0.0.1"
- name: DB_DATABASE
value: "app_db"
</code></pre>
<p>Or the persistent volume approach would look something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: APP
spec:
template:
metadata:
labels:
name: THIS
app: THAT
spec:
containers:
- image: APP:IMAGE
name: APP
command:
- app-start
volumeMounts:
- mountPath: "/var/www/html"
name: APP-VOLUME
volumes:
- name: APP-VOLUME
persistentVolumeClaim:
claimName: APP-CLAIM
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: APP-VOLUME
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
path: /app
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: APP-CLAIM
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
selector:
matchLabels:
service: app
</code></pre>
<p>With a job like this, attaching to the same volume:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: APP-JOB
spec:
template:
metadata:
name: APP-JOB
labels:
app: THAT
spec:
containers:
- image: APP:IMAGE
name: APP-JOB
command:
- app-job
volumeMounts:
- mountPath: "/var/www/html"
name: APP-VOLUME
volumes:
- name: APP-VOLUME
persistentVolumeClaim:
claimName: APP-CLAIM
</code></pre>
|
<p>I have a question about Helm validation of the <code>resources</code> field in a <code>CronJob</code> chart; any help is welcome.</p>
<p>I use apiVersion batch/v1beta1</p>
<p>I get an error when I try to retrieve values from values.yaml.</p>
<p>The error : <code>Error: error validating "": error validating data: found `invalid field requests for v1.Container</code></p>
<p>Command I run : <code>helm install --dry-run --debug my_chart</code> </p>
<p>my CronJob chart as following:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ .Chart.Name }}
spec:
schedule: "0 */2 * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
metadata:
...
spec:
containers:
...
spec:
resources:
{{ toYaml .Values.resources | indent 12 }}
</code></pre>
<p>The error disappeared when I changed the resources configuration to</p>
<pre><code> resources:
requests:
cpu: 300m
memory: 1024Mi
</code></pre>
<p>I don't like the last change and I'd like to follow my regular configuration guide and store values in values.yaml</p>
<p>The cluster versions are:</p>
<p><code>Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}</code></p>
<p><code>Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.4-gke.0", GitCommit:"04502ae78d522a3d410de3710e1550cfb16dad4a", GitTreeState:"clean", BuildDate:"2017-11-27T19:19:56Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}</code></p>
<p>The Helm versions are:</p>
<p><code>Client: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}</code></p>
<p><code>Server: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}</code></p>
<p>Thanks in advance</p>
| <p><strong>TL;DR</strong>: Indent by 14 spaces (instead of 12) and remove the additional <code>spec</code> property in your container definition.</p>
<hr>
<p>Mind the correct indentation in your YAML definitions. For example, the <code>containers</code> property needs to be a sub-property of the CronJob's <code>spec.jobTemplate.spec.template</code> property, with <code>spec.jobTemplate</code> being the <em>template</em> for a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#job-v1-batch" rel="nofollow noreferrer">Job</a> object (or a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#jobtemplatespec-v1beta1-batch" rel="nofollow noreferrer">JobTemplate</a>), and <code>spec.jobTemplate.spec.template</code> then being the <em>template</em> for that Job's <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#pod-v1-core" rel="nofollow noreferrer">Pod</a> object (or a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#podtemplate-v1-core" rel="nofollow noreferrer">PodTemplate</a>).</p>
<p>Furthermore, the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#podspec-v1-core" rel="nofollow noreferrer">Pod's <code>containers</code> attribute</a> does not require an additional <code>spec</code>. Have a look at the API reference for the respective objects (linked above) for the exact specification of these object types.</p>
<p>For a CronJob, this is how the Helm template should look like (again, indentation is important!). Also, note that in this case, the <code>.spec.jobTemplate.spec.template.spec.resources.requests</code> property needs to be indented by <strong>14 spaces, and not 12</strong>. </p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ .Chart.Name }}
spec:
schedule: "0 */2 * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
metadata:
# ...
spec:
containers:
- name: foo
# ...
resources:
{{ toYaml .Values.resources | indent 14 }}
</code></pre>
<hr>
<p>Regarding the error you've received: With an indentation of 12 spaces (<code>indent 12</code>), Helm will create a YAML definition for your job similar to the following:</p>
<pre><code> spec:
containers:
- name: foo
# ...
resources:
requests:
cpu: 300m
memory: 1024Mi
</code></pre>
<p>As you can see, the <code>requests</code> property (intended to be a sub-property of the <code>resources</code> property), is now actually a property of the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#container-v1-core" rel="nofollow noreferrer">Container</a> definition. However, the Container resource does not have a field called <code>requests</code>, resulting in the error message:</p>
<blockquote>
<p>Error: error validating "": error validating data: found `invalid field requests for v1.Container</p>
</blockquote>
|
<p>I'm new to Kubernetes so I encountered the following issue. These are my steps:</p>
<p>1) I ran <code>etcd</code>:</p>
<pre><code>docker run --volume=/var/etcd:/var/etcd --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
</code></pre>
<p>2) I ran the master container:</p>
<pre><code>docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d gcr.io/google_containers/hyperkube:v1.0.1 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" --address="0.0.0.0" --api-servers=http://localhost:8080 --config=/etc/kubernetes/manifests
</code></pre>
<p>3) I ran the proxy:</p>
<pre><code>docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
</code></pre>
<p>4) I installed <code>kubectl</code></p>
<p>5) I created this simple <code>pod-file.yml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: two-containers
spec:
restartPolicy: Never
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: nginx-container
image: nginx
volumeMounts:
- name: shared-data
mountPath: /usr/share/nginx/html
- name: debian-container
image: debian
volumeMounts:
- name: shared-data
mountPath: /pod-data
command: ["/bin/sh"]
args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
</code></pre>
<p>and tried to create pod by running:</p>
<pre><code>kubectl create -f pod-file.yml
</code></pre>
<p>And I got:</p>
<pre><code>ubuntu@ubuntu:~$ kubectl create -f pod-file.yml
error: could not read an encoded object from pod-file.yml: unable to connect to a server to handle "pods": couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused
</code></pre>
<p>I found it pretty odd so I checked containers I ran earlier:</p>
<pre><code>ubuntu@ubuntu:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ae7f094bb01 gcr.io/google_containers/hyperkube:v1.0.1 "/hyperkube proxy ..." 55 minutes ago Up 55 minutes suspicious_ramanujan
ed841bc6ef26 gcr.io/google_containers/hyperkube:v1.0.1 "/hyperkube kubele..." 57 minutes ago Up 57 minutes competent_mclean
7408c640a2c8 gcr.io/google_containers/etcd:2.0.12 "/usr/local/bin/et..." About an hour ago Up About an hour elated_shaw
</code></pre>
<p>So looks like all is ok because all containers are up and running. Okay, I checked open ports in my system (<code>ubuntu 16.04</code>):</p>
<pre><code>ubuntu@ubuntu:~$ sudo netstat -nautp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1315/sshd
tcp 0 0 127.0.0.1:7001 0.0.0.0:* LISTEN 3209/etcd
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 3324/hyperkube
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 3399/hyperkube
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 3209/etcd
tcp 0 524 172.30.3.114:22 212.98.179.158:35900 ESTABLISHED 3087/sshd: ubuntu [
tcp6 0 0 :::10255 :::* LISTEN 3324/hyperkube
tcp6 0 0 :::22 :::* LISTEN 1315/sshd
tcp6 0 0 :::4001 :::* LISTEN 3209/etcd
tcp6 0 0 :::10250 :::* LISTEN 3324/hyperkube
udp 0 0 0.0.0.0:68 0.0.0.0:* 959/dhclient
udp 0 0 0.0.0.0:68 0.0.0.0:* 796/dhclient
</code></pre>
<p>And I found that there is no <code>8080</code> open TCP port that <code>kubectl</code> tried to reach. So this is the cause of my issue. </p>
<p>So my question is: what container/service/daemon should I run/launch to open this port and serve the API on it, so that kubectl can use it for the GET request to <code>http://localhost:8080/api</code>?</p>
<p>Any help would be appreciated.</p>
| <p>Creating a <a href="https://kubernetes.io/docs/getting-started-guides/scratch/" rel="nofollow noreferrer">kubernetes cluster from scratch</a> is a little more complicated. The master <code>kubelet</code> runs a <a href="https://kubernetes.io/docs/getting-started-guides/scratch/#apiserver-controller-manager-and-scheduler" rel="nofollow noreferrer">number of pods</a> to make it all go. </p>
<p>If you're looking for an all-in-one solution, use <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">minikube</a> to run in a VM. Otherwise use <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer"><code>kubeadm</code></a> to set up your master and work back from there if you want to see how each component is set up.</p>
|
<p><code>Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:default:default" cannot get services in the namespace "mycomp-services-process"</code></p>
<p>For the above issue I have created the "mycomp-service-process" namespace and checked the issue.</p>
<p>But it again shows a message like this:</p>
<p><code>Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:mycomp-services-process:default" cannot get services in the namespace "mycomp-services-process"</code></p>
| <p>Creating a namespace won't, of course, solve the issue, as that is not the problem at all.</p>
<p>In the first error the issue is that <code>serviceaccount</code> default in default namespace <code>can not get services</code> because it does not have access to list/get services. So what you need to do is assign a role to that user using <code>clusterrolebinding</code>.</p>
<p>Following the set of minimum privileges, you can first create a role which has access to list services:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: service-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["services"]
verbs: ["get", "watch", "list"]
</code></pre>
<p>What the above snippet does is create a ClusterRole which can list, get and watch services. (You will have to create a YAML file and apply the above spec.)</p>
<p>Now we can use this clusterrole to create a clusterrolebinding:</p>
<pre><code>kubectl create clusterrolebinding service-reader-pod \
--clusterrole=service-reader \
--serviceaccount=default:default
</code></pre>
<p>In the above command, <code>service-reader-pod</code> is the name of the ClusterRoleBinding, and it assigns the service-reader ClusterRole to the default serviceaccount in the default namespace. Similar steps can be followed for the second error you are facing.</p>
<p>In this case I created <code>clusterrole</code> and <code>clusterrolebinding</code> but you might want to create a <code>role</code> and <code>rolebinding</code> instead. You can check the <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="noreferrer">documentation in detail here</a></p>
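<p>For completeness, a namespaced variant might look roughly like this (a sketch only; adjust names and the namespace to your setup):</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: mycomp-services-process
  name: service-reader
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
</code></pre>
<pre><code>kubectl create rolebinding service-reader-binding \
  --role=service-reader \
  --serviceaccount=mycomp-services-process:default \
  --namespace=mycomp-services-process
</code></pre>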
|
<p>I often get errors about pods not syncing / ImagePullBackOff errors on my Kubernetes cluster on Google Kubernetes Engine. But I'm not sure how to debug the issue as I can't establish the root cause.</p>
<p>In the Google dashboard I can see the ReplicaSet has the warning:
Pod errors: ImagePullBackOff</p>
<p>If I drill down to the pod, I can see in the logs:
Message: Error syncing pod
Reason: FailedSync</p>
<p><a href="https://i.stack.imgur.com/d1Cll.png" rel="noreferrer"><img src="https://i.stack.imgur.com/d1Cll.png" alt="enter image description here"></a></p>
<p>but nothing further. Why would a pod fail to sync?</p>
<p>UPDATE: further down in the logs I see:</p>
<pre><code>Failed to pull image "cockroachdb/cockroach:v1.1.3": rpc error: code = Unknown desc = failed to register layer: ApplyLayer exit status 1 stdout: stderr: open /usr/share/zoneinfo/right/America/Pangnirtung: no space left on device
</code></pre>
<p>I've allocated cockroachdb 1 GB of persistent storage - going to try to increase to 10 GB to see if that fixes anything. Or do I need to increase the disk size on the node pools?</p>
| <p>ImagePullBackOff occurs most of the time due to typos in the image name or not being able to reach the repository:</p>
<ul>
<li>Check for typos by copy/pasting the image name into a docker pull command (copy/paste so that any error is also copy/pasted: you want to find the error and not confirm your own bias).</li>
<li>Check for reachability of DNS by logging into a pod and executing an nslookup/dig command (or ping or anything which hits the DNS). A quick sketch of both checks follows this list.</li>
</ul>
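<p>For example (the image is the one from your error message; the pod name and lookup target are placeholders):</p>
<pre><code># 1. Confirm the image name/tag can actually be pulled
docker pull cockroachdb/cockroach:v1.1.3

# 2. Confirm DNS resolution works from inside a pod
kubectl exec -it my-pod -- nslookup registry-1.docker.io
</code></pre>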
|
<p>I have a k8s cluster v1.8. I was trying to set up Prometheus on my cluster; basically I am trying to monitor services and deployments. But with the below ConfigMap I am not able to view any of my services or deployments.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
data:
prometheus.yml: |-
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
# Default to scraping over https. If required, just disable this or change to
# `http`.
scheme: https
# This TLS & bearer token file config is used to connect to the actual scrape
# endpoints for cluster components. This is separate to discovery auth
# configuration because discovery & scraping are two separate concerns in
# Prometheus. The discovery auth config is automatic if Prometheus runs inside
# the cluster. Otherwise, more config options have to be provided within the
# <kubernetes_sd_config>.
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# If your node certificates are self-signed or use a different CA to the
# master CA, then disable certificate verification below. Note that
# certificate verification is an integral part of a secure infrastructure
# so this should only be disabled in a controlled environment. You can
# disable certificate verification by uncommenting the line below.
#
# insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
# Keep only the default/kubernetes service endpoints for the https port. This
# will add targets for each API server which Kubernetes adds an endpoint to
# the default/kubernetes service.
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;https
# Scrape config for nodes (kubelet).
#
# Rather than connecting directly to the node, the scrape is proxied though the
# Kubernetes apiserver. This means it will work if Prometheus is running out of
# cluster, or can't connect to nodes for some other reason (e.g. because of
# firewalling).
- job_name: 'kubernetes-nodes'
# Default to scraping over https. If required, just disable this or change to
# `http`.
scheme: https
# This TLS & bearer token file config is used to connect to the actual scrape
# endpoints for cluster components. This is separate to discovery auth
# configuration because discovery & scraping are two separate concerns in
# Prometheus. The discovery auth config is automatic if Prometheus runs inside
# the cluster. Otherwise, more config options have to be provided within the
# <kubernetes_sd_config>.
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# Scrape config for Kubelet cAdvisor.
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with 'container_') have been removed from the
# Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
# HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
# in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
# the --cadvisor-port=0 Kubelet flag).
#
# This job is not necessary and should be removed in Kubernetes 1.6 and
# earlier versions, or it will cause the metrics to be scraped twice.
- job_name: 'kubernetes-cadvisor'
# Default to scraping over https. If required, just disable this or change to
# `http`.
scheme: https
# This TLS & bearer token file config is used to connect to the actual scrape
# endpoints for cluster components. This is separate to discovery auth
# configuration because discovery & scraping are two separate concerns in
# Prometheus. The discovery auth config is automatic if Prometheus runs inside
# the cluster. Otherwise, more config options have to be provided within the
# <kubernetes_sd_config>.
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# Scrape config for service endpoints.
#
# The relabeling allows the actual service scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/scrape`: Only scrape services that have a value of `true`
# * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
# to set this to `https` & most likely set the `tls_config` of the scrape config.
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
# * `prometheus.io/port`: If the metrics are exposed on a different port to the
# service then set this appropriately.
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
# Example scrape config for probing services via the Blackbox Exporter.
#
# The relabeling allows the actual service scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/probe`: Only probe services that have a value of `true`
- job_name: 'kubernetes-services'
metrics_path: /probe
kubernetes_sd_configs:
- role: service
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
# Example scrape config for probing ingresses via the Blackbox Exporter.
#
# The relabeling allows the actual ingress scrape endpoint to be configured
# via the following annotations:
#
# * `prometheus.io/probe`: Only probe services that have a value of `true`
# Example scrape config for pods
#
# The relabeling allows the actual pod scrape endpoint to be configured via the
# following annotations:
#
# * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
# * `prometheus.io/port`: Scrape the pod on the indicated port instead of the
# pod's declared ports (default is a port-free target if none are declared).
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name</code></pre>
<p>My target page output looks like this: <a href="https://i.stack.imgur.com/Q6F6O.png" rel="nofollow noreferrer">https://i.stack.imgur.com/Q6F6O.png</a>.
What I want from Prometheus is to also give me k8s service or deployment metrics data.
Any help will be appreciated. Thanks.</p>
| <p>As said in the docs for the configuration job called 'kubernetes-pods' that you pasted above:</p>
<blockquote>
<p><code>prometheus.io/scrape</code>: Only scrape pods that have a value of <code>true</code></p>
</blockquote>
<p>So your pods need to have that annotation set to true in order for Prometheus to scrape the configured path. Note that you can configure the path and port where your applications are exposing metrics using annotations like <code>prometheus.io/path</code> and <code>prometheus.io/port</code>.</p>
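<p>For example, an annotation block like the following in the pod template of a Deployment would opt those pods in (the path and port are placeholders for wherever your app exposes metrics):</p>
<pre><code>template:
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/path: "/metrics"
      prometheus.io/port: "8080"
</code></pre>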
|
<p>I'd like to deploy pods into my GKE Kubernetes cluster that use images from a private, third-party Docker registry (<em>not</em> GCP's private Docker registry).</p>
<p>How do I provide my GKE Kubernetes cluster with credentials to that private repository so that the images can be pulled when required?</p>
| <p>You need to create a secret that holds the credentials needed to download images from the private registry. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">This process is explained on Kubernetes documentation</a>, but it looks like</p>
<pre><code>kubectl create secret docker-registry regsecret --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
</code></pre>
<p>Then, once your secret has been created, you need to specify that you want to use this secret to pull images from the registry when creating the pod's containers with the <code>imagePullSecrets</code> key containing the name of the secret created above, like</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: private-reg
spec:
containers:
- name: private-reg-container
image: <your-private-image>
imagePullSecrets:
- name: regsecret
</code></pre>
|
<p>I created a Kubernetes cluster and I am not able to connect to it; I get the error below:</p>
<pre><code>rakesh_pal@sandbox:~$ gcloud beta container clusters get-credentials map-c1 --region us-central1 --project sandbox
Warning: you invoked `gcloud beta`, but with current configuration Kubernetes Engine v1 API will be used instead of v1beta1 API.
If you intended to use v1beta1 API instead, please set container/use_v1_api_client property to false.
Do you want to continue (Y/n)? Y
Fetching cluster endpoint and auth data.
ERROR: (gcloud.beta.container.clusters.get-credentials) ResponseError: code=400, message='us-central1' is not a valid zone.
rakesh_pal@sandbox:~$
</code></pre>
<p>Can you please help me on this.</p>
| <p>I succeeded after using this command:</p>
<pre><code>gcloud config set container/use_v1_api false
</code></pre>
|
<p>Following <a href="https://blog.hasura.io/sharing-a-local-registry-for-minikube-37c7240d0615" rel="nofollow noreferrer">this</a> guide, I'm trying to start <code>minikube</code> and forward a port at boot time.</p>
<p>My script:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
set -eux
export PATH=/usr/local/bin:$PATH
minikube status || minikube start
minikube ssh 'grep docker.for.mac.localhost /etc/hosts || echo -e "127.0.0.1\tdocker.for.mac.localhost" | sudo tee -a /etc/hosts'
minikube ssh 'test -f wait-for-it.sh || curl -O https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh'
minikube ssh 'chmod +x wait-for-it.sh && ./wait-for-it.sh 127.0.1.1:10250'
POD=$(kubectl get po --namespace kube-system | awk '/kube-registry-v0/ { print $1 }')
kubectl port-forward --namespace kube-system $POD 5000:5000
</code></pre>
<p>Everything works fine except that <code>kubectl port-forward</code> says that the pod does not exist the first time it is run:</p>
<pre class="lang-sh prettyprint-override"><code>++ kubectl get po --namespace kube-system
++ awk '/kube-registry-v0/ { print $1 }'
+ POD=kube-registry-v0-qr2ml
+ kubectl port-forward --namespace kube-system kube-registry-v0-qr2ml 5000:5000
error: error upgrading connection: unable to upgrade connection: pod does not exist
</code></pre>
<p>If I re-run:</p>
<pre class="lang-sh prettyprint-override"><code>+ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
+ minikube ssh 'grep docker.for.mac.localhost /etc/hosts || echo -e "127.0.0.1\tdocker.for.mac.localhost" | sudo tee -a /etc/hosts'
127.0.0.1 docker.for.mac.localhost
+ minikube ssh 'test -f wait-for-it.sh || curl -O https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh'
+ minikube ssh 'chmod +x wait-for-it.sh && ./wait-for-it.sh 127.0.1.1:10250'
wait-for-it.sh: waiting 15 seconds for 127.0.1.1:10250
wait-for-it.sh: 127.0.1.1:10250 is available after 0 seconds
++ kubectl get po --namespace kube-system
++ awk '/kube-registry-v0/ { print $1 }'
+ POD=kube-registry-v0-qr2ml
+ kubectl port-forward --namespace kube-system kube-registry-v0-qr2ml 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
</code></pre>
<p>I added a debug line before forwarding:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl describe pod --namespace kube-system $POD
</code></pre>
<p>and saw this:</p>
<pre class="lang-sh prettyprint-override"><code>+ POD=kube-registry-v0-qr2ml
+ kubectl describe pod --namespace kube-system kube-registry-v0-qr2ml
Name: kube-registry-v0-qr2ml
Namespace: kube-system
Node: minikube/192.168.99.100
Start Time: Thu, 28 Dec 2017 10:00:00 +0700
Labels: k8s-app=kube-registry
version=v0
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kube-registry-v0","uid":"317ecc42-eb7b-11e7-a8ce-...
Status: Running
IP: 172.17.0.6
Controllers: ReplicationController/kube-registry-v0
Containers:
registry:
Container ID: docker://6e8f3f33399605758354f3f546996067d834459781235d51eef3ffa9c6589947
Image: registry:2.5.1
Image ID: docker-pullable://registry@sha256:946480a23b33480b8e7cdb89b82c1bd6accae91a8e66d017e21e8b56551f6209
Port: 5000/TCP
State: Running
Started: Thu, 28 Dec 2017 13:22:44 +0700
</code></pre>
<p>Why does <code>kubectl</code> say that it does not exist?</p>
<hr>
<p><strong>Fri Dec 29 04:58:06 +07 2017</strong></p>
<p>Looking carefully at the events, I found something:</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
20m 20m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succ
eeded for volume "image-store"
20m 20m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succ
eeded for volume "default-token-fs7kr"
20m 20m 1 kubelet, minikube Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
20m 20m 1 kubelet, minikube spec.containers{registry} Normal Pulled Container image "registry:2.5.1" already present on machine
20m 20m 1 kubelet, minikube spec.containers{registry} Normal Created Created container
20m 20m 1 kubelet, minikube spec.containers{registry} Normal Started Started container
</code></pre>
<blockquote>
<p>Pod sandbox changed, it will be killed and re-created.</p>
</blockquote>
<p>Before:</p>
<pre><code>Containers:
registry:
Container ID: docker://47c510dce00c6c2c29c9fe69665e1241c457d0666174a7723062c534e7229c58
Image: registry:2.5.1
Image ID: docker-pullable://registry@sha256:946480a23b33480b8e7cdb89b82c1bd6accae91a8e66d017e21e8b56551f6209
Port: 5000/TCP
State: Running
Started: Thu, 28 Dec 2017 13:47:02 +0700
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 28 Dec 2017 13:22:44 +0700
Finished: Thu, 28 Dec 2017 13:45:18 +0700
Ready: True
Restart Count: 14
</code></pre>
<p>After:</p>
<pre><code>Containers:
registry:
Container ID: docker://3a7da784d3d596796111348757725f5af22b47c5edd0fc29a4ffbb84f3f08956
Image: registry:2.5.1
Image ID: docker-pullable://registry@sha256:946480a23b33480b8e7cdb89b82c1bd6accae91a8e66d017e21e8b56551f6209
Port: 5000/TCP
State: Running
Started: Thu, 28 Dec 2017 19:03:04 +0700
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 28 Dec 2017 13:47:02 +0700
Finished: Thu, 28 Dec 2017 19:00:48 +0700
Ready: True
Restart Count: 15
</code></pre>
<p>minikube logs:</p>
<blockquote>
<p>Dec 28 22:15:41 minikube localkube[3250]: W1228 22:15:41.102038 3250 docker_sandbox.go:343] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/kube-registry-v0-qr2ml through plugin: invalid network status for</p>
</blockquote>
| <blockquote>
<p>POD=$(kubectl get po --namespace kube-system | awk '/kube-registry-v0/ { print $1 }')</p>
</blockquote>
<p>Be aware that using a <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">selector</a> is almost certainly better than using text utilities, especially with "unstructured" output from <code>kubectl</code>. I don't know of any promises they make about the format of the default output, which is why <code>--output=json</code> and friends exist. However, in your case when you just want the name, there is a special <code>--output=name</code> which does what it says, with the mild caveat that the Resource prefix will be in front of the name (<code>pods/kube-registry-v0-qr2ml</code> in your case)</p>
<p>Separately, I see that you have "wait-for-it," but just because a port is accepting connections doesn't mean the Pod is <strong>Ready</strong>. You'll actually want to use <code>--output=json</code> (or more <code>awk</code> scripts, I guess) to ensure the Pod is both Running and Ready, with the latter status reached when kubernetes and the Pod agree that everything is cool.</p>
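<p>A rough sketch of both ideas, using the <code>k8s-app=kube-registry</code> label from your pod's description (adapt as needed):</p>
<pre><code># Get the pod by selector instead of grep/awk
POD=$(kubectl --namespace=kube-system get pods \
  --selector=k8s-app=kube-registry --output=name | head -1)

# Wait until the pod reports Ready before port-forwarding
until [ "$(kubectl --namespace=kube-system get "$POD" \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
  sleep 2
done

kubectl --namespace=kube-system port-forward "${POD#pods/}" 5000:5000
</code></pre>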
<p>I <em>suspect</em>, but would have to experiment to know for sure, that the error message is just misleading; it isn't truly that kubernetes doesn't know anything about your Pod, but merely that it couldn't port-forward to it in the state it's in.</p>
<p>You may also experience better success by creating a <code>Service</code> of <code>type: NodePort</code> and then talk to the Node's IP on the allocated port; that side-steps this kubectl-shell mess entirely, but does <strong>not</strong> side-step the Ready part -- only Pods in the Ready state will receive traffic from a Service</p>
<hr>
<p>As a minor, pedantic note, <code>--namespace</code> is an argument to <code>kubectl</code>, and not to <code>port-forward</code>, so the most correct invocation is <code>kubectl --namespace=kube-system port-forward kube-registry-v0-qr2ml 5000:5000</code> to ensure the argument isn't mis-parsed</p>
|
<p>What is the simplest way to train tensorflow models (using Estimator API) distributed across a home network? Doesn't look like ml-engine <a href="https://cloud.google.com/sdk/gcloud/reference/beta/ml-engine/local/train" rel="nofollow noreferrer">local train</a> allows you to specify IPs.</p>
| <p>Your best bet is to use something like Kubernetes. This is a work in progress, but I believe it does have support for distributed training as well -- <a href="https://github.com/tensorflow/k8s" rel="nofollow noreferrer">https://github.com/tensorflow/k8s</a>.</p>
<p>Alternatively for more low-tech automation options, these come to mind...</p>
<ol>
<li>You could have a script which still uses SSH and executes a script remotely.</li>
<li>You could have the individual workers poll a shared location for a file to use as a signal to download and execute a script.</li>
</ol>
|
<h1>Question</h1>
<p>It is not clear how to access the dashboard with HTTPS, and I cannot find clear documentation (it just says to use kubectl proxy). So what is the way to access the dashboard with HTTPS?</p>
<p><a href="https://github.com/kubernetes/dashboard" rel="noreferrer">Kubernetes Dashboard GitHub</a> tells:</p>
<blockquote>
<p>The shortcut <a href="http://localhost:8001/ui" rel="noreferrer">http://localhost:8001/ui</a> is deprecated. Use the full proxy URL shown above.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/dashboard/wiki/Installation#recommended-setup" rel="noreferrer">K8S Dashboard Recommended Setup</a> or
<a href="https://github.com/kubernetes/dashboard/wiki/FAQ" rel="noreferrer">K8S Dashboard FAQ</a> do not tell how to access the dashboard without proxy.</p>
<blockquote>
<p><strong>I'm accessing Dashboard over HTTPS</strong><BR><BR>
The reason why /ui redirect does not work for HTTPS is that it hasn't yet been updated in the core repository. You can track <a href="https://github.com/kubernetes/kubernetes/pull/53046#discussion_r145338754" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/53046#discussion_r145338754</a> to find out when it will be merged. Probably it won't be available until K8S 1.8.3+.
<br><br>
Correct links that can be used to access Dashboard are in our documentation. Check Accessing Dashboard to find out more.</p>
</blockquote>
<hr>
<p>However, the <a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="noreferrer">kubernetes-dashboard.yaml</a> manifest defines the service endpoint to the dashboard as below:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
</code></pre>
<p>And the cluster IP (in my environment) assigned is below.</p>
<pre><code># kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.101.199.14 <none> 443/TCP 4h
</code></pre>
<p>Simply creating an SSH tunnel to 10.101.199.14:443 and accessing it (<a href="https://localhost:8001" rel="noreferrer">https://localhost:8001</a>) shows the dashboard.</p>
<p><a href="https://i.stack.imgur.com/NSuIa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NSuIa.png" alt="enter image description here"></a></p>
<p>So, basically, there is no need to use kubectl proxy, and directly accessing clusterIP:443 is the way to access the dashboard with HTTPS?</p>
<p>Kindly suggest where is the up-to-date and accurate documentation on how to use the K8S dashboard.</p>
<h2>Environment</h2>
<pre><code># kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>As far as I know, you would not want to expose your k8s dashboard to the external world, since it's a graphical way to get access to your k8s cluster; that's why the service type of k8s-dashboard is ClusterIP instead of LoadBalancer or NodePort (Minikube uses the latter).</p>
<p>Now, if you want to access the dashboard without exposing it to the external world, there are 2 ways, which you have described in the question:</p>
<ul>
<li>kubectl proxy (it creates an HTTP proxy to the kube-apiserver)</li>
<li>kubectl port-forward (it creates a TCP proxy to the k8s-dashboard pod); a sketch follows this list</li>
</ul>
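<p>As a sketch of the second option (the pod name is a placeholder; look it up first with <code>kubectl -n kube-system get pods</code>):</p>
<pre><code># Forward a local port to the dashboard pod's HTTPS port (8443, per the service definition above)
kubectl -n kube-system port-forward kubernetes-dashboard-xxxxx 8443:8443
# then open https://localhost:8443 in a browser
</code></pre>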
|
<p>We have a web application of flask framework written in python. The application code is dockerized and deployed in kubernetes.</p>
<p>The application writes to a file in the static directory once a user hits an API. This does not seem to work: when I check the list of files in the static location using another API, it does not show that file. Please help.</p>
<p>YAML file is given below:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: backend-rc
labels:
type: backend-type
spec:
replicas: 32
template:
metadata:
labels:
type: backend-type
spec:
containers:
- name: backend-container
image: gcr.io/syw-msm/ts-m:pc-v1.0
imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
</code></pre>
<p>The dockerfile configuration is given below:</p>
<pre><code>FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential python-tk
COPY . /app
WORKDIR /app
RUN chmod -R 777 /app/static
RUN pip install -r requirements.txt
RUN pip install fbprophet
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]
</code></pre>
| <p>You have 32 replicas in your definition. The API call which makes the app write the file locally hits one pod (replica), and the call that checks for its existence hits a different one. It is expected that the file is not there.</p>
<p>If you scale the number of replicas to 1, it will work. You can try for testing purposes, but this probably isn't what you want. </p>
<p>If you need storage with r/w access from multiple replicas, you will need to rework your app to use some form of shared file system or database.<br>
In case you care about that file this is a good idea anyway - pods are meant to be ephemeral and are not for storing state. </p>
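<p>For a quick test of the single-replica case, something like this (using the ReplicationController name from your YAML) should be enough:</p>
<pre><code># Temporarily run a single replica so both API calls hit the same pod
kubectl scale rc backend-rc --replicas=1
</code></pre>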
|
<p>I'm just getting started with kubernetes and setting up a cluster on AWS using kops. In many of the examples I read (and try), there will be commands like:</p>
<pre><code>kubectl run my-app --image=mycompany/myapp:latest --replicas=1 --port=8080
kubectl expose deployment my-app --port=80 --type=LoadBalancer
</code></pre>
<p>This seems to do several things behind the scenes, and I can view the manifest files created using <code>kubectl edit deployment</code>, and so forth. However, I see many examples where people are creating the manifest files by hand, and using commands like <code>kubectl create -f</code> or <code>kubectl apply -f</code>.</p>
<p>Am I correct in assuming that both approaches accomplish the same goals, but that by creating the manifest files yourself, you have a finer grain of control?</p>
<p>Would I then have to be creating Service, ReplicationController, and Pod specs myself?</p>
<p>Lastly, if you create the manifest files yourself, how do people generally structure their projects as far as storing these files? Are they simply in a directory alongside the project they are deploying?</p>
| <p>The fundamental question is how to apply all of the K8s objects to the k8s cluster. There are several ways to do this job:</p>
<ul>
<li>Using Generators (Run, Expose)</li>
<li>Using Imperative way (Create)</li>
<li>Using Declarative way (Apply)</li>
</ul>
<p>All of the above ways have a different purpose and simplicity. For instance, if you want to quickly check whether a container is working as you desired, then you might use <strong>generators</strong>.</p>
<p>If you want to version-control the k8s objects, then it's better to use the <strong>declarative</strong> way, which helps us to determine the accuracy of data in k8s objects.</p>
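<p>Roughly, the three styles listed above look like this (using the image from your example; the file names are placeholders):</p>
<pre><code># Generators
kubectl run my-app --image=mycompany/myapp:latest --port=8080

# Imperative
kubectl create -f deployment.yaml

# Declarative
kubectl apply -f deployment.yaml
</code></pre>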
<p>Deployment, ReplicaSet and Pods are different layers which solve different problems. All of these concepts provide flexibility to k8s.</p>
<ul>
<li>Pods: It makes sure that related containers are together and provide efficiency.</li>
<li>ReplicaSet: It makes sure that k8s cluster has desirable replicas of the pods</li>
<li>Deployment: It makes sure that you can have different version of Pods and provide the capability to rollback to the previous version </li>
</ul>
<p><strong>Lastly</strong>, it depends on your use case how you want to use these concepts or methodologies. It's not about which is good or which is bad.</p>
|
<p>Is it possible to create a pod that will run a container after the main container terminates? I am imagining something similar to an init container, except that runs at the end rather than at the beginning. (I would use this to look for various metadata files created by the main process and send them to our data warehouse)</p>
| <p>No, there is not, AFAIK. You would have to use pod hooks for that. You can read about pod hooks <a href="https://blog.openshift.com/kubernetes-pods-life/" rel="noreferrer">in this post</a>.</p>
<p><a href="https://i.stack.imgur.com/aMj6Z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/aMj6Z.png" alt="enter image description here"></a></p>
<p>As explained on <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noreferrer">Kubernetes documentation</a>, containers can access a hook by implementing and registering a handler for that hook. There are two types of hook handlers that can be implemented for Containers:</p>
<ul>
<li>Exec - Executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container.</li>
<li>HTTP - Executes an HTTP request against a specific endpoint on the Container.</li>
</ul>
<p>For your use case, I'd use <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noreferrer">the pre-stop hook</a> to execute a command inside the container and look for those metadata files.</p>
<blockquote>
<p>This hook is called immediately before a container is terminated. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler.</p>
</blockquote>
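<p>A minimal sketch of what such a preStop hook could look like in a pod spec (the image and script path are placeholders):</p>
<pre><code>containers:
- name: main
  image: my-app:latest
  lifecycle:
    preStop:
      exec:
        command: ["/bin/sh", "-c", "/scripts/collect-metadata.sh"]
</code></pre>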
|
<p>I am using Ubuntu 16.04 and I installed minikube on it.
I need to copy some files to minikube; how can I do that?
I tried the following command, but it asked me for a password and I don't know it:</p>
<pre><code>scp /media/myuser/sourceFolder docker@192.168.99.100:/home/docker/destiationFolder
</code></pre>
<p><strong>Note:</strong> the minikube IP is <strong>192.168.99.100</strong>, and I used <strong>docker</strong> as the default user for minikube, but actually I don't know if that is correct or not.</p>
<p>So what is the default username and password for minikube, and how can I copy files from my local machine into minikube?</p>
<p>Thanks :)</p>
| <p>On the host machine you can use the <code>ssh-key</code> and <code>ip</code> subcommands of the <code>minikube</code> command:</p>
<pre><code>scp -i $(minikube ssh-key) <local-path> docker@$(minikube ip):<remote-path>
</code></pre>
<p>So the command from the question becomes:</p>
<pre><code>scp -i $(minikube ssh-key) /media/myuser/sourceFolder docker@$(minikube ip):/home/docker/destiationFolder
</code></pre>
|
<p>I'm new to Kubernetes and I'm trying to understand some security stuff.</p>
<p>My question is about the Group ID (= gid) of the user running the container.</p>
<p>I create a Pod using this official example: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
securityContext:
allowPrivilegeEscalation: false
</code></pre>
<p>In the documentation, they say:</p>
<blockquote>
<p>In the configuration file, the <strong>runAsUser</strong> field specifies that for any
Containers in the Pod, the <strong>first process runs with user ID 1000</strong>. The
<strong>fsGroup</strong> field specifies that <strong>group ID 2000 is associated with all</strong>
<strong>Containers in the Pod</strong>. Group ID 2000 is also associated with the
volume mounted at /data/demo and with any files created in that
volume.</p>
</blockquote>
<p>So, I go into the container:</p>
<pre><code>kubectl exec -it security-context-demo -- sh
</code></pre>
<p>I see that the first process (i.e. with PID 1) is running with user 1000 => OK, that's the behavior I expected.</p>
<pre><code> $ ps -f -p 1
UID PID PPID C STIME TTY TIME CMD
1000 1 0 0 13:06 ? 00:00:00 /bin/sh -c node server.js
</code></pre>
<p>Then, I create a file "testfile" in folder /data/demo. This file belongs to group "2000" because /data/demo has the "s" flag on group permission:</p>
<pre><code>$ ls -ld /data/demo
drwxrwsrwx 3 root 2000 39 Dec 29 13:26 /data/demo
$ echo hello > /data/demo/testfile
$ ls -l /data/demo/testfile
-rw-r--r-- 1 1000 2000 6 Dec 29 13:29 /data/demo/testfile
</code></pre>
<p>Then, I create a subfolder "my-folder" and remove the "s" flag on group permission. I create a file "my-file" in this folder:</p>
<pre><code>$ mkdir /data/demo/my-folder
$ ls -ld /data/demo/my-folder
drwxr-sr-x 2 1000 2000 6 Dec 29 13:26 /data/demo/my-folder
$ chmod g-s /data/demo/my-folder
$ ls -ld /data/demo/my-folder
drwxr-xr-x 2 1000 2000 6 Dec 29 13:26 /data/demo/my-folder
$ touch /data/demo/my-folder/my-file
$ ls -l /data/demo/my-folder/my-file
-rw-r--r-- 1 1000 root 0 Dec 29 13:27 /data/demo/my-folder/my-file
</code></pre>
<p>I'm surprised that this file belongs to group "root", i.e. group with GID 0.
I expected that it should belong to group "2000" according to this sentence in the documentation:</p>
<blockquote>
<p>The fsGroup field specifies that group ID 2000 is associated with all
Containers in the Pod</p>
</blockquote>
<p>With the following commands, I see that user with UID "1000" in the container has primary Unix group "0", not 2000.</p>
<pre><code>$ id
uid=1000 gid=0(root) groups=0(root),2000
$ cat /proc/1/status
...
Pid: 1
...
Uid: 1000 1000 1000 1000
Gid: 0 0 0 0
...
Groups: 2000
...
</code></pre>
<p>Does anyone have some explanations?</p>
<p>Why is not the user's GID set to the value of "fsGroup" field in the Pod's security context?</p>
<p>Why the user's GID is set to 0 = root?</p>
<p>Is it a bug in Kubernetes (I'm using v1.8.0)?</p>
<p>Did I misunderstand the documentation?</p>
<p>Thanks!</p>
| <p>Unfortunately, setting the primary group ID is currently not supported in Kubernetes, and will default to <code>gid=0</code>.</p>
<p>There is an open issue for implementing this: <a href="https://github.com/kubernetes/features/issues/213" rel="noreferrer">https://github.com/kubernetes/features/issues/213</a></p>
|
<p>I am using kubernetes Replication controller for my application to run in GCP, the application is a dockerized image.</p>
<p>I want to configure the Pod creating while replication creation, I am using a cluster of 64cores. </p>
<p>Can someone explain what exactly the meaning of the resource portion of the yaml</p>
<pre><code>resources:
limits:
cpu: "1"
requests:
cpu: 500m
</code></pre>
<p>Please tell me about requests and cpu.</p>
<p>I want to allocate 60 CPU core for the particular rc(replication controller) because my application API initiate 60 parallel processes and the entire API need to run 4K processes to complete.</p>
<p>Thanks in advance. </p>
| <p>The <code>resources</code> block allows you to specify requests and limits for compute resources such as CPU and memory. </p>
<p>Requests are considered by the scheduler when it needs to select a node to run the pod. Limits, on the other hand, are a runtime concern that are applied to the workload through the container runtime.</p>
<p>In your specific example, you are defining a workload that needs <em>at least</em> <code>500m</code> (or 0.5 CPU) to be scheduled on a node. That is, the pod will only run on nodes that have more than 0.5 CPU available. Once running, the pod will be limited or throttled to only use 1 CPU on the machine (Depending on your container runtime, your pod might be able to burst if there is no CPU contention).</p>
<p>You can find more detailed documentation here:
<a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run</a></p>
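<p>As a rough sketch for the 60-core scenario from the question (the container name and image are placeholders, and your nodes must actually have that much CPU free for the pod to be scheduled):</p>

<pre><code>containers:
- name: my-api                # placeholder name
  image: my-api:latest        # placeholder image
  resources:
    requests:
      cpu: "60"               # scheduler only picks a node with at least 60 cores available
    limits:
      cpu: "60"               # runtime throttles the container at 60 cores
</code></pre>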
|
<p>I have a template that is basically an utility container for running kubectl inside a pod.</p>
<p>What I want to do, is to be able to have multiple deployments of that same template, with different names, as in "utilitypod-randomID".</p>
<p>Is there a way to do that, via kubectl and some shell scripting, or something similar?</p>
<p>The current template looks like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: utilitypod
namespace: blah-dev
labels:
purpose: utility-pod
spec:
containers:
- name: utilitypod
image: blahblah/utilitypod:latest
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 28800; done;" ]
env:
- name: KUBERNETES_SERVICE_HOST
value: "api.dev.blah.internal"
- name: KUBERNETES_SERVICE_PORT
value: "443"
</code></pre>
| <p>You can replace <code>name</code> with <code>generateName</code>, which adds a random suffix. Your template will look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
generateName: utilitypod-
namespace: blah-dev
labels:
purpose: utility-pod
spec:
containers:
- name: utilitypod
image: blahblah/utilitypod:latest
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 28800; done;" ]
env:
- name: KUBERNETES_SERVICE_HOST
value: "api.dev.blah.internal"
- name: KUBERNETES_SERVICE_PORT
value: "443"
</code></pre>
<p>Mind you, this will only work with <code>kubectl create -f template.yaml</code>, not <code>apply</code>, as <code>apply</code> looks for a resource by its name and tries to compare their definitions, but this template doesn't contain a specific name.</p>
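<p>For example, assuming the template above is saved as <code>utilitypod.yaml</code>, every <code>create</code> call spawns an independently named pod:</p>

<pre><code>kubectl create -f utilitypod.yaml      # creates e.g. utilitypod-abc12
kubectl create -f utilitypod.yaml      # creates another one, e.g. utilitypod-xyz89
kubectl get pods -n blah-dev -l purpose=utility-pod
</code></pre>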
|
<h1>Question</h1>
<p>There is a way to create a service account and get token as in <a href="https://stackoverflow.com/questions/42170380/how-to-add-users-to-kubernetes-kubectl">How to Add Users to Kubernetes (kubectl)?</a> but is there a way to get or create a token for a normal user?</p>
<h1>Background</h1>
<p>Followed <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="noreferrer">Configure RBAC In Your Kubernetes Cluster</a> and created a normal user.</p>
<p>Bind a cluster role to the user as below (not sure this is correct, appreciate suggestions). I would like to create a token for the user and use it to access the dashboard but do not know how to do.</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: kube-system
name: dashboard-admin-role
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["get", "list", "watch"]
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: dashboard-admin-rolebinding
namespace: office
subjects:
- kind: User
name: myuser
apiGroup: "rbac.authorization.k8s.io"
roleRef:
kind: ClusterRole
name: dashboard-admin-role
apiGroup: "rbac.authorization.k8s.io"
</code></pre>
| <p><a href="https://kubernetes.io/docs/admin/authentication/#service-account-tokens" rel="noreferrer">API requests are tied to either a normal user or a service account, or are treated as anonymous requests</a>.</p>
<ul>
<li>Normal users are assumed to be managed by an outside, independent service (private keys, third parties like Google Accounts, even a file with a list of usernames and passwords). Kubernetes does not have objects which represent normal user accounts.</li>
<li>Service accounts are users managed by the Kubernetes API, bound to specific namespaces. Service accounts are tied to a set of credentials stored as Secrets. Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a service account, simply use the <code>kubectl create serviceaccount ACCOUNT_NAME</code> command. This creates a service account in the current namespace and an associated secret that holds the public CA of the API server and a signed JSON Web Token (JWT).</li>
</ul>
<p>So you can create a serviceaccount and then <a href="https://kubernetes.io/docs/admin/authentication/#putting-a-bearer-token-in-a-request" rel="noreferrer">use that token to authenticate the requests to the API</a>.</p>
<p>Something similar to this example</p>
<pre><code>$ kubectl create serviceaccount jenkins
serviceaccount "jenkins" created
$ kubectl get serviceaccounts jenkins -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
# ...
secrets:
- name: jenkins-token-1yvwg
</code></pre>
<p>And then fetch the token</p>
<pre><code>$ kubectl get secret jenkins-token-1yvwg -o yaml
apiVersion: v1
data:
ca.crt: (APISERVER'S CA BASE64 ENCODED)
namespace: ZGVmYXVsdA==
token: (BEARER TOKEN BASE64 ENCODED)
kind: Secret
metadata:
# ...
type: kubernetes.io/service-account-token
</code></pre>
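<p>As a minimal sketch of using that token against the API (the API server address and port are placeholders for your cluster, and <code>ca.crt</code> is the decoded CA from the same secret):</p>

<pre><code>TOKEN=$(kubectl get secret jenkins-token-1yvwg -o jsonpath='{.data.token}' | base64 --decode)
curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" https://<APISERVER>:6443/api/v1/namespaces/default/pods
</code></pre>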
|
<h1>Question</h1>
<p>It is not clear how to access the dashboard with HTTPS and cannot find a clear documentation (it just tells to use kubectl proxy). So what is the way to access the dashboard with HTTPS?</p>
<p><a href="https://github.com/kubernetes/dashboard" rel="noreferrer">Kubernetes Dashboard GitHub</a> tells:</p>
<blockquote>
<p>The shortcut <a href="http://localhost:8001/ui" rel="noreferrer">http://localhost:8001/ui</a> is deprecated. Use the full proxy URL shown above.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/dashboard/wiki/Installation#recommended-setup" rel="noreferrer">K8S Dashboard Recommended Setup</a> or
<a href="https://github.com/kubernetes/dashboard/wiki/FAQ" rel="noreferrer">K8S Dashboard FAQ</a> do not tell how to access the dashboard without proxy.</p>
<blockquote>
<p><strong>I'm accessing Dashboard over HTTPS</strong><BR><BR>
The reason why /ui redirect does not work for HTTPS is that it hasn't yet been updated in the core repository. You can track <a href="https://github.com/kubernetes/kubernetes/pull/53046#discussion_r145338754" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/53046#discussion_r145338754</a> to find out when it will be merged. Probably it won't be available until K8S 1.8.3+.
<br><br>
Correct links that can be used to access Dashboard are in our documentation. Check Accessing Dashboard to find out more.</p>
</blockquote>
<hr>
<p>However, the <a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="noreferrer">kubernetes-dashboard.yaml</a> manifest defines the service endpoint to the dashboard as below:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
</code></pre>
<p>And the cluster IP (in my environment) assigned is below.</p>
<pre><code># kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.101.199.14 <none> 443/TCP 4h
</code></pre>
<p>Simply create a SSH tunnel to the 10.101.199.14:443 and access to it (<a href="https://localhost:8001" rel="noreferrer">https://localhost:8001</a>) shows the dashboard.</p>
<p><a href="https://i.stack.imgur.com/NSuIa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NSuIa.png" alt="enter image description here"></a></p>
<p>So, basically, there is no need to use kubectl proxy and directly access the clusterIP:443 is the way to access the dashboard with HTTPS? </p>
<p>Kindly suggest where is the up-to-date and accurate documentation on how to use the K8S dashboard.</p>
<h2>Environment</h2>
<pre><code># kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>As I had no time to test the suggestion by Suresh, I used the approach below for now.</p>
<p>Get the kubernetes-dashboard service account token (given cluster-admin role).</p>
<pre><code>$ kubectl get secret -n kube-system | grep kubernetes-dashboard
kubernetes-dashboard-token-42b78 kubernetes.io/service-account-token 3 1h
$ kubectl describe secret kubernetes-dashboard-token-42b78 -n kube-system
Name: kubernetes-dashboard-token-42b78
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=kubernetes-dashboard
kubernetes.io/service-account.uid=36347792-ecdf-11e7-9ca8-06bb783bb15c
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: <TOKEN>
</code></pre>
<p>Start SSH tunnel.</p>
<pre><code>ssh -L localhost:8001:172.31.4.117:6443 centos@<K8SServer>
</code></pre>
<p>Use Chrome <a href="https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj?hl=en" rel="nofollow noreferrer">ModHeader</a> extension to send the Bearer token.</p>
<p><a href="https://i.stack.imgur.com/ThIln.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ThIln.png" alt="enter image description here"></a></p>
<p>Access the API server endpoint via SSH tunnel (local port 8001).</p>
<pre><code>https://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
</code></pre>
<p><a href="https://i.stack.imgur.com/4oAeS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4oAeS.png" alt="enter image description here"></a></p>
|
<p>I have created a deployment which create a replica-set and this replica-set create pods. Now I found couple of time the pod is getting deleted (do not know why) and then the replica-set create a new pod.</p>
<p>I am unable to find the details for the previous pod. I need to know why the pod was deleted. Kubernetes server version 1.5.6. I am also not seeing anything in events. Whereas I can see some events in newer version of Kubernetes server.</p>
<p>Please let me know in case of any more information required.</p>
<p>I have given a simple use case here where I created a deployment and then deleted pod inside it:</p>
<pre><code>[kubernate-test]$ kubectl run pod-deleted-reason-why --image=busybox -- sh -c 'while true; do echo i am running, but not sure can fail any time;sleep 10;done'
deployment "pod-deleted-reason-why" created
[kubernate-test]$ kubectl get deployments |grep reason
pod-deleted-reason-why 1 1 1 1 32s
[kubernate-test]$ kubectl get replicaset |grep reason
pod-deleted-reason-why-59c9df7594 1 1 1 59s
[kubernate-test]$ kubectl get pods |grep reason
pod-deleted-reason-why-59c9df7594-nr8rs 1/1 Running 0 1m
[kubernate-test]$ kubectl delete pod pod-deleted-reason-why-59c9df7594-nr8rs
pod "pod-deleted-reason-why-59c9df7594-nr8rs" deleted
[kubernate-test]$ kubectl get pods |grep reason
pod-deleted-reason-why-59c9df7594-qpq5b 1/1 Running 0 1m
</code></pre>
<p>Here I manually deleted the pod - <code>pod-deleted-reason-why-59c9df7594-nr8rs</code> and replica-set has created a new pod - <code>pod-deleted-reason-why-59c9df7594-qpq5b</code>.</p>
<p>If the the pod was delete for some other reason (say node reboot) how could I know why the pod was deleted?</p>
| <p>You can use <code>kubectl describe pod ...</code> to see more information</p>
<p>EDIT:</p>
<p>Correction: I didn't realize you were deleting things yourself. If you manually delete the pod, I don't think you can get that information back. If a pod existed and is now gone, the only way that happens is if something actually deleted it; if the pod merely gets restarted for whatever reason, its name stays the same.</p>
|
<p>I have a Kubernetes deployment on GCE, which I'd like to get automatically updated based on new images being created in Google Container Registry (ideally via a Build Trigger). Is there a way to do that?</p>
<p>Thanks in advance.</p>
<p>-Mark</p>
| <p>I was able to do this using GCR and Cloud Builder with a <code>cloudbuild.yaml</code> file like the below. For it to work, the service account with a name like <code>[email protected]</code> had to have the IAM permissions assigned by clicking Project -> Editor. This is required so that the Cloud Build service can make SSH keys and add them to your GCE metadata to allow Cloud Builder to SSH in. This SSHing is the big work-around to effectively run any command on your GCE VM server. </p>
<pre><code>steps:
# Build Docker image: docker build -f Dockerfile -t gcr.io/my-project/my-image:latest .
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-f', 'Dockerfile', '-t', 'gcr.io/my-project/my-image:latest', '.']
# Push to GCR: gcloud docker -- push gcr.io/my-project/my-image:latest
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/my-project/my-image:latest']
# Connect to GCE server and pull new image
- name: 'gcr.io/cloud-builders/gcloud'
args: ['compute', 'ssh', '$_SERVER', '--zone', '$_ZONE', '--command', 'gcloud docker -- pull gcr.io/my-project/my-image:latest']
# Connect to server and stop current container
- name: 'gcr.io/cloud-builders/gcloud'
args: ['compute', 'ssh', '$_SERVER', '--zone', '$_ZONE', '--command', 'docker stop my-image']
# Connect to server and remove current container
- name: 'gcr.io/cloud-builders/gcloud'
args: ['compute', 'ssh', '$_SERVER', '--zone', '$_ZONE', '--command', 'docker rm my-image']
# Connect to server and start new container
- name: 'gcr.io/cloud-builders/gcloud'
args: ['compute', 'ssh', '$_SERVER', '--zone', '$_ZONE', '--command', 'docker run --restart always --name my-image -d -p 443:443 --log-driver=gcplogs gcr.io/my-project/my-image:latest']
substitutions:
_SERVER: 'my-gce-vm-server'
_ZONE: 'us-east1-c'
</code></pre>
<h2>Bonus Pro Tips:</h2>
<ol>
<li>the <code>substitutions</code> are nice in case you prop up a new server some day and want to use it instead</li>
<li>using <code>--log-driver=gcplogs</code> makes your Docker logs show up in your Google Cloud Console's Stackdriver Logging in the appropriate "GCE VM Instance". Just be sure to have "All logs" and "Any Log Level" selected since Docker logs have no log level and are not <code>syslog</code> or <code>activity_log</code> messages </li>
</ol>
|
<p>I'm running Elasticsearch within Kubernetes and despite setting container limits on CPU use, Elasticsearch is able to exceed the limits and starve other containers.</p>
<p>For various reasons, I'm running the containers with:</p>
<blockquote>
<p>privileged: true</p>
</blockquote>
<p>Would this allow Elasticsearch to ignore the CPU limits?</p>
| <p>As you can see in <a href="https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities" rel="nofollow noreferrer">https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities</a>, a container in privileged mode has no reason not to respect the limits it is given, at least by default. If you run a container in privileged mode and grant it access to enough of the filesystem, you should be able to, e.g., modify cgroups and escape (re-set) the limits. This would require a pretty specific, targeted operation, so it's doubtful that any software not intended to manage cgroups (or to exploit the system for more resources) will do that by default.</p>
|
<p>I am playing with <code>Docker</code>, <code>Kubernetes</code>, <code>Google Cloud Platform(GCP)</code> and <code>Gitlab</code> recently to achieve <code>CI/CD</code> from <code>commit</code> to <code>staging</code>.
So far I have succeeded <code>testing</code>, <code>building</code> and <code>pushing</code> the image to <code>Container registry of Gitlab</code>. </p>
<p>I have a small node and docker application which output <code>'Hello world'</code>. Also, I have built my docker image in <code>Container registry of Gitlab</code>. At this moment the process is docker-in-docker. I want to push my image from <code>Gitlab container registry</code> to <code>Kubernetes engine</code> in GCP. I have installed both <code>kubectl</code> and <code>gcloud sdk</code>. The <a href="https://docs.gitlab.com/ee/topics/autodevops/#auto-devops" rel="nofollow noreferrer">Auto DevOps</a> seems to be promising but I want to implement my own <code>.gitlab-ci.yml</code> file.</p>
<p>Here is my <code>.gitlab-ci.yml</code> below:</p>
<pre><code>stages:
- testing
- build
- staging
variables:
CONTAINER_TEST_IMAGE: registry.gitlab.com/surajneupane55/node-app-
testing
CONTAINER_RELEASE_IMAGE: registry.gitlab.com/surajneupane55/node-
app-testing:latest
test:
stage: testing
image: node:boron
script:
- npm install
- npm test
build_image:
stage: build
only: [master]
image: docker:git
services:
- docker:dind
script:
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN
registry.gitlab.com/surajneupane55
- docker build -t $CONTAINER_TEST_IMAGE .
- docker push $CONTAINER_TEST_IMAGE
staging_site:
//I need help here!!!
//For staging my project in Kubernetes cluster in GCP
//Already created node-app Kubernetes cluster
</code></pre>
<p>Please, let me know if my approach is wrong since this is my first learning project with CI/CD. </p>
| <p>A simple <code>gitlab-ci.yml</code> file to build and deploy in GKE with <code>Google Container Registry (GCR)</code>. First, we build the image and push it to GCR. With valid credentials, we can easily connect GKE with GCR and deploy. </p>
<pre><code>stages:
- build
- deploy
variables:
CONTAINER_TEST_IMAGE: gcr.io/node-testing-189614/node-testing:latest
build_image:
stage: build
only: [master]
image: google/cloud-sdk
services:
- docker:dind
script:
- echo "$GOOGLE_KEY_SECRET" > key.json # Google Cloud service accounts
- gcloud auth activate-service-account --key-file key.json
- gcloud config set project node-testing-189614
- gcloud container builds submit -t $CONTAINER_TEST_IMAGE .
deploy_staging:
image: google/cloud-sdk
stage: deploy
script:
- echo "$GOOGLE_KEY_SECRET" > key.json # Google Cloud service accounts
- gcloud auth activate-service-account --key-file key.json
- gcloud config set project node-testing-189614
- gcloud config set compute/zone europe-west1-b
- gcloud config set container/use_client_certificate True
- gcloud container clusters get-credentials node-testing
- kubectl delete pods --all
- kubectl apply -f staging.yml
environment:
name: staging
url: http://***.***.**.***:****/ //External IP from Kubernetes
only:
- master
</code></pre>
<p>Above we delete the pods in GKE because we always want to update the image with the latest tag. The best solution available so far is to delete the pods and let the <code>staging.yml</code> file create new ones if they are not available.</p>
<p>Finally <code>staging.yml</code> looks like this:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: node-testing
spec:
replicas: 2
template:
metadata:
labels:
app: node-testing
spec:
containers:
- name: node-testing
image: gcr.io/node-testing-189614/node-testing:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
imagePullSecrets:
- name: gcr.io/node-testing-189614/node-testing
</code></pre>
|
<h3>TL;DR:</h3>
<p>Kubernetes pods created via terraform don't have a <code>/var/run/secrets</code> folder, but a pod created according to the <code>hello-minikube</code> tutorial does - <em>why is that, and how can I fix it</em>?</p>
<p>Motivation: I need traefik to be able to talk to the k8s cluster.</p>
<h3>Details</h3>
<p>I have set up a local Kubernetes cluster w/ minikube and set up terraform to work with that cluster.</p>
<p>To set up traefik, you need to create an <code>Ingress</code> and a <code>Deployment</code> which are <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/3" rel="noreferrer">not yet supported by terraform</a>. Based on the workaround posted in that issue, I use an even simpler module to execute yaml files via terraform:</p>
<pre><code># A tf-module that can create Kubernetes resources from YAML file descriptions.
variable "name" {}
variable "file_name" { }
resource "null_resource" "kubernetes_resource" {
triggers {
configuration = "${var.file_name}"
}
provisioner "local-exec" {
command = "kubectl apply -f ${var.file_name}"
}
}
</code></pre>
<p>The resources created in this way show up correctly in the k8s dashboard.</p>
<p>However, the ingress controller's pod logs:</p>
<pre><code>time="2017-12-30T13:49:10Z"
level=error
msg="Error starting provider *kubernetes.Provider: failed to create in-cluster
configuration: open /var/run/secrets/kubernetes.io/serviceaccount/token:
no such file or directory"
</code></pre>
<p><em>(line breaks added for readability)</em></p>
<p><code>/bin/bash</code>ing into the pods, I realize <strong>none of them</strong> have a path <code>/var/run/secrets</code>, except for the <code>hello-minikube</code> pod from the <a href="https://github.com/kubernetes/minikube" rel="noreferrer">minikube tutorial</a>, which was created with just two <code>kubectl</code> commands:</p>
<pre><code>$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
$ kubectl expose deployment hello-minikube --type=NodePort
</code></pre>
<p>Compared to the script in the terraform issue, I removed <code>kubectl</code> params like <code>--certificate-authority=${var.cluster_ca_certificate}</code>, but then again, I didn't provide this when setting up <code>hello-minikube</code> either, and the original script doesn't work as-is, since I couldn't figure out how to access the provider details from <code>~/.kube/config</code> in terraform.</p>
<p>I tried to find out if <code>hello-minikube</code> is doing something fancy, but I couldn't find its source code.</p>
<p>Do I have to do something specific to make the token available? According to <a href="https://github.com/containous/traefik/issues/611" rel="noreferrer">traefic issue 611</a>, the InCluster-configuration should be automated, but as it currently stands I'm stuck.</p>
<h3>Versions</h3>
<p>Host system is a Windows 10 machine</p>
<pre><code>> minikube version
minikube version: v0.24.1
> kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<h3>Related</h3>
<p>There are some related questions and github issues, but they didn't help me fix the problem yet either.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/30690186/how-do-i-access-the-kubernetes-api-from-within-a-pod-container">How do I access the Kubernetes api from within a pod container?</a></li>
<li><p><a href="https://stackoverflow.com/questions/47721646/why-would-var-run-secrets-kubernetes-io-serviceaccount-token-be-an-empty-file-i">Why would /var/run/secrets/kubernetes.io/serviceaccount/token be an empty file in a Pod?</a></p></li>
<li><p><a href="https://github.com/datawire/telepresence/issues/353" rel="noreferrer">https://github.com/datawire/telepresence/issues/353</a></p></li>
<li><a href="https://github.com/openshift/origin/issues/13865" rel="noreferrer">https://github.com/openshift/origin/issues/13865</a></li>
</ul>
| <p>Foremost, thank you for an <strong>amazing</strong> question write up; I would use this question as a template for how others should ask!</p>
<p>Can you check the <code>automountServiceAccountToken</code> field in the <code>PodSpec</code> and see if it is <code>true</code>?</p>
<p>The only other constructive question I know is whether its <code>serviceAccountName</code> points to an existing S.A.; I would hope a bogus one would bomb the deploy, but I don't know for sure what will happen in that case.</p>
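<p>For reference, this is roughly where those two fields sit in the pod spec (a sketch only; the names are examples, and the service account must already exist in the namespace):</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: traefik-ingress-controller          # example name
spec:
  serviceAccountName: traefik-ingress-controller   # must reference an existing ServiceAccount
  automountServiceAccountToken: true        # if false, /var/run/secrets/... is never mounted
  containers:
  - name: traefik
    image: traefik                          # stand-in image
</code></pre>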
|
<p>Been working on a web app with a simple database model that only needs CRUD operations, figured MongoDB would be perfect for it. The most important constraints of the project is that it be able to scale from a small amount of users to a large amount. I’ve been looking at the cloud launcher and I’ve noticed that the most popular MongoDB solution advertises a cost of ~$350/mo. This is a surprisingly large amount that makes me consider using cloud sql for my database instead. Is there a better way to deploy MongoDB to GCP that’s more fitted to my use case? I’ve been reading about automatic scaling with kubernetes but I can’t find anything about price. Any and all advice is greatly appreciated </p>
| <p>I haven't used mongodb with kubernetes but we do use the cloud launcher solution at work. We use 2 nodes(n1-standard-1) and an arbiter(micro) + 100GB storage on each node which comes up around $100 a month. You would need a replicaset in a production environment so this seems to be a reasonable base cost.</p>
<p>Kubernetes does not provide a lot of advantages over the classic GCE deployment for mongodb compared to a webserver. Setting up a replicaset on kubernetes is a bit more work compared to GCE setup. <a href="https://medium.com/google-cloud/mongodb-replica-sets-with-kubernetes-d96606bd9474" rel="nofollow noreferrer">https://medium.com/google-cloud/mongodb-replica-sets-with-kubernetes-d96606bd9474</a> and <a href="http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html" rel="nofollow noreferrer">http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html</a> should serve as decent references but wouldn't lower your costs. Scaling nodes would be slightly easier though but does not strictly translate to scaling mongodb.</p>
|
<p>Been working on a web app with a simple database model that only needs CRUD operations, figured MongoDB would be perfect for it. The most important constraints of the project is that it be able to scale from a small amount of users to a large amount. I’ve been looking at the cloud launcher and I’ve noticed that the most popular MongoDB solution advertises a cost of ~$350/mo. This is a surprisingly large amount that makes me consider using cloud sql for my database instead. Is there a better way to deploy MongoDB to GCP that’s more fitted to my use case? I’ve been reading about automatic scaling with kubernetes but I can’t find anything about price. Any and all advice is greatly appreciated </p>
| <p>I have lately been working on a similar solution.</p>
<p>GCP announced that they don't charge for Kubernetes cluster management but only for resources used by it (instances, network ...):
<a href="https://cloud.google.com/kubernetes-engine/pricing" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/pricing</a></p>
<p>In general, databases are high maintenance (data mounts, backups, migrations...), so I would not start running Mongo on Kubernetes right away. You could get there but it will be more complicated than deploying your web app on Kubernetes.</p>
<p>Better to use MongoDB as a service that supports GCP (e.g. <a href="https://www.mongodb.com/cloud/atlas" rel="nofollow noreferrer">MongoDB Atlas</a>), I have done so myself and see a few other companies do that.
If you scale gradually you should be able to control your costs.</p>
<p>The web app itself should be easy to deploy and maintain on Kubernetes.</p>
|
<p>This is a bit of a beginner's question. I'm attempting to get a simple Hello World Python flask application deployed in a kubernetes cluster on IBM Cloud. The application (<code>main.py</code>):</p>
<pre><code>import os
from flask import Flask
app = Flask(__name__)
@app.route('/')
def welcomeToMyapp():
return 'Ciao'
port = os.getenv('PORT', '5000')
if __name__ == "__main__":
app.run(host='0.0.0.0', port=int(port))
</code></pre>
<p>I build my Docker image with <code>docker build --rm -t kube-hw .</code> and <code>Dockerfile</code>:</p>
<pre><code>FROM ubuntu:latest
WORKDIR /app
ADD requirements.txt /app
RUN apt-get -y update
RUN apt-get -y install python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
ADD main.py /app
EXPOSE 80
CMD ["python3", "main.py"]
</code></pre>
<p>I run it locally with <code>docker run --rm -p 5000:5000 kube-hw</code>. That works fine. I can browse <code>http://0.0.0.0:5000/</code>.</p>
<p>However, when I run the same image on k8s on IBM Cloud I can't seem to access the URL endpoint. My deployment steps are (from the article mentioned in the answer below):</p>
<pre><code>docker tag kube-hw registry.ng.bluemix.net/sudoku/kube-hw:latest
docker push registry.ng.bluemix.net/sudoku/kube-hw:latest
kubectl run kube-hw --image=registry.ng.bluemix.net/sudoku/kube-hw:latest --port=80
kubectl expose deployment kube-hw --port=80 --target-port=5000 --type=NodePort
</code></pre>
<p>I then use <code>kubectl describe pod kube-hw</code> to get the external IP address <code>10.77.223.141</code>:</p>
<pre><code>Name: kube-hw-3409617459-5bczp
Namespace: default
Node: 10.77.223.141/10.77.223.141
Start Time: Sat, 23 Dec 2017 14:52:39 -0500
Status: Running
IP: 172.30.205.113
</code></pre>
<p>And <code>kubectl describe service kube-hw</code> to get the port <code>30930</code>:</p>
<pre><code>Name: kube-hw
Namespace: default
Labels: run=kube-hw
Annotations: <none>
Selector: run=kube-hw
Type: NodePort
IP: 172.21.250.32
Port: <unset> 80/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30930/TCP
Endpoints: 172.30.205.116:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>The URL <code>http:\\10.77.223.141:30930</code> doesn't resolve. I can look at the log and see the app running, but no requests are getting to it. The article uses <code>--target-port=8888</code>. But since I've used port <code>5000</code> in the container I changed to <code>--target-port=5000</code>. I'm wondering if I have a misalignment of port numbers somewhere.</p>
| <p>The IP address that you need is the external IP address of a worker node in your cluster. The IP address that you have identified is an internal address.</p>
<p>To get an external IP address, run <code>bx cs workers <cluster_name></code></p>
<p>Then form the URL as follows: <code>http://<external_node_ip>:<NodePort></code></p>
<p>Here's a doc with more information on using NodePort: <a href="https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport" rel="nofollow noreferrer">https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport</a></p>
|
<p>I am attempting to deploy a Neo4j <a href="https://neo4j.com/docs/operations-manual/3.3/clustering/causal-clustering/" rel="nofollow noreferrer">causal cluster</a> in Kubernetes. I've followed <a href="https://github.com/neo4j-contrib/kubernetes-neo4j" rel="nofollow noreferrer">this guide</a> on Github, and deployed the cluster in Google Kubernetes Engine and it works as expected, i.e., the cluster replicates writes in the followers, and catch up in case of failure (deleting pods).</p>
<p>What I want to do next is to expose this cluster to accessed from outside of it. </p>
<p>The challenge I'm facing is that, in order to connect to a remote causal cluster, I need to have a static name/IP address of any of the <code>CORE</code> servers, using the <code>bolt+routing</code> URI, so that the driver can route the requests accordingly (read, write, etc.).</p>
<p>As it's shown here, the <a href="https://github.com/neo4j-contrib/kubernetes-neo4j/blob/master/cores/dns.yaml" rel="nofollow noreferrer">service</a> is exposed as <code>ClusterIP</code> mode, so it's only accessible from within the cluster. I have attempted to change it to <code>NodePort</code> and <code>LoadBalancer</code> modes, and in those cases, the <code>CORE</code> Neo4j cluster members cannot find each other.</p>
<p>How can I keep the internal communication of the required ports (Raft, Transactions, etc.) and expose the <code>7687</code> (and possibly <code>7474</code> for browser) for external communication?</p>
| <p>You can have multiple <code>Service</code>s for the same deployment in Kubernetes.</p>
<ul>
<li>Have a <code>ClusterIP</code> Service, so nodes can continue to communicate with each other.</li>
<li>Have a <code>LoadBalancer</code> Service, so you can expose the public port to the Internet (a sketch of both follows below).</li>
</ul>
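<p>A minimal sketch of the two Services might look like this (the <code>app: neo4j-core</code> label and the port numbers, which follow Neo4j's causal clustering defaults, are assumptions to adjust against the StatefulSet from the guide):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: neo4j-internal
spec:
  clusterIP: None          # headless, used for intra-cluster discovery
  selector:
    app: neo4j-core        # assumed pod label
  ports:
  - name: raft
    port: 7000
  - name: tx
    port: 6000
---
apiVersion: v1
kind: Service
metadata:
  name: neo4j-public
spec:
  type: LoadBalancer
  selector:
    app: neo4j-core
  ports:
  - name: bolt
    port: 7687
  - name: browser
    port: 7474
</code></pre>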
<p>If you’re looking to restrict internal communication <em>further</em>, you should look at Kubernetes Network Policies (but I doubt that's what you need). Some resources on this:</p>
<ul>
<li><a href="https://github.com/ahmetb/kubernetes-networkpolicy-tutorial" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-networkpolicy-tutorial</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a></li>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy</a> </li>
</ul>
|
<p>I created a deployment in Kubernetes 1.9 running on GKE that makes use of secrets put into environment variables. I uploaded the secrets into GKE using a yaml file with the secrets base64 encoded. </p>
<p>What I'm seeing in my container is that the environment variable is there, but the value includes trailing whitespace. Here's what it would look like if I set up an environment variable FOO with the value "bar", where the base64 I put in the secrets yaml would be "YmFyCg==":</p>
<pre><code>$ echo $FOO
bar
$ echo \"$FOO\"
"bar "
$ echo $FOO | base64
YmFyCg==
$ echo "$FOO" | base64
YmFyIAo=
</code></pre>
<p>This is causing no end of difficulties for applications that read from environment variables expecting the value to be encoded without the additional whitespace, e.g., <code>POSTGRES_PASSWORD</code> and <code>POSTGRES_USER</code> in the <code>postgres:9.6</code> image. Other variables in the environment (including those set from my deployment yaml without secrets) do not include the trailing whitespace; it's only secrets that have the problem. </p>
| <p>Your <code>echo</code> is adding a trailing newline, and that newline becomes part of the secret value. Add <code>-n</code> to omit the trailing newline.</p>
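<p>You can see the difference directly (the first command encodes <code>bar</code> plus a newline, the second encodes just <code>bar</code>):</p>

<pre><code>$ echo bar | base64
YmFyCg==
$ echo -n bar | base64
YmFy
</code></pre>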
|
<p>I'm trying to setup Jenkins to run in a container on Kubernetes, but I'm having trouble persisting the volume for the Jenkins home directory.</p>
<p>Here's my deployment.yml file. The image is based off <code>jenkins/jenkins</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins-deployment
labels:
app: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
imagePullPolicy: "Always"
ports:
- containerPort: 8080
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
emptyDir: {}
</code></pre>
<p>However, if i then push a new container to my image repository and update the pods using the below commands, Jenkins comes back online but asks me to start from scratch (enter admin password, none of my Jenkins jobs are there, no plugins etc)</p>
<pre><code>kubectl apply -f kubernetes (where my manifests are stored)
kubectl set image deployment/jenkins-deployment jenkins=1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins:$VERSION
</code></pre>
<p>Am I misunderstanding how this volume mount is meant to work?</p>
<hr>
<p>As an aside, I also have backup and restore scripts which backup the Jenkins home directory to s3, and download it again, but that's somewhat outside the scope of this issue.</p>
| <p>You should use <strong>PersistentVolumes</strong> along with a <strong>StatefulSet</strong> instead of a <strong>Deployment</strong> resource if you wish your data to survive re-deployments/restarts of your pod. An <code>emptyDir</code> volume only lives as long as the Pod it belongs to, so when the Deployment replaces the Pod during a rollout, the Jenkins home directory starts from scratch.</p>
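<p>A minimal sketch of a StatefulSet with a volume claim template, reusing the image and mount path from the question (the <code>serviceName</code>, storage size and any <code>storageClassName</code> are assumptions; the <code>apiVersion</code> may differ on older clusters):</p>

<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
spec:
  serviceName: jenkins              # a headless Service with this name is assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: 1234567.dkr.ecr.us-east-1.amazonaws.com/mycompany/jenkins
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi             # size is an assumption; adjust as needed
</code></pre>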
|
<p>I have created a deployment which create a replica-set and this replica-set create pods. Now I found couple of time the pod is getting deleted (do not know why) and then the replica-set create a new pod.</p>
<p>I am unable to find the details for the previous pod. I need to know why the pod was deleted. Kubernetes server version 1.5.6. I am also not seeing anything in events. Whereas I can see some events in newer version of Kubernetes server.</p>
<p>Please let me know in case of any more information required.</p>
<p>I have given a simple use case here where I created a deployment and then deleted pod inside it:</p>
<pre><code>[kubernate-test]$ kubectl run pod-deleted-reason-why --image=busybox -- sh -c 'while true; do echo i am running, but not sure can fail any time;sleep 10;done'
deployment "pod-deleted-reason-why" created
[kubernate-test]$ kubectl get deployments |grep reason
pod-deleted-reason-why 1 1 1 1 32s
[kubernate-test]$ kubectl get replicaset |grep reason
pod-deleted-reason-why-59c9df7594 1 1 1 59s
[kubernate-test]$ kubectl get pods |grep reason
pod-deleted-reason-why-59c9df7594-nr8rs 1/1 Running 0 1m
[kubernate-test]$ kubectl delete pod pod-deleted-reason-why-59c9df7594-nr8rs
pod "pod-deleted-reason-why-59c9df7594-nr8rs" deleted
[kubernate-test]$ kubectl get pods |grep reason
pod-deleted-reason-why-59c9df7594-qpq5b 1/1 Running 0 1m
</code></pre>
<p>Here I manually deleted the pod - <code>pod-deleted-reason-why-59c9df7594-nr8rs</code> and replica-set has created a new pod - <code>pod-deleted-reason-why-59c9df7594-qpq5b</code>.</p>
<p>If the the pod was delete for some other reason (say node reboot) how could I know why the pod was deleted?</p>
| <p>Describe pod is the way to go. I see this kind of thing every day. Every time I launch a new deployment I list the pods with <code>kubectl get pods</code>, and once I see the "container creating" message for the new pod I want to monitor, I grab the ID and then run <code>kubectl describe pod [ID]</code>. Even if the pod is replaced by a new one you can still see the exit error. You can probably also do <code>kubectl logs [ID]</code> to see more information.</p>
|
<p>CockroachDB has a relatively simple clustering mechanism, you initialize the DB with a command line option pointing at the host name of the other cockroach machines (but, this question is relevant really for any peer to peer clustered db).</p>
<p>One of the benefits of Cockroach is you can cluster across regions within a continent. Cockroach themselves published a good k8s config to standup a cockroach cluster on stateful sets. See <a href="https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml" rel="nofollow noreferrer">this</a> config.</p>
<p>I'm trying to find a way to span the cockroach cluster across two GKE clusters in different regions. DNS and connectivity between the regions isn't really an issue, but I can't figure out how to address the stateful set instances. Internal to the cluster, they're cockroachdb-1.cockroach. Is there any way to allow these to be cross cluster addressable? One option would be to expose as a nodeport and point instances from the second cluster to machines with ports in the first cluster. That seems hacky and if the machine goes down represents a single point of failure. Any other ideas about how to do this? I also explored k8s federation, but I don't think it really addresses this issue either (though I could be wrong). </p>
<p>One final option would be exposing each instance through a load balancer...I don't really like that, but maybe it's the only way?</p>
| <p>This is a good question that I've been meaning to play around with soon. You've been checking into a reasonable set of ideas. The core problem, as you allude to, is that every cockroach process needs to be able to individually address every other cockroach process.</p>
<p>I don't know how well cluster federation has developed over the last 12-18 months, but it seems like that's where this really should be solved.</p>
<p>Barring great developments in cluster federation, the "easiest" way that comes to mind is to use host networking for all the cockroachdb pods. You can specify a few known machine IPs as the join addresses for new pods to connect to, and then they'll all be able to talk to each other. I've made this work with StatefulSets before (by setting <code>dnsPolicy: ClusterFirstWithHostNet</code> along with <code>hostNetwork: true</code>), but I'm not sure it's a well-supported use case. You'd probably be better off using a DaemonSet (with a label selector to only run on certain nodes if you don't want it on them all). Something like this: <a href="https://gist.github.com/a-robinson/ec2b86783ccbf053c83ba83170673d63" rel="nofollow noreferrer">https://gist.github.com/a-robinson/ec2b86783ccbf053c83ba83170673d63</a></p>
<p>If that doesn't tickle your fancy, then creating a service for each StatefulSet instance unfortunately is probably the next best bet. As of a fairly recent change in Kubernetes, a separate label will be created for each pod, which should make this easier than it used to be: <a href="https://github.com/kubernetes/kubernetes/pull/55329" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/55329</a></p>
<p>I'd love to see other suggestions for this, though, since it's all kind of manual or infrastructure-specific.</p>
|
<p>Sorry for the dumb question I'm new to cloud computing.</p>
<p>I'm trying to do <a href="https://medium.com/google-cloud/how-to-create-google-container-engine-gke-using-the-google-cloud-platform-console-925dd1cc9400" rel="nofollow noreferrer">this</a> tutorial and I've listened to <a href="https://youtu.be/g0dN8Hkh5H8?list=PLfa0JnnntRviUWOHSofUusCbaEe6u2MDX&t=1298" rel="nofollow noreferrer">this</a> video of Google Next.</p>
<p>According to the linked tutorials I should see two different menu items on my dashboard, but only Kubernetes Engine appears on mine.
I can't figure out why the Container Engine menu item is not visible.</p>
<p>As I understand it, the Kubernetes Engine should be a managed container engine (managed by Google) and the Container Engine is unmanaged.
If I want to experiment with Docker Swarm or with Apache Mesos, the Kubernetes Engine is not good for me. If I type 'Container Engine' into the search field, the Kubernetes Engine comes up.</p>
<p>Questions:
Why is this menu item visible in the video and in the tutorials but not on my dashboard? What happened to this menu item?
Was this functionality removed from Google Cloud? Or was it somehow merged into Kubernetes Engine?</p>
| <p>Google Container Engine is <strong>renamed</strong> as Google Kubernetes Engine. It’s the same product. </p>
<p><a href="https://cloudplatform.googleblog.com/2017/11/introducing-Certified-Kubernetes-and-Google-Kubernetes-Engine.html" rel="noreferrer">https://cloudplatform.googleblog.com/2017/11/introducing-Certified-Kubernetes-and-Google-Kubernetes-Engine.html</a></p>
<p>The tutorial and the video you linked are created before the rename has happened, and therefore they’re probably not going to reflect the name change.</p>
|
<p>I can't mount GCE PersistentVolumes using Kubernetes 1.8.0, each POD are stuck in ContainerCreating state.</p>
<p>This output is from a test environment I put up for this lab. Worth to mention is that I'm using Compute Engine, NOT Kubernetes Engine.</p>
<p>I have not configured any cloud settings and I wounder if this might be the root cause but gcloud works perfectly fine from the worker and all my VMs in this lab environment are allowed full access to the API. </p>
<p>Error message on the worker</p>
<pre><code>Jan 2 13:03:58 worker-0 kubelet[1421]: E0102 13:03:58.733299 1421 kubelet.go:1628] Unable to mount volumes for pod "mysql-cgui-01-5c85f7dd86-gt2s8_default(ab17eaf2-efb6-11e7-a385-42010af0000a)": timeout expired waiting for volumes to attach/mount for pod "default"/"mysql-cgui-01-5c85f7dd86-gt2s8". list of unattached/unmounted volumes=[mysql-cgui-01]; skipping pod
</code></pre>
<p>POD description</p>
<pre><code>bofh:~$ kubectl describe pod mysql-cgui-01-5c85f7dd86-gt2s8
Name: mysql-cgui-01-5c85f7dd86-gt2s8
Namespace: default
Node: worker-0/10.240.0.20
Start Time: Tue, 02 Jan 2018 12:15:49 +0000
Labels: name=mysql-cgui-01
pod-template-hash=1741938842
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"mysql-cgui-01-5c85f7dd86","uid":"ab10f9ef-efb6-11e7-a385-42010af...
Status: Pending
IP:
Created By: ReplicaSet/mysql-cgui-01-5c85f7dd86
Controlled By: ReplicaSet/mysql-cgui-01-5c85f7dd86
Containers:
mysql-cgui-01:
Container ID:
Image: external/mysql:latest
Image ID:
Port: 3306/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Mounts:
/data/mysql from mysql-cgui-01 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tb6sm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
mysql-cgui-01:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: mysql-cgui-01
FSType: ext4
Partition: 0
ReadOnly: false
default-token-tb6sm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tb6sm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 55m default-scheduler Successfully assigned mysql-cgui-01-5c85f7dd86-gt2s8 to worker-0
Normal SuccessfulMountVolume 55m kubelet, worker-0 MountVolume.SetUp succeeded for volume "default-token-tb6sm"
Warning FailedMount 41m (x6 over 53m) kubelet, worker-0 Unable to mount volumes for pod "mysql-cgui-01-5c85f7dd86-gt2s8_default(ab17eaf2-efb6-11e7-a385-42010af0000a)": timeout expired waiting for volumes to attach/mount for pod "default"/"mysql-cgui-01-5c85f7dd86-gt2s8". list of unattached/unmounted volumes=[mysql-cgui-01]
Warning FailedSync 41m (x6 over 53m) kubelet, worker-0 Error syncing pod
Normal SuccessfulMountVolume 38m kubelet, worker-0 MountVolume.SetUp succeeded for volume "default-token-tb6sm"
Warning FailedMount 4m (x15 over 36m) kubelet, worker-0 Unable to mount volumes for pod "mysql-cgui-01-5c85f7dd86-gt2s8_default(ab17eaf2-efb6-11e7-a385-42010af0000a)": timeout expired waiting for volumes to attach/mount for pod "default"/"mysql-cgui-01-5c85f7dd86-gt2s8". list of unattached/unmounted volumes=[mysql-cgui-01]
Warning FailedSync 2m (x16 over 36m) kubelet, worker-0 Error syncing pod
</code></pre>
<p>Testing using gcloud from worker-0</p>
<pre><code>worker-0:~$ gcloud compute disks list
NAME ZONE SIZE_GB TYPE STATUS
bofh europe-west1-d 20 pd-standard READY
controller-0 europe-west1-c 200 pd-standard READY
controller-1 europe-west1-c 200 pd-standard READY
controller-2 europe-west1-c 200 pd-standard READY
mysql-cgui-01 europe-west1-c 10 pd-standard READY
mysql-cgui-02 europe-west1-c 10 pd-standard READY
worker-0 europe-west1-c 200 pd-standard READY
worker-1 europe-west1-c 200 pd-standard READY
worker-2 europe-west1-c 200 pd-standard READY
</code></pre>
<p>Worker-0 kubelet flags</p>
<pre><code>ExecStart=/usr/local/bin/kubelet \
--allow-privileged=true \
--anonymous-auth=false \
--authorization-mode=Webhook \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--cluster-dns=10.32.0.10 \
--cluster-domain=cluster.local \
--container-runtime=docker \
--image-pull-progress-deadline=2m \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--network-plugin=cni \
--pod-cidr=10.200.0.0/24 \
--register-node=true \
--require-kubeconfig \
--runtime-request-timeout=15m \
--tls-cert-file=/var/lib/kubelet/worker-0.pem \
--tls-private-key-file=/var/lib/kubelet/worker-0-key.pem \
--cloud-provider=gce \
--v=2
</code></pre>
<p>Worker-0 kube-proxy flags</p>
<pre><code>ExecStart=/usr/local/bin/kube-proxy \
--cluster-cidr=10.200.0.0/16 \
--kubeconfig=/var/lib/kube-proxy/kubeconfig \
--proxy-mode=iptables \
--v=2
</code></pre>
<p>Controller kube-scheduler flags</p>
<pre><code>ExecStart=/usr/local/bin/kube-scheduler \
--leader-elect=true \
--master=http://127.0.0.1:8080 \
--v=2
</code></pre>
<p>Controller kube-controllermanager flags</p>
<pre><code>ExecStart=/usr/local/bin/kube-controller-manager \
--address=0.0.0.0 \
--cluster-cidr=10.200.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
--leader-elect=true \
--master=http://127.0.0.1:8080 \
--root-ca-file=/var/lib/kubernetes/ca.pem \
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--cloud-provider=gce \
--allocate-node-cidrs=false \
--configure-cloud-routes=false \
--v=2
</code></pre>
<p>controller kube-apiserver flags</p>
<pre><code>ExecStart=/usr/local/bin/kube-apiserver \
--admission-control=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--advertise-address=10.240.0.10 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--enable-swagger-ui=true \
--etcd-cafile=/var/lib/kubernetes/ca.pem \
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \
--event-ttl=1h \
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--insecure-bind-address=127.0.0.1 \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
--kubelet-https=true \
--runtime-config=api/all \
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-ca-file=/var/lib/kubernetes/ca.pem \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--cloud-provider=gce \
--v=2
</code></pre>
<p>Versions</p>
<pre><code>bofh:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
bofh:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 12d v1.8.0
worker-1 Ready <none> 12d v1.8.0
worker-2 Ready <none> 12d v1.8.0
</code></pre>
<p>PV and PVC info</p>
<pre><code>bofh:~$ kubectl get pv
No resources found.
bofh:~$ kubectl get pvc
No resources found.
bofh:~$ kubectl describe pv
bofh:~$ kubectl describe pvc
</code></pre>
<p>StorageClass config</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: standard
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
</code></pre>
<p>Storage class output</p>
<pre><code>bofh:~$ kubectl get sc
NAME PROVISIONER
slow kubernetes.io/gce-pd
standard kubernetes.io/gce-pd
</code></pre>
| <p>You need to add the cloud-provider flag to the apiserver, kubelet and controller-manager.</p>
<p>You also then need to configure a StorageClass for the GCE persistent-disk provisioner.</p>
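<p>The apiserver and controller-manager units shown in the question already pass <code>--cloud-provider=gce</code>, so the kubelet is the one to check. A minimal sketch of the kubelet unit change (the binary path and the other flags are assumptions; keep whatever flags your existing unit already passes and just add the cloud-provider flag):</p>
<pre><code>ExecStart=/usr/local/bin/kubelet \
  --cloud-provider=gce \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --v=2
</code></pre>
<p>After restarting the components, create a PVC that references the <code>standard</code> class and check <code>kubectl describe pvc</code> for provisioning events.</p>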
|
<p>Pod created by job gets killed with Message:</p>
<pre><code>Normal DeadlineExceeded Job was active longer than specified deadline
</code></pre>
<p>This is a pod running a training process with TensorFlow. The weird thing is that the pod was only just created and there is no error in the <code>kubectl logs</code> output.</p>
| <p>As stated in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup</a>, <code>DeadlineExceeded</code> means your Job was active for longer than the longest time it is allowed in your spec (<code>spec.activeDeadlineSeconds</code>). </p>
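<p>For reference, a minimal sketch of where that deadline lives in a Job spec (the name, image and command are placeholders, not your actual job):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: tf-training
spec:
  activeDeadlineSeconds: 7200   # job and its pods are terminated with DeadlineExceeded once active this long
  template:
    spec:
      containers:
      - name: trainer
        image: tensorflow/tensorflow   # placeholder image
        command: ["python", "/train.py"]   # placeholder command
      restartPolicy: Never
</code></pre>
<p>If your training legitimately needs more time, raise or remove <code>activeDeadlineSeconds</code> in the Job spec.</p>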
|
| <p>I'm having an issue right now when trying to create a PHP (Laravel) pod in my Kubernetes cluster. As php-fpm is a service run by the user www-data, it doesn't have access to the environment variables that Docker or Kubernetes sets up, which is the source of my problem.</p>
<p>We have beta and production pods, and previously, the only thing I did was setup an environment variable in my Kubernetes deployment which would tell the container if it had to use the beta or the prod configuration.</p>
<p>Sadly, since the php-fpm process can't access them, this is not possible. Is there a clean way to handle this? One way I can think of doing it right now is to put my two different .env files (config file for Laravel) in two different secrets in Kubernetes, then after that have a script as my Docker image's entrypoint that copies the right .env at the root of the project since the script will have access to the ENV variables.</p>
<p>If someone has a cleaner way in mind, I'm all ears! </p>
| <p>You could put the <code>clear_env = no</code> in your php-fpm.conf or a file included from it, <a href="https://github.com/docker-library/php/issues/74#issuecomment-79102385" rel="nofollow noreferrer">as discussed here</a>.</p>
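<p>A minimal sketch of what that include could look like (the file path and name are assumptions; use whatever pool config file your image already includes, e.g. something under <code>php-fpm.d/</code>):</p>
<pre><code>; zz-env.conf
[www]
clear_env = no
</code></pre>
<p>With <code>clear_env = no</code>, the pod's environment variables (including the ones set in your Kubernetes deployment) are passed through to the PHP workers, so Laravel can read them via <code>env()</code> / <code>getenv()</code> without copying different <code>.env</code> files around.</p>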
|
<p>I have a running <code>DaemonSet</code> which is running on all nodes. I want to remove it from a node in order to completely drain it, as <code>kubectl drain</code> doesn't get rid of them. Without deleting my <code>DaemonSet</code>, what's a good way to temporarily remove those pods from the node? I've tried draining it and deleting the <code>DaemonSet</code> pods, but the <code>DaemonSet</code> will still reschedule them, disregarding that the node is set as <code>Unschedulable: true</code>.</p>
| <p>You need to use <code>--ignore-daemonsets</code> key when you drain kubernetes node:</p>
<blockquote>
<p>--ignore-daemonsets=false: Ignore DaemonSet-managed pods.</p>
</blockquote>
<p>So, in order to drain kubernetes node with <code>DaemonSets</code> in cluster, you need to execute:</p>
<pre><code>kubectl drain <node_name> --ignore-daemonsets
</code></pre>
<p>If you need to <strong>Remove DaemonSet pod from a node completely</strong>, you can specify a <code>.spec.template.spec.nodeSelector</code> in <code>DaemonSet</code> (the DaemonSet controller will create Pods on nodes which match that node selector) and set that label to all nodes except the one you need to completely drain.</p>
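<p>A minimal sketch of that approach (the label key and value are placeholders):</p>
<pre><code># label every node except the one you want to drain completely
kubectl label nodes <node-1> <node-2> run-daemonset=true
</code></pre>
<p>and in the DaemonSet spec:</p>
<pre><code>spec:
  template:
    spec:
      nodeSelector:
        run-daemonset: "true"
</code></pre>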
|
| <p>How do I delete all the contents from a Kubernetes node?<br/> Contents include deployments, replica sets etc. I tried to delete deployments separately, but Kubernetes recreates all the pods again.<br/> Is there any way to delete all the replica sets present on a node? </p>
| <p>If you are testing things, the easiest way would be</p>
<pre><code>kubectl delete deployment --all
</code></pre>
<p>Although if you are using minikube, the easiest would probably be to delete the machine and start again with a fresh node</p>
<pre><code>minikube delete
minikube start
</code></pre>
<p>If we are talking about a production cluster, <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="noreferrer">Kubernetes has a built-in feature to drain a node of the cluster</a>, removing all the objects from that node safely.</p>
<blockquote>
<p>You can use <code>kubectl drain</code> to safely evict all of your pods from a node before you perform maintenance on the node. Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.</p>
<p><strong>Note</strong>: By default <code>kubectl drain</code> will ignore certain system pods on the node that cannot be killed; see the kubectl drain documentation for more details.</p>
<p>When <code>kubectl drain</code> returns successfully, that indicates that all of the pods (except the ones excluded as described in the previous paragraph) have been safely evicted (respecting the desired graceful termination period, and without violating any application-level disruption SLOs). It is then safe to bring down the node by powering down its physical machine or, if running on a cloud platform, deleting its virtual machine.</p>
</blockquote>
<p>First, identify the name of the node you wish to drain. You can list all of the nodes in your cluster with</p>
<pre><code>kubectl get nodes
</code></pre>
<p>Next, tell Kubernetes to drain the node:</p>
<pre><code>kubectl drain <node name>
</code></pre>
<p>Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node). <code>drain</code> waits for graceful termination. You should not operate on the machine until the command completes.</p>
<p>If you leave the node in the cluster during the maintenance operation, you need to run</p>
<pre><code>kubectl uncordon <node name>
</code></pre>
<p>afterwards to tell Kubernetes that it can resume scheduling new pods onto the node.</p>
<p>Please, note that if there are any pods that are not managed by ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job, then <code>drain</code> will not delete any pods unless you use --force, <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain" rel="noreferrer">as mentioned in the docs</a>.</p>
<pre><code>kubectl drain <node name> --force
</code></pre>
|
<p>I'm following <a href="https://www.katacoda.com/courses/kubernetes/liveness-readiness-healthchecks" rel="nofollow noreferrer">https://www.katacoda.com/courses/kubernetes/liveness-readiness-healthchecks</a></p>
<p>I saw an interesting command to parse a JSON object using the flag <code>-o=jsonpath</code> in kubectl, as below: </p>
<pre><code># pod=$(kubectl get pods --selector="name=bad-frontend" --output=jsonpath={.items..metadata.name})
# kubectl describe pod $pod
</code></pre>
<p>How is <code>map[string]interface {}, []interface {}{map[string]interface {}, map[string]interface {}{}</code> used in the raw JSON?</p>
<p>How can I easily know that <code>{.items..metadata.name}</code> is correct, rather than something like <code>{..item.metadata..name}</code>?</p>
<p>Raw output to check against (if you pass a wrong jsonpath expression, kubectl returns this to you):</p>
<pre><code>map[string]interface {}{"kind":"List", "apiVersion":"v1", "metadata":map[string]interface {}{}, "items":[]interface {}{map[string]interface {}{"status":map[string]interface {}{"phase":"Running", "conditions":[]interface {}{map[string]interface {}{"type":"Ready", "status":"False", "lastProbeTime":interface {}(nil), "lastTransitionTime":"2018-01-03T14:41:26Z", "reason":"ContainersNotReady", "message":"containers with unready status: [bad-frontend]"}}, "hostIP":"127.0.0.1", "podIP":"172.18.0.3", "startTime":"2018-01-03T14:41:26Z", "containerStatuses":[]interface {}{map[string]interface {}{"ready":false, "restartCount":6, "image":"katacoda/docker-http-server:unhealthy", "imageID":"docker://sha256:dc680e51481ae0256b5483e0d3c0bd188215a67b0926d4ed07e8a9fe55e16154", "containerID":"docker://3a067a667e8e73784c761d75c3614d3aa138a6d498b22b48e1c23aa1d8158170", "name":"bad-frontend", "state":map[string]interface {}{"waiting":map[string]interface {}{"reason":"CrashLoopBackOff", "message":"Back-off 2m40s restarting failed container=bad-frontend pod=bad-frontend-zbgrw_default(28bdddce-f094-11e7-9f7b-0242ac110036)"}}, "lastState":map[string]interface {}{"terminated":map[string]interface {}{"containerID":"docker://3a067a667e8e73784c761d75c3614d3aa138a6d498b22b48e1c23aa1d8158170", "exitCode":2, "reason":"Error", "startedAt":"2018-01-03T14:44:57Z", "finishedAt":"2018-01-03T14:45:06Z"}}}}}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"bad-frontend-zbgrw", "generateName":"bad-frontend-", "uid":"28bdddce-f094-11e7-9f7b-0242ac110036", "labels":map[string]interface {}{"name":"bad-frontend"}, "annotations":map[string]interface {}{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"default\",\"name\":\"bad-frontend\",\"uid\":\"24a58c4a-f094-11e7-9f7b-0242ac110036\",\"apiVersion\":\"v1\",\"resourceVersion\":\"15\"}}\n"}, "namespace":"default", "selfLink":"/api/v1/namespaces/default/pods/bad-frontend-zbgrw", "resourceVersion":"300", "creationTimestamp":"2018-01-03T14:41:18Z"}, "spec":map[string]interface {}{"restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "nodeName":"127.0.0.1", "securityContext":map[string]interface {}{}, "containers":[]interface {}{map[string]interface {}{"terminationMessagePath":"/dev/termination-log", "imagePullPolicy":"IfNotPresent", "name":"bad-frontend", "image":"katacoda/docker-http-server:unhealthy", "resources":map[string]interface {}{}, "livenessProbe":map[string]interface{}{"timeoutSeconds":1, "periodSeconds":10, "successThreshold":1, "failureThreshold":3, "httpGet":map[string]interface {}{"path":"/", "port":80, "scheme":"HTTP"}, "initialDelaySeconds":1}, "readinessProbe":map[string]interface {}{"httpGet":map[string]interface {}{"path":"/", "port":80, "scheme":"HTTP"}, "initialDelaySeconds":1, "timeoutSeconds":1, "periodSeconds":10, "successThreshold":1, "failureThreshold":3}}}}}}}
</code></pre>
| <p>I'm happy that I found a good solution explained in detail here:
<a href="https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba" rel="nofollow noreferrer">https://gist.github.com/so0k/42313dbb3b547a0f51a547bb968696ba</a></p>
<p>Basically you have to install jq or jiq, then type <code>kubectl get no -o json | jid -q | pbcopy</code>.
The fun part begins: as you type <code>.</code>, the available fields (<code>items</code>, etc.) are proposed for you to choose from, so you can easily construct your filter syntax interactively.</p>
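<p>Once you have a candidate expression, a quick way to verify it is to simply run both forms and see which one returns names (the selector is taken from the question):</p>
<pre><code>kubectl get pods --selector="name=bad-frontend" -o jsonpath='{.items..metadata.name}'   # prints the pod name(s)
kubectl get pods --selector="name=bad-frontend" -o jsonpath='{..item.metadata..name}'   # prints nothing (or errors): there is no "item" field
</code></pre>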
|
<p>My backend, in Rocket (Rust), does not have compression built in. So, it is dependent on the proxy to compress it. Though nginx ingress controller supports it, I thought whether the default one had it too as it has high availability.</p>
<p>If it does not, then how should I set this up?</p>
| <p><strong>UPDATE(2018-01-31):</strong> It looks like Cloud HTTP(S) Load Balancer <strong>supports</strong> GZIP. You just have to serve compressed content from your backend and the load balancer will pass it on.</p>
<p>However, NGINX is confused because of the <code>Via</code> header (it thinks proxies don't support GZIP, and on most cloud providers this is correct, but not Google). See this FAQ: <a href="https://cloud.google.com/cdn/docs/troubleshooting#compression-not-working" rel="noreferrer">https://cloud.google.com/cdn/docs/troubleshooting#compression-not-working</a> </p>
<blockquote>
<p>If you are using the nginx web server software, modify the nginx.conf
configuration file to enable compression. The location of this file
depends on where nginx is installed. In many Linux distributions, the
file is stored at /etc/nginx/nginx.conf. To allow nginx compression to
work with HTTP(S) load balancing, add the following two lines to the
http section of nginx.conf:</p>
<pre><code>gzip_proxied any;
gzip_vary on;
</code></pre>
</blockquote>
|
<p>I setup my Kubernetes cluster using kops, and I did so from local machine. So my <code>.kube</code> directory is stored on my local machine, but I setup <code>kops</code> for state storage in <code>s3</code>.</p>
<p>I'm in the process of setting up my CI server now, and I want to run my <code>kubectl</code> commands from that box. How do I go about importing the existing state to that server?</p>
| <p>To run <code>kubectl</code> command, you will need the cluster's apiServer URL and related credentials for authentication. Those data are by convention stored in <code>~/.kube/config</code> file. You may also view it via <code>kubectl config view</code> command.</p>
<p>In order to run <code>kubectl</code> on your CI server, you need to make sure the <code>~/.kube/config</code> file contains all the information that <code>kubectl</code> client needs. </p>
<p>With kops, a simple naive solution is to:</p>
<p>1) install kops, kubectl on your CI server</p>
<p>2) config the AWS access credential on your CI server (either via IAM Role or simply env vars), make sure it has access to your s3 state store path</p>
<p>3) set env var for kops to access your cluster:</p>
<pre><code> export NAME=${YOUR_CLUSTER_NAME}
export KOPS_STATE_STORE=s3://${YOUR_CLUSTER_KOPS_STATE_STORE}
</code></pre>
<p>4) Use kops export command to get the kubecfg needed for running kubectl</p>
<pre><code> kops export kubecfg ${YOUR_CLUSTER_NAME}
</code></pre>
<p>see <a href="https://github.com/kubernetes/kops/blob/master/docs/cli/kops_export.md" rel="noreferrer">https://github.com/kubernetes/kops/blob/master/docs/cli/kops_export.md</a></p>
<p>Now the <code>~/.kube/config</code> file on your CI server should contain all the information <code>kubectl</code> needs to access your cluster.</p>
<p>Note that this will use the default admin account on your CI server. To implement a more secure CI/CD environment, you should create a service account bind to a required permission scope (a namespace or type or resources for example), and place its credential on your CI server machine.</p>
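<p>A rough sketch of that last point (the names and the chosen role are placeholders; pick a role scoped to what your CI actually needs):</p>
<pre><code>kubectl create serviceaccount ci-deployer --namespace default
kubectl create rolebinding ci-deployer-edit \
  --clusterrole=edit \
  --serviceaccount=default:ci-deployer \
  --namespace default
</code></pre>
<p>You can then build a kubeconfig for the CI server around that service account's token instead of shipping the admin credentials.</p>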
|
<p>I can't get a rolebinding right in order to get node status from an app which runs in a pod on GKE.</p>
<p>I am able to create a pod from there but not get node status. Here is the role I am creating:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["nodes"]
verbs: ["get", "watch", "list"]
</code></pre>
<p>This is the error I get when I do a getNodeStatus:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "nodes \"gke-cluster-1-default-pool-36c26e1e-2lkn\" is forbidden: User \"system:serviceaccount:default:sa-poc\" cannot get nodes/status at the cluster scope: Unknown user \"system:serviceaccount:default:sa-poc\"",
"reason": "Forbidden",
"details": {
"name": "gke-cluster-1-default-pool-36c26e1e-2lkn",
"kind": "nodes"
},
"code": 403
}
</code></pre>
<p>I tried with some minor variations but did not succeed.</p>
<p>Kubernetes version on GKE is 1.8.4-gke.</p>
| <p>Subresource permissions are represented as <code><resource>/<subresource></code>, so in the role, you would specify <code>resources: ["nodes","nodes/status"]</code></p>
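<p>A minimal sketch of the corrected role plus a binding for the service account from the error message (the binding name is a placeholder):</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/status"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader-binding
subjects:
- kind: ServiceAccount
  name: sa-poc
  namespace: default
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>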
|
<p>How can I create, deploy and run and manage <a href="https://en.wikipedia.org/wiki/Cron" rel="nofollow noreferrer">Cron jobs</a> on Hasura?</p>
| <p>Hasura suggests two ways to deploy and run Cron jobs.</p>
<h3>Cron microservice</h3>
<p>Hasura already has a microservice to run Cron jobs. </p>
<p>If you already have a Hasura project run:</p>
<pre><code>hasura microservice create mycron --template=python-cron
</code></pre>
<p>Change <code>mycron</code> to whatever you want to name your microservice. This will create a custom Python microservice designed to run Cron jobs. (Follow further instructions as prompted by <code>hasura</code> CLI)</p>
<p>To deploy this on Hasura, git commit and push to your cluster's remote.</p>
<pre><code>$ git add .
$ git commit -m "Add cron job"
$ git push hasura master
</code></pre>
<p>To know more about how to customize this microservice, you can read the <a href="https://hasura.io/hub/project/arvishankar/python-cron" rel="noreferrer">docs</a>.</p>
<h3>Kubernetes Cron jobs</h3>
<p>Since, Hasura runs on Kubernetes and Kubernetes (>= v1.8) already provides <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">Cron Jobs</a> as a first class resource, it is recommended to use Kubernetes Cron jobs wherever possible.</p>
<p>If you have <code>kubectl</code> installed you can check your Kubernetes version by running: <code>kubectl version</code>. In the output, the "server version" shows the version of the Kubernetes cluster. If you are running Kubernetes >= v1.8, we recommend you to use Kubernetes Cron jobs.</p>
<p>When using Kubernetes Cron jobs, you can version control your cron job specs inside your Hasura project, and use the <code>kubectl</code> tool to create and manage them.</p>
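<p>A minimal sketch of such a spec (the name, image and schedule are placeholders):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cron
            image: busybox
            args: ["/bin/sh", "-c", "date; echo running scheduled task"]
          restartPolicy: OnFailure
</code></pre>
<p>Apply and inspect it with <code>kubectl apply -f cron.yaml</code> and <code>kubectl get cronjobs</code>.</p>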
|
<p><strong>Kubernetes</strong> <strong>Secrets</strong> create files that are mounted as a <strong>volumeMount</strong>.</p>
<p>There is possibility to put multiple files in a single Secret.</p>
<p>Is there a way to create a Secret that would put files in a <strong>directory structure</strong> (i.e. in a folder) ?</p>
<p>There is no sign of it in the docs, and using <code>/</code> is not allowed in the key name, so it seems like it is not possible (except for making multiple secrets and mounting them in different volumes)</p>
<p>Does anyone know better?</p>
| <p>This is actually possible now: You need to use the <code>items</code> field to project the key/value pairs in the secret to specific paths that you want. See the example in the section titled "Projection of secret keys to specific paths" in the secrets documentation, which I've linked and copied below: <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
items:
- key: username
path: my-group/my-username
</code></pre>
<p>This will place the secret with key "username" at the path <code>/etc/foo/my-group/my-username</code> (i.e. under the volume's <code>mountPath</code>)</p>
|
| <p>I would like to know how to call a service from any pod, inside or outside the node.</p>
<p>I have 3 nodes with deployment and services. I already have a kube-proxy.</p>
<p>I exec bash on other pod:</p>
<pre><code>kubectl exec --namespace=develop myotherdpod-78c6bfd876-6zvh2 -i -t -- /bin/bash
</code></pre>
<p>And inside my other pod I have tried to exec curl:</p>
<pre><code>curl -v http://myservice.develop.svc.cluster.local/user
</code></pre>
<p>This is my created service:</p>
<pre><code>{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "myservice",
"namespace": "develop",
"selfLink": "/api/v1/namespaces/develop/services/mydeployment-svc",
"uid": "1b5fb4ae-ecd1-11e7-8599-02cc6a4bf8be",
"resourceVersion": "10660278",
"creationTimestamp": "2017-12-29T19:47:30Z",
"labels": {
"app": "mydeployment-deployment"
}
},
"spec": {
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": 8080
}
],
"selector": {
"app": "mydeployment-deployment"
},
"clusterIP": "100.99.99.140",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
}
</code></pre>
| <p>It looks to me that something may be incorrect with the Network Overlay you deployed. First of all, I would double check that the pod can access kube-dns and obtain the proper IP of the service.</p>
<pre><code>nslookup myservice.develop.svc.cluster.local
nslookup myservice # If they are in the same namespace it should work as well
</code></pre>
<p>If you are able to confirm that, then I would also check if services like kube-proxy are working correctly. You can do it by using </p>
<pre><code>systemctl status kube-proxy
</code></pre>
<p>If that does not work I will also check the pods from the Overlay network by executing</p>
<pre><code>kubectl get pods --namespace=kube-system
</code></pre>
<p>If they are all ok, I would try using a different network overlay: <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/</a></p>
<p>If that did not work either, I would check if there are firewall rules preventing some communication between the nodes.</p>
|
<p>I am very new to kubernetes and have just got a stock kubernetes v.1.3.5 cluster up on AWS using kube-up. So far, I have been playing around with kubernetes in understanding it's mechanics (nodes, pods, svc and stuff). Based on my initial (or maybe crude) understanding , I had few questions:</p>
<p>1) How does routing to cluster IP work here (i.e in kube-aws) ? I see that the services have IPs in the range 10.0.0.0/16. I did a deployment with rc=3 of stock nginx and then attached a service to it with Node Port exposed. All works great! I can connect to the service from my dev machine. This nginx service has a cluster IP of 10.0.33.71:1321. Now, if I ssh into one of the minions(or nodes or VMS) and do a "telnet 10.0.33.71 1321", it connects as expected. But I am clueless how this works, I couldn't find any routes related to 10.0.0.0/16 in the VPC setup by kubernetes. What exactly happens under the hood here that results in a successful connection for app like telnet? However, If I ssh into the master node and do "telnet 10.0.33.71 1321", it does not connect. Why does it fail to connect from master?</p>
<p>2) There is a cbr0 interface inside each node. Each minion node has cbr0 configured as 10.244.x.0/24 and master has cbr0 as 10.246.0.0/24.
I can ping to any of the 10.244.x.x pods from any of the nodes(including master). But I am not able to ping 10.246.0.1 (cbr0 inside master node) from any of the minion nodes. What could be happening here? </p>
<p>Here's the routes set up by kubernetes in aws. VPC.</p>
<pre><code>Destination Target
172.20.0.0/16 local
0.0.0.0/0 igw-<hex value>
10.244.0.0/24 eni-<hex value> / i-<hex value>
10.244.1.0/24 eni-<hex value> / i-<hex value>
10.244.2.0/24 eni-<hex value> / i-<hex value>
10.244.3.0/24 eni-<hex value> / i-<hex value>
10.244.4.0/24 eni-<hex value> / i-<hex value>
10.246.0.0/24 eni-<hex value> / i-<hex value>
</code></pre>
| <p><strong><a href="http://www.markbetz.net/" rel="noreferrer">Mark Betz</a></strong> (<a href="https://twitter.com/markbetz" rel="noreferrer">SRE at Olark</a>) presents Kubernetes networking in three articles:</p>
<ul>
<li><strong><a href="https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727" rel="noreferrer">pods</a></strong></li>
<li><strong><a href="https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82" rel="noreferrer">services</a></strong>: </li>
<li><strong><a href="https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078" rel="noreferrer">ingress</a></strong></li>
</ul>
<p>For a pod, you are looking at:</p>
<p><a href="https://i.stack.imgur.com/L88G7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/L88G7.png" alt="pod network"></a></p>
<p>You find:</p>
<ul>
<li><strong>etho0</strong>: a "physical network interface"</li>
<li><strong>docker0</strong>/<strong>cbr0</strong>: a <a href="https://wiki.linuxfoundation.org/networking/bridge" rel="noreferrer">bridge</a> for connecting two <a href="https://en.wikipedia.org/wiki/Ethernet" rel="noreferrer">ethernet</a> segments no matter their protocol.</li>
<li><code>veth0</code>, <code>1</code>, <code>2</code>: Virtual Network Interface, one per container.<br>
<strong>docker0</strong> is the <a href="https://en.wikipedia.org/wiki/Default_gateway" rel="noreferrer">default Gateway</a> of <strong>veth0</strong>. It is named <strong>cbr0</strong> for "custom bridge".<br>
Kubernetes starts containers by sharing the <a href="https://docs.docker.com/engine/reference/run/#network-settings" rel="noreferrer">same <strong>veth0</strong></a>, which means each container must expose different ports.</li>
<li><strong>pause</strong>: a special container started in "<code>pause</code>", to detect SIGTERM sent to a pod, and forward it to the containers.</li>
<li><strong>node</strong>: an host</li>
<li><strong>cluster</strong>: a group of nodes</li>
<li><strong><a href="https://en.wikipedia.org/wiki/Routing_table" rel="noreferrer">router/gateway</a></strong> </li>
</ul>
<p>The last element is where things start to be more complex:</p>
<blockquote>
<p>Kubernetes assigns an overall address space for the bridges on each node, and then assigns the bridges addresses within that space, based on the node the bridge is built on.<br>
Secondly, it adds routing rules to the gateway at 10.100.0.1 telling it how packets destined for each bridge should be routed, i.e. which node’s <code>eth0</code> the bridge can be reached through. </p>
<p>Such a combination of virtual network interfaces, bridges, and routing rules is usually called an <strong><a href="https://en.wikipedia.org/wiki/Overlay_network" rel="noreferrer">overlay network</a></strong>. </p>
</blockquote>
<p>When a pod contacts another pod, it goes through a <strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">service</a></strong>.<br>
Why?</p>
<blockquote>
<p>Pod networking in a cluster is neat stuff, but by itself it is insufficient to enable the creation of durable systems. That’s because <strong>pods in Kubernetes are ephemeral</strong>.<br>
You can use a pod IP address as an endpoint but <strong>there is no guarantee that the address won’t change the next time the pod is recreated</strong>, which might happen for any number of reasons.</p>
</blockquote>
<p>That means: you need a reverse-proxy/dynamic load-balancer. And it better be resilient.</p>
<blockquote>
<p><strong>A service is a type of kubernetes resource that causes a proxy to be configured to forward requests to a set of pods</strong>.<br>
The set of pods that will receive traffic is determined by the selector, which matches labels assigned to the pods when they were created</p>
</blockquote>
<p>That service uses its own network. By default, its type is "<strong>ClusterIP</strong>"; it has its own IP.</p>
<p>Here is the communication path between two pods:</p>
<p><a href="https://i.stack.imgur.com/h0F5B.png" rel="noreferrer"><img src="https://i.stack.imgur.com/h0F5B.png" alt="two pods network"></a></p>
<p>It uses a <strong><a href="https://kubernetes.io/docs/reference/generated/kube-proxy/" rel="noreferrer">kube-proxy</a></strong>.<br>
This proxy uses itself a <strong><a href="http://www.netfilter.org/" rel="noreferrer">netfilter</a></strong>.</p>
<blockquote>
<p><strong>netfilter is a rules-based packet processing engine</strong>.<br>
It runs in kernel space and gets a look at every packet at various points in its life cycle.<br>
It matches packets against rules and when it finds a rule that matches it takes the specified action.<br>
Among the many actions it can take is redirecting the packet to another destination. </p>
</blockquote>
<p><a href="https://i.stack.imgur.com/0nHJP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0nHJP.png" alt="kube-proxy and netfilter"></a></p>
<blockquote>
<p>In this mode, kube-proxy:</p>
<ul>
<li><strong>opens a port</strong> (10400 in the example above) on the local host interface to listen for requests to the test-service,</li>
<li><strong>inserts netfilter rules to reroute packets destined for the service IP to its own port</strong>, and</li>
<li><strong>forwards those requests</strong> to a pod on port 8080.</li>
</ul>
<p>That is how a request to <code>10.3.241.152:80</code> magically becomes a request to <code>10.0.2.2:8080</code>.<br>
Given the capabilities of netfilter all that’s required to make this all work for any service is for <strong>kube-proxy to open a port and insert the correct netfilter rules for that service</strong>, which it does in response to notifications from the master api server of changes in the cluster.</p>
</blockquote>
<p>But:</p>
<blockquote>
<p>There’s one more little twist to the tale.<br>
I mentioned above that <strong>user space proxying is expensive due to marshaling packets</strong>.
In kubernetes 1.2, <strong>kube-proxy gained the ability to run in iptables mode</strong>. </p>
<p>In this mode, kube-proxy mostly ceases to be a proxy for inter-cluster connections, and instead delegates to netfilter the work of detecting packets bound for service IPs and redirecting them to pods, all of which happens in kernel space.<br>
In this mode kube-proxy’s job is more or less limited to keeping netfilter rules in sync.</p>
</blockquote>
<p>The network schema becomes:</p>
<p><a href="https://i.stack.imgur.com/aGxtU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/aGxtU.png" alt="netfilter in action"></a></p>
<p>However, this is not a good fit for <em>external</em> (public facing) communication, which needs an external fixed IP.</p>
<p>You have dedicated services for that: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer"><strong>nodePort</strong> and <strong>LoadBalancer</strong></a>: </p>
<blockquote>
<p>A service of type <strong>NodePort</strong> is a ClusterIP service with an additional capability: it is reachable at the IP address of the node as well as at the assigned cluster IP on the services network.<br>
The way this is accomplished is pretty straightforward:</p>
<p>When kubernetes creates a NodePort service, <strong>kube-proxy allocates a port in the range 30000–32767 and opens this port on the <code>eth0</code> interface of every node (thus the name “NodePort”)</strong>.</p>
<p>Connections to this port are forwarded to the service’s cluster IP.</p>
</blockquote>
<p>You get:</p>
<p><a href="https://i.stack.imgur.com/APpSf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/APpSf.png" alt="load-balancer / nodeport"></a></p>
<p>A LoadBalancer is more advanced, and allows exposing services on standard ports.<br>
See the mapping here:</p>
<pre><code>$ kubectl get svc service-test
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
openvpn 10.3.241.52 35.184.97.156 80:32213/TCP 5m
</code></pre>
<p>However:</p>
<blockquote>
<p>Services of type LoadBalancer have some limitations. </p>
<ul>
<li>You cannot configure the lb to terminate https traffic.</li>
<li>You can’t do virtual hosts or path-based routing, <strong>so you can’t use a single load balancer to proxy to multiple services in any practically useful way</strong>. </li>
</ul>
<p>These limitations led to the addition in version 1.2 of a separate kubernetes resource for configuring load balancers, called an <strong><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a></strong>.</p>
<p><strong>The Ingress API supports TLS termination, virtual hosts, and path-based routing</strong>. It can easily set up a load balancer to handle multiple backend services.<br>
The implementation follows a basic kubernetes pattern: a resource type and a controller to manage that type.<br>
The resource in this case is an Ingress, which comprises a request for networking resources</p>
</blockquote>
<p>For instance:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: "gce"
spec:
tls:
- secretName: my-ssl-secret
rules:
- host: testhost.com
http:
paths:
- path: /*
backend:
serviceName: service-test
servicePort: 80
</code></pre>
<blockquote>
<p><strong>The ingress controller is responsible for satisfying this request by driving resources in the environment to the necessary state.<br>
When using an Ingress you create your services as type NodePort and let the ingress controller figure out how to get traffic to the nodes.</strong> </p>
<p>There are ingress controller implementations for GCE load balancers, AWS elastic load balancers, and for popular proxies such as NGiNX and HAproxy. </p>
</blockquote>
|
<p>I'm using kubectl cp to copy a jar file from my local file system into a the home directory of a POD in my minikube environment. However, the only way I can confirm that the copy succeeded is to issue a new kubectl cp command to copy the file back to a temp directory and compare the checksums. Is there a way to view the copied files directly? </p>
| <p>You can execute commands in a container using <code>kubectl exec</code> command.</p>
<p><strong>For example</strong>:</p>
<p>to check files in any folder:</p>
<pre><code>kubectl exec <pod_name> -- ls -la /
</code></pre>
<p>or to calculate <code>md5sum</code> of any file:</p>
<pre><code>kubectl exec <pod_name> -- md5sum /some_file
</code></pre>
|
| <p>If I develop an online real-time game with WebSockets, with multiple players running on different containers, how do I sync data when containers are added or removed while they are playing?</p>
<p>Does Kubernetes have any good feature for this case?</p>
| <p><a href="https://stackoverflow.com/users/5547718/thatbriandude">ThatBrianDude</a> already gave an awesome answer, and mine will not be that good. But I think <a href="https://stackoverflow.com/questions/46952537/how-to-use-kubernetes-to-do-multiplayer-online-game-with-websocket#comment80879143_46952660">your last comment</a> gave us more hints about the architecture you have in mind. I hope my humble answer will shed a light on more ideas to your game. Here are some suggestions:</p>
<p>First, avoid keeping any state in the websocket apps.</p>
<blockquote>
<p>The basic idea with containers is that they should be stateless.
<a href="https://stackoverflow.com/a/46952660/5269825">ThatBrianDude</a></p>
</blockquote>
<p>So, why not use caches and a messaging layer to help you with that. Imagine the following examples:</p>
<p>Situation 1: if the client sends an action to the websocket server, the server should put it in a queue/topic (some other service will process it later on).</p>
<p>Situation 2: The server might also listen to a(some) topic(s) for some types of messages, and send them back to the clients that need that information.</p>
<p>Situation 3: when the client asks for information or if the websocket server needs some information to send to the client, the server must read it from a cache, as reading from DB might be slow for a multiplayer game.</p>
<p>Situation 4: eventually a container is killed. The clients connected to that server will receive a connection error, and should reconnect. That means another handshake, and the player might feel it, depending on what the game was doing, so killing a container should not happen that often. But that would be just it, no information is lost.</p>
<p>This way, the websocket server containers are totally stateless, and the messaging topics and caches will help you to: provide all the information needed to containers, and; keep websockets, persistance and processing isolated and scalable.</p>
<p>Summing up, the information would flow like this:</p>
<ol>
<li>clients are showering the websocket server containers with actions</li>
<li>websocket servers just send them to the messaging layer</li>
<li>processing containers (which can be scalled too!) receive those messages, process them, save to the database and/or to a cache and eventually send more messages to other topics</li>
<li>(optional) websocket servers receive those messages and send them to the clients.</li>
</ol>
<p>Or like this:</p>
<ol>
<li>clients ask for information or websocket servers periodically need to send the world state to clients</li>
<li>websocket servers look up the information in the cache</li>
<li>and send it to the clients.</li>
</ol>
<p>Or even like this:</p>
<ol>
<li>Some processing servers are independent of messages, they just read the game/world state (from the cache?) periodically</li>
<li>they process the physics and mechanics of the game</li>
<li>and save the result back in the cache, which will be sent to the clients by the websocket servers periodically, or send it in a topic so the websocket server can listen to it and send it to the clients.</li>
</ol>
<p>Lastly, don't forget the suggestion to have one machine responsible for one game/world. It would be nice if each processing server (or each thread of a server) works with one game/world. That would make it easier to persist things without the need to sync stuff.</p>
|
<p>I am following <a href="https://github.com/kubernetes-incubator/cri-containerd/blob/master/docs/crictl.md#run-a-pod-sandbox-using-a-config-file" rel="nofollow noreferrer">Run a pod sandbox</a> link to create a sandbox using crictl. I am getting below error</p>
<pre><code>root@cri-master:~# crictl runs sandbox-config.json
FATA[0000] run pod sandbox failed: rpc error: code = Unknown desc =failed to setup network for sandbox "3ad790c715c817d22e6f6df95bf612dbc0ceaf05d2d94f94e62aa4b57234ea57": pods "nginx-sandbox" not found
</code></pre>
<p>Please could someone tell me what is being done wrong here</p>
| <p>So I posted same question in the cri-containerd github issue and I got this answer. This explains everything</p>
<p><a href="https://github.com/kubernetes-incubator/cri-containerd/issues/520" rel="nofollow noreferrer">Unable to create sandbox</a></p>
<blockquote>
<p>IIRC, calico daemonset watch apiserver to get a list of pods, and apply network configuration based on the pod spec. I think this is the problem.</p>
<p>If you use <code>crictl create</code> a sandbox yourself, there won't be corresponding pod on apiserver, thus calico reports that error.</p>
<p>In fact, we don't recommend user to run <code>crictl runs</code> and <code>crictl create</code> on a Kubernetes node , those commands are there just for some special debug case. And that's also why we make the command so hard to use (user need to prepare for a configuration file to create a sandbox/container).</p>
<p>In fact, even if you are able to create the sandbox, <code>kubelet</code> will eventually stop and delete it because it doesn't see the corresponding pod on apiserver.</p>
<p>If you just want to try crictl, the error is caused by the reason above.<br />
If you just want to try create a pod, use kubectl instead. :)</p>
</blockquote>
|
<p>I'm running a Kubernetes cluster, which has worked fine for several months. Now, today, when I was about to deploy some updates, I get timeouts from the server.</p>
<p>Running <code>$ kubectl get nodes</code> yields</p>
<pre><code>Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
</code></pre>
<p>Running <code>$ kubectl get pods --all-namespaces</code> yields</p>
<pre><code>Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
</code></pre>
<p>Running <code>$ kubectl get deployments</code> yields</p>
<pre><code>Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.extensions)
</code></pre>
<p>Running <code>$ kubectl get svc</code> yields</p>
<pre><code>Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get services)
</code></pre>
<p>Running <code>$ kubectl cluster-info</code> yields (note no output after the master)</p>
<pre><code>Kubernetes master is running at https://cluster.mysite.com
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
<p>As I get these timeouts for every command, troubleshooting is impossible. </p>
<p>How can I continue from here to access my servers? I'm using <code>kube-aws</code>, and an AWS CloudFormation VPC.</p>
<p>Thanks for your time.</p>
<p><strong>EDIT</strong>:</p>
<p>As per request, I ran <code>$ kubectl get pods -v 7</code> and after a bunch of cache returns got this:</p>
<pre><code>I0103 16:51:32.196859 25644 round_trippers.go:414] GET cluster.mysite.com/api/v1/nodes
I0103 16:51:32.196888 25644 round_trippers.go:421] Request Headers:
I0103 16:51:32.196894 25644 round_trippers.go:424] Accept: application/json
I0103 16:51:32.196899 25644 round_trippers.go:424] User-Agent: kubectl/v1.8.3 (darwin/amd64) kubernetes/f0efb3c
I0103 16:52:32.239841 25644 round_trippers.go:439] Response Status: 504 Gateway Timeout in 60044 milliseconds
</code></pre>
<p>I also ran <code>$ kubectl cluster-info dump -v 7</code> and got:</p>
<pre><code>I0103 16:51:32.196888 25644 round_trippers.go:421] Request Headers:
I0103 16:51:32.196894 25644 round_trippers.go:424] Accept: application/json
I0103 16:51:32.196899 25644 round_trippers.go:424] User-Agent: kubectl/v1.8.3 (darwin/amd64) kubernetes/f0efb3c
I0103 16:52:32.239841 25644 round_trippers.go:439] Response Status: 504 Gateway Timeout in 60044 milliseconds
I0103 16:52:32.242362 25644 helpers.go:207] server response object: [{
"metadata": {},
"status": "Failure",
"message": "the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)",
"reason": "Timeout",
"details": {
"kind": "nodes",
"causes": [
{
"reason": "UnexpectedServerResponse",
"message": "{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The list operation against nodes could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{\"name\":\"list\",\"kind\":\"nodes\"},\"code\":500}"
}
]
},
"code": 504
}]
</code></pre>
<p><strong>EDIT 2:</strong>
Okay, now I'm just getting <code>Unable to connect to the server: EOF</code> on every request and I'm starting to get scared. This is a production cluster and I can't even access it to try to troubleshoot. Anyone have a hint on how to proceed?</p>
<p><strong>EDIT 3:</strong>
I've gotten as far as realizing that the etcd cluster was not working properly, with 2/3 nodes out of sync. Restarting one node had it properly joining the cluster again, but the second one can't start the services. The services that don't start are:</p>
<ul>
<li>etcdadm-check.service</li>
<li>etcdadm-save.service</li>
<li>etcdadm-update-status.service</li>
<li>[email protected]</li>
</ul>
<p>The three former ones all give the error <code>etcdadm-check.service: Control process exited, code=exited status=3</code> and the last one gives <code>[email protected]: Start request repeated too quickly.</code>.</p>
<p>Any hints on how to handle this?</p>
<p>Also, after restoring the second etcd, I get <code>Unable to connect to the server: x509: certificate signed by unknown authority</code> when running any <code>kubectl</code> commands. Does this signify data loss? My certificates are still valid for over half a year, and I haven't changed anything about them.</p>
<p><strong>EDIT 4</strong>:
I still have the etcd-issue, but am following the instructions in camil's answer at this time, will update with the result. However, I solved the issue with the certificates not being valid simply by re-running <code>$ kube-aws render credentials</code> with the proper paths to my intermediate root CA, so that issue is solved.</p>
| <p>To avoid the timeouts, you can pass this flag <code>--request-timeout='1s'</code>. This will allow further debugging. </p>
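<p>For example (any read-only command works; the verbose flag just surfaces the API responses):</p>
<pre><code>kubectl get nodes --request-timeout='1s' -v=7
kubectl get componentstatuses --request-timeout='1s'
</code></pre>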
<p>I see you are running <code>kube-aws</code>, so it will be safe to terminate the master instances (at least one, if you run multiple masters). The ASG will replace them automatically. You can also do this with the etcd nodes.</p>
<p>If the issue still persists, then you have to ssh into masters and check the logs and services by running commands like:</p>
<pre><code>journalctl -xe
systemctl status -l kubelet.service
systemctl status -l flanneld.service
systemctl status -l docker.service
rkt list
</code></pre>
<p>You can also use this function to debug using <code>kubectl</code> from inside the masters:</p>
<pre><code>kubectl() {
/usr/bin/docker run --rm --net=host \
-v /etc/resolv.conf:/etc/resolv.conf \
-v /srv/kube-aws/plugins:/srv/kube-aws/plugins \
quay.io/coreos/hyperkube:v1.9.0_coreos.0 /hyperkube kubectl "$@"
}
</code></pre>
<p>Then try these commands: </p>
<pre><code>kubectl get componentstatus
kubectl cluster-info
kubectl get pods -n kube-system
kubectl get events -n kube-system
</code></pre>
<p>Check the connectivity to ETCD from masters</p>
<pre><code>export $(cat /etc/etcd-environment | tr -d "'")
/usr/bin/etcdctl \
--ca-file=/etc/kubernetes/ssl/etcd-trusted-ca.pem \
--cert-file=/etc/kubernetes/ssl/etcd-client.pem \
--key-file=/etc/kubernetes/ssl/etcd-client-key.pem \
--endpoints="${ETCD_ENDPOINTS}" \
cluster-health
</code></pre>
|
<p>If I am able to SSH into the master or any nodes in the cluster, is it possible for me to get 1) the kubeconfig file or 2) all information necessary to compose my own kubeconfig file?</p>
| <p>You could find configuration on master node under /etc/kubernetes/admin.conf (on v1.8+).</p>
<p>On some versions of kubernetes, this can be found under ~/.kube</p>
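<p>A quick sketch of pulling it down to another machine and using it (the host and user are placeholders):</p>
<pre><code>scp user@master:/etc/kubernetes/admin.conf ~/.kube/config
kubectl --kubeconfig ~/.kube/config get nodes
</code></pre>
<p>The file contains the apiserver URL plus the client certificate/key (or token), which is all the information kubectl needs.</p>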
|
| <p>Is there a use case where kubectl apply should not be used to get a rolling update? </p>
<p>Despite the command's original intent, if we give kubectl apply a source file describing the cluster's resources that is updated over time, is there any use case in which it should not be used?</p>
| <p>The update strategy is specified in <code>.spec.strategy</code> in case of a Deployment and <code>.spec.updateStrategy.type</code> for DaemonSets and StatefulSets</p>
<p>For Deployments <code>.spec.strategy.type</code> can be “Recreate” or “RollingUpdate”. “RollingUpdate” is the default value.</p>
<p>For DaemonSets and StatefulSets, <code>.spec.updateStrategy.type</code> can be "OnDelete" or "RollingUpdate". "OnDelete" is the default value.</p>
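<p>As a minimal sketch, declaring the strategy explicitly in a Deployment looks like this (the name and image are placeholders):</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx
</code></pre>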
<p><code>kubectl apply</code> will respect these strategies, so I see no reason not to use it.</p>
<p><code>kubectl rolling-update</code> is used only for ReplicationControllers which were replaced by Deployments</p>
|
<p>I have attempted to follow <a href="https://crondev.com/kubernetes-nginx-ingress-controller/" rel="nofollow noreferrer">this</a> tutorial to play around with an nginx ingress controller. Some details have changed as I was trying to get it to work - only one backend service instead of two, some port numbers and everything runs in the default namespace. I have a kubernetes master and 3 minions on CentOS Linux release 7.4.1708 VMs.</p>
<p>The backend and default backend are both accessible within the cluster through their respective service endpoints.
The nginx status page is available externally (MasterHostIP:32000/nginx_status).
The issue is that http requests to the backend app are refused either through the external path or from within the cluster to the nginx-ingress-controller-service endpoints.
Hopefully someone out there can see something obvious that I'm missing, or has had similar issues and knows how to overcome this.</p>
<pre><code>[root@master1 ~]# kubectl get endpoints
NAME ENDPOINTS AGE
appsvc1 10.244.1.2:80,10.244.3.4:80 3h
default-backend 10.244.1.3:8080,10.244.2.3:8080,10.244.3.5:8080 14d
kubernetes 10.134.45.136:6443 15d
nginx-ingress 10.244.2.5:18080,10.244.2.5:9999 2h
[root@master1 ~]# wget 10.244.2.5:9999
--2018-01-05 12:10:56-- http://10.244.2.5:9999/
Connecting to 10.244.2.5:9999... failed: Connection refused.
[root@master1 ~]# wget 10.244.2.5:18080
--2018-01-05 12:12:52-- http://10.244.2.5:18080/
Connecting to 10.244.2.5:18080... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-01-05 12:12:52 ERROR 404: Not Found.
</code></pre>
<p>Requests to appsvc1 endpoints behave as expected, returning static html with "Hello app1!".</p>
<p><strong>Backend app deployment:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app1
spec:
replicas: 2
template:
metadata:
labels:
app: app1
spec:
containers:
- name: app1
image: dockersamples/static-site
env:
- name: AUTHOR
value: app1
ports:
- containerPort: 80
</code></pre>
<p><strong>Backend Service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: appsvc1
spec:
ports:
- port: 9999
protocol: TCP
targetPort: 80
selector:
app: app1
</code></pre>
<p><strong>Application Ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: /
name: app-ingress
spec:
rules:
- host: test.com
http:
paths:
- backend:
serviceName: appsvc1
servicePort: 9999
path: /app1
</code></pre>
<p><strong>nginx ingress controller deployment</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
replicas: 1
revisionHistoryLimit: 3
template:
metadata:
labels:
app: nginx-ingress-lb
spec:
terminationGracePeriodSeconds: 60
serviceAccount: nginx
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
imagePullPolicy: Always
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
args:
- /nginx-ingress-controller
- '--default-backend-service=$(POD_NAMESPACE)/default-backend'
- '--configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf'
- --v=6
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
- containerPort: 9999
- containerPort: 18080
</code></pre>
<p><strong>nginx-ingress-controller-service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
spec:
type: NodePort
ports:
- port: 9999
nodePort: 30000
name: http
- port: 18080
nodePort: 32000
name: http-mgmt
selector:
app: nginx-ingress-lb
</code></pre>
<p><strong>nginx ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: test.com
http:
paths:
- backend:
serviceName: nginx-ingress
servicePort: 18080
</code></pre>
<p><strong>Update</strong>
It looks like port 9999 is not open in the ingress controller pod. Can anyone suggest why port 18080 gets opened but not 9999? :</p>
<pre><code>[root@master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
app1-54cf69ff86-l7kp4 1/1 Running 0 17d
app1-54cf69ff86-qkksw 1/1 Running 0 17d
app2-7bc7498cbf-459vd 1/1 Running 0 2d
app2-7bc7498cbf-8x9st 1/1 Running 0 2d
default-backend-78484f94cf-fv6v4 1/1 Running 0 17d
default-backend-78484f94cf-vzp8l 1/1 Running 0 17d
default-backend-78484f94cf-wmjqh 1/1 Running 0 17d
nginx-ingress-controller-cfb567f76-wbck5 1/1 Running 0 15h
[root@master1 ~]# kubectl exec nginx-ingress-controller-cfb567f76-wbck5 -it bash
root@nginx-ingress-controller-cfb567f76-wbck5:/# netstat -tlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:http 0.0.0.0:* LISTEN 14/nginx: master pr
tcp 0 0 0.0.0.0:http 0.0.0.0:* LISTEN 14/nginx: master pr
tcp 0 0 0.0.0.0:https 0.0.0.0:* LISTEN 14/nginx: master pr
tcp 0 0 0.0.0.0:https 0.0.0.0:* LISTEN 14/nginx: master pr
tcp 0 0 0.0.0.0:18080 0.0.0.0:* LISTEN 14/nginx: master pr
tcp 0 0 0.0.0.0:18080 0.0.0.0:* LISTEN 14/nginx: master pr
tcp6 0 0 [::]:http [::]:* LISTEN 14/nginx: master pr
tcp6 0 0 [::]:http [::]:* LISTEN 14/nginx: master pr
tcp6 0 0 [::]:https [::]:* LISTEN 14/nginx: master pr
tcp6 0 0 [::]:https [::]:* LISTEN 14/nginx: master pr
tcp6 0 0 [::]:18080 [::]:* LISTEN 14/nginx: master pr
tcp6 0 0 [::]:18080 [::]:* LISTEN 14/nginx: master pr
tcp6 0 0 [::]:10254 [::]:* LISTEN 5/nginx-ingress-con
</code></pre>
| <p>10.x addresses are internal, so the 404s are expected. The ingress controller doesn't make your internal services external all of a sudden. The ingress service is supposed to proxy requests to deployed services via a single address. Since I see you deployed the controller via NodePort, try making a request to the node's IP on port 30000 with the Host header <code>test.com</code>, and you should get your app. Every service you externalize will be available via the ingress IP; the Host header is set by HTTP clients and the ingress controller will fan out requests based on that (as well as path and whatever else you want). So really it only works if you pay for domain names, as I assume you don't own test.com, and asking users to fake the request header is not a reasonable interface.</p>
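<p>For example, the request suggested above would look like this (the node IP is a placeholder; the path and port come from the manifests in the question):</p>
<pre><code>curl -v -H "Host: test.com" http://<node-ip>:30000/app1
</code></pre>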
<p>Also, since you have minion nodes (plural) you should really change the controller service type from NodePort to LoadBalancer. Node port is used in tutorials so as to be cheaper - LoadBalancer will spin up a cloud load balancer that you would have to pay for. Node port is OK while you're getting situated but certainly not something you can do later on. I really wish people would stop putting it in tutorials without any explanations</p>
|
<p>According to documentation of both <a href="https://github.com/kubernetes/kops/blob/master/docs/aws.md" rel="nofollow noreferrer">kops</a> and <a href="https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-kops/" rel="nofollow noreferrer">aws</a>, the dedicated <code>kops</code> user needs <code>IAMFullAccess</code> permission to operate properly.</p>
<p>Why is this permission needed?</p>
<p>Is there a way to avoid (i.e. restrict) this, given that it is a bit too intrusive to create a user with such a permission?</p>
<p><strong>edit</strong>: one could assume that the specific permission is needed to attach the respective roles to the <a href="https://github.com/kubernetes/kops/blob/master/pkg/model/iam/tests/iam_builder_master_strict.json" rel="nofollow noreferrer">master(s)</a> and <a href="https://github.com/kubernetes/kops/blob/master/pkg/model/iam/tests/iam_builder_node_strict.json" rel="nofollow noreferrer">node(s)</a> instances; </p>
<p>therefore perhaps the question / challenge becomes how to:</p>
<ul>
<li>not use <code>IAMFullAccess</code></li>
<li>sync with the node creation / bootstrapping process and attach the above roles; (perhaps create a cluster on pre-configured instances? - no idea if kops provides for that)</li>
</ul>
| <p>As far as I understand the kops design, it's meant to be an end-to-end tool for provisioning k8s clusters. If you want to provision your nodes separately and deploy k8s on them, I would suggest using another tool, such as kubespray or kubeadm: </p>
<p><a href="https://github.com/kubernetes-incubator/kubespray" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/kubespray</a> </p>
<p><a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a></p>
|
| <p>I have two dockerized applications which need to run in Kubernetes. </p>
<p>Here is the scenario which needs to achieve.</p>
<p>Docker-1, which is a flask application.</p>
<p>Docker-2, which is a python script, will take input from Docker-1, execute, and needs to write some file to a volume shared with the Docker-1 container.</p>
<p>Here is the flask web-app code.</p>
<pre><code>from flask import Flask, request, Response, jsonify
app = Flask(__name__)
@app.route('/')
def root():
return "The API is working fine"
@app.route('/run-docker')
def run_docker_2():
args = "input_combo"
query = <sql query>
<initiate the docker run and pass params>
exit
#No return message need run as async
if __name__ == "__main__":
app.run(debug=True, host='0.0.0.0', port=8080, threaded=True)
</code></pre>
<p>Docker file</p>
<pre><code>FROM ubuntu:latest
MAINTAINER Abhilash KK "[email protected]"
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential python-tk
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]
</code></pre>
<p>requirements.txt</p>
<pre><code>flask
</code></pre>
<p>Python script for the second docker. start_docker.py</p>
<pre><code>import sys
input_combo = sys.argv[1]
query = sys.argv[2]
def function_to_run(input_combination,query):
#starting the model final creating file
function_to_run(input_combo, query)
</code></pre>
<p>Docker file 2</p>
<pre><code>FROM python
COPY . /script
WORKDIR /script
CMD ["python", "start_docker.py"]
</code></pre>
<p>Please help me connect the two docker images, or let me know any other way to achieve this. The basic requirement is to add a message to some queue, have that queue be polled at a time interval, and start the processes in FIFO manner.</p>
<p>Any other approach in GCP service to initiate an async job will take input from the client and create a file which is accessible from web-app python.</p>
| <p>First, create a Pod running the "Docker-1" application, then use the <a href="https://github.com/kubernetes-incubator/client-python" rel="nofollow noreferrer">Kubernetes python client</a> to spawn a second pod with "Docker-2".
You can share a volume between your pods in order to return the data to Docker-1. In my code sample I'm using a hostPath volume, but you then need to ensure that both pods are on the same node. I added that code for readability.</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: docker1
labels:
app: docker1
spec:
replicas: 1
selector:
matchLabels:
app: docker1
template:
metadata:
labels:
app: docker1
spec:
containers:
- name: docker1
image: abhilash/docker1
ports:
- containerPort: 8080
volumeMounts:
- mountPath: /shared
name: shared-volume
volumes:
- name: shared-volume
hostPath:
path: /shared
</code></pre>
<p>The code of run_docker_2 handler:</p>
<pre><code>from kubernetes import client, config
...
args = "input_combo"
config.load_incluster_config()
v1 = client.CoreV1Api()
pod = client.V1Pod()
pod.metadata = client.V1ObjectMeta(name="docker2")
container = client.V1Container(name="docker2")
container.image = "abhilash/docker2"
container.args = [args]
volumeMount = client.V1VolumeMount(name="shared", mount_path="/shared")
container.volume_mounts = [volumeMount]
hostpath = client.V1HostPathVolumeSource(path = "/shared")
volume = client.V1Volume(name="shared")
volume.host_path = hostpath
spec = client.V1PodSpec(containers = [container])
spec.volumes = [volume]
pod.spec = spec
v1.create_namespaced_pod(namespace="default", body=pod)
return "OK"
</code></pre>
<p>A handler to read the returned results:</p>
<pre><code>@app.route('/read-results')
def run_read():
with open("/shared/results.data") as file:
return file.read()
</code></pre>
<p>Note that it could be useful to add a <a href="https://github.com/kubernetes-incubator/client-python/blob/master/examples/example2.py" rel="nofollow noreferrer">watcher</a> to wait for the pod to finish the job and then do some cleanup (delete the pod for instance)</p>
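<p>A minimal sketch of such a watcher, reusing the <code>client</code>/<code>v1</code> objects from the handler above and assuming the spawned pod keeps the name <code>docker2</code> (error handling and timeouts are omitted):</p>
<pre><code>from kubernetes import watch

# Block until the docker2 pod reaches a terminal phase, then delete it.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default",
                      field_selector="metadata.name=docker2"):
    phase = event["object"].status.phase
    if phase in ("Succeeded", "Failed"):
        v1.delete_namespaced_pod(name="docker2", namespace="default",
                                 body=client.V1DeleteOptions())
        w.stop()
</code></pre>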
|
<p>I have deployed a Kubernetes cluster to a custom virtual network on Azure using acs-engine. There is an ASP.NET Core 2.0 Kestrel app running on the agent VMs and the app is accessed over VPN through a Service of the Azure internal load balancer type. Now I would like to enable HTTPS on the service. I have already obtained a domain name and a certificate but have no idea how to proceed. Apparently configuring Kestrel to use HTTPS and copying the certificate to each container is not the way to go.</p>
<p>I have checked out tutorials such as <a href="https://pascalnaber.wordpress.com/2017/10/27/configure-ingress-on-kubernetes-using-azure-container-service/" rel="nofollow noreferrer" title="ingress on k8s using acs">ingress on k8s using acs</a> and <a href="https://blogs.technet.microsoft.com/livedevopsinjapan/2017/02/28/configure-nginx-ingress-controller-for-tls-termination-on-kubernetes-on-azure-2/" rel="nofollow noreferrer">configure Nginx Ingress Controller for TLS termination on k8s on Azure</a> but both of them end up exposing a public external IP and I want to keep the IP internal and not accessible from the internet. Is this possible? Can it be done without ingresses and their controllers?</p>
| <p>While for some reason I still can't access the app through the ingress, I was able to create an internal ingress service with the IP I want using the following configuration:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
name: nginx-ingress-svc
spec:
type: LoadBalancer
ports:
- port: 443
targetPort: 443
loadBalancerIP: 130.10.1.9
selector:
k8s-app: nginx-ingress-controller
</code></pre>
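<p>For the TLS termination itself, assuming the service above fronts an nginx ingress controller, one common approach is to put your certificate into a TLS secret and reference it from an Ingress resource. The host, secret and backend names below are placeholders:</p>
<pre><code>kubectl create secret tls myapp-tls --cert=tls.crt --key=tls.key
</code></pre>
<p>and then reference it from the Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80
</code></pre>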
|
<p>In my 1.9 cluster I created this deployment role for the dev user. Deployment works as expected. Now I want to give exec and logs access to the developer. What role do I need to add for exec access to the pod?</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dev
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>Error message:</p>
<pre><code>kubectl exec nginx -it -- sh
Error from server (Forbidden): pods "nginx" is forbidden: User "dev" cannot create pods/exec in the namespace "dev"
</code></pre>
<p>Thanks
SR</p>
| <p>The <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources" rel="nofollow noreferrer">RBAC docs say that</a></p>
<blockquote>
<p>Most resources are represented by a string representation of their name, such as “pods”, just as it appears in the URL for the relevant API endpoint. However, some Kubernetes APIs involve a “subresource”, such as the logs for a pod. [...] To represent this in an RBAC role, use a slash to delimit the resource and subresource.</p>
</blockquote>
<p>To allow a subject to read both pods and pod logs, and be able to exec into the pod, you would write:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
</code></pre>
<p>Some client libraries may do an http GET to negotiate a websocket first, which would require the "get" verb. kubectl sends an http POST instead, that's why it requires the "create" verb in that case.</p>
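<p>To actually grant this to the <code>dev</code> user from the question, the Role still needs a RoleBinding in the same namespace as the Role (adjust the namespace if you create the Role somewhere other than <code>dev</code>); a minimal sketch:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-and-pod-logs-reader-binding
  namespace: dev
subjects:
- kind: User
  name: dev
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-and-pod-logs-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>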
|
<p>Running a cluster on Google Container engine. </p>
<p>I expect it to respect the rewrite rule. Running the debug <a href="http://gcr.io/google_containers/echoserver:1.4" rel="noreferrer">echo server</a> shows it's not respecting the http-rewrite rule as documented in the <a href="https://github.com/kubernetes/ingress/blob/827d8520ae070db695cf32859148ef08c9c37016/examples/rewrite/nginx/README.md#L17" rel="noreferrer">kubernetes ingress docs</a>.</p>
<p>It works locally on minikube just fine. The <code>real path</code> value still has <code>/debug</code> attached, although rewrite is on to strip it after the match. I expect <code>/foo/bar</code> instead of <code>/debug/foo/bar</code>.</p>
<p>Attached </p>
<p>URL + response </p>
<blockquote>
<p><a href="http://homes.stanzheng.com/debug/foo/bar" rel="noreferrer">http://homes.stanzheng.com/debug/foo/bar</a>
<div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>CLIENT VALUES:
client_address=10.12.2.1
command=GET
real path=/debug/foo/bar
query=nil
request_version=1.1
request_uri=http://homes.stanzheng.com:8080/debug/foo/bar
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
accept-encoding=gzip, deflate
accept-language=en-US,en;q=0.8
connection=Keep-Alive
cookie=__cfduid=dfd6a6d8c2a6b361a3d72e3fc493295441494876880; _ga=GA1.2.5098880.1494876881
host=homes.stanzheng.com
upgrade-insecure-requests=1
user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36
via=1.1 google
x-cloud-trace-context=1586885dcac2d537189444861a8a462c/1232314719683944914
x-forwarded-for=204.154.44.39, 35.190.78.5
x-forwarded-proto=http
BODY:
-no body in request-</code></pre>
</div>
</div>
</p>
</blockquote>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: rewrite
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: minikube.homes
http:
paths:
- path: /debug/*
backend:
serviceName: echoserver
servicePort: 8080</code></pre>
</div>
</div>
</p>
| <p>You could use an Nginx ingress controller as explained in this blog:</p>
<p><a href="http://rahmonov.me/posts/nginx-ingress-controller/" rel="nofollow noreferrer">http://rahmonov.me/posts/nginx-ingress-controller/</a></p>
<p>Once you follow those steps, you need to add the following annotation to your ingress yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: rewrite
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: minikube.homes
http:
paths:
- path: /debug/*
backend:
serviceName: echoserver
servicePort: 8080
</code></pre>
|
<p>I'm trying to host a Jenkins image on GKE to run a build. Mostly, I've followed <a href="https://cloud.google.com/solutions/jenkins-on-kubernetes-engine-tutorial" rel="noreferrer">Google's tutorial for setting up Jenkins in Kubernetes</a>. I've got a fairly basic set-up with one master node which runs the builds.</p>
<p>I also want to be able to use Docker inside of the Jenkins environment, and so I've gone into Jenkins' Global Tools Configuration and added a Docker instance. I've additionally mapped the docker.sock in my deployment file to bypass a "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" problem.</p>
<p>My current deployment looks like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jenkins
namespace: jenkins
spec:
replicas: 1
template:
metadata:
labels:
app: master
spec:
containers:
- name: master
image: jenkins/jenkins:2.95
ports:
- containerPort: 8080
- containerPort: 50000
readinessProbe:
httpGet:
path: /login
port: 8080
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 2
failureThreshold: 5
env:
- name: JENKINS_OPTS
valueFrom:
secretKeyRef:
name: jenkins
key: options
- name: JAVA_OPTS
value: '-Xmx1400m'
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
- mountPath: /var/run/docker.sock
name: docker-socket
securityContext:
privileged: true
resources:
limits:
cpu: 500m
memory: 1500Mi
requests:
cpu: 500m
memory: 1500Mi
volumes:
- name: jenkins-home
gcePersistentDisk:
pdName: jenkins-home
fsType: ext4
partition: 1
- name: docker-socket
hostPath:
path: /var/run/docker.sock
</code></pre>
<p>Unfortunately, any builds fail with the following error:</p>
<pre><code>Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:
....
dial unix /var/run/docker.sock: connect: permission denied
</code></pre>
<p>Most Google searches involving this error don't seem to be Kubernetes-related. </p>
<p>What am I missing?</p>
<p><strong>Update:</strong> To some extent, this works better if I use this configuration:</p>
<pre><code>spec:
replicas: 1
template:
metadata:
labels:
app: master
spec:
securityContext:
runAsUser: 0
containers:
</code></pre>
<p>which basically runs Jenkins as root. Unfortunately, that has some consequences for the way that pipelines manage auth credentials. When I try to use "withRepository(repoName, credentialId)", the pipeline adds an entry to <code>/var/jenkins_home/.dockercfg</code>, but a later docker push step doesn't seem to be able to find those credentials (I think that later step ends up looking in <code>/root/.dockercfg</code> or <code>/root/.docker/config.json</code>). </p>
| <p>You might want to try running it as the <code>1000</code> user:</p>
<pre><code>...
spec:
...
securityContext:
# Specify fsGroup for pod, so that the persistent volume is writable for the non-privileged uid/gid 1000
runAsUser: 1000
fsGroup: 1000
...
</code></pre>
<p>You may also find <a href="https://github.com/kubernetes/charts/blob/master/stable/jenkins/templates/jenkins-master-deployment.yaml" rel="nofollow noreferrer">this Helm chart</a> useful.</p>
|
<p><em>As far as I'm concerned, this is more of a development question than a server question, but it lies very much on the boundary of the two, so feel free to migrate to serverfault.com if that's the consensus).</em></p>
<p>I have a service, let's call it <code>web</code>, and it is declared in a <code>docker-compose.yml</code> file as follows:</p>
<pre><code> web:
image: webimage
command: run start
build:
context: ./web
dockerfile: Dockerfile
</code></pre>
<p>In front of this, I have a reverse-proxy server running Apache Traffic Server. There is a simple mapping rule in the <a href="https://docs.trafficserver.apache.org/en/latest/admin-guide/files/remap.config.en.html" rel="nofollow noreferrer">url remapping config file</a></p>
<pre><code>map / http://web/
</code></pre>
<p>So all incoming requests are mapped onto the <code>web</code> service described above. This works just peachily in <code>docker-compose</code>, however when I move the service to kubernetes with the following service description:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: web
name: web
spec:
clusterIP: None
ports:
- name: headless
port: 55555
targetPort: 0
selector:
io.kompose.service: web
status:
loadBalancer: {}
</code></pre>
<p>...traffic server complains because it cannot resolve the DNS name <code>web</code>.</p>
<p>I can resolve this by slightly changing the DNS behaviour of traffic server with the following config change:</p>
<pre><code>CONFIG proxy.config.dns.search_default_domains INT 1
</code></pre>
<p>(see <a href="https://docs.trafficserver.apache.org/en/7.1.x/admin-guide/files/records.config.en.html#dns" rel="nofollow noreferrer">https://docs.trafficserver.apache.org/en/7.1.x/admin-guide/files/records.config.en.html#dns</a>)</p>
<p>This config change is described as follows:</p>
<blockquote>
<p>Traffic Server can attempt to resolve unqualified hostnames by expanding to the local domain. For example if a client makes a request to an unqualified host (e.g. host_x) and the Traffic Server local domain is y.com, then Traffic Server will expand the hostname to host_x.y.com.</p>
</blockquote>
<p>Now everything works just great in kubernetes.</p>
<p>However, when running in <code>docker-compose</code>, traffic-server complains about not being able to resolve <code>web</code>.</p>
<p>So, I can get things working on both platforms, but this requires config changes to do so. I could fire a start-up script for traffic-server to establish if we're running in kube or docker and write the config line above depending on where we are running, but ideally, I'd like the DNS to be consistent across platforms. My understanding of DNS (and in particular, DNS default domains/ local domains) is patchy.</p>
<p>Any pointers? Ideally, a local domain for docker-compose seems like the way to go here.</p>
| <p>The default kubernetes local domain is</p>
<pre><code>default.svc.cluster.local
</code></pre>
<p>which means that the fully qualified name of the <code>web</code> service under kubernetes is <code>web.default.svc.cluster.local</code></p>
<p>So, in the <code>docker-compose</code> file, under the trafficserver config section, I can create an alias for <code>web</code> as <code>web.default.svc.cluster.local</code> with the following <code>docker-compose.yml</code> syntax:</p>
<pre><code>version: "3"
services:
web:
# ...
trafficserver:
# ...
links:
- "web:web.default.svc.cluster.local"
</code></pre>
<p>and update the mapping config in trafficserver to:</p>
<pre><code>map / http://web.default.svc.cluster.local/
</code></pre>
<p>and now the <code>web</code> service is reachable using the same domain name across <code>docker-compose</code> and <code>kubernetes</code>.</p>
|
<p>Is there a way to obtain log information about what is happening with Kubernetes after creating a pod? The <code>kubectl get pods</code> command only provides a basic status message. In the case of downloading a large image this can take time, and the <code>kubectl logs</code> command does not provide any real information at this point. This command seems to only provide information when the container is running.</p>
<p>Is there a way to obtain more log information about the current state of a Kubernetes pod? Calling <code>docker pull</code> directly provides download status information, but that isn't obvious in Kubernetes.</p>
| <p>To add to the previous answer: if you're using a modern worker with systemd you will probably not have a <code>/var/log/docker.log</code> file at all.</p>
<p>You can see if downloads are active (on ubuntu/conjure-up) by:</p>
<ul>
<li>running bandwidth monitoring tools like <code>bmon</code> on the worker (or its hypervisor)</li>
<li>check download file progress on the worker: <code>du -s /var/lib/docker/tmp</code></li>
<li>check systemd logs: <code>journalctl --unit docker</code></li>
<li>Once download is complete, files will be removed from tmp dir</li>
</ul>
<p>If you see messages like: <code>Handler for GET /v1.26/images/docker.io/XXX/XXX:latest/json returned error: No such image: docker.io/XXX/XXX:latest</code> - then I think this means that the image isn't available and will be downloaded, not that it doesn't exist remotely ;-)</p>
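<p>On the Kubernetes side, <code>kubectl describe pod</code> is also worth checking: the Events section at the bottom includes "Pulling image ..." and "Successfully pulled image ..." entries, so you can at least tell whether a pull is still in progress (though not how many bytes are left):</p>
<pre><code># Check the Events section at the end of the output
kubectl describe pod <pod-name>
</code></pre>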
|
<p>I'm a bit confused by some of the Kubernetes documentation on virtual IPs: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#the-gory-details-of-virtual-ips" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#the-gory-details-of-virtual-ips</a>.</p>
<blockquote>
<p>Userspace:
As an example, consider the image processing application described above. When the backend Service is created, the Kubernetes master assigns a virtual IP address, for example 10.0.0.1.</p>
</blockquote>
<p>The Kubernetes master assigns that VIP address to what? Where is the VIP address assigned?</p>
<p>How does the virtual IPs of a service integrate with an external ip address?</p>
<blockquote>
<p>In order to allow users to choose a port number for their Services, we must ensure that no two Services can collide</p>
</blockquote>
<p>Does this mean that when running <code>kubectl get services</code> I could see services having the same port?</p>
<blockquote>
<p>When clients connect to the VIP, their traffic is automatically transported to an appropriate endpoint</p>
</blockquote>
<p>Who are the clients? Other services within the cluster or some joe smo who's just using your app and knows nothing about kubernetes.</p>
| <p>The virtual in VIP means that the IP is not attached to a network interface; technically (in the current default config with <code>kube-proxy</code>) this means it's an iptables entry, purely used to provide a stable communication endpoint. I've written about it in greater detail in the blog post <a href="https://blog.openshift.com/kubernetes-services-by-example/" rel="noreferrer">Kubernetes Services By Example</a>, if you want to see how it works in a concrete setup.</p>
<p>Note that every node in the cluster has all the pod and service-related IPtables entries and this can lead to <a href="https://schd.ws/hosted_files/cloudnativeeu2017/ce/Scale%20Kubernetes%20to%20Support%2050000%20Services.pdf" rel="noreferrer">scalability and performance issues</a>.</p>
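<p>If you want to look at those entries yourself, a hedged example on a node running kube-proxy in its default iptables mode (the service name is a placeholder):</p>
<pre><code># The KUBE-SERVICES chain in the nat table has one rule per Service VIP;
# kube-proxy annotates each rule with the namespace/name of the Service.
sudo iptables -t nat -L KUBE-SERVICES -n | grep my-service
</code></pre>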
|
<p>I'm new to DevOps specifically using golang and microservice architecture.</p>
<p>I'm wondering if go applications should or should not be deployed in containers (Docker). In this case, I have a system built with micro-service architecture. For example here, I have 2 web services, A and B. Also, I have another web server that acts as a gateway in front of those two.</p>
<p>Both A and B need access to a database, MySQL for example. A handles table A, and B handles table B.</p>
<p>I know that in Go, source codes are compiled into a single executable binary file. And because I have 3 services here, I have 3 binaries. All three run as web servers exposing JSON REST API.</p>
<p>My questions are these:</p>
<ul>
<li><p><strong>Can I deploy these servers together in one host running on different ports?</strong>
If my host gets an IP x.x.x.x for example, my gateway can run in x.x.x.x:80, A in port 81, and B in port 82 for example. A and B will talk to a MySQL server somewhere outside or maybe inside the same host. Is this a good practice? Can Continuous Deployment works with this practice?</p>
</li>
<li><p><strong>Why should I deploy and run those binaries inside containers like Docker?</strong>
I know that since its release few years ago, Docker had found its way to be integrated inside a development workflow easily. But of course using Docker is not as simple as just compiling the code into a binary and then moving it to a deployment server. Using Docker we have to put the executable inside the container and then move the container as well to the deployment server.</p>
</li>
<li><p><strong>What about scalability and high availability without using Docker?</strong>
Can I just replicate my services and run them all at once in different hosts using a load balancer? This way I should deploy A, B, and gateway in one host, and another A, B, and gateway in another host, then set up the load balancer in front of them. A, B, and the gateway runs in ports 80, 81, and 82 respectively. This way I could have thousands of nodes in a form of VMs or LXD containers maybe, spread across hundreds of bare metals, deployed with a simple bash script and ssh, or Ansible if things get complex. Yes or no?</p>
</li>
<li><p><strong>And what about the scalability and availability of using Docker?</strong>
Should I just put all my services inside containers and manage them with something like Kubernetes to achieve high availability? Doing this does add overhead, right? Because the team will have to learn new technology like Kubernetes if they haven't known it yet.</p>
</li>
<li><p>Can you give me an example of some best practices for deploying golang services?</p>
</li>
</ul>
| <blockquote>
<p>I'm wondering if go applications should or should not be deployed in containers (Docker)<br>
Why should I deploy and run those binaries inside containers like Docker?</p>
</blockquote>
<p>Of course, provided you separate the build from the actual final image (in order to not include in said final image build dependencies)<br>
See "<a href="https://made2591.github.io/posts/goa-docker-multistage" rel="noreferrer">Golang, Docker and multistage build</a>" from <strong><a href="https://made2591.github.io/about/" rel="noreferrer">Matteo Madeddu</a></strong>.</p>
<blockquote>
<p>Can I deploy these servers together in one host running on different ports?</p>
</blockquote>
<p>Actually, they could all run in their own container on their own port, even if that port is the same.<br>
Inter-container communication will work, using the <strong><a href="https://docs.docker.com/engine/reference/builder/#expose" rel="noreferrer">EXPOSEd port</a></strong>.
However, if they are accessed from outside, then their <a href="https://docs.docker.com/engine/reference/run/#expose-incoming-ports" rel="noreferrer"><em>published</em> ports</a> do need to be different.</p>
<blockquote>
<p>What about scalability and high availability without using Docker?<br>
And what about the scalability and availability of using Docker?</p>
</blockquote>
<p>As soon as you are talking about dynamic status, some kind of orchestration will be involved: see <a href="https://docs.docker.com/engine/swarm/" rel="noreferrer">Docker Swarm</a> or <a href="https://kubernetes.io/" rel="noreferrer">Kubernetes</a> for efficient cluster management.<br>
<a href="https://blog.docker.com/2017/12/kubernetes-in-docker-ee/" rel="noreferrer">Both are available with the latest docker</a>.</p>
<p>Examples:</p>
<ul>
<li>"<a href="https://medium.com/wattpad-engineering/building-and-testing-go-apps-monorepo-speed-9e9ca4978e19" rel="noreferrer"><strong>Building and testing Go apps + monorepo + speed</strong></a>": Or, how we test and build go code in a monorepo, with TravisCI, and deploy to Docker, quickly and easily. From <strong><a href="https://twitter.com/jharlap" rel="noreferrer">Jonathan Harlap</a></strong>, Principal Engineer @ Wattpad</li>
<li>"<a href="https://blog.alexellis.io/introducing-functions-as-a-service/" rel="noreferrer"><strong>Introducing Functions as a Service (OpenFaaS)</strong></a>", from <strong><a href="https://twitter.com/alexellisuk" rel="noreferrer">Alex Ellis</a></strong></li>
</ul>
|
<p>What is the meaning of "<strong>status:</strong>" in the Kubernetes manifest file?</p>
<pre><code>spec:
type: LoadBalancer
ports:
- name: "7000"
port: 7000
targetPort: 80
selector:
io.kompose.service: ams-app
status:
loadBalancer: {}
</code></pre>
| <p>See "<a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#object-spec-and-status" rel="nofollow noreferrer">Object Spec and Status</a>"</p>
<blockquote>
<p>Every Kubernetes object includes two nested object fields that govern the object’s configuration: the object spec and the object status. </p>
<ul>
<li>The spec, which you must provide, describes your desired state for the object–the characteristics that you want the object to have. </li>
<li><strong>The status describes the actual state of the object, and is supplied and updated by the Kubernetes system</strong>. </li>
</ul>
<p>At any given time, the Kubernetes Control Plane actively manages an object’s actual state to match the desired state you supplied.</p>
</blockquote>
<p>So:</p>
<blockquote>
<p>For example, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes Deployment</a> is an object that can represent an application running on your cluster.<br>
When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running.<br>
The Kubernetes system reads the Deployment spec and starts three instances of your desired application–<strong>updating the status to match your spec</strong>.<br>
If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction–in this case, starting a replacement instance.</p>
</blockquote>
<p>In your case, what you mention matches the publication of a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>.</p>
<blockquote>
<p>The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer will be published in the Service’s <strong><code>status.loadBalancer</code></strong> field.<br>
For example:</p>
</blockquote>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
clusterIP: 10.0.171.239
loadBalancerIP: 78.11.24.19
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 146.148.47.155
</code></pre>
|
<p>I tested kubernetes deployment with EBS volume mounting on AWS cluster provisioned by kops. This is deployment yml file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: helloworld-deployment-volume
spec:
replicas: 1
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: k8s-demo
image: wardviaene/k8s-demo
ports:
- name: nodejs-port
containerPort: 3000
volumeMounts:
- mountPath: /myvol
name: myvolume
volumes:
- name: myvolume
awsElasticBlockStore:
volumeID: <volume_id>
</code></pre>
<p>After <code>kubectl create -f <path_to_this_yml></code>, I got the following message in pod description:</p>
<pre><code>Attach failed for volume "myvolume" : Error attaching EBS volume "XXX" to instance "YYY": "UnauthorizedOperation: You are not authorized to perform this operation. status code: 403
</code></pre>
<p>Looks like this is just a permission issue. Ok, I checked the policy for the node role <code>IAM</code> -> <code>Roles</code> -> <code>nodes.<my_domain></code> and found that there were no actions which allow manipulating volumes; there was only the <code>ec2:DescribeInstances</code> action by default. So I added the <code>AttachVolume</code> and <code>DetachVolume</code> actions:</p>
<pre><code> {
"Sid": "kopsK8sEC2NodePerms",
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:AttachVolume",
"ec2:DetachVolume"
],
"Resource": [
"*"
]
},
</code></pre>
<p>And this didn't help. I'm still getting that error:</p>
<pre><code>Attach failed for volume "myvolume" : Error attaching EBS volume "XXX" to instance "YYY": "UnauthorizedOperation: You are not authorized to perform this operation.
</code></pre>
<p>Am I missing something?</p>
| <p>I found a solution. It's described <a href="https://stackoverflow.com/questions/47278433/need-help-on-volume-mount-issue-with-kubernetes">here</a>.</p>
<p>In kops 1.8.0-beta.1, the master node requires you to tag the AWS volume with:</p>
<p><code>KubernetesCluster</code>: <code><clustername-here></code></p>
<p>So it's necessary to create EBS volume with that tag by using <code>awscli</code>:</p>
<pre><code>aws ec2 create-volume --size 10 --region eu-central-1 --availability-zone eu-central-1a --volume-type gp2 --tag-specifications 'ResourceType=volume,Tags=[{Key=KubernetesCluster,Value=<clustername-here>}]'
</code></pre>
<p>or you can tag it by manually in <code>EC2</code> -> <code>Volumes</code> -> <code>Your volume</code> -> <code>Tags</code></p>
<p>That's it.</p>
<p><strong><em>EDIT:</em></strong></p>
<p>The right cluster name can be found within EC2 instances tags which are part of cluster. Key is the same: <code>KubernetesCluster</code>.</p>
|
<p>I have a Kafka cluster in <a href="https://kubernetes.io/" rel="nofollow noreferrer">kubernetes</a> with a lot of test data. I want to have some/all of that test data imported into my local Kafka cluster. This way, it would be easier for me to perform tests in the local environment with actual data from kubernetes.</p>
<p>So, is there a way to dump, for example, <em>5000 messages from a Kafka topic</em> into a file and restore them into a local Kafka topic? </p>
| <ol>
<li><p><a href="https://docs.confluent.io/current/connect/connect-replicator/docs/index.html" rel="nofollow noreferrer">Replicator</a> is a commercial tool that enables you to replicate topics from one cluster to another. Similar to MirrorMaker though, it's designed to replicate <em>entire</em> topics, not just part of them.</p></li>
<li><p>You can use <a href="https://github.com/edenhill/kafkacat" rel="nofollow noreferrer">kafkacat</a> with <code>stdin</code>/<code>stdout</code> if you just want a hacky option (see the sketch after this list), but you would have to make sure that things like partitioning, topic config, and everything else you'd want to match for accurate testing get handled properly.</p></li>
</ol>
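<p>A minimal sketch of the kafkacat option mentioned above (broker addresses and topic names are placeholders; message keys, headers and the original partition assignment are not preserved unless you handle them explicitly, e.g. with <code>-K</code>):</p>
<pre><code># Dump 5000 messages from the source topic into a file
kafkacat -b source-broker:9092 -t my-topic -C -o beginning -c 5000 -e > my-topic.dump

# Replay them into the local cluster, one message per line
kafkacat -b localhost:9092 -t my-topic -P -l my-topic.dump
</code></pre>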
|
<p>I created a Kubernetes cluster using AKS service.</p>
<p>When I execute <code>kubectl get nodes</code>, I expect to see the Master node. However, I don't. I only see the Agent (Role) nodes.</p>
<p>Is it possible to look at the Master node? The reason I want to do this is to check if RBAC is enabled in my cluster, and if not, enable it.</p>
| <blockquote>
<p>Is it possible to look at Master node? The reason I want to do this is
to check if RBAC is enabled in my cluster, and if not, enable it.</p>
</blockquote>
<p>It is <strong>not</strong> possible; the Kubernetes master node is managed by Azure. Because Azure handles these critical maintenance tasks for you, AKS does <strong>not</strong> provide direct access (such as SSH) to the master nodes.</p>
<p>If you need more control over the Azure resource manager templates, you can use the open source <a href="https://github.com/Azure/acs-engine" rel="noreferrer">acs-engine</a> project to build your own custom kubernetes cluster and deploy it via the az CLI.</p>
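<p>That said, you don't need master access just to check whether the RBAC API is available; from your workstation you can inspect the registered API groups (note this only shows that the API group is being served, which is a strong hint but not absolute proof that the RBAC authorizer is enabled):</p>
<pre><code>kubectl api-versions | grep rbac.authorization.k8s.io
</code></pre>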
|
<p>I'm using Kubernetes and I recently updated my admin certs used in the <code>kubeconfig</code>. However, after I did that, all the <code>helm</code> commands fail thus:</p>
<pre><code>Error: Get https://cluster.mysite.com/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: x509: certificate signed by unknown authority
</code></pre>
<p><code>kubectl</code> works as expected:</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-1-0-34.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
ip-10-1-1-51.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
ip-10-1-10-120.eu-central-1.compute.internal Ready <none> 42d v1.7.10+coreos.0
ip-10-1-10-135.eu-central-1.compute.internal Ready <none> 27d v1.7.10+coreos.0
ip-10-1-11-71.eu-central-1.compute.internal Ready <none> 42d v1.7.10+coreos.0
ip-10-1-12-199.eu-central-1.compute.internal Ready <none> 8d v1.7.10+coreos.0
ip-10-1-2-110.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
</code></pre>
<p>As far as I've been able to read, <code>helm</code> is supposed to use the same certificates as <code>kubectl</code>, which makes me curious as how how <code>kubectl</code> works, but <code>helm</code> doesn't?</p>
<p>This is a production cluster with internal releases handled through helm charts, so it being solved is imperative.</p>
<p>Any hints would be greatly appreciated.</p>
| <p>As a workaround you can try to disable certificate verification. Helm uses the kube config file (by default <code>~/.kube/config</code>). You can add <code>insecure-skip-tls-verify: true</code> for the <code>cluster</code> section:</p>
<pre><code>clusters:
- cluster:
server: https://cluster.mysite.com
insecure-skip-tls-verify: true
name: default
</code></pre>
<p>Did you already try to reinstall helm/tiller?</p>
<pre><code>kubectl delete deployment tiller-deploy --namespace kube-system
helm init
</code></pre>
<p>Also check if you have configured an invalid certificate in the cluster configuration.</p>
|
<p>In Kubernetes, is it possible to add <code>hostPath</code> storage in a StatefulSet? If so, can someone help me with an example?</p>
| <p>Yes but it is definitely for testing purposes.</p>
<p>First you need to create as many Persistent Volumes as you need:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: hp-pv-001
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data01"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: hp-pv-002
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data02"
...
</code></pre>
<p>Afterwards, add this VolumeClaimsTemplate to your Statefulset</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: my-hostpath-volume
spec:
storageClassName: manual
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 5Gi
selector:
matchLabels:
type: local
</code></pre>
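<p>For context, a hedged sketch of where that <code>volumeClaimTemplates</code> block sits in a full StatefulSet (the image, labels and service name are placeholders):</p>
<pre><code>apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: my-statefulset
spec:
  serviceName: my-service
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx
        volumeMounts:
        - name: my-hostpath-volume
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: my-hostpath-volume
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          type: local
</code></pre>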
<p>Another solution is using the <a href="https://github.com/kubernetes-incubator/external-storage/blob/master/docs/README.md" rel="noreferrer">hostpath dynamic provisioner</a>. You do not have to create the PVs in advance, but this remains a "proof-of-concept" solution as well, and you will have to build and deploy the provisioner in your cluster.</p>
|
<p>Can I use an environment variable in lifecycle.postStart.exec.command?
I have a script that has to be run in the postStart command.
The command contains a secret; can I use valueFrom to get the secret into an env variable, and use that env variable in the postStart command?</p>
| <p>Yes, it is possible.</p>
<p>Using the example from <a href="https://blog.openshift.com/kubernetes-pods-life/" rel="noreferrer">this post to create hooks</a>, let's read a secret and pass it as environment variable to the container, to later read it in the <code>postStart</code> hook.</p>
<pre><code>---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: loap
spec:
replicas: 1
template:
metadata:
labels:
app: loap
spec:
containers:
-
command:
- sh
- "-c"
- "echo $(date +%s): START >> /loap/timing; sleep 10; echo $(date +%s): END >> /loap/timing;"
image: busybox
env:
- name: SECRET_THING
valueFrom:
secretKeyRef:
name: supersecret
key: password
lifecycle:
postStart:
exec:
command:
- sh
- "-c"
- "echo ${SECRET_THING} $(date +%s): POST-START >> /loap/timing"
preStop:
exec:
command:
- sh
- "-c"
- "echo $(date +%s): PRE-HOOK >> /loap/timing"
livenessProbe:
exec:
command:
- sh
- "-c"
- "echo $(date +%s): LIVENESS >> /loap/timing"
name: main
readinessProbe:
exec:
command:
- sh
- "-c"
- "echo $(date +%s): READINESS >> /loap/timing"
volumeMounts:
-
mountPath: /loap
name: timing
initContainers:
-
command:
- sh
- "-c"
- "echo $(date +%s): INIT >> /loap/timing"
image: busybox
name: init
volumeMounts:
-
mountPath: /loap
name: timing
volumes:
-
hostPath:
path: /tmp/loap
name: timing
</code></pre>
<p>If you examine the contents of <code>/tmp/loap/timings</code>, you can see the secret being shown</p>
<pre><code>my-secret-password 1515415872: POST-START
1515415873: READINESS
1515415879: LIVENESS
1515415882: END
1515415908: START
my-secret-password 1515415908: POST-START
1515415909: LIVENESS
1515415913: READINESS
1515415918: END
</code></pre>
|
<p>My microservice has multiple containers, each of which needs access to a different port. How do I expose this service on multiple ports using the Hasura CLI and project configuration files?</p>
<p>Edit: Adding the microservice's <code>k8s.yaml</code> (as requested by @iamnat)
Let's say I have two containers, <code>containerA</code> and <code>containerB</code>, that I want to expose over HTTP on ports <code>6379</code> and <code>8000</code> respectively.</p>
<pre><code>apiVersion: v1
items:
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: www
hasuraService: custom
name: www
namespace: '{{ cluster.metadata.namespaces.user }}'
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: www
spec:
containers:
- name: containerA
image: imageA
ports:
- containerPort: 6379
- name: containerB
image: imageB
ports:
- containerPort: 8000
securityContext: {}
terminationGracePeriodSeconds: 0
status: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: www
hasuraService: custom
name: www
namespace: '{{ cluster.metadata.namespaces.user }}'
spec:
ports:
- port: 6379
name: containerA
protocol: HTTP
targetPort: 6379
- port: 8000
name: containerB
protocol: HTTP
targetPort: 8000
selector:
app: www
type: ClusterIP
status:
loadBalancer: {}
kind: List
metadata: {}
</code></pre>
| <p>TL;DR: Add an API gateway route for each HTTP endpoint you want to expose <a href="https://docs.hasura.io/0.15/manual/hasuractl/hasura_conf_generate-route.html" rel="nofollow noreferrer">[docs]</a>.</p>
<hr>
<p>Inside the kubernetes cluster, given your k8s spec, this is what your setup will look like:</p>
<pre><code>http://www.default:6379 -> containerA
http://www.default:8000 -> containerB
</code></pre>
<p>So you need to create a route for each of those HTTP paths in <code>conf/routes.yaml</code>.</p>
<pre><code>www-a:
/:
upstreamService:
name: www
namespace: {{ cluster.metadata.namespaces.user }}
upstreamServicePath: /
    upstreamServicePort: 6379
corsPolicy: allow_all
www-b:
/:
upstreamService:
name: www
namespace: {{ cluster.metadata.namespaces.user }}
upstreamServicePath: /
    upstreamServicePort: 8000
corsPolicy: allow_all
</code></pre>
<p>This means that, you'll get the following:</p>
<pre><code>https://www-a.domain.com -> containerA
https://www-b.domain.com -> containerB
</code></pre>
|
<h2>Issue</h2>
<p>The fluentd daemonset manifest in <a href="https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd" rel="noreferrer">Kubernetes Logging with Fluentd</a> will cause an authorization error if RBAC is enabled.</p>
<pre><code>$ kubectl logs fluentd-4nzv7 -n kube-system
2018-01-06 11:28:10 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-01-06 11:28:10 +0000 [info]: starting fluentd-0.12.33
2018-01-06 11:28:10 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.10.0'
2018-01-06 11:28:10 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.29.0'
2018-01-06 11:28:10 +0000 [info]: gem 'fluent-plugin-record-reformer' version '0.9.1'
2018-01-06 11:28:10 +0000 [info]: gem 'fluent-plugin-secure-forward' version '0.4.5'
2018-01-06 11:28:10 +0000 [info]: gem 'fluentd' version '0.12.33'
2018-01-06 11:28:10 +0000 [info]: adding match pattern="fluent.**" type="null"
2018-01-06 11:28:10 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2018-01-06 11:28:11 +0000 [info]: adding match pattern="**" type="elasticsearch"
2018-01-06 11:28:11 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error="Exception encountered fetching metadata from Kubernetes API endpoint: pods is forbidden: User \"system:serviceaccount:kube-system:default\" cannot list pods at the cluster scope ({\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"pods is forbidden: User \\\"system:serviceaccount:kube-system:default\\\" cannot list pods at the cluster scope\",\"reason\":\"Forbidden\",\"details\":{\"kind\":\"pods\"},\"code\":403}\n)"
2018-01-06 11:28:11 +0000 [info]: process finished code=256
2018-01-06 11:28:11 +0000 [warn]: process died within 1 second. exit.
</code></pre>
| <p>When you are defining your daemonset you can also define your RBAC.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: fluentd-service-account
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: fluentd-service-account
subjects:
- kind: ServiceAccount
name: fluentd-service-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd-service-account
namespace: kube-system
rules:
- apiGroups: ["*"]
resources:
- pods
- namespaces
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd-service-account
namespace: kube-system
</code></pre>
<p><a href="http://docs.heptio.com/content/tutorials/elastic.html" rel="noreferrer">Source.</a></p>
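<p>Note that creating the ServiceAccount alone is not enough: the error in the question shows the pod running as <code>system:serviceaccount:kube-system:default</code>, so the fluentd DaemonSet's pod template must reference the new account. The relevant fragment (not a complete manifest) looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  template:
    spec:
      serviceAccountName: fluentd-service-account
      # containers, volumes, etc. as in the fluentd manifest you deployed
</code></pre>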
|
<p>I have a Kafka cluster in <a href="https://kubernetes.io/" rel="nofollow noreferrer">kubernetes</a> with a lot of test data. I want to have some/all of that test data imported into my local Kafka cluster. This way, it would be easier for me to perform tests in the local environment with actual data from kubernetes.</p>
<p>So, is there a way to dump, for example, <em>5000 messages from a Kafka topic</em> into a file and restore them into a local Kafka topic? </p>
| <p>The way we do it (not on Kubernetes but it does not matter in this case) is:</p>
<ol>
<li>if we need to duplicate some part of data from our production cluster to a local/test cluster - we start a Flume agent that reads from prod Kafka cluster and pushes into test cluster. This only works with either live data (when you start your copy process now and let it run for whatever time is needed, capturing the live traffic), or if it is Ok to start getting data from the EARLIEST offset - because vanilla Flume per se does not allow you to specify specific range of offsets to consume from a topic (AFAIK)</li>
<li>if we do need data from a very specific range of offsets - we just run a very simple Java client (our own custom one, just a few lines of code) that seeks to the beginning offset and reads until the specified end offset of the source cluster/topic - and sends the events into the target Kafka cluster/topic</li>
</ol>
<p>We found these approaches simpler and more flexible than using more complex tools/frameworks like MirrorMaker.</p>
|
<p>I have a test cluster in GKE (it runs my non-essential dev services). I am using the following GKE features for the cluster:</p>
<ul>
<li>preemptible nodes (~4x f1-micro)</li>
<li>dedicated ingress node(s)</li>
<li>node auto-upgrade</li>
<li>node auto-repair</li>
<li>auto-scaling node-pools</li>
<li>regional cluster</li>
<li>stackdriver healthchecks</li>
</ul>
<p>I created my pre-emptible node-pool thusly (auto-scaling between 3 and 6 actual nodes across 3 zones):</p>
<pre><code>gcloud beta container node-pools create default-pool-f1-micro-preemptible \
--cluster=dev --zone us-west1 --machine-type=f1-micro --disk-size=10 \
--preemptible --node-labels=preemptible=true --tags=preemptible \
--enable-autoupgrade --enable-autorepair --enable-autoscaling \
--num-nodes=1 --min-nodes=0 --max-nodes=2
</code></pre>
<p>It all works great, most of the time. However, around 3 or 4 times per day, I receive healthcheck notifications regarding downtime on some services running on the pre-emptible nodes. (exactly what I would expect ONCE per 24h when the nodes get reclaimed/regenerated. But not 3+ times.)</p>
<p>By the time I receive the email notification, the cluster has already recovered, but when checking <code>kubectl get nodes</code> I can see that the "age" on some of the pre-emptible nodes is ~5min, matching the approx. time of the outage.</p>
<p>I am not sure where to find the logs for what is happening, or WHY the resets were triggered (poorly-set <code>resources</code> settings? unexpected pre-emptible scheduling? "auto-repair"?) I expect this is all in stackdriver somewhere, but I can't find WHERE. The Kubernetes/GKE logs are quite chatty, and everything is at INFO level (either hiding the error text, or the error logs are elsewhere).</p>
<p>I must say, I do enjoy the self-healing nature of the setup, but in this case I would prefer to be able to inspect the broken pods/nodes before they are reclaimed. I would also prefer to troubleshoot without tearing-down/rebuilding the cluster, especially to avoid additional costs.</p>
| <p>I was able to solve this issue through a brute force process, creating several test node-pools in GKE running the same workloads (I didn't bother connecting up ingress, DNS, etc), and varying the options supplied to <code>gcloud beta container node-pools create</code>.</p>
<p>Since I was paying for these experiments, I did not run them all simultaneously, although that would have produced a faster answer. I also did prefer the tests which keep the <code>--preemptible</code> option, since that affects the cost significantly.</p>
<p>My results determined that the issue was with the <code>--enable-autorepair</code> argument and removing it has reduced failed health-checks to an acceptable level (expected for preemptible nodes).</p>
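<p>If you'd rather switch auto-repair off on the existing pool than recreate it, something along these lines should do it (hedged; double-check the flags against your gcloud version):</p>
<pre><code>gcloud container node-pools update default-pool-f1-micro-preemptible \
    --cluster=dev --zone=us-west1 --no-enable-autorepair
</code></pre>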
|
<p>I have a strange error with kube-dns.</p>
<p>Environment:
Cluster with a single master and a few nodes deployed on AWS with kops.
Kubernetes version 1.8.4.</p>
<p>Problem is that I have flakiness in DNS name resolution (both cluster-internal or external names) in my pods. After troubleshooting I understood the problem arises only when pods are scheduled on a specific node, which is the one where one of the replicas of the kube-dns pod is running.</p>
<p>These are my kube-dns pods:</p>
<pre><code>$ kubectl -n kube-system get po -l k8s-app=kube-dns -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-dns-7f56f9f8c7-2ztbn 3/3 Running 0 2d 100.96.8.239 node01
kube-dns-7f56f9f8c7-h5w29 3/3 Running 0 17d 100.96.7.114 node02
</code></pre>
<p>If I run a test POD forcing it to run on <code>node02</code> everything seems fine. I can resolve any (valid) DNS name with no issues at all.</p>
<p>If I run the same test POD on <code>node01</code> name resolution is flaky: sometimes it fails (roughly 50% of the times) with the following error</p>
<pre><code>$ dig google.com
;; reply from unexpected source: 100.96.8.239#53, expected 100.64.0.10#53
</code></pre>
<p>The rest of the times it works flawlessly:</p>
<pre><code>$ dig google.com
; <<>> DiG 9.10.4-P3 <<>> google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24454
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com. IN A
;; ANSWER SECTION:
google.com. 60 IN A 209.85.202.100
google.com. 60 IN A 209.85.202.101
google.com. 60 IN A 209.85.202.102
google.com. 60 IN A 209.85.202.113
google.com. 60 IN A 209.85.202.138
google.com. 60 IN A 209.85.202.139
;; Query time: 2 msec
;; SERVER: 100.64.0.10#53(100.64.0.10)
;; WHEN: Mon Jan 08 10:46:42 UTC 2018
;; MSG SIZE rcvd: 135
</code></pre>
<p><code>/etc/resolv.conf</code> points correctly to the <code>kube-dns</code> service's IP address:</p>
<pre><code>$ head -n 1 /etc/resolv.conf
nameserver 100.64.0.10
$ kubectl -n kube-system get svc kube-dns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 100.64.0.10 <none> 53/UDP,53/TCP 33d
</code></pre>
<p>Apparently, on <code>node01</code> only, when the request is forwarded by the service to the <code>kube-dns</code> pod instance that runs on <code>node01</code> itself the error is triggered.</p>
<p>I tried restarting kube-proxy on <code>node01</code> but the problem remains.</p>
<p>I bet that rebooting/recreating <code>node01</code> would make the problem go away, but I need to ensure this problem doesn't happen again.</p>
<p>Does anybody have an idea what's going on?</p>
| <p>I found an <a href="https://github.com/kubernetes/kubernetes/issues/21613" rel="noreferrer">issue on github</a> that looks very similar to this one I am having, and the <a href="https://github.com/kubernetes/kubernetes/issues/21613#issuecomment-343190401" rel="noreferrer">solution</a> posted there seems to work.</p>
<p>Basically, I needed to load a kernel module with the following command:</p>
<pre><code>modprobe br_netfilter
</code></pre>
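<p>To make the module load survive a reboot on a systemd-based node, a common (hedged) approach is:</p>
<pre><code># Load br_netfilter automatically at boot
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
# Ensure bridged traffic is passed to iptables
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-bridge-nf.conf
sudo sysctl --system
</code></pre>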
<p>Of course, YMMV</p>
|
<p>I built an app on node.js using Docker and I'm not sure how to scale it on a Kubernetes cluster so that I take the most out of my cluster hardware.</p>
<p>From a performance perspective which of the following is better:</p>
<p>clusterize my node app and run as many containers as needed</p>
<p>or</p>
<p>just run as many containers as needed without clustering ?</p>
<p>When I say clustering I mean this <a href="https://nodejs.org/api/cluster.html" rel="noreferrer">https://nodejs.org/api/cluster.html</a></p>
<p>My app is a simple CRUD Api backed by mongoDB. We estimate that it will have 1000 concurrent users. Our cluster has 3 nodes.</p>
| <p>The <a href="https://nodejs.org/api/cluster.html" rel="noreferrer">NodeJS cluster</a> mechanism is useful to allow NodeJS to more effectively use more than a single core, so depending on your code it may benefit you, but it's highly dependent on your code, its dependencies, and how well they work (or not) with clustering.</p>
<p>As a general practice, if you can break your containers down into nicely parallelized efforts that can be run as pods within kubernetes, then I'd recommend the following as a process to see what works for you:</p>
<ol>
<li>set up a single pod with your code in it, and run a load test against it. Use the data that Kubernetes has from cAdvisor to characterize how much resources (cpu & memory) your pod likes to have.</li>
<li>set a resource limit for cpu and memory based on what you see above.</li>
<li>run a load test to validate what your single pod handles in terms of scale</li>
</ol>
<p>And from there, you have a baseline where you can use Kubernetes to scale this horizontally to validate the 1000 user concurrent baseline you want to achieve.
There's a good talk on this process from the 2017 Kubecon called <a href="https://www.youtube.com/watch?v=_l8yIqMpWT0" rel="noreferrer">Load Testing Kubernetes: How to optimize your cluster resource allocation in production</a></p>
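<p>As a concrete (hedged) illustration of the resource-limit step above, the container section of the pod spec might end up like this; the numbers are placeholders you'd derive from your own load test:</p>
<pre><code>containers:
- name: my-node-app
  image: my-node-app:latest
  resources:
    requests:
      cpu: "500m"
      memory: "256Mi"
    limits:
      cpu: "2"          # keep this above 1 core if you enable NodeJS clustering
      memory: "512Mi"
</code></pre>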
<p>Once you have a baseline, you can run a prototype out leveraging the clustering in your code, and then compare against the non-clustered version. If you do this, I'd double-check that any limits you set are > 1 core for CPU, or you'll be self-limiting outside of the NodeJS runtime to get access to multiple cores, which would defeat the purpose of using clustering.</p>
<p>Depending on what you're doing in your code, there may be significant re-work needed to enable clustering, as it wants to leverage its own worker concept, and it's not clear what frameworks you're using and if they'll fit reasonably into that structure.</p>
|
<p>I am running Docker for Windows and Minikube (Kubernetes) to serve an Angular application. The angular application is served using <code>ng serve</code> and is working properly on windows. I created the below Dockerfile:</p>
<pre><code># Use an official Nodejs runtime as a parent image
FROM node:6.10.3
#Create an unprivileged user 'app'
#Install dependencies
RUN useradd --user-group --create-home --shell /bin/false app &&\
npm install -g grunt-cli &&\
npm install -g @angular/cli
# Define environment variable
ENV ENVIRONMENT="dev" HOME=/home/app
#Copy the code to the image
COPY . $HOME/frontend/
#Chmod the files to allow unprivileged user 'app' to read/write
RUN chown -R app:app $HOME/*
#Use unprivileged user 'app' to run the app inside the container
USER app
#Set Current Working Directory of the runtime
WORKDIR $HOME/frontend
#Install NPM dependencies
RUN npm install
# Make port 8080 and 4200 available to the world outside this container
EXPOSE 4200
# Start the webpack server when the container launches
CMD ng serve
</code></pre>
<p>I have my environment variables set in windows for minikube along with my Docker variables (minikube docker-env) </p>
<pre><code>DOCKER_API_VERSION
DOCKER_CERT_PATH
DOCKER_HOST
DOCKER_MACHINE_NAME
DOCKER_TLS_VERIFY
</code></pre>
<p>I built the Docker image and tagged it as v1 :</p>
<pre><code>docker build -t frontend:v1 .
</code></pre>
<p>I deployed the POD from the image and exposed the deployment:</p>
<pre><code>kubectl run frontend --image=frontend:v1 --port=4200
kubectl expose deployment frontend --type=LoadBalancer
</code></pre>
<p>The <code>kubectl get pods</code> is showing my pod as running and <code>kubectl get deployments</code> is showing that my deployment is available</p>
<p>I executed a curl using <code>kubectl exec curl localhost:4200</code>, which executes on the container, and it is properly returning my page but when I try to run my service using <code>minikube service frontend</code> it opens the browser but can't reach my application.</p>
<p>It is worth noting that I did the same exact steps with a normal Nodejs application running using <code>node server.js</code> and it is served correctly using the same exact steps.</p>
<p>Any idea why my angular application is not served over Kubernetes when started using ng serve or how to go about figuring out what is the issue? I can see the mappings are correct from the Kubernetes Dashboard.</p>
| <p>This is probably more related to <em>angular-cli</em>. <code>ng serve</code> only binds to <code>localhost</code> by default. Therefore your docker container is not listening on its public IP address.</p>
<p>Try to use:</p>
<pre><code>ng serve --host 0.0.0.0
</code></pre>
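<p>In the Dockerfile from the question, that means changing the last line to:</p>
<pre><code>CMD ng serve --host 0.0.0.0
</code></pre>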
|
<p>I have an issue with the Kubernetes dashboard installation; I would appreciate your comments and solutions.</p>
<p><strong>ERROR:</strong>
unable to decode "<a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-head.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-head.yaml</a>": no kind "Role" is registered for version "rbac.authorization.k8s.io/v1" unable to decode "<a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-head.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-head.yaml</a>": no kind "RoleBinding" is registered for version "rbac.authorization.k8s.io/v1" unable to decode "<a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-head.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-head.yaml</a>": no kind "Deployment" is registered for version "apps/v1beta2"</p>
<p><strong>kubectl version</strong>
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}</p>
<p><strong>kubectl api-versions</strong>
apps/v1beta1 authentication.k8s.io/v1beta1 authorization.k8s.io/v1beta1 autoscaling/v1 batch/v1 certificates.k8s.io/v1alpha1 extensions/v1beta1 policy/v1beta1 rbac.authorization.k8s.io/v1alpha1 storage.k8s.io/v1beta1 v1</p>
<p><strong>Pods Status</strong>
kube-system kubernetes-dashboard-3725693093-zm11m 0/1 CrashLoopBackOff</p>
| <p>If you don't have an <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">RBAC enabled cluster</a>, you won't be able to use RBAC objects, such as Role. That's why when trying to create a <code>Role</code> object, it fails saying it doesn't know anything about <code>Role</code> objects.
<a href="https://kubernetes.io/docs/admin/authorization/#authorization-modules" rel="nofollow noreferrer">From the docs</a></p>
<blockquote>
<p>When specified “RBAC” (Role-Based Access Control) uses the “rbac.authorization.k8s.io” API group to drive authorization decisions, allowing admins to dynamically configure permission policies through the Kubernetes API.</p>
</blockquote>
<p>You also need a more recent kubectl version, as mentioned <a href="https://github.com/kubernetes/kubernetes/issues/48629#issuecomment-334632061" rel="nofollow noreferrer">in this comment</a>. RBAC requires at least kubectl 1.6 and you have 1.5.</p>
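<p>A quick way to confirm which RBAC versions your cluster actually serves is to grep the API versions; the dashboard manifest needs <code>rbac.authorization.k8s.io/v1</code> (and <code>apps/v1beta2</code>), which only newer clusters (1.8+) expose, while your 1.5 cluster only serves the alpha version:</p>
<pre><code>kubectl api-versions | grep rbac
# rbac.authorization.k8s.io/v1alpha1   <-- what your cluster serves
# rbac.authorization.k8s.io/v1         <-- what the dashboard manifest expects
</code></pre>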
|
<p>Let's say I'm using an GCE <code>ingress</code> to handle traffic from outside the cluster and terminate TLS (<code>https://example.com/api/items</code>), from here the request gets routed to one of two <code>services</code> that are only available inside the cluster. So far so good. </p>
<p>What if I have to call service B from service A, should I go all the way and use the cluster's external IP/domain and use HTTPS (<code>https://example.com/api/user/1</code>) to call the service or could I use the internal IP of the service and use HTTP (<code>http://serviceb/api/user/1</code>)? Do I have to encrypt the data or is it "safe" as long as it isn't leaving the private k8s network?</p>
<p>What if I want to have "internal" endpoints that should only be accessible from within the cluster - when I'm always using the external https-url those endpoints would be reachable for everyone. Calling the service directly, I could just do a <code>http://serviceb/internal/info/abc</code>.</p>
| <blockquote>
<p>What if I have to call service B from service A, should I go all the way and use the cluster's external IP/domain and use HTTPS (<a href="https://example.com/api/user/1" rel="noreferrer">https://example.com/api/user/1</a>) to call the service or could I use the internal IP of the service and use HTTP (<a href="http://serviceb/api/user/1" rel="noreferrer">http://serviceb/api/user/1</a>)?</p>
</blockquote>
<p>If you need to use the features that your API Gateway is offering (authentication, caching, high availability, load balancing) then YES, otherwise DON'T. The external-facing API should contain only endpoints that are used by external clients (from outside the cluster).</p>
<blockquote>
<p>Do I have to encrypt the data or is it "safe" as long as it isn't leaving the private k8s network?</p>
</blockquote>
<p>"safe" is a very relative word and I believe that there are no 100% safe networks. You should put in the balance <strong>the probability</strong> of "somebody" or "something" sniffing data from the network and <strong>the impact</strong> that it has on your business if that happens.</p>
<p>If this helps you: for any project that I've worked for (or I heard from somebody I know), the private network between containers/services was more than sufficient.</p>
<blockquote>
<p>What if I want to have "internal" endpoints that should only be accessible from within the cluster - when I'm always using the external https-url those endpoints would be reachable for everyone. </p>
</blockquote>
<p>Exactly what I was saying on top of the answer. Keeping those endpoints inside the cluster makes them inaccessible by design from outside.</p>
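<p>A minimal sketch of such an internal-only service — just a plain <code>ClusterIP</code> Service (the default type) that is never referenced by any Ingress; names and ports below are assumptions:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: serviceb
spec:
  type: ClusterIP        # only reachable from inside the cluster
  selector:
    app: serviceb
  ports:
  - port: 80
    targetPort: 8080
</code></pre>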
<p>One last thing, managing a lot of <code>SSL</code> certificates for a lot of internal services is a pain that one should avoid if not necessary.</p>
|
<p>I have created a cluster on <code>aws</code> using <code>kops</code>.</p>
<p>However I am unable to find the file used as/by the certificate authority for spawning off client certs.</p>
<p>Does <code>kops</code> create such a thing by default?</p>
<p>If so, what is the recommended process for creating client certs?</p>
<p>The <a href="https://github.com/kubernetes/kops/blob/master/docs/security.md" rel="noreferrer">kops documentation</a> is not very clear about this.</p>
| <p>I've done it like <a href="https://gist.github.com/amitkgupta/d5ff7dfc691c0e55162f9196b61964d2#creating-users-in-the-developer-sandbox" rel="noreferrer">this</a> in the past:</p>
<ol>
<li>Download the <code>kops</code>-generated CA certificate and signing key from S3:
<ul>
<li><code>s3://<BUCKET_NAME>/<CLUSTER_NAME>/pki/private/ca/*.key</code></li>
<li><code>s3://<BUCKET_NAME>/<CLUSTER_NAME>/pki/issued/ca/*.crt</code></li>
</ul></li>
<li>Generate a client key: <code>openssl genrsa -out client-key.pem 2048</code></li>
<li><p>Generate a CSR:</p>
<pre><code>openssl req -new \
-key client-key.pem \
-out client-csr.pem \
-subj "/CN=<CLIENT_CN>/O=dev"`
</code></pre></li>
<li><p>Generate a client certificate:</p>
<pre><code>openssl x509 -req \
-in client-csr.pem \
-CA <PATH_TO_DOWNLOADED_CA_CERT> \
-CAkey <PATH_TO_DOWNLOADED_CA_KEY> \
-CAcreateserial \
-out client-crt.pem \
-days 10000
</code></pre></li>
<li>Base64-encode the client key, client certificate, and CA certificate, and populate those values in a <code>config.yml</code>, e.g. <a href="https://gist.github.com/amitkgupta/d5ff7dfc691c0e55162f9196b61964d2#dev-kube-configyml" rel="noreferrer">this</a></li>
<li>Distribute the populated <code>config.yml</code> to your developers.</li>
</ol>
<p>Steps 5 and 6 can obviously be done by whatever means you want; you don't need to make the <code>config.yml</code> for your developers yourself.</p>
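<p>As a possible shortcut for steps 5 and 6, the generated key/cert pair can also be wired into a kubeconfig with <code>kubectl</code> itself (the user, context, and cluster names below are placeholders):</p>
<pre><code>kubectl config set-credentials dev-user \
  --client-certificate=client-crt.pem \
  --client-key=client-key.pem \
  --embed-certs=true
kubectl config set-context dev-context --cluster=<CLUSTER_NAME> --user=dev-user
</code></pre>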
|
<p>I'm trying to create an AWS ELB through a kubernetes Service of type LoadBalancer and I can't figure out the combination of annotations needed to achieve the result I need.</p>
<p>This is the closest I can get:
<a href="https://i.stack.imgur.com/qQV0c.png" rel="nofollow noreferrer">AWS ELB generated when deploying the yaml below</a></p>
<p>Using this service definition:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my_app
namespace: my_namespace
labels:
dns: route53
annotations:
domainName: my_app.my.domain.com
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:iam::accountId:server-certificate/CertificateName"
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
type: LoadBalancer
selector:
app: my_app
version: my_version
ports:
- protocol: TCP
port: 80
targetPort: non_secure_port_name
name: http
- protocol: TCP
port: 443
targetPort: secure_port_name
name: https
</code></pre>
<p>The problem is that I'd need the instance protocol for the https port to be https as well, <a href="https://i.stack.imgur.com/av4Fg.png" rel="nofollow noreferrer">like this</a></p>
<p>By editing the ELB manually, everything works like a charm but I'd like to be able to achieve the configuration in the 2nd picture through the .yaml configuration of my Kubernetes Service so no manual tweaks are needed for my services to work as expected when deployed.</p>
<p>Is it possible? What annotation or particular configuration am I missing?</p>
| <p>Here's the incantation for terminating TLS at ELB using the AWS cert</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:foo
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: ingress-nginx
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: http
selector:
app: ingress-nginx
type: LoadBalancer
</code></pre>
<p>If you want to force SSL you do that at the ingress resource definition with <code>ingress.kubernetes.io/ssl-redirect</code> annotation</p>
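<p>As an illustration, the redirect annotation goes on the Ingress resource rather than the Service — a hypothetical minimal example (host and backend names are placeholders):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: my-app.my.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
</code></pre>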
|
<p>I want to deploy Identity Server 4 on Kubernetes 1.8, and use this as a Federation Gateway between my web application and Azure Active Directory (to begin with). </p>
<p>If I call Identity Server from my web application using the local k8s service name, my users are redirected to the wrong Identity Server URL (containing the local k8s service name) during sign-in, which clearly won't work. We are using an implicit flow.</p>
<p>I therefore setup a Azure Load balancer with dns name and configured Identity Server to be externally accessible with the domain name as the PublicOrigin URL.</p>
<p>However, my web application <strong>which runs in the same cluster</strong> cannot access Identity Server using the external URL of the Identity Server (discovery fails). </p>
<p>If I run Identity Server on another Kubernetes cluster then everything works fine.</p>
<p><strong>My question is:</strong></p>
<p>How do you properly deploy Identity Server in Kubernetes? Do I really need another Kubernetes cluster?</p>
<p>Note: I am using Kubernetes on Azure created with ACS engine (because we have mixed windows and linux containers). </p>
| <p>I'm using AKS (Azure managed kubernetes) and have a single client asp.net core 2 web app in the same cluster as my IS4 service with no issues. Both webapps are fronted by Nginx with kube-lego for LetsEncrpyt TLS support, and DNS is provided by Azure DNS.</p>
<p>I'm not using the PublicOrigin but instead the client app's Authority (in the openidconnect setup) uses the full (external Azure) DNS name of the IS4 service. You can use PublicOrigin if you want to use the cluster service naming from your clients</p>
|
<p>After upgrading my cluster in GKE the dashboard will no longer accept certificate authentication. </p>
<p>No problem, there's a token available in the .kube/config, says my colleague:</p>
<pre><code> user:
auth-provider:
config:
access-token: REDACTED
cmd-args: config config-helper --format=json
cmd-path: /home/user/workspace/google-cloud-sdk/bin/gcloud
expiry: 2018-01-09T08:59:18Z
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
name: gcp
</code></pre>
<p>Except in my case there isn't...</p>
<pre><code> user:
auth-provider:
config:
cmd-args: config config-helper --format=json
cmd-path: /home/user/Dev/google-cloud-sdk/bin/gcloud
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
name: gcp
</code></pre>
<p>I've tried re-authenticating with gcloud, comparing gcloud settings with colleagues, updating gcloud, re-installing gcloud, and checking permissions in Cloud Platform. Pretty much everything I can think of, but still no access token is generated.</p>
<p>Can anyone help please?!</p>
<pre><code> $ gcloud container clusters get-credentials cluster-3 --zone xxx --project xxx
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-3.
$ gcloud config list
[core]
account = xxx
disable_usage_reporting = False
project = xxx
Your active configuration is: [default]
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4"
</code></pre>
| <p>Ok, very annoying and silly answer - you have to make any request using kubectl for the token to be generated and saved into the kubeconfig file.</p>
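<p>For example (any request will do, the specific command is irrelevant):</p>
<pre><code>kubectl get nodes                  # any request against the cluster triggers token generation
grep access-token ~/.kube/config   # the token should now be present
</code></pre>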
|
<p>Can someone explain how CPU usage is calculated inside pods with multiple containers for use with a Horizontal Pod Autoscaler?
Is it the mean value, and how is this calculated?</p>
<p>For example:
If we have 2 containers:</p>
<ul>
<li>Container1 requests 0.5 cpu and uses 0 cpu</li>
<li>Container2 requests 1 cpu and uses 2 cpu</li>
</ul>
<p>If we calculate both separately and take the mean: (0% + 200%)/2 = 100% usage?</p>
<p>If we take the sums and take the mean: 2/1.5 = 133% usage?</p>
<p>Or is my logic way off?</p>
| <p>As of kubernetes 1.9 HPA calculates pod cpu utilization as total cpu usage of all containers in pod divided by total request. So in your example the calculated usage would be 133%. I don't think that's specified in docs anywhere, but the relevant code is here: <a href="https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49</a></p>
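<p>Applied to the numbers in the question, the (approximate) calculation would be:</p>
<pre><code>total usage   = 0 + 2     = 2 cores
total request = 0.5 + 1   = 1.5 cores
utilization   = 2 / 1.5   ~ 133%
</code></pre>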
<p>However, I would consider this an implementation detail. As such it can easily change in future versions.</p>
|
<p>I am experimenting with a <code>kubernetes</code> cluster on <code>aws</code>.</p>
<p>At the end of the day, I want to expose 2 urls:</p>
<ul>
<li><code>production.somesite.com</code></li>
<li><code>staging.somesite.com</code></li>
</ul>
<p>When exposing <strong>1</strong> url, things (at least in the cloud landscape) seem to be easy.</p>
<p>You make the service <code>LoadBalancer</code> type --> <code>aws</code> provisions an ELB --> you assign an <code>A</code> type alias record (e.g. <code>whatever.somesite.com</code>) to ELB's <code>dns</code> name and boom, there is your service publicly available via the hostname you like.</p>
<p>I assume one easy (and I guess not best-practice-wise) way of going about this is to expose <strong>2</strong> ELBs.</p>
<p>Is <code>Ingress</code> the (good) alternative?</p>
<p>If so, what is the <code>Route53</code> record I should create?</p>
<p>For what it's worth (and in case this may be a dealbreaker for <code>Ingress</code>):</p>
<ul>
<li><code>production.somesite.com</code> will be <strong>publicly</strong> available</li>
<li><code>staging.somesite.com</code> will have restricted access</li>
</ul>
| <p>Ingress is for sure one possible solution.</p>
<p>You need to deploy in your cluster an <code>Ingress controller</code> (for instance <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a>) than expose it with a <code>Service</code> of type <code>LoadBalancer</code> as you did previously.</p>
<p>In <code>route53</code>, you need to point any domain names you want to be served by your ingress controller to ELB's name, exactly as you did previously.</p>
<p>The last thing you need to do is create an <code>Ingress</code> resource for every domain you want your ingress controller to be aware of (more on this here: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a>).</p>
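<p>As a sketch, a single Ingress resource covering both hostnames could look roughly like this (backend service names and ports are assumptions; you could equally create one Ingress per domain):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: somesite-ingress
spec:
  rules:
  - host: production.somesite.com
    http:
      paths:
      - backend:
          serviceName: production-svc
          servicePort: 80
  - host: staging.somesite.com
    http:
      paths:
      - backend:
          serviceName: staging-svc
          servicePort: 80
</code></pre>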
<p>That being said, if you plan to only have 2 public URLs in your cluster I'd use 2 ELBs. Ingress controller is another component to be maintained/monitored in your cluster, so take this into account when evaluating the tradeoffs.</p>
|
<p>I'm trying to enable cors with ingress without success using the following annotation:</p>
<pre><code> kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
</code></pre>
| <p>The correct annotation is <code>ingress.kubernetes.io/enable-cors: "true"</code></p>
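<p>For example, on the Ingress resource (treat this as a sketch — host and service names are placeholders, and the annotation prefix can differ between ingress-nginx versions, so check the version you deployed):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/enable-cors: "true"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
</code></pre>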
|
<p>The official <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="noreferrer">kubernetes guidelines</a>, instructs on updating the deployment either by performing a command line <code>set</code>:</p>
<pre><code>kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
</code></pre>
<p>or by inline editing (that will launch the default editor I guess)</p>
<pre><code>kubectl edit deployment/nginx-deployment
</code></pre>
<p>However, both processes make consistency more difficult, given that one needs to go and update the <code>my-deployment.yml</code> file offline, i.e. the file the up-and-running deployment came from (and this deprives one of the advantage of keeping their manifests version-controlled).</p>
<p>Is there a way to</p>
<ul>
<li>launch a deployment via the file</li>
<li>perform (when needed) updates to the <strong>same</strong> file</li>
<li>update the deployment by pointing to the same, updated file?</li>
</ul>
| <p>You can do it simply by following these steps:</p>
<ol>
<li>Edit the deployment.yaml file </li>
<li><p>Run the command below:</p>
<pre><code>kubectl apply -f deployment.yaml
</code></pre></li>
</ol>
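<p>For example, after bumping the image tag in the version-controlled file, you can apply it and watch the rollout (deployment name taken from the question):</p>
<pre><code># edit the image tag in deployment.yaml, then:
kubectl apply -f deployment.yaml
kubectl rollout status deployment/nginx-deployment
</code></pre>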
<p>This is what I usually follow. You can also use <code>kubectl patch</code> or <code>kubectl edit</code>.</p>
|
<p>I'm using <a href="http://zero-to-jupyterhub.readthedocs.io/en/latest/=" rel="nofollow noreferrer">Jupyterhub + Kubernetes</a> to provide a hosted development environment for a large programming class (>100 students). It's running on top of GKE with autoscaling enabled. As additional students log in, more nodes are dynamically added to the pool to handle the increased demand.</p>
<p>I'm running into an issue where the node pool is exhausting the project quota of external IPs, effectively limiting the size of the pool to 8 concurrent nodes. The exact error is <a href="https://serverfault.com/questions/869418/google-cloud-in-use-addresses-quota-exceeded/869429">this one</a>. The nodes sit behind a reverse proxy for communicating with end users; as far as I can tell, the only use of these public IPs is to enable direct SSH into each individual node. I don't need or even want this functionality, since it presents an unnecessary attack surface.</p>
<p>How can I disable the automatic assignment of ephemeral IPs to these worker nodes? There must be a way since the docs for GKE suggest that autoscaling can grow up to something like 1000 nodes. I don't see how this could be possible if they are all subject to the same tiny external IP quota.</p>
| <p>The solution you're looking for is simply increasing your quotas in console for your GCP project (IAM & admin -> Quotas). It's just a few clicks and usually takes only a few minutes for them to get approved. </p>
<p>Right now it's not possible to create GKE nodes without public IPs. Even if it were, it wouldn't help you as you'd just hit other quotas (cpu/disk), so also raise those. </p>
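<p>If you want to check current usage against the limits from the command line first, something like this should work (project and region are placeholders):</p>
<pre><code>gcloud compute project-info describe --project <PROJECT_ID>   # project-wide quotas
gcloud compute regions describe <REGION>                      # per-region quotas, e.g. IN_USE_ADDRESSES
</code></pre>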
|
<p>I am new to all things Kubernetes so still have much to learn.</p>
<p>Have created a two node Kubernetes cluster and both nodes (master and worker) are ready to do work which is good:</p>
<pre><code>[monkey@k8s-dp1 nginx-test]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-dp1 Ready master 2h v1.9.1
k8s-dp2 Ready <none> 2h v1.9.1
</code></pre>
<p>Also, all Kubernetes Pods look okay:</p>
<pre><code>[monkey@k8s-dp1 nginx-test]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-dp1 1/1 Running 0 2h
kube-system kube-apiserver-k8s-dp1 1/1 Running 0 2h
kube-system kube-controller-manager-k8s-dp1 1/1 Running 0 2h
kube-system kube-dns-86cc76f8d-9jh2w 3/3 Running 0 2h
kube-system kube-proxy-65mtx 1/1 Running 1 2h
kube-system kube-proxy-wkkdm 1/1 Running 0 2h
kube-system kube-scheduler-k8s-dp1 1/1 Running 0 2h
kube-system weave-net-6sbbn 2/2 Running 0 2h
kube-system weave-net-hdv9b 2/2 Running 3 2h
</code></pre>
<p>However, if I try to create a new deployment in the cluster, the deployment gets created but its pod fails to go into the appropriate RUNNING state. e.g.</p>
<pre><code>[monkey@k8s-dp1 nginx-test]# kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
deployment "nginx-deployment" created
[monkey@k8s-dp1 nginx-test]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-deployment-569477d6d8-f42pz 0/1 ContainerCreating 0 5s
default nginx-deployment-569477d6d8-spjqk 0/1 ContainerCreating 0 5s
kube-system etcd-k8s-dp1 1/1 Running 0 3h
kube-system kube-apiserver-k8s-dp1 1/1 Running 0 3h
kube-system kube-controller-manager-k8s-dp1 1/1 Running 0 3h
kube-system kube-dns-86cc76f8d-9jh2w 3/3 Running 0 3h
kube-system kube-proxy-65mtx 1/1 Running 1 2h
kube-system kube-proxy-wkkdm 1/1 Running 0 3h
kube-system kube-scheduler-k8s-dp1 1/1 Running 0 3h
kube-system weave-net-6sbbn 2/2 Running 0 2h
kube-system weave-net-hdv9b 2/2 Running 3 2h
</code></pre>
<p>I am not sure how to figure out what the problem is but if I for example do a <code>kubectl get ev</code>, I can see the following suspect event:</p>
<pre><code><invalid> <invalid> 1 nginx-deployment-569477d6d8-f42pz.15087c66386edf5d Pod
Warning FailedCreatePodSandBox kubelet, k8s-dp2 Failed create pod sandbox.
</code></pre>
<p>But I don't know where to go from here. I can also see that the nginx docker image itself never appears in <code>docker images</code>.</p>
<p>How do I find out more about the problem? Am I missing something fundamental in the kubernetes setup?</p>
<p>--- NEW INFO ---</p>
<p>For background info in case it helps... </p>
<p>Kubernetes nodes are running on CentOS 7 VMs hosted on Windows 10 hyper-v.</p>
<p>--- NEW INFO --- </p>
<p>Running <code>kubectl describe pods</code> shows the following Warning:</p>
<pre><code>Warning NetworkNotReady 1m kubelet, k8s-dp2 network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
</code></pre>
<p>--- NEW INFO ---</p>
<p>Switched off the Hyper-v VMs running Kubernetes for the night after my day job hours were over and on my return to the office this morning, I powered up the Kubernetes VMs once again to carry on and, for about 15 mins, the command:</p>
<p><code>kubectl get pods --all-namespaces</code> was still showing <code>ContainerCreating</code> for those nginx pods the same as yesterday but, right now, the command is now showing all pods as <code>Running</code> including the nginx pods... i.e. the problem solved itself after a full reboot of both master and worker node VMs.</p>
<p>I now did another full reboot again and all pods are showing as Running which is good.</p>
| <p>Use <code>kubectl describe pod <name></code> to see more info</p>
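<p>For the pod from the question, for instance:</p>
<pre><code>kubectl describe pod nginx-deployment-569477d6d8-f42pz
# the Events section at the bottom usually explains why the sandbox / CNI setup failed
kubectl get events --sort-by=.metadata.creationTimestamp
</code></pre>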
|
<p>If I have a service that is running on a cluster, can I deploy and add some new pods to the service dynamically?</p>
<p>For instance, if the service has the label "app=MyApp", can the new created pods with the same label "app=MyApp" be managed by the service?</p>
| <p>Short answer - yes. And this is exactly what a kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">replicaSet</a> enables, and these days you're most often encouraged to create one of those through a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>. Once created, you can scale up (or down) and the service will dynamically update it's internal forwarding based on the state of the pod's response to "Ready" (see the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">docs for a readiness probe</a>).</p>
<p>The <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">label/selector mechanism</a> that you're describing is exactly how that works within Kubernetes in a variety of places, and service <-> Pods that match is one of the most common</p>
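<p>For example, assuming the pods are managed by a Deployment called <code>my-app</code>:</p>
<pre><code>kubectl scale deployment my-app --replicas=5
# the new pods inherit the template's labels (e.g. app=MyApp)
# and are picked up by the Service's selector automatically
</code></pre>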
|
<p>I am trying to connect to a pod running inside a GKE cluster -
I can ssh into the nodes within the cluster, but when I try the following command to get into a bash within a pod, I get an error:</p>
<pre><code>kubectl --namespace=prod exec -it test-webserver-3998817321-728hj -- /bin/bash
</code></pre>
<p>-> Error from server: error dialing backend: ssh: rejected: connect failed (Connection timed out)</p>
<p>How to connect to a running pod within a gke cluster by using kubectl command? Is there something misconfigured with my firewall? I've got the following ssh rule:</p>
<pre><code>NAME NETWORK DIRECTION PRIORITY ALLOW DENY
sshaccess default INGRESS 1000 tcp:22,icmp
</code></pre>
<p>When I try the above command on a local cluster, I can easily connect.</p>
<p>Sometimes it works, and sometimes it doesn't. As far as I understand, the Loadbalancer (Ingress) might be responsible for this behaviour?</p>
| <p>I got exactly the same error message, and for me too it sometimes worked and sometimes didn't.</p>
<p>In my case, this was caused by a misconfiguration of the firewall.
I had restricted most of the outgoing traffic with egress rules, allowing only port 443.
I added a rule which allows outgoing traffic from the k8s nodes to internal IPs (IPs in the same subnet) on every port.
If your case is also caused by blocked egress from the k8s nodes to internal nodes, create a new firewall rule to allow that traffic:</p>
<p><code>gcloud compute firewall-rules create <firewall-name> --network <network-name> --action allow --rules tcp --direction egress --destination-ranges <internal-ip-range> --target-tags <network tag of the nodes the traffic goes out from></code></p>
|
<p>I have set up the nginx ingress controller following <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md" rel="noreferrer">this guide</a>.</p>
<p>The ingress works well and I am able to visit the <code>defaultbackend</code> service and my own service as well.</p>
<p>But when reviewing the objects created in the Google Cloud Console, in particular the load balancer object which was created automatically, I noticed that the health check for the other nodes are failing:
<a href="https://i.stack.imgur.com/VRhEC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VRhEC.png" alt="enter image description here"></a></p>
<p>Is this because the ingress controller process is only running on the one node, and so it's the only one that passes the health check?
How do I make the other nodes pass?</p>
| <p>Your assumption is correct. The healthy node is indeed the one running the nginx pod. </p>
<p>The guide you're using configures the service with <code>externalTrafficPolicy: Local</code>.(<a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/gce-gke/service.yaml" rel="noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/gce-gke/service.yaml</a>)</p>
<p>That policy configures kube-proxy to never route traffic for a service off of the node where it arrives. So, when the load balancer traffic reached the nodes that have no nginx pod, the health check failed and the load balancer stopped sending traffic to them.</p>
<p>This configuration had the advantage of avoiding an extra network hop to get to the nginx pod. If you need more nodes to handle the traffic, you can ensure that there are nginx pods running there too. If you don't mind the extra network hop, you can change the <code>externalTrafficPolicy</code> too.</p>
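<p>The relevant part of the Service manifest is just this one field — switching it to <code>Cluster</code> (the default) makes every node pass the health check, at the cost of the extra hop (a sketch, names taken from the guide):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # change to "Cluster" to accept traffic on every node
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
</code></pre>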
|
<p>I'm very new to Kubernetes. We are using a Kubernetes cluster on Google Cloud Platform.</p>
<p>I have created Cluster, Services, Pod, Replica controllers.</p>
<p>I have created Horizontal Pod Autoscaler and it is based on CPU Params.</p>
<h2>Cluster details</h2>
<p>Default running node count is set to 3</p>
<p>3GB allocatable memory per node</p>
<p>Default running node count is 3 in the cluster.</p>
<p>After running for 1 hour Service and Nodes showing NodeUnderMemoryPressure Issues.</p>
<p>How can I resolve this?
If you need any more details, please ask.</p>
<p>Thanks</p>
| <p>I don't know how much traffic is hitting your cluster, but I would highly recommend running <a href="https://prometheus.io/docs/introduction/overview/" rel="nofollow noreferrer">Prometheus</a> in your cluster.</p>
<p>Prometheus is an open-source monitoring and alerting tool, and integrates very well with Kubernetes.</p>
<p>This tool should give you a much better view of memory consumption, CPU usage, amongst <strong>many</strong> other monitoring capabilities, that will allow you to effectively troubleshoot these types of issues.</p>
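<p>As a quick first check, even without Prometheus, the node conditions (and <code>kubectl top</code>, if heapster/metrics-server is installed) already show which node is under memory pressure:</p>
<pre><code>kubectl describe nodes | grep -A 6 Conditions   # look for MemoryPressure=True
kubectl top nodes                               # requires heapster / metrics-server
</code></pre>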
|
<p>Created a <code>kubernetes</code> cluster with private topology on <code>aws</code> using <code>kops</code></p>
<p>My application exposes several services. As expected, services communicate among each other using their names, i.e. the <code>name</code> field below:</p>
<pre><code>kind: Service
metadata:
name: myservice
namespace: staging_namespace
</code></pre>
<p>Here is the question:</p>
<p>Assuming that I will deploy <strong>2</strong> version of my application (e.g. <code>testing</code> and <code>staging</code>) in <strong>different namespaces</strong>, will this prevent service name collision?</p>
<p>Will namespace separation allow </p>
<ul>
<li><p><code>service1</code> reach the correct <code>myservice</code> in <code>staging_namespace</code> in my <code>staging</code> deployment</p></li>
<li><p><code>service1</code> reach the correct <code>myservice</code> in <code>testing_namespace</code> in my <code>testing</code> deployment</p></li>
</ul>
<p>?</p>
<p>Using </p>
<pre><code>kops version
Version 1.8.0 (git-5099bc5)
</code></pre>
<p>and </p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>As I understand it, a namespace is a virtual cluster within the k8s cluster, so it provides isolation and separation of concerns. There is no conflict if you have the same services or pods in different namespaces.</p>
<p>Every service is assigned a unique DNS record, therefore there is no conflict. For instance:</p>
<pre><code>my-svc.my-namespace.svc.cluster.local.
</code></pre>
<p><strong>In your case, it would be like this</strong></p>
<pre><code>my-svc.staging_namespace.svc.cluster.local.
my-svc.testing_namespace.svc.cluster.local.
</code></pre>
<p>I have attached a link for further research.
<a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">services-networking</a></p>
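<p>For example, from a pod in any namespace you could address the two copies explicitly (namespace names taken from the question), while a plain <code>http://myservice/</code> from inside a namespace resolves to the copy in that same namespace:</p>
<pre><code>curl http://myservice.staging_namespace.svc.cluster.local/
curl http://myservice.testing_namespace.svc.cluster.local/
</code></pre>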
<p><strong>Edit:1</strong> </p>
<p><strong>Purpose and Motivation of Namespaces</strong> </p>
<p>Namespaces provide a logical separation of environments, which gives uniqueness to k8s resources even when pods or services share the same name.</p>
<blockquote>
<p>Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">Virtual cluster are called namespaces</a></p>
|