prompt | response |
---|---|
<p>With manually installed Kubernetes <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="noreferrer">on CoreOS</a>, how does one install and use the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager" rel="noreferrer">Kubernetes addon manager</a>?</p>
<p>I've found references to the addon manager being the current standard way of installing Kubernetes addons, but I can't find any authoritative documentation on it. Hoping someone can help me out here.</p>
| <p>The addon manager is deployed as a normal pod or a deployment, with a simple <code>kubectl apply -f</code>.</p>
<p>The YAML looks something like this; check it against the specific version that you need:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: kube-addon-manager
namespace: kube-system
labels:
component: kube-addon-manager
spec:
hostNetwork: true
containers:
- name: kube-addon-manager
# When updating version also bump it in:
# - cluster/images/hyperkube/static-pods/addon-manager-singlenode.json
# - cluster/images/hyperkube/static-pods/addon-manager-multinode.json
# - test/kubemark/resources/manifests/kube-addon-manager.yaml
image: gcr.io/google-containers/kube-addon-manager:v6.4-beta.1
command:
- /bin/bash
- -c
- /opt/kube-addons.sh 1>>/var/log/kube-addon-manager.log 2>&1
resources:
requests:
cpu: 5m
memory: 50Mi
volumeMounts:
- mountPath: /etc/kubernetes/
name: addons
readOnly: true
- mountPath: /var/log
name: varlog
readOnly: false
volumes:
- hostPath:
path: /etc/kubernetes/
name: addons
- hostPath:
path: /var/log
name: varlog
</code></pre>
<p>The addon manager watches the YAML files under <code>/etc/kubernetes/addons/</code>; put any addon you like there to install it.</p>
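<p>For example, a minimal sketch of installing an addon this way might look like the following. The Deployment below is purely illustrative, and the <code>addonmanager.kubernetes.io/mode</code> label convention may differ between addon-manager versions, so treat it as an assumption to verify against your version:</p>
<pre class="lang-shell prettyprint-override"><code># Hypothetical addon manifest; names and image are placeholders.
cat <<'EOF' > /etc/kubernetes/addons/my-addon.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-addon
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-addon
    spec:
      containers:
      - name: my-addon
        image: nginx
EOF
</code></pre>
<p>On its next reconciliation loop the addon manager should create (and keep reconciling) the objects it finds in that directory.</p>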
|
<p>How can I use the <code>--output=custom-columns</code> option for the Kubernetes CLI (<code>kubectl</code>) to select array elements?</p>
<p>In other words, how can I turn this:</p>
<pre class="lang-shell prettyprint-override"><code>kubectl get services
#=>
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
kafka-manager   NodePort   10.3.242.200   146.148.20.235   9000:32619/TCP   0d
spark-master    NodePort   10.3.242.209   104.199.21.235   9000:32619/TCP   0d
</code></pre>
<p>into this?</p>
<pre class="lang-shell prettyprint-override"><code>kubectl get services \
--output=custom-columns=. . .
#=>
NAME            EXTERNAL-ENDPOINT
kafka-manager   146.148.20.225:9000
spark-master    104.199.21.225:7077
</code></pre>
<p>What I have so far:</p>
<pre class="lang-shell prettyprint-override"><code>kubectl get services \
--output=custom-columns='NAME:.metadata.name,IP:.spec.clusterIP,PORT:.spec.ports'
#=>
NAME            IP             PORT
kafka-manager   10.3.242.200   [map[nodePort:32619 port:9000 protocol:TCP targetPort:9000]]
spark-master    10.3.242.209   [map[nodePort:30588 port:7077 protocol:TCP targetPort:7077]]
</code></pre>
| <p><strong>TLDR</strong></p>
<p>For an element that is in a list use <code>*</code> in square brackets.</p>
<p>So your query should look like this:</p>
<pre class="lang-shell prettyprint-override"><code>kubectl get service \
-n kube-system \
-o=custom-columns='NAME:.metadata.name,IP:.spec.clusterIP,PORT:.spec.ports[*].targetPort'
#=>
NAME                   IP           PORT
kube-dns               10.0.0.10    53,53
kubernetes-dashboard   10.0.0.250   9090
</code></pre>
<p>Notice the <code>*</code> in <code>PORT:.spec.ports[*].targetPort</code>.</p>
<p><strong>Details:</strong></p>
<p>So <code>kubectl</code> is expecting a <code>json-path-expr</code> after <code>header</code>. The error I got when playing with expressions was the following:</p>
<pre class="lang-shell prettyprint-override"><code>expected <header>:<json-path-expr>
</code></pre>
<p>To iterate over all elements in a list, use <code>*</code> instead of an index.</p>
<p>Various other json-path expressions can be found <a href="http://goessner.net/articles/JsonPath" rel="noreferrer">here</a>.</p>
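<p>As a sketch of getting closer to the output in the question: <code>custom-columns</code> cannot join two fields into a single <code>ip:port</code> cell, but plain <code>-o jsonpath</code> can. The following assumes the external address lives under <code>.status.loadBalancer.ingress</code> (as it would for a <code>LoadBalancer</code> service); adjust the path for your service type:</p>
<pre class="lang-shell prettyprint-override"><code>kubectl get services \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.loadBalancer.ingress[0].ip}{":"}{.spec.ports[0].port}{"\n"}{end}'
</code></pre>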
|
<p>Is there a recommended way of initiating 'docker build' commands from a container loaded in to Kubernetes?</p>
<p>I.e., spinning up a Jenkins container (from the community image) and then ensuring that docker is installed within that container so you can issue 'docker build' commands.</p>
<p>I've read up on various methods such as DIND (Docker in Docker) containers and running links between the Jenkins container and the DIND container. Of course with Kubernetes this would be different.</p>
<p>There are two ways of accessing the docker daemon from a Kubernetes Pod.</p>
<ol>
<li><p>You can expose the docker daemon running on the host machine (this is the docker daemon used by Kubernetes to spin up your container) inside your container. To do this you need to modify your Pod specification to add a hostPath for the docker daemon socket (typically <code>/var/run/docker.sock</code>). Now, you can install docker inside your container and access the docker daemon of the host machine; a minimal sketch of this follows the list below.</p></li>
<li><p>The second method is using Docker-in-Docker (DinD). In this method, you can use the concept of a <a href="http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html" rel="noreferrer">sidecar container</a> to run the docker in docker daemon. The main container in the pod will have to be configured to talk to the docker daemon in the sidecar container. You can do this by setting an environment variable <code>DOCKER_HOST</code> to <code>tcp://localhost:2375</code>. You can find the complete Pod specification and a lot more details on the differences between the two approaches on my <a href="https://applatix.com/case-docker-docker-kubernetes-part-2/" rel="noreferrer">blog article about DinD on Kubernetes</a></p></li>
</ol>
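<p>Here is a minimal sketch of the first approach, assuming an image that already contains the docker CLI (the image and pod names are placeholders):</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: jenkins
spec:
  containers:
  - name: jenkins
    image: jenkins              # assumption: the docker CLI is installed in this image
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
EOF
</code></pre>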
|
<p>I need a deployment where each pod has a single container and each container has 2 java processes running. Since a container starts with a process (P1), if that particular process (P1) is killed, the pod restarts. Is it possible that the container starts with 2 processes, and even if one of them is killed, the container (or pod in our case, since each pod has only one container) restarts? I could not find any documentation which says whether this can or cannot be done. Also, how can I start the container with 2 processes? If I try something like this (javaProcess is a java file) in my docker image, it runs only the first process:</p>
<pre><code>java -jar abc.jar
java javaProcess
or
java javaProcess
java -jar abc.jar
</code></pre>
<p>If I start the container with one process(P1) and start the other process(P2) after the container is up, the container would not be bound to P2 and hence if P2 terminates, the container won't restart. But, I need it to restart!</p>
<p>You can do this using <a href="http://supervisord.org/" rel="nofollow noreferrer">supervisord</a>. Your main process should be bound to supervisord in the docker image, and the two java processes should be managed by supervisord.</p>
<blockquote>
<p>supervisord's primary purpose is to create and manage processes based
on data in its configuration file. It does this by creating
subprocesses. Each subprocess spawned by supervisor is managed for the
entirety of its lifetime by supervisord (supervisord is the parent
process of each process it creates). When a child dies, supervisor is
notified of its death via the SIGCHLD signal, and it performs the
appropriate operation.</p>
</blockquote>
<p>Following is a sample supervisord config file which starts two java processes (supervisord.conf):</p>
<pre><code>[supervisord]
nodaemon=true

[program:java1]
user=root
startsecs = 120
autorestart = true
command=java javaProcess1

[program:java2]
user=root
startsecs = 120
autorestart = true
command=java javaProcess2
</code></pre>
<p>In your Dockerfile you should do something like this:</p>
<pre><code>RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
</code></pre>
|
<p>I am running minikube on my Mac laptop. I am using VirtualBox to host the minikube virtual machine, following the <a href="https://github.com/kubernetes/minikube#quickstart" rel="nofollow noreferrer">official instructions</a>.</p>
<p>I would like a pod that I am going to deploy into the cluster to be able to ping a server I will be running on my laptop. Assuming (for now) that I am not defining a Kubernetes Service of type ExternalName to represent that server, what IP or hostname should I use from within the program running in my pod?</p>
<p><strong>EDIT</strong>: From my pod I can <code>ping 10.0.2.2</code> and get answers back. However, trying to <code>telnet</code> to <code>10.0.2.2</code> on port <code>9092</code>, where I happen to have an H2 database running, just hangs.</p>
<p>Finally, <code>minikube ssh</code>, which apparently logs me into the VirtualBox VM, maybe? running as <code>docker</code>? results in all the same behaviors above, in case this matters, which suggests this is fundamentally a question, I suppose, about VirtualBox.</p>
<p><strong>EDIT #2</strong>: A restart of VirtualBox solved the connection issue (!). Nevertheless, <code>10.0.2.2</code> still seems like magic to me; I'd like to know where that IP comes from.</p>
| <p>You should be able to use the ipv4 address listed under <code>vboxnet0</code>. Look for the <code>vboxnet0</code> interface in the list of outputs for <code>ifconfig</code>. Alternatively the address <code>10.0.2.2</code> will also map back to the host from the guest.</p>
<p>This IP address will be accessible from within the guest but not directly from within a pod. To make it accessible from within a pod you will need to create a headless service that exposes this IP address.<br>
See this answer for how to create a headless service:
<a href="https://stackoverflow.com/questions/43354167/minikube-expose-mysql-running-on-localhost-as-service/43477742#43477742">Minikube expose MySQL running on localhost as service</a></p>
<p>So for example I ran a server at port :8000 on my host and did the following to access it in a pod:</p>
<pre><code>$ kubectl create -f service.yaml
----service.yaml----
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 1443
    targetPort: 8000

$ kubectl create -f endpoint.yaml
----endpoint.yaml----
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 192.168.99.1   # ipv4 address from vboxnet0
  ports:
  - port: 8000

$ kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes   10.0.0.1     <none>        443/TCP    2h
my-service   10.0.0.83    <none>        1443/TCP   17m
</code></pre>
<p>Then you can access the host service by using 10.0.0.83:1443 within a pod.</p>
|
<p>Given a registry with several tags for an image, e.g.:</p>
<pre><code>myimage 0.0.3
myimage 0.0.2
</code></pre>
<p>Can I somehow extract <code>0.0.3</code> (or rather the most recent tag) into a variable in bash?<br>
I'd like to find out this value so that I could then <code>tagValue++</code> it and use it in a Jenkins pipeline for actions such as:</p>
<ul>
<li><code>docker build</code> to build the next image with an updated tag</li>
<li><code>docker push</code> to push this new image tag to the registry</li>
<li><code>kubectl set image</code> to update a Kubernetes cluster with pods, using the updated image tag</li>
</ul>
<p>Of course, if anyone has a better strategy I am all for hearing it! </p>
<p>Alternatives:</p>
<ul>
<li><p>Get the value from the Kubernetes Deployment's YAML file, then run the actions above, update the file with the updated tag and push the updated file back to the repository?</p></li>
<li><p>Same as the alternative above, but use <code>kubectl replace</code> instead of <code>kubectl set image</code>? (<a href="https://vishh.github.io/docs/user-guide/kubectl/kubectl_replace/" rel="nofollow noreferrer">example here</a> from the docs)</p></li>
</ul>
<p>We use this with an in-house v2 docker registry. It depends on jq, so it may not be the right fit for you. The registry seems to return tags in the order in which they were added, so we just grab the latest, which is quite possible with jq if you have a new enough version:</p>
<pre><code>tags_url="https://hot.registry.example.com/v2/somerepoprefix/${repo}/tags/list"
newest_tag=$(curl $tags_url | jq -r '(.tags | reverse)[0]')
</code></pre>
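<p>As a hypothetical follow-up for the <code>tagValue++</code> part and the steps listed in the question (deployment, container and registry names are placeholders, and the version scheme is assumed to be <code>x.y.z</code>):</p>
<pre><code># bump the last numeric component of the newest tag, e.g. 0.0.3 -> 0.0.4
next_tag=$(echo "$newest_tag" | awk -F. -v OFS=. '{$NF += 1; print}')

docker build -t "hot.registry.example.com/somerepoprefix/${repo}:${next_tag}" .
docker push "hot.registry.example.com/somerepoprefix/${repo}:${next_tag}"
kubectl set image "deployment/${repo}" \
  "${repo}=hot.registry.example.com/somerepoprefix/${repo}:${next_tag}"
</code></pre>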
|
<p>I'm running a Kubernetes cluster on AWS using kops. I've mounted an EBS volume onto a container and it is visible from my application but it's read only because my application does not run as root. How can I mount a <code>PersistentVolumeClaim</code> as a user other than root? The <code>VolumeMount</code> does not seem to have any options to control the user, group or file permissions of the mounted path.</p>
<p>Here is my Deployment yaml file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: notebook-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: notebook-1
    spec:
      volumes:
      - name: notebook-1
        persistentVolumeClaim:
          claimName: notebook-1
      containers:
      - name: notebook-1
        image: jupyter/base-notebook
        ports:
        - containerPort: 8888
        volumeMounts:
        - mountPath: "/home/jovyan/work"
          name: notebook-1
</code></pre>
| <p>The Pod Security Context supports setting an <code>fsGroup</code>, which allows you to set the group ID that owns the volume, and thus who can write to it. The example in the docs:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  # specification of the pod's containers
  # ...
  securityContext:
    fsGroup: 1234
</code></pre>
<p>More info on this is <a href="https://kubernetes.io/docs/concepts/policy/security-context/" rel="noreferrer">here</a></p>
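<p>Applied to the Deployment from the question, a sketch might look like this; the group ID <code>100</code> is an assumption about the group used by the <code>jovyan</code> user in the <code>jupyter/base-notebook</code> image, so check the image's actual GID:</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: notebook-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: notebook-1
    spec:
      securityContext:
        fsGroup: 100          # assumption: GID of the container user's group
      volumes:
      - name: notebook-1
        persistentVolumeClaim:
          claimName: notebook-1
      containers:
      - name: notebook-1
        image: jupyter/base-notebook
        ports:
        - containerPort: 8888
        volumeMounts:
        - mountPath: "/home/jovyan/work"
          name: notebook-1
EOF
</code></pre>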
|
<p>When creating a Job in kubernetes 1.6, the following error occurs:</p>
<pre><code>Error from server (BadRequest): error when creating "job.yaml":
Job in version "v1" cannot be handled as a Job: [pos 217]:
json: expect char '"' but got char '1'
</code></pre>
<p>The job.yaml in question is:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: sysbench-oltp
spec:
  template:
    metadata:
      name: sysbench-oltp
    spec:
      containers:
      - name: sysbench-oltp
        image: sysbench-oltp:1.0
        env:
        - name: OLTP_TABLE_SIZE
          value: 10000
        - name: DB_NAME
          value: "test"
        - name: DB_USER
          value: "test_user"
</code></pre>
<p>Any variations on the API do not seem to matter at all. Anybody have any idea of what the problem is?</p>
| <p>Found the solution:</p>
<p>The JSON parser returns a rather unrelated error on a piece of the data in the environment variables:</p>
<pre><code>        env:
        - name: OLTP_TABLE_SIZE
          value: 10000
</code></pre>
<p>Should read:</p>
<pre><code>        env:
        - name: OLTP_TABLE_SIZE
          value: "10000"
</code></pre>
<p>After which all the parsing works as it should. </p>
|
<p>I have following setting for readiness probe:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /xyzapi/health
    port: 8888
  initialDelaySeconds: 30
  periodSeconds: 30
  timeoutSeconds: 30
  successThreshold: 2
  failureThreshold: 5
</code></pre>
<p>I want this readiness probe to be hit only once. It should fail or pass my deployment base on one try only.</p>
<p>I did some googling, but not much of help. Any kube expert? please help.</p>
| <p>Oswin Noetzelmann's comment is spot-on. The purpose of a readiness probe is to continually check for the state of readiness. You can probably still change it so that your readiness-check script checks once and then caches that result, but it wouldn't be idiomatic use of readiness. </p>
<p>Better alternatives for a one-time check are: init-containers or just using a wrapper script which wraps the actual work of your main container and performs the check you want.</p>
<p>I think it is a good use case for <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#what-can-init-containers-be-used-for" rel="nofollow noreferrer">init-containers</a>. From the documentation, one of the common purposes of init containers is:</p>
<blockquote>
<p>They run to completion before any app Containers start, whereas app
Containers run in parallel, so Init Containers provide an easy way to
block or delay the startup of app Containers until some set of
preconditions are met.</p>
</blockquote>
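<p>A minimal sketch of the init-container approach, assuming the one-time condition you want to check is reachable at a hypothetical URL (names, image and URL are placeholders; on clusters older than 1.6 the beta annotation form of init containers may be needed instead):</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: one-time-check-demo
spec:
  initContainers:
  - name: one-time-check
    image: busybox
    # runs exactly once; the app container only starts if this succeeds
    command: ['sh', '-c', 'wget -q -O- http://some-dependency:8888/xyzapi/health']
  containers:
  - name: app
    image: my-app-image
    ports:
    - containerPort: 8888
EOF
</code></pre>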
|
<p>I have a K8S cluster running on DigitalOcean. I have a Postgresql database running there and I want to create a volume using the DigitalOcean BlockStorage to be used by the Postgresql pod as volume. Is there any examples on how to do that? </p>
<p>If it's not possible to use DigitalOcean blockstorage then how do most companies run their persistence storage for databases? </p>
| <p>No official support yet. You can try the example from someone in <a href="https://github.com/kubernetes/kubernetes/issues/32334" rel="nofollow noreferrer">this github issue</a>:</p>
<blockquote>
<p>Update: I finished writing a volume plugin for digitalocean. Attach/detach is working on my cluster. Looking for anyone willing to
test this on their k8s digitalocean cluster. My branch is
<a href="https://github.com/wardviaene/kubernetes/tree/do-volume" rel="nofollow noreferrer">https://github.com/wardviaene/kubernetes/tree/do-volume</a></p>
<p>You can use the following spec in your pod yml:</p>
<pre><code>spec:
  containers:
  - name: k8s-demo
    image: yourimage
    volumeMounts:
    - mountPath: /myvol
      name: myvolume
    ports:
    - containerPort: 3000
  volumes:
  - name: myvolume
    digitaloceanVolume:
      volumeID: mykubvolume
      fsType: ext4
</code></pre>
<p>Where mykubvolume is the volume created in DigitalOcean in the same region.</p>
<p>You will need to create a config file:</p>
<pre><code>[Global]
apikey = do-api-key
region = your-region
</code></pre>
<p>and add these parameters to your kubernetes processes: <code>--cloud-provider=digitalocean --cloud-config=/etc/cloud.config</code></p>
<p>I'm still waiting for an issue in the godo driver to be resolved,
before I can submit a PR (digitalocean/godo#102)</p>
</blockquote>
|
<p>First of all, this is a question regarding my thesis for school. I have done some research about this, it seems like a problem that hasn't been tackled yet (might not be that common).</p>
<p>Before jumping right into the problem, I'll give a brief example of my use case.</p>
<p>I have multiple namespaces containing microservices depending on a state X. To manage this the microservices are put in a namespace named after the state. (so namespaces state_A, state_B, ...)</p>
<p>Important to know is that each microservice needs this state at startup of the service. It will download necessary files, ... according to the state. When launching it with state A version 1, it is very likely that the state gets updated every month. When this happens, it is important to let all the microservices that depend on state A upgrade whatever necessary (databases, in-memory state, ...).</p>
<p>My current approach for this problem is simply using events, the microservices that need updates when the state changes can subscribe on the event and migrate/upgrade accordingly. The only problem I'm facing is that while the service is upgrading, it should still work. So somehow I should duplicate the service first, let the duplicate upgrade and when the upgrade is successful, shut down the original. Because of this the used orchestration service would have to be able to create duplicates (including duplicating the state).</p>
<p>My question is, are there already solutions for my problem (and if yes, which ones)? I have looked into Netflix Conductor (which seemed promising with its workflows and events), Amazon SWF, Marathon and Kubernetes, but none of them covers my problem.</p>
<p>Best of all the existing solution should not be bound to a specific platform (Azure, GCE, ...).</p>
| <p>For uninterrupted upgrade you should use clusters of nodes providing your service and perform a rolling update, which takes out a single node at a time, upgrading it, leaving the rest of the nodes for continued servicing. I recommend looking at the concept of virtual services (e.g. in <a href="https://coreos.com/kubernetes/docs/latest/services.html" rel="nofollow noreferrer">kubernetes</a>) and <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update-intro/" rel="nofollow noreferrer">rolling updates</a>.</p>
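<p>As a sketch of that rolling-update idea under Kubernetes, a Deployment can be told to always keep the old replicas serving while upgraded ones come up one at a time. All names and images below are hypothetical:</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: state-a-service
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0     # never take a serving replica away before its replacement is ready
      maxSurge: 1           # bring up one upgraded replica at a time
  template:
    metadata:
      labels:
        app: state-a-service
    spec:
      containers:
      - name: service
        image: my-registry.example.com/state-a-service:v2
EOF
</code></pre>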
<p>For inducing state I would recommend looking into container initialization mechanisms. For example in docker you can use <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer">entrypoint scripts</a> or in kubernetes there is the concept of <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init containers</a>. You should note though that today there is a trend to decouple services and state, meaning the state is kept in a DB that is separate from the service deployment, allowing to view the service as a stateless component that can be replaced without losing state (given the interfacing between the service and required state did not change). This is good in scenarios where the service changes more frequently and the DB design less frequently.</p>
<p>Another note - I am not sure that representing state in a namespace is a good idea. Typically a namespace is a static construct for organization (of code, services, etc.) that aims for stability. </p>
|
<p>I am using the below yaml file to create the pod, but the kubectl command gives the error below.</p>
<p>How do I correct this error?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  env:
  - name: MESSAGE
    value: "hello world"
  command: ["/bin/echo"]
  args: ["$(MESSAGE)"]

kubectl create -f commands.yaml
error: error validating "commands.yaml": error validating data: found invalid field env for v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>follow example from this page.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/define-command-argument-container/</a></p>
<p>Thanks
-SR</p>
<p>Your YAML is syntactically correct, but it results in an incorrect data structure for Kubernetes. In YAML, indentation affects the structure of the data. See <a href="https://docs.saltstack.com/en/latest/topics/troubleshooting/yaml_idiosyncrasies.html" rel="noreferrer">this</a>.</p>
<p>I think this should be correct:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
</code></pre>
|
<p>I have a sandbox Kubernetes cluster, in which I shutdown all pods at night, so it can scale down with the <code>cluster-autoscaler</code> add-on.</p>
<p>The problem is, it almost always keep the master plus 2 nodes running.</p>
<p>Looking into <code>cluster-autoscaler</code> logs, I see the problem seems to be this:</p>
<pre><code>Fast evaluation: node ip-172-16-38-51.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: dns-controller-3586597043-531v5
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: heapster-564189836-3h2ts
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: kube-dns-1321724180-c0rjr
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: kube-dns-autoscaler-265231812-dv17j
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: kubernetes-dashboard-2396447444-6bwtq
Fast evaluation: node ip-172-16-49-207.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: monitoring-influxdb-grafana-v4-50v9d
Fast evaluation: node ip-172-16-51-146.ec2.internal cannot be removed: non-deamons set, non-mirrored, kube-system pod present: cluster-autoscaler-776613730-kqgk2
</code></pre>
<p>and because those pods are spread, cluster-autoscaler ends up keeping 2 or more nodes up even when there is nothing running in the default namespace...</p>
<p>Is there a way of forcing or inducing Kubernetes to schedule all those pods together?</p>
<p>The idea is to make the cluster run at night with the master plus one node only. If there isn't, I was thinking of adding a <code>Scheduled Action</code> to the <code>AutoScale Group</code>, so it would be forced to run everything on the same node.</p>
<p>An alternative to using a nodeSelector would be to use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature" rel="nofollow noreferrer">inter-pod affinity</a> to ensure that your pods get packed better.</p>
<p>'Preferred' rather than 'Required' affinity can be used to ensure that Kubernetes will try and schedule your pods together on the same node, but if it cannot, they will schedule on different nodes.</p>
<p>From the documentation, it will let you specify rules like:</p>
<blockquote>
<p>This pod should (or, in the case of
anti-affinity, should not) run in an X if that X is already running
one or more pods that meet rule Y. X is a topology domain like node, rack, cloud provider zone, cloud provider region, etc.</p>
</blockquote>
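<p>A minimal sketch of a preferred (soft) pod-affinity rule, assuming your pods carry a hypothetical <code>app: my-app</code> label (the affinity field syntax below is the one introduced for Kubernetes 1.6; older clusters used an annotation form):</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: packed-pod
  labels:
    app: my-app
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values: ["my-app"]
          topologyKey: kubernetes.io/hostname   # "pack onto the same node if possible"
  containers:
  - name: app
    image: my-app-image
EOF
</code></pre>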
|
<p>When running <code>helm upgrade --install --namespace $PROJECT_NAMESPACE --values values.yaml --name $SOME_NAME some/chart</code>.</p>
<p>I get <code>Error: unknown flag: --name</code>.</p>
<p>Is there no way to set the name of a chart you are targeting with <code>upgrade</code>? Is this only possible for <code>install</code>?</p>
| <p>The solution was that no <code>--name</code> was needed.</p>
<p>The syntax for a Helm Upgrade is <code>"helm upgrade [RELEASE] [CHART]"</code>, so the "RELEASE" is the same as what would be the <code>--name</code> in a Helm Install.</p>
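<p>So the command from the question would become something like this (same flags, with the release name passed positionally):</p>
<pre><code>helm upgrade --install \
  --namespace $PROJECT_NAMESPACE \
  --values values.yaml \
  $SOME_NAME some/chart
</code></pre>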
|
<p>I am using Google Container Engine. Now I want auto-scaling functionality in my cluster. As per the documentation, the <strong>GKE autoscaler</strong> is in <strong>beta release</strong>. I can also enable autoscaling in the <strong>instance group</strong> that is managing the cluster nodes.
The cluster autoscaler adds/removes nodes so that all scheduled pods have a place to run, whereas the instance group adds/removes nodes based on different policies like average CPU utilization.
I think that by adjusting pods' CPU limits and the target CPU utilization for pods in the Kubernetes autoscaler, Managed Instance Group autoscaling could also be used to resize a GKE cluster.
So my question is: what should I use?</p>
| <p>Short answer - don't use GCE MIG autoscaling feature. It will just not work properly with your cluster.</p>
<p>See details in this FAQ:
<a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#should-i-use-a-cpu-usage-based-node-autoscaler-with-kubernetes" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#should-i-use-a-cpu-usage-based-node-autoscaler-with-kubernetes</a></p>
<p>(read the question linked above and 2 next ones)</p>
|
<p>Problem: not able to write in the directory inside the container.</p>
<p>I am using hostPath storage for the persistent storage requirements. I am not using a PV and PVC to use hostPath; instead, I am using its volume plugin. For example:</p>
<pre><code>{
    "apiVersion": "v1",
    "id": "local-nginx",
    "kind": "Pod",
    "metadata": {
        "name": "local-nginx"
    },
    "spec": {
        "containers": [
            {
                "name": "local-nginx",
                "image": "fedora/nginx",
                "volumeMounts": [
                    {
                        "mountPath": "/usr/share/nginx/html/test",
                        "name": "localvol"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "localvol",
                "hostPath": {
                    "path": "/logs/nginx-logs"
                }
            }
        ]
    }
}
</code></pre>
<p>Note: the nginx pod is just for example.</p>
<p>My directory on the host is getting created as "drwxr-xr-x. 2 root root 6 Apr 23 18:42 /logs/nginx-logs"
and the same permissions are reflected inside the pod, but as it's 755, the other (non-root) user inside the pod is not able to write/create files inside the mounted directory.</p>
<p>Questions:</p>
<ol>
<li><p>Is there any way out to avoid the problem specified above?</p></li>
<li><p>Is there any way to specify the directory permission in case of Hostpath storage?</p></li>
<li><p>Is there any field which I can set in the following definition to give the required permission?</p></li>
</ol>
<hr>
<pre><code>"volumes":{
"name": "vol",
"hostPath": {
"path": "/any/path/it/will/be/replaced"}}
</code></pre>
| <p>I think the problem you are encountering is not related to the user or group (your pod definition does not have RunAsUser spec, so by default it is run as root), but rather to the SELinux policy. In order to mount a host directory to the pod with rw permissions, it should have the following label: <code>svirt_sandbox_file_t</code> . You can check the current SElinux label with the following command: <code>ls -laZ <your host directory></code> and change it with <code>chcon -Rt svirt_sandbox_file_t <your host directory></code>.</p>
|
<p>Google App Engine (Flex) has an elegant way to ensure that apps are exposed to the internet using HTTPS. (From what I know, you just specify <code>secure: always</code> in app.yaml, and you are good to go (<a href="https://cloud.google.com/appengine/docs/standard/python/config/appref#handlers_element" rel="nofollow noreferrer">https://cloud.google.com/appengine/docs/standard/python/config/appref#handlers_element</a>) </p>
<p>Does the Google Container Engine have a similarly straightforward way to ensure HTTPS connections, for instance when using the <code>kubectl expose</code> command? (e.g.
<code>kubectl expose deployment my_app --type=LoadBalancer --port [443]</code>) </p>
| <p>Assuming you have a containerized application that already knows how to terminate TLS connections and has TLS certificates, you can use the <code>kubectl expose</code> command you mentioned to create a load balancer on port 443. It should work.</p>
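<p>For example, a sketch of that first case might look like the following, assuming the app terminates TLS itself on a hypothetical container port 8443:</p>
<pre><code>kubectl expose deployment my_app \
  --type=LoadBalancer \
  --port=443 \
  --target-port=8443
</code></pre>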
<p>If you do not have TLS certificates and you're expecting Google Cloud to terminate the TLS for you, that is possible as well. You can use kube-lego to fetch TLS certificates from LetsEncrypt for free and create a kubernetes <code>Ingress</code> resource which later configures the Cloud Load Balancer to terminate the TLS for you. You can find a tutorial here: <a href="https://github.com/jetstack/kube-lego/tree/master/examples/gce" rel="nofollow noreferrer">https://github.com/jetstack/kube-lego/tree/master/examples/gce</a></p>
|
<p>In the pre-k8s, pre-container world, I have a cloud VM that runs nginx and lets an authorized user scp new content into the webroot.</p>
<p>I'd like to build a similar setup in a k8s cluster to host static files, with the goal that:</p>
<ol>
<li>An authorized user can scp new files in</li>
<li>These files are statically served on the web</li>
<li>These files are kept in a persistent volume so they don't disappear when things restart</li>
</ol>
<p>I can't seem to figure out a viable combination of storage class + containers to make this work. I'd definitely appreciate any advice!</p>
<hr>
<h2>Update</h2>
<p>What I didn't realize is that two containers running in the same pod can both have the same <code>gcePersistentDisk</code> mounted as read/write. So my solution in the end looks like one nginx container running in the same pod as an sshd container that can write to the nginx webroot. It's been working great so far.</p>
| <p>I think you're trying to fit a square peg into a round hole here.</p>
<p>Essentially, you're building an FTP server (albeit with scp rather than FTP). </p>
<p>Kubernetes is designed to orchestrate containers. </p>
<p>The two don't really overlap at all.</p>
<p>Now, if you're really intent on doing this, you could hack something together by creating a docker container running an ssh daemon, plus nginx running under supervisor. The layer you need to be concentrating on is getting your existing VM setup replicated in a docker container. You can then run it on Kubernetes and attach a persistent volume. </p>
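<p>If you do go down that road on Kubernetes, a rough sketch of the pod described in the question's update could look like this: two containers in one pod sharing a <code>gcePersistentDisk</code> as the webroot. The sshd image and disk name are placeholders, and the disk must already exist in GCE:</p>
<pre><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: static-site
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: webroot
      mountPath: /usr/share/nginx/html
  - name: sshd
    image: my-sshd-image        # hypothetical image running an ssh daemon for scp uploads
    ports:
    - containerPort: 22
    volumeMounts:
    - name: webroot
      mountPath: /data/webroot
  volumes:
  - name: webroot
    gcePersistentDisk:
      pdName: static-site-disk  # hypothetical pre-created GCE persistent disk
      fsType: ext4
EOF
</code></pre>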
|
<p>I was able to get it working following NFS example in Kubernetes.</p>
<p><a href="https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs</a></p>
<p>However, when I want to automate all the steps, I need to find the IP and update <code>nfs-pv.yaml</code> PV file with the hard coded IP address as mentioned in the example link page.</p>
<blockquote>
<p>Replace the invalid IP in the nfs PV. (In the future, we'll be able to
tie these together using the service names, but for now, you have to
hardcode the IP.)</p>
</blockquote>
<p>Now, I wonder that <strong>how can we tie these together using the services names</strong>?</p>
<p>Or, <strong>it is not possible</strong> at the latest version of Kubernetes (as of today, the latest stable version is <a href="https://github.com/kubernetes/kubernetes/releases" rel="nofollow noreferrer">v1.6.2</a>) ?</p>
<p>I got it working after I added the <strong>kube-dns</strong> address to each minion/node where <strong>Kubernetes</strong> is running. After logging in to each minion, update the <strong>resolv.conf</strong> file as follows:</p>
<pre><code>cat /etc/resolv.conf
# Generated by NetworkManager
search openstacklocal localdomai
nameserver 10.0.0.10 # I added this line
nameserver 159.107.164.10
nameserver 153.88.112.200
....
</code></pre>
<p>I am not sure it is the best way, but it works.</p>
<p>Any better solution is welcome.</p>
|
<p>I have a problem: I want to install Helm on Kubernetes, but when I run the command <code>helm init --upgrade</code> I get this error:</p>
<pre><code>Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/repository/repositories.yaml
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not
a valid chart repository or cannot be reached: Get https://kubernetes-
charts.storage.googleapis.com/index.yaml: dial tcp 216.58.197.80:443: i/o
timeout
</code></pre>
<p>I suppose that the proxy settings aren't set, but I can't find how to set them.</p>
<p>Any idea?</p>
<p>Thanks for your help,</p>
<p>sincerely,</p>
<p>Killer_Minet</p>
<p>Maybe you have an internal proxy? If so, you want to set the <code>https_proxy</code> environment variable, as in <code>https_proxy=<your proxy> helm init --upgrade</code>. You can also set it globally with <code>export https_proxy=<your proxy></code>.</p>
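<p>For example (the proxy host and port here are placeholders for your environment):</p>
<pre><code>export https_proxy=http://proxy.example.com:3128
helm init --upgrade
</code></pre>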
|
<p>I have 3 kube masters and 5 agent nodes running. When deploying one of the pods, it fails to start up with the below message:</p>
<pre><code>2017-03-23T01:47:25.164033000Z I0323 01:47:25.160242 1 main.go:41] Starting NGINX Ingress controller Version 0.7.0
2017-03-23T01:47:25.165148000Z F0323 01:47:25.164609 1 main.go:55] Failed to create client: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory.
</code></pre>
<p>How do I generate certs for 3 masters? I tried on one of the masters and copied the files to the other 2 servers, but kube-apiserver failed to start up.</p>
<pre><code>./make-ca-cert.sh master1_ip IP:master2_ip,IP:master3_ip,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local

/etc/kubernetes/apiserver config
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"

/etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key"

/srv/kubernetes files
kubernetes]# ls -ltr
total 28
-rw-rw----. 1 root root 1216 Mar 21 15:12 ca.crt
-rw-rw----. 1 root root 1704 Mar 21 15:12 server.key
-rw-rw----. 1 root root 4870 Mar 21 15:12 server.cert
-rw-------. 1 root root 1704 Mar 21 15:12 kubecfg.key
-rw-------. 1 root root 4466 Mar 21 15:12 kubecfg.crt

# kubectl get serviceaccounts
NAME      SECRETS   AGE
default   0         11d
</code></pre>
<p>You generate certificates on one machine and then copy them over to the others. What you have done is the right thing.</p>
<p>But when you generate the server certificates make sure you put the IP address or the hostnames of the machines.</p>
<p><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/02-certificate-authority.md" rel="nofollow noreferrer">Here</a> is an awesome tutorial that you can follow to do that. It's not from the official docs but has the credibility of official docs.</p>
|
<p>I’m trying to install a Helm chart from a Google Storage URL <code>(https://storage.cloud.google.com/bucket-name/php-1.5.tgz)</code>, as per the example <code>A full URL (helm install https://example.com/charts/foo-1.2.3.tgz)</code> <a href="https://github.com/kubernetes/helm/blob/master/docs/using_helm.md#more-installation-methods" rel="nofollow noreferrer">in the documentation here</a>, but I’m getting the error <code>Error: gzip: invalid header</code>.</p>
<p>I found that the link format <code>https://storage.cloud.google.com/bucket-name/php-1.5.tgz</code> was only for browsers and redirects; a direct link to the same file can be obtained using the format <code>https://storage.googleapis.com/bucket-name/php-1.5.tgz</code>.</p>
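<p>So the install from the question becomes:</p>
<pre><code>helm install https://storage.googleapis.com/bucket-name/php-1.5.tgz
</code></pre>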
|
<p>I am using kubernetes helm to deploy apps to my cluster. Everything works fine from my laptop when helm uses the cluster's kube-config file to deploy to the cluster.</p>
<p>I want to use helm from my CI/CD server (which is separate from my cluster) to automatically deploy apps to my cluster. I have created a k8s service account for my CI/CD server to use. But how do I create a kube-config file for the service account so that helm can use it to connect to my cluster from my CI/CD server??</p>
<p>Or is this not the right way to use Helm from a CI/CD server? </p>
| <p>Helm works by using the installed kubectl to talk to your cluster. That means that if you can access your cluster via kubectl, you can use helm with that cluster. </p>
<p>Don't forget to make sure you're using the proper context in case you have more than one cluster in your kubeconfig file. You can check that by running <code>kubectl config current-context</code> and comparing that to the cluster details in the kubeconfig.</p>
<p>You can find more details in Helm's docs, check the <a href="https://github.com/kubernetes/helm/blob/master/docs/quickstart.md" rel="nofollow noreferrer">quick start guide</a> for more information.</p>
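<p>If you do want a dedicated kubeconfig for the service account on the CI/CD server, one hedged sketch is to build it from the service account's token with <code>kubectl config</code>. Every name, path and URL below is a placeholder:</p>
<pre><code># assumes a service account named ci-deployer already exists in namespace ci
SECRET=$(kubectl -n ci get serviceaccount ci-deployer -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n ci get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)

kubectl config set-cluster my-cluster --server=https://api.my-cluster.example.com \
  --certificate-authority=ca.crt --embed-certs=true --kubeconfig=ci-kubeconfig
kubectl config set-credentials ci-deployer --token="$TOKEN" --kubeconfig=ci-kubeconfig
kubectl config set-context ci --cluster=my-cluster --user=ci-deployer \
  --namespace=ci --kubeconfig=ci-kubeconfig
kubectl config use-context ci --kubeconfig=ci-kubeconfig
</code></pre>
<p>Pointing <code>KUBECONFIG</code> at that file on the CI/CD server should then let both kubectl and helm talk to the cluster as that service account (subject to whatever RBAC rules apply).</p>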
|
<p>I have kubernetes pod where I am mounting my app source code as git volume. I tried various setups for how to put the source code into the pod and git volume was the one I ended up with in the end.</p>
<p>But now I have an issue with data. My app has a files directory in it(empty) and I need to mount a volume(fuse) in there. But since the destination is located on the git volume I cannot do this. So I am wondering how should I redesign my app?</p>
<p>Should I build the app's source code directly into the image so that I can then mount the data volume into it or is there some way I can mount volume into another so that I do not have to adjust anything?</p>
<p>I cannot move the target directory elsewhere since it has to be accessible by the app in its directory and also from web.</p>
<p>What I usually do is add the sources to the docker image when building the image. This is a straightforward process, and you can always see the images as a black box in terms of deployment. What this achieves is effectively decoupling the preparation of the image and deploying/updating it at run time as two different processes.</p>
<p>I believe this is the reason why kubernetes makes it easy to perform rolling upgrades for rolling out new software versions by exchanging a complete image, rather than trying to fix up the contents of a container. It is as easy as using the following command: </p>
<p><code>kubectl set image deployment/my-nginx-deployment my-nginx-image=TagXX</code></p>
<p>Replacing images also ensures that any debris is cleaned up (e.g. growing logs, temporary files etc.) and it allows you to bring along way more changes instead of just changing sources (e.g. upgrading server software versions).</p>
<p>It also allows you to perform testing/staging based on the exact images and not only a code deployment on servers that may not be identical to production servers.</p>
<p>You can read up on it at <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">this page</a> under <code>Updating a Deployment</code>.</p>
|
<p>I am trying to install Kubernetes on Ubuntu 16.04 VM, I tried this <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/kubeadm/</a>, but the API server does not start.</p>
<p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
<p>Is there a good procedure on how to install Kubernetes on an Ubuntu VM?</p>
| <p>You probably haven't set up the credentials for <code>kubectl</code>.</p>
<pre><code>sudo cp /etc/kubernetes/admin.conf $HOME/ && sudo chown $(id -u):$(id -g) $HOME/admin.conf; if ! fgrep -q KUBECONFIG= $HOME/.bashrc; then echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc; fi;. $HOME/.bashrc
</code></pre>
<p>It copies <code>/etc/kubernetes/admin.conf</code> to the home directory and makes it readable by the current user. It also adjusts <code>.bashrc</code> to set the <code>KUBECONFIG</code> environment variable to point to that <code>admin.conf</code>.</p>
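<p>The same steps broken out, purely as a readability aid:</p>
<pre><code>sudo cp /etc/kubernetes/admin.conf "$HOME/"
sudo chown "$(id -u):$(id -g)" "$HOME/admin.conf"
export KUBECONFIG=$HOME/admin.conf   # add this line to ~/.bashrc to make it permanent
</code></pre>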
|
<p>I am exploring the <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/KubernetesClient.java#L104" rel="nofollow noreferrer">(undocumented?) <code>events()</code> API</a> in <a href="https://github.com/fabric8io/kubernetes-client" rel="nofollow noreferrer">Fabric8's Kubernetes client project</a>.</p>
<p>Specifically, I see that I can do something like the following:</p>
<pre><code>client.events().inAnyNamespace().watch(new Watcher<Something>() {

    @Override
    public final void eventReceived(final Action action, final Something something) {
    }

    @Override
    public final void onClose(final KubernetesClientException kubernetesClientException) {
        if (kubernetesClientException != null) {
            // log? throw?
        }
    }

});
</code></pre>
<p>What are the permitted values of <code>something</code> and <code>Something</code> for something useful to happen? I'm assuming they are <em>supposed</em> to be things like Pods, Services, etc. but I'm not sure.</p>
<p><a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Watcher.java#L20" rel="nofollow noreferrer"><code>Watcher</code>'s sole type parameter is declared as <code><T></code></a>, so it would appear I could create a new <code>Watcher<Integer></code>, but I'm willing to bet money that will never be called. This suggests that there is actually a bound in practice on <code><T></code>, but I don't know what it is, or why it would have been omitted if so.</p>
<p>If I had to guess, I'd guess from the parameter name, <code>resource</code>, that it would be something like <code>T extends</code><a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/dsl/Resource.java" rel="nofollow noreferrer"><code>Resource</code></a><code><?, ?></code> but again, that's only a guess.</p>
<p>Thanks for any pointers, particularly to other documentation I'm sure I've missed.</p>
<p><strong>Update #1</strong>: From banging around in the source code, I can see that the only place that a <code>Watcher.Action</code>'s <code>eventReceived()</code> method is called <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/dsl/internal/WatchConnectionManager.java#L207-L217" rel="nofollow noreferrer">forces the payload to be considered to be a <code>HasMetadata</code> object</a>. Maybe that's my answer?</p>
<p>You can watch a particular pod or a particular job, for example; the <code>T</code> type in that case is <code>Pod</code> or <code>Job</code>, respectively. Try:</p>
<pre><code>kube.extensions().jobs().createNew()...done().watch(new Watcher<Job>(){...})
</code></pre>
|
<p>I have a Kubernetes Ingress resource where I'm trying to redirect all non-www traffic to www subdomain for url canonicalization. So all traffic on <code>example.com</code> should be rewritten to <code>www.example.com</code>. I can't seem to figure out how to use the <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/rewrite" rel="nofollow noreferrer">Ingress rewrite example</a> properly to achieve this.</p>
<p>My Ingress (JSON format):</p>
<pre><code>{
    "apiVersion": "extensions/v1beta1",
    "kind": "Ingress",
    "metadata": {
        "name": "example-staging",
        "annotations": {
            "ingress.kubernetes.io/rewrite-target": "/",
            "kubernetes.io/ingress.global-static-ip-name": "example-static-ip"
        }
    },
    "spec": {
        "rules": [
            {
                "host": "www.example.nl",
                "http": {
                    "paths": [
                        {
                            "path": "/",
                            "backend": {
                                "serviceName": "example-service",
                                "servicePort": 80
                            }
                        }
                    ]
                }
            }
        ]
    }
}
</code></pre>
| <p>The <code>ingress.kubernetes.io/rewrite-target</code> is used for rewriting the request URI, not the host.</p>
<p>It looks like from your link you're using the nginx ingress controller. You can get the effect you want by adding a second <code>Ingress</code> for <code>example.nl</code> that uses the <code>ingress.kubernetes.io/configuration-snippet</code> annotation to add the 301.</p>
<pre><code>{
    "apiVersion": "extensions/v1beta1",
    "kind": "Ingress",
    "metadata": {
        "name": "example-staging-wwwredir",
        "annotations": {
            "ingress.kubernetes.io/rewrite-target": "/",
            "ingress.kubernetes.io/configuration-snippet": "return 301 $scheme://www.example.nl$request_uri;"
        }
    },
    "spec": {
        "rules": [
            {
                "host": "example.nl",
                "http": {
                    "paths": [
                        {
                            "path": "/",
                            "backend": {
                                "serviceName": "example-service",
                                "servicePort": 80
                            }
                        }
                    ]
                }
            }
        ]
    }
}
</code></pre>
|
<p>Following snippet is from kubernetes official documentation (<a href="http://kubernetes.io/docs/user-guide/volumes/#gitrepo" rel="nofollow">http://kubernetes.io/docs/user-guide/volumes/#gitrepo</a>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
</code></pre>
<p>The above will mount the complete git repo inside the container at /mypath. Is there a way to mount only a specific sub-directory of the git repo inside the container at /mypath?</p>
<p>Yes, there is a way to do it. See the following example.
I have a <a href="https://github.com/surajssd/hitcounter" rel="noreferrer">git repo</a> which has a subdirectory <a href="https://github.com/surajssd/hitcounter/tree/master/configs" rel="noreferrer"><code>configs</code></a>.</p>
<p>So here is pod file which I am using:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath/
      name: git-volume
      subPath: "hitcounter/configs"
  volumes:
  - name: git-volume
    gitRepo:
      repository: "https://github.com/surajssd/hitcounter"
      revision: "9fd11822b822c94853b1c74ceb53adb8e1d2cfc8"
</code></pre>
<p>Note the field <code>subPath</code> in <code>containers</code> in <code>volumeMounts</code>. There you specify what sub-directory from the volume you want to mount at <code>/mypath</code> inside your container.</p>
<p>docs say:</p>
<pre><code>$ kubectl explain pod.spec.containers.volumeMounts.subPath
FIELD: subPath <string>
DESCRIPTION:
Path within the volume from which the container's volume should be mounted.
Defaults to "" (volume's root).
</code></pre>
<p><strong>OR</strong></p>
<p>create pod config file like this</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath/
      name: git-volume
      subPath: "configs"
  volumes:
  - name: git-volume
    gitRepo:
      repository: "https://github.com/surajssd/hitcounter"
      revision: "9fd11822b822c94853b1c74ceb53adb8e1d2cfc8"
      directory: "."
</code></pre>
<p>The difference here is that I have specified <code>directory: "."</code>, which ensures that the mount point <code>/mypath</code> will become the git repo; the <code>subPath: "configs"</code> has also changed, since there is no extra <code>hitcounter</code> directory.</p>
<pre><code>$ kubectl explain pod.spec.volumes.gitRepo.directory
FIELD: directory <string>
DESCRIPTION:
Target directory name. Must not contain or start with '..'. If '.' is
supplied, the volume directory will be the git repository. Otherwise, if
specified, the volume will contain the git repository in the subdirectory
with the given name.
</code></pre>
<p>HTH</p>
|
<p>I am working on setting up a kubernetes cluster using the following stuff:</p>
<ul>
<li><strong>AWS</strong> as a cloud provider</li>
<li><strong>kops (Version 1.6.0-alpha, just to test)</strong> as a cli tool to create and manage cluster</li>
<li><strong>kubectl (server : v1.6.2 and client : 1.6.0 )</strong> to control my cluster</li>
<li>Ubuntu 16 as a local OS</li>
</ul>
<p>I have a simple k8s cluster with the following stuff:</p>
<ul>
<li><strong>AWS region</strong> : us-west-2</li>
<li>One <strong>master</strong> over : t2.medium
/ k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09</li>
<li>One <strong>node</strong> onver : t2.medium
/ k8s-1.5-debian-jessie-amd64-hvm-ebs-2017-01-09</li>
</ul>
<p>I also have some pods deployed on the cluster, and I created a jmeter stress test to generate artificial traffic.</p>
<p>My question is: <strong>How can I create an auto-scaling node on a k8s cluster using kops over AWS?</strong></p>
<p>I just found the following add-on, <a href="https://github.com/kubernetes/kops/tree/master/addons/cluster-autoscaler" rel="nofollow noreferrer">kops addons</a>, in the kops repository. I deployed it as the docs say and it is available.</p>
<p>My parameters were:</p>
<pre><code>CLOUD_PROVIDER=aws
IMAGE=gcr.io/google_containers/cluster-autoscaler:v0.4.0
MIN_NODES=1
MAX_NODES=3
AWS_REGION=us-east-2
GROUP_NAME="<the-auto-scaling-group-Name>"
SSL_CERT_PATH="/etc/ssl/certs/ca-certificates.crt" # (/etc/ssl/certs for gce)

$ kubectl get deployments --namespace=kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
cluster-autoscaler     1         1         1            1           3h
dns-controller         1         1         1            1           3h
kube-dns               2         2         2            2           3h
kube-dns-autoscaler    1         1         1            1           3h
kubernetes-dashboard   1         1         1            1           3h
</code></pre>
<p>However, after stressing my node using a pod with stress containers, nothing happens (100% CPU utilization) and my auto-scaling group is not modified.</p>
<p><a href="https://i.stack.imgur.com/BXbib.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BXbib.png" alt="cpu utilization"></a></p>
<p><a href="https://i.stack.imgur.com/DGS7f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGS7f.png" alt="auto-scaling group"></a></p>
<p>On the other hand, I exported the kops output to terraform, but there are no auto-scaling policies to generate auto-scaling based on CPU utilization.</p>
<p>Finally, I could find an <a href="http://blog.kubernetes.io/2016/07/autoscaling-in-kubernetes.html" rel="nofollow noreferrer">entry</a> in the k8s blog which indicates that it will be supported in the future on AWS, but there is no other announcement about it.</p>
<p>Any suggestions or experience with that task on AWS and kops? Next I will try to generate auto-scaling policies manually just to test.</p>
<p>Firstly, you <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#releases" rel="nofollow noreferrer">should use</a> the autoscaler <code>gcr.io/google_containers/cluster-autoscaler:v0.5.0</code> when using Kubernetes <code>1.6.x</code>.</p>
<p>Secondly, from my understanding, the autoscaler itself only scales the ASG if there is a pod in the <code>Pending</code> state because it can't fit on any existing node.</p>
<p>For your use-case, <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a> will scale up your application (which is being stressed) when under high load; make sure to set the <code>requests</code> portion in the podspec. Once the autoscaler sees that newly scaled pods don't fit a node, it will launch a new node.</p>
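<p>A minimal sketch of that combination (the deployment name and thresholds are placeholders, and the deployment's containers are assumed to declare <code>resources.requests.cpu</code>):</p>
<pre><code># scale my-app between 1 and 10 replicas, targeting 80% of the requested CPU
kubectl autoscale deployment my-app --cpu-percent=80 --min=1 --max=10
</code></pre>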
<p>Disclaimer: I haven't played with Horizontal Pod Autoscaling yet.</p>
|
<p>I have Kubernetes working well in two different environments, namely in my local environment (MacBook running minikube) and as well as on Google's Container Engine (GCE, Kubernetes on Google Cloud). I use the MacBook/local environment to develop and test my YAML files and then, upon completion, try them on GCE. </p>
<p>Currently I need to work with each environment individually: I need to edit the YAML files in my local environment and, when ready, (git) clone them to a GCE environment and then use/deploy them. This is a somewhat cumbersome process.</p>
<p>Ideally, I would like to use kubectl from my Macbook to easily switch between the local minikube or GCE Kubernetes environments and to easily determine where the YAML files are used. Is there a simple way to switch contexts to do this?</p>
| <p>You can switch from local (minikube) to gcloud and back with:</p>
<p><code>kubectl config use-context CONTEXT_NAME</code></p>
<p>to list all contexts:</p>
<p><code>kubectl config get-contexts</code></p>
<p>You can create different environments for local and gcloud and put them in separate yaml files.</p>
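<p>As a usage sketch, you can also target a context per command instead of switching the default (the GKE context name below is a hypothetical example of the <code>gke_PROJECT_ZONE_CLUSTER</code> naming that <code>gcloud container clusters get-credentials</code> typically creates):</p>
<pre><code>kubectl config get-contexts
kubectl --context=minikube apply -f app.yaml                        # local cluster
kubectl --context=gke_my-project_us-central1-a_my-cluster \
  apply -f app.yaml                                                 # GCE/GKE cluster
kubectl config use-context minikube                                 # or switch the default
</code></pre>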
|
<p>I found that <strong>startup scripts</strong> can be added to <strong>Google Compute instances</strong> using either the <strong>console</strong> or the <strong>CLI (gcloud)</strong>. I want to add <strong>startup scripts</strong> to <strong>Google Container Engine</strong>.</p>
<p>The <strong>goal</strong> is to be <strong>notified</strong> when the Google Container Engine has changed its state to <strong>Running</strong>. I thought one efficient way is to use startup scripts in Container Engine, as these scripts will only be executed when the container's status is changed to running.</p>
<p>Any idea how to add startup scripts to container engine or any other way of notifying when the container's status changes to running.</p>
<p>First of all, your question is fairly complicated. The concept of startup scripts does not belong to the containers world. <strong>As far as I know you can't add startup scripts in Google Container Engine</strong>. This is because Container Engine instances are immutable (e.g. you can't, or you are not supposed to, modify the operating system; you should just run containers).</p>
<p>If you're trying to run scripts when a container starts/stops you need to forget about startup scripts concept in the Compute Engine world. You can use <strong>container lifecycle hooks</strong> in Kubernetes (the orchestrator running in Container Engine). </p>
<p>Here's documentation and tutorial about it:
<a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/</a>
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/</a></p>
|
<p>The <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">kubernetes documentation</a> describes the two types of components that live within a kubernetes cluster: master components and node components. I wasn't able to find diagrams that accurately and completely described the components as described in the docs. The <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/design/architecture.md" rel="nofollow noreferrer">only official diagram</a> I found hasn't been updated for 1.5 years.</p>
<p>Personally, I find diagrams a very useful resource to understand concepts. Therefore, I created my own diagrams and would like to know, if I'm missing anything.</p>
<p>Thank you.</p>
<p><strong>Kubernetes High-Level Architecture</strong>
<a href="https://i.stack.imgur.com/2ZbC7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ZbC7.png" alt="Kubernetes High-Level Architecture"></a></p>
<p><strong>Kubernetes Master Component</strong>
<a href="https://i.stack.imgur.com/rUvry.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rUvry.png" alt="Kubernetes Master Component"></a></p>
<p><strong>Kubernetes Node Component</strong>
<a href="https://i.stack.imgur.com/jbOe3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jbOe3.png" alt="Kubernetes Node Component"></a></p>
| <p>My advice:</p>
<ul>
<li>There could be any number of master nodes, so I would make this visible somehow on the diagram.</li>
<li>kube-proxy and kubelet run on every node, even on master nodes.</li>
<li>docker can be interchanged with rkt, so I would show that as well.</li>
<li>fluentd is not part of the core architecture.</li>
</ul>
|
<p>I have 2 namespaces and 1 pod, 1 service running in each.</p>
<p>Example</p>
<pre><code>Namespace 1: default
Pod: pod1
Service: pod1service
Namespace 2: test
Pod: pod1
Service: pod1service
</code></pre>
<p>I can actually make HTTP request from namespace2 pod to namespace1 pod.</p>
<pre><code>curl -H "Content-Type: application/json" -X GET http://pod1service.default.svc.cluster.local/some/api
</code></pre>
<p>How do I disable communication between the two different namespaces?</p>
| <p>You need to configure <a href="https://kubernetes.io/docs/concepts/services-networking/networkpolicies/" rel="nofollow noreferrer">network policies</a>. For that to work you also need to use a <a href="https://kubernetes.io/docs/concepts/cluster-administration/addons/" rel="nofollow noreferrer">network addon</a> that supports policies.</p>
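<p>As a minimal sketch, assuming your network addon enforces policies (on 1.6 the namespace also needs the <code>DefaultDeny</code> isolation annotation; on 1.7+ the API group is <code>networking.k8s.io/v1</code>), a policy that only allows traffic from pods in the same namespace could look like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: default
spec:
  podSelector: {}        # applies to every pod in the namespace
  ingress:
  - from:
    - podSelector: {}    # only pods from this same namespace may connect
</code></pre>
<p>Applying the same policy in the <code>test</code> namespace would block the cross-namespace curl from your example.</p>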
|
<p>I am exploring <a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a> Cluster orchestration and I am familiar with docker based containerization techniques.</p>
<p>Normally when starting docker containers, we pass different CLI Arguments(port options + Env variables) something like below</p>
<pre><code>docker run --name myService -p 8080:8080 -v /var/lib/otds:/usr/local/otds -e VIRTUAL_PORT=8080 myImage
</code></pre>
<p>When I am trying to up the same on Kubernetes Cluster(using its CLI - kuberctl) I am seeing errors saying that these arguments are not recognized</p>
<p>I am trying something like below</p>
<pre><code>kuberctl run myService -p 8080:8080 -v /var/lib/otds:/usr/local/otds -e VIRTUAL_PORT=8080 --image==myImage
</code></pre>
<p>I am looking for help on how to pass docker's CLI Arguments to KuberCTL </p>
| <p><code>kubectl run</code> is just a shorthand convenience method. Normally you should be writing pod specs in YAML/JSON.</p>
<p>Based on your unfamiliarity with basics, I would highly recommend sitting down and following through some of the training material at <a href="https://kubernetes.io/docs/tutorials/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/</a></p>
<p>As for your question, in a pod spec, the <code>command</code>/<code>args</code> field is what you're looking for and it is documented here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/define-command-argument-container/</a></p>
<p>Here's a sample:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: demo
spec:
containers:
- name: foo
image: alpine
command: ["date"]
</code></pre>
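<p>For the specific flags in your <code>docker run</code> example, a hedged sketch of the equivalent pod spec would be (the image name and host path are copied from your command, adjust as needed):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
  - name: myservice
    image: myImage
    env:                    # -e VIRTUAL_PORT=8080
    - name: VIRTUAL_PORT
      value: "8080"
    ports:                  # -p 8080:8080 (expose it with a Service/NodePort)
    - containerPort: 8080
    volumeMounts:           # -v /var/lib/otds:/usr/local/otds
    - name: otds
      mountPath: /usr/local/otds
  volumes:
  - name: otds
    hostPath:
      path: /var/lib/otds
</code></pre>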
|
<p>I was able to get it working following NFS example in Kubernetes.</p>
<p><a href="https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs</a></p>
<p>However, when I want to automate all the steps, I need to find the IP and update <code>nfs-pv.yaml</code> PV file with the hard coded IP address as mentioned in the example link page.</p>
<blockquote>
<p>Replace the invalid IP in the nfs PV. (In the future, we'll be able to
tie these together using the service names, but for now, you have to
hardcode the IP.)</p>
</blockquote>
<p>Now, I wonder that <strong>how can we tie these together using the services names</strong>?</p>
<p>Or, <strong>it is not possible</strong> at the latest version of Kubernetes (as of today, the latest stable version is <a href="https://github.com/kubernetes/kubernetes/releases" rel="nofollow noreferrer">v1.6.2</a>) ?</p>
| <p>You can do this with the help of <code>kube-dns</code>;
first check whether its service is running:</p>
<p><code>
kubectl get svc --namespace=kube-system
</code></p>
<p>and that the kube-dns pods are running as well:</p>
<p><code>
kubectl get pods --namespace=kube-system
</code></p>
<p>You have to add the kube-dns nameserver to the resolver configuration of each node in the cluster.
For more troubleshooting, follow this document:
<a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
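<p>Once the nodes can resolve cluster DNS names, the PV can reference the NFS service by name instead of a hard-coded IP. A rough sketch, assuming the <code>nfs-server</code> Service from the example lives in the <code>default</code> namespace:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # service name instead of a hard-coded IP; requires node-level cluster DNS
    server: nfs-server.default.svc.cluster.local
    path: "/"
</code></pre>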
|
<p>I'm trying to figure out the best way to work with Docker containers in my dev environment, but deploy versioned images of my app to Kubernetes. Here's what I have so far:</p>
<p>I'm using the following Docker image as a base - <a href="https://github.com/shufo/docker-phoenix" rel="nofollow noreferrer">https://github.com/shufo/docker-phoenix</a>. When I create a container for my dev environment (following the instructions) I run:</p>
<pre><code>docker run -d -p 4000:4000 -v $(pwd):/app -w /app shufo/phoenix
</code></pre>
<p>As far as I understand, for Kubernetes, my app needs to be contained in a versioned image. In other words, the code needs to be in the container that's generated, rather than being passed up to the container in a volume?</p>
<p>So I have a Dockerfile that looks like this:</p>
<pre><code>FROM shufo/phoenix
MAINTAINER Hamish Murphy <[email protected]>
COPY . /app
WORKDIR /app
</code></pre>
<p>After building the image (with a version number), I can use this to create a Deployment in Kubernetes.</p>
<p>So this is a home baked solution. Is there a common pattern for this?</p>
| <p>In my opinion you are on the right track. Including the code in the image is probably considered best practice. I have recently written an <a href="https://stackoverflow.com/questions/43626583/how-to-mount-volume-into-the-source-code-of-the-app/43627520#43627520">answer to another question</a> describing some of the benefits.</p>
<p>In practice people are using all kinds of ways to serve the application to a container. It is possible to use attached volumes or have a Git in the container pull/update the code when deployed but I believe you would need some good reason (that I can't think of) for that being preferable.</p>
|
<p>I have 3 nodejs grpc server pods and a headless kubernetes service for the grpc service (it returns all 3 pod IPs; DNS tested with getent hosts from within the pod). However, all grpc client requests always end up at a single server.</p>
<p>According to <a href="https://stackoverflow.com/a/39756233/2952128">https://stackoverflow.com/a/39756233/2952128</a> (last paragraph) round robin per call should be possible Q1 2017. I am using grpc 1.1.2</p>
<p>I tried to give <code>{"loadBalancingPolicy": "round-robin"}</code> as options for <code>new Client(address, credentials, options)</code> and use <code>dns:///service:port</code> as address. If I understand documentation/code correctly this should be handed down to the c-core and use the newly implemented round robin channel creation. (<a href="https://github.com/grpc/grpc/blob/master/doc/service_config.md" rel="noreferrer">https://github.com/grpc/grpc/blob/master/doc/service_config.md</a>)</p>
<p>Is this how round-robin load balancer is supposed to work now? Is it already released with grpc 1.1.2?</p>
| <p>After diving deep into Grpc-c core code and the nodejs adapter I found that it works by using the option key <code>"grpc.lb_policy_name"</code>. Therefore, constructing the gRPC client with</p>
<pre><code>new Client(address, credentials, {"grpc.lb_policy_name": "round_robin"})
</code></pre>
<p>works.
Note that in my original question I also used <code>round-robin</code> instead of the correct <code>round_robin</code></p>
<p>I am still not completely sure how to set the <code>serviceConfig</code> from the service side with nodejs instead of using client (channel) option override.</p>
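<p>For completeness, the headless Service that makes a <code>dns:///</code> address resolve to all pod IPs (so client-side round robin has something to balance over) looks roughly like this; names and port are placeholders:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
spec:
  clusterIP: None        # headless: DNS returns the individual pod IPs
  selector:
    app: my-grpc-server
  ports:
  - port: 50051
    targetPort: 50051
</code></pre>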
|
<p>Based on these steps (<a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/kubeadm/</a>) I have installed Kubernetes on a CentOS 7 box and ran the kubeadm init command.</p>
<p>But the node is not in Ready status. When I looked at /var/log/messages, I got the message below.</p>
<pre><code>Apr 30 22:19:38 master kubelet: W0430 22:19:38.226441 2372 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 30 22:19:38 master kubelet: E0430 22:19:38.226587 2372 kubelet.go:2067] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
</code></pre>
<p>My kubelet is running with these arguments.</p>
<pre><code> /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cgroup-driver=systemd
</code></pre>
<p>On my server I didn't see the /etc/cni/net.d directory. In the /opt/cni/bin directory I see these files.</p>
<pre><code># ls /opt/cni/bin
bridge cnitool dhcp flannel host-local ipvlan loopback macvlan noop ptp tuning
</code></pre>
<p>How can I clear this error message?</p>
| <p>It looks like you've chosen flannel for CNI networking.
Please check that you specified <code>--pod-network-cidr=10.244.0.0/16</code> when running <code>kubeadm init</code>.</p>
<p>Also check that the ConfigMap for flannel has been created, as in <a href="https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml" rel="noreferrer">https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml</a></p>
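<p>A rough outline of the flannel path (the exact manifest URL may differ for your version):</p>
<pre><code># initialise the control plane with the CIDR flannel expects
kubeadm init --pod-network-cidr=10.244.0.0/16

# then install the flannel ConfigMap/DaemonSet (plus the RBAC manifest if required)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>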
|
<p>I have set up this 3-node cluster (<a href="http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/vagrant/" rel="nofollow noreferrer">http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/vagrant/</a>).</p>
<p>After restarting my nodes, the KubeDNS service is not starting. The log didn't show much information.</p>
<p>I am getting the message below:</p>
<pre><code>$ kubectl logs --namespace=kube-system kube-dns-v19-sqx9q -c kubedns
Error from server (BadRequest): container "kubedns" in pod "kube-dns-v19-sqx9q" is waiting to start: ContainerCreating
</code></pre>
<p>nodes are running.</p>
<pre><code>$ kubectl get nodes
NAME STATUS AGE VERSION
172.18.18.101 Ready,SchedulingDisabled 2d v1.6.0
172.18.18.102 Ready 2d v1.6.0
172.18.18.103 Ready 2d v1.6.0
$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
calico-node-6rhb9 2/2 Running 4 2d
calico-node-mbhk7 2/2 Running 93 2d
calico-node-w9sjq 2/2 Running 6 2d
calico-policy-controller-2425378810-rd9h7 1/1 Running 0 25m
kube-dns-v19-sqx9q 0/3 ContainerCreating 0 25m
kubernetes-dashboard-2457468166-rs0tn 0/1 ContainerCreating 0 25m
</code></pre>
<p>How can I find what is wrong with DNS service?</p>
<p>Thanks
SR</p>
<p><strong>some more details</strong></p>
<pre><code>Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
31m 31m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 87bd5c4bc5b9d81468170cc840ba9203988bb259aa0c025372ee02303d9e8d4b"
31m 31m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d091593b55eb9e16e09c5bc47f4701015839d83d23546c4c6adc070bc37ad60d"
30m 30m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 69a1fa33f26b851664b2ad10def1eb37b5e5391ca33dad2551a2f98c52e05d0d
30m 30m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: c3b7c06df3bea90e4d12c0b7f1a03077edf5836407206038223967488b279d3d"
28m 28m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 467d54496eb5665c5c7c20b1adb0cc0f01987a83901e4b54c1dc9ccb4860f16d"
28m 28m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1cd8022c9309205e61d7e593bc7ff3248af17d731e2a4d55e74b488cbc115162
27m 27m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1ed4174aba86124055981b7888c9d048d784e98cef5f2763fd1352532a0ba85d
26m 26m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 444693b4ce06eb25f3dbd00aebef922b72b291598fec11083cb233a0f9d5e92d"
25m 25m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 736df24a9a6640300d62d542e5098e03a5a9fde4f361926e2672880b43384516
8m 8m 1 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 8424dbdf92b16602c7d5a4f61d21cd602c5da449c6ec3449dafbff80ff5e72c4
2h 1m 49 kubelet, 172.18.18.102 Warning FailedSync (events with common reason combined)
2h 2s 361 kubelet, 172.18.18.102 Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-v19-sqx9q_kube-system\" network: the server has asked for the client to provide credentials (get pods kube-dns-v19-sqx9q)"
2h 1s 406 kubelet, 172.18.18.102 Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
</code></pre>
<p><strong>pod describe output</strong></p>
<pre><code>Name: kube-dns-v19-sqx9q
Namespace: kube-system
Node: 172.18.18.102/172.18.18.102
Start Time: Mon, 24 Apr 2017 17:34:22 -0400
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
version=v19
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kube-dns-v19","uid":"dac3d892-278c-11e7-b2b5-0800...
scheduler.alpha.kubernetes.io/critical-pod=
scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status: Pending
IP:
Controllers: ReplicationController/kube-dns-v19
Containers:
kubedns:
Container ID:
Image: gcr.io/google_containers/kubedns-amd64:1.7
Image ID:
Ports: 10053/UDP, 10053/TCP
Args:
--domain=cluster.local
--dns-port=10053
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
dnsmasq:
Container ID:
Image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
Image ID:
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--no-resolv
--server=127.0.0.1#10053
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
healthz:
Container ID:
Image: gcr.io/google_containers/exechealthz-amd64:1.1
Image ID:
Port: 8080/TCP
Args:
-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
-port=8080
-quiet
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 10m
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-r5xws:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r5xws
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
</code></pre>
| <p>The service account mount of <code>/var/run/secrets/kubernetes.io/serviceaccount</code> from the secret <code>default-token-r5xws</code> failed. Check the logs for this secret creation failure.</p>
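<p>A few commands that may help pin down why the mount fails (namespace and secret name are taken from the describe output above; the journalctl call assumes kubelet runs under systemd):</p>
<pre><code># does the secret actually exist in the kube-system namespace?
kubectl get secret default-token-r5xws --namespace=kube-system

# inspect the default service account and the token it references
kubectl describe serviceaccount default --namespace=kube-system

# check the kubelet logs on the node for secret/token errors
journalctl -u kubelet | grep -i secret
</code></pre>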
|
<p>I am new to cloud and have 5 distributed instances on AWS running the same images. I have attached an image of the CPU usage CloudWatch monitor here:</p>
<p><a href="https://i.stack.imgur.com/5KdIl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5KdIl.png" alt="enter image description here"></a></p>
<p>2 of my instances are running at 100% but 3 of them drop to 0%. There are still tasks being processed and I would like the other 3 instances to run without idling until all the tasks are completed. What is going on here and how can I fix it?</p>
| <p>Are you using T2 instances?</p>
<p>T2 instances are designed to provide moderate baseline performance and the capability to burst to significantly higher performance as required by your workload.</p>
<p>You can get more information about the initial CPU credits available for the t2 series of instances from the link below.</p>
<p><a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html" rel="nofollow noreferrer">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html</a></p>
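<p>If they are T2 instances, you can check whether they have exhausted their CPU credits by looking at the <code>CPUCreditBalance</code> metric, for example (instance ID and time range are placeholders):</p>
<pre><code>aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistics Average \
  --period 300 \
  --start-time 2017-05-01T00:00:00Z \
  --end-time 2017-05-02T00:00:00Z
</code></pre>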
|
<p>I have sample apps running on my cluster. I have a webapp pod which has three containers. Each running as a separate springboot webservice. employee, test1 and test2. The service exposing this is shown below</p>
<p><code>apiVersion: v1
kind: Service
metadata:
labels:
name: webapp
name: webappservice
spec:
ports:
- port: 8080
nodePort: 30062
type: NodePort
selector:
name: webapp</code></p>
<p>The pod spec is below - UPDATED to have whole context</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: webapp
labels:
name: webapp
spec:
containers:
-
resources:
limits:
cpu: 0.5
image: kube/employee
imagePullPolicy: IfNotPresent
name: wsemp
ports:
- containerPort: 8080
name: wsemp
-
resources:
limits:
cpu: 0.5
image: kube/test1
imagePullPolicy: IfNotPresent
name: wstest1
ports:
- containerPort: 8081
name: wstest1
imagePullSecrets:
- name: myregistrykey
</code></pre>
<p>My assumption was that the webservice runs on 30062 on the node and depending on the mapping, I'll be able to access the webservice. eg <a href="http://11.168.24.221:30062/employee" rel="nofollow noreferrer">http://11.168.24.221:30062/employee</a> and <a href="http://11.168.24.221:30062/test1/" rel="nofollow noreferrer">http://11.168.24.221:30062/test1/</a></p>
<p>Separate logs from employee container and test1 container below.</p>
<pre><code>s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/employee],methods=[GET]}" onto public java.util.List<employee.model.Employee> employee.controller.EmployeeController.getAll()
s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/test1/],methods=[GET]}" onto public java.util.List<model.Test1> test1.controller.Test1Controller.getAll()
</code></pre>
<p>The issue is that <a href="http://11.168.24.221:30062/employee" rel="nofollow noreferrer">http://11.168.24.221:30062/employee</a> hits the webservice properly, but when I hit <a href="http://11.168.24.221:30062/test1/" rel="nofollow noreferrer">http://11.168.24.221:30062/test1/</a>, it says the test1/ mapping is not available, even though the logs above show that the mapping is clearly available. The error message is a Whitelabel Error Page: "This application has no explicit mapping for /error, so you are seeing this as a fallback."</p>
<p>Anything I am doing wrong?</p>
| <p>Your service yaml clearly shows that you are only exposing port 8080 as NodePort 30062. It is possible to simply add another <code>-port:8081 nodePort:30063</code> to your existing configuration, but since your two services are separate containers anyway, you may prefer to create two separate deployments and services in Kubernetes: one for the employee service and one for your test1 service. That will allow you to develop, deploy and test them separately. It is also generally not recommended to use multiple containers in a POD (with some exceptions), see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/communicate-containers-same-pod/" rel="nofollow noreferrer">this</a>.</p>
<p>Here are the two yamls for the services. Note that I changed the names, labels, and selectors.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: employeeservice
name: employeeservice
spec:
ports:
- port: 8080
nodePort: 30062
type: NodePort
selector:
app: employeeservice
</code></pre>
<hr>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: test1service
name: test1service
spec:
ports:
- port: 8081
nodePort: 30063
type: NodePort
selector:
app: test1service
</code></pre>
<p>You are not using Deployments at all, which is not recommended: you won't benefit from Kubernetes' self-healing abilities, e.g. pods being replaced automatically when they become unhealthy.</p>
<p>Creating a deployment is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">easy</a>. Here are two yamls for deployments that include your POD specs. Note that I changed the names to match the selectors from the services above. I have set the replica count to 1, so only one POD will be maintained per deployment, but you can easily scale it up by setting it to a higher number.</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: employeeservice-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: employeeservice
spec:
containers:
      -
        resources:
          limits:
            cpu: 0.5
image: kube/employee
imagePullPolicy: IfNotPresent
name: wsemp
ports:
- containerPort: 8080
name: wsemp
imagePullSecrets:
- name: myregistrykey
</code></pre>
<hr>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: test1service-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: test1service
spec:
containers:
-
resources:
limits:
cpu: 0.5
image: kube/test1
imagePullPolicy: IfNotPresent
name: wstest1
ports:
- containerPort: 8081
name: wstest1
imagePullSecrets:
- name: myregistrykey
</code></pre>
<p>Also note that your service is reachable by name through the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS</a>. So if you use above yamls you should be able to query a service from within the cluster at <code>http://employeeservice/employee</code> instead of using the Nodes ip addresses. For access from outside the cluster you can use the NodePorts as specified and typically would do that through some kind of load balancer that routes to all the nodes. </p>
|
<p>This question is similar to the following SO question, but I am not looking to create a classic load balancer.</p>
<p><a href="https://stackoverflow.com/questions/31611503/how-to-create-kubernetes-load-balancer-on-aws">How to create Kubernetes load balancer on aws</a> </p>
<p>AWS now provides two types of load balancer, the classic load balancer and the application load balancer. Please read the following document for more information:</p>
<p><a href="https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/</a></p>
<p>I already know how the classic load balancer works with Kubernetes. I wonder whether any flag/tool exists so that we can also configure an application load balancer.</p>
| <p>An AWS ALB Ingress Controller has been built which you can find on GitHub: <a href="https://github.com/coreos/alb-ingress-controller" rel="nofollow noreferrer">https://github.com/coreos/alb-ingress-controller</a></p>
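<p>Once that controller is deployed, an Ingress resource is pointed at it via the <code>kubernetes.io/ingress.class</code> annotation. A minimal hedged sketch (additional ALB-specific annotations vary by controller version, check the project README):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-alb-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>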
|
<p>I would like to deploy a kubernetes cluster with weave virtual network in a node that is behind a NAT. (for example, using a floating IP in openstack)</p>
<p>Here is an example:</p>
<p><em>Kube Master</em>: weave pod running here</p>
<ul>
<li>Internal IP: 192.168.0.10</li>
<li>External IP: 172.10.0.10</li>
</ul>
<p><em>Kube nodes (worker)</em></p>
<ul>
<li>Internal IP: 172.10.0.11</li>
</ul>
<p>The logs in the pod running on the kube node (worker) looks like the following:</p>
<pre><code>$ docker logs -f <id-of-weaveworks/weave-kube>
INFO: 2017/04/28 15:31:00.627655 Command line options: map[ipalloc-range:10.32.0.0/12 nickname:rpi3-kube status-addr:0.0.0.0:6782 docker-api: datapath:datapath http-addr:127.0.0.1:6784 ipalloc-init:consensus=3 no-dns:true port:6783 conn-limit:30]
INFO: 2017/04/28 15:31:00.628107 Communication between peers is unencrypted.
INFO: 2017/04/28 15:31:00.888331 Our name is 8e:0e:19:5d:4e:5e(rpi3-kube)
INFO: 2017/04/28 15:31:00.889315 Launch detected - using supplied peer list: [192.168.0.12 192.168.0.10 172.10.0.11]
INFO: 2017/04/28 15:31:00.890130 Checking for pre-existing addresses on weave bridge
INFO: 2017/04/28 15:31:00.903121 [allocator 8e:0e:19:5d:4e:5e] No valid persisted data
INFO: 2017/04/28 15:31:00.999661 [allocator 8e:0e:19:5d:4e:5e] Initialising via deferred consensus
INFO: 2017/04/28 15:31:01.001342 Sniffing traffic on datapath (via ODP)
INFO: 2017/04/28 15:31:01.004907 ->[192.168.0.10:6783] attempting connection
INFO: 2017/04/28 15:31:01.005712 ->[192.168.0.12:6783] attempting connection
INFO: 2017/04/28 15:31:01.007297 ->[172.10.0.11:6783] attempting connection
INFO: 2017/04/28 15:31:01.010285 ->[172.10.0.11:38918] connection accepted
INFO: 2017/04/28 15:31:01.019476 ->[147.214.99.166:6783|8e:0e:19:5d:4e:5e(rpi3-kube)]: connection shutting down due to error: cannot connect to ourself
INFO: 2017/04/28 15:31:01.021265 ->[147.214.99.166:38918|8e:0e:19:5d:4e:5e(rpi3-kube)]: connection shutting down due to error: cannot connect to ourself
INFO: 2017/04/28 15:31:01.042197 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2017/04/28 15:31:01.051546 Listening for metrics requests on 0.0.0.0:6782
INFO: 2017/04/28 15:33:08.225310 ->[192.168.0.10:6783] error during connection attempt: dial tcp4 :0->192.168.0.10:6783: getsockopt: connection timed out
</code></pre>
<p>The kubernetes cluster is propperly configured and the advertise address is: 172.10.0.10</p>
| <p>My current workaround is to fix the IPs of the nodes in the weave configuration file:</p>
<pre><code>wget https://git.io/weave-kube-1.6
vim weave-kube-1.6
</code></pre>
<p>and modify it with your IPs</p>
<pre><code> containers:
- name: weave
env:
- name: KUBE_PEERS
value: "172.10.0.10 172.10.0.11"
</code></pre>
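<p>After editing, apply the manifest and check that the weave pods pick up the new peer list (the label and pod name are assumptions, adjust to your manifest):</p>
<pre><code># deploy the edited manifest
kubectl apply -f weave-kube-1.6

# verify the weave pods restart and log the supplied peer list
kubectl get pods -n kube-system -l name=weave-net
kubectl logs -n kube-system <weave-pod-name> weave | grep "peer list"
</code></pre>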
|
<p>I have a super simple container with cron scheduled:</p>
<pre><code>* * * * * root /bin/bash /alive.sh
</code></pre>
<p>alive.sh:</p>
<pre><code>#!/bin/bash
/bin/echo "I'm alive"
/bin/echo $(/bin/date) >> /tmp/alive.log
</code></pre>
<p>I build the docker image locally and run it:</p>
<pre><code>docker build -t orian/crondemo:v0 .
docker run --rm -it --name crondemo orian/crondemo:v0
</code></pre>
<p>And after a minute or so I can check that a new file is being created:</p>
<pre><code>docker exec crondemo ls /tmp
</code></pre>
<p>The I tag and push the image to Google Container Registry:</p>
<pre><code>TAG=eu.gcr.io/<PROJECT_ID>/crondemo:v0
docker tag orian/crondemo:v0 $TAG
kubectl docker -- push $TAG
</code></pre>
<p>Starting a pod manually:</p>
<pre><code>kubectl run crondemo --image=$TAG --replicas=1 --restart=Never
</code></pre>
<p>And verifying that it works:</p>
<pre><code>kubectl exec crondemo ls /tmp
</code></pre>
<p>And here is a problem, a <code>/tmp/alive.log</code> file is not being written to. Where is the problem?</p>
<p>I've prepared a repo with sample: <a href="https://github.com/orian/k8s-cron-demo" rel="nofollow noreferrer">https://github.com/orian/k8s-cron-demo</a></p>
<h3>Notice</h3>
<ul>
<li>I've also tested overwriting <code>/var/spool/cron/crontabs/root</code> but it didn't solve the problem.</li>
<li>I'm using docker image: <code>openjdk:8-jre</code>. Before switching I used <code>alpine</code> and <code>crond</code>. It's seemed to work then.</li>
</ul>
<h3>Edit2 - found (this is crazy):</h3>
<ul>
<li><a href="https://forums.docker.com/t/running-cronjob-in-debian-jessie-container/17527/2" rel="nofollow noreferrer">https://forums.docker.com/t/running-cronjob-in-debian-jessie-container/17527/2</a></li>
<li><a href="https://stackoverflow.com/questions/21926465/issues-running-cron-in-docker-on-different-hosts">Issues running cron in Docker on different hosts</a></li>
<li><a href="https://stackoverflow.com/questions/21391142/why-is-it-needed-to-set-pam-loginuid-to-its-optional-value-with-docker">Why is it needed to set `pam_loginuid` to its `optional` value with docker?</a></li>
</ul>
| <p>I've followed a <a href="https://stackoverflow.com/a/21928878/436754">https://stackoverflow.com/a/21928878/436754</a> on enabling logs.</p>
<p>Running: <code>/var/log/syslog</code> </p>
<pre><code>May 4 12:33:05 crondemo rsyslogd: [origin software="rsyslogd" swVersion="8.4.2" x-pid="14" x-info="http://www.rsyslog.com"] start
May 4 12:33:05 crondemo rsyslogd: imklog: cannot open kernel log(/proc/kmsg): Operation not permitted.
May 4 12:33:05 crondemo rsyslogd-2145: activation of module imklog failed [try http://www.rsyslog.com/e/2145 ]
May 4 12:33:08 crondemo cron[38]: (CRON) INFO (pidfile fd = 3)
May 4 12:33:08 crondemo cron[39]: (CRON) STARTUP (fork ok)
May 4 12:33:08 crondemo cron[39]: (*system*) NUMBER OF HARD LINKS > 1 (/etc/crontab)
May 4 12:33:08 crondemo cron[39]: (*system*crondemo) NUMBER OF HARD LINKS > 1 (/etc/cron.d/crondemo)
May 4 12:33:08 crondemo cron[39]: (CRON) INFO (Running @reboot jobs)
May 4 12:34:01 crondemo cron[39]: (*system*) NUMBER OF HARD LINKS > 1 (/etc/crontab)
May 4 12:34:01 crondemo cron[39]: (*system*crondemo) NUMBER OF HARD LINKS > 1 (/etc/cron.d/crondemo)
</code></pre>
<p>This made me Google for <code>cron "NUMBER OF HARD LINKS > 1"</code> and I've found: <a href="https://github.com/phusion/baseimage-docker/issues/198" rel="nofollow noreferrer">https://github.com/phusion/baseimage-docker/issues/198</a></p>
<p>The workaround is to modify the <code>Dockerfile</code> so the cron file is copied into place on start, instead of being mounted by Docker.</p>
<ul>
<li>Dockerfile <code>COPY cronfile /cronfile</code></li>
<li><code>docker-entrypoint.sh</code>: <code>cp /cronfile /etc/cron.d/crondemo</code></li>
</ul>
<p>A branch with workaround: <a href="https://github.com/orian/k8s-cron-demo/tree/with-rsyslog" rel="nofollow noreferrer">https://github.com/orian/k8s-cron-demo/tree/with-rsyslog</a></p>
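<p>A minimal sketch of that workaround, using the file names from above (the entrypoint script and the foreground <code>cron -f</code> call are assumptions about the base image):</p>
<pre><code># Dockerfile
COPY cronfile /cronfile
COPY docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]

# docker-entrypoint.sh
#!/bin/bash
# copy the cronfile so it ends up with a single hard link, then run cron in the foreground
cp /cronfile /etc/cron.d/crondemo
chmod 0644 /etc/cron.d/crondemo
exec cron -f
</code></pre>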
|
<p>We have a rest API that is written in Java (hosted in Wildfly). Our service is running in kubernetes (GKE). We want to leverage Cloud Endpoints to track usage and responsiveness of our API. The API is not new, we have been shipping software that interacts with it for years. It is also quite large (thousands of public methods). We have Swagger documentation for our API, and have no validation errors. When I try to deploy our Swagger using:</p>
<pre><code>gcloud beta service-management deploy swagger.yaml
</code></pre>
<p>It is not successful. I get the following error repeated 237 times:</p>
<pre><code>ERROR: unknown location: http: body field path 'body' must be a non-repeated message.
</code></pre>
<p>I have tracked it down to 237 methods that include a json array in a body parameter. In our API these are methods that either accept or return a list of objects.
Is there any way I can get this accepted by <code>service-management deploy</code>? Changing our API isn't an option, but we would really like to be able to use endpoints.</p>
<p>For example, this method signature:</p>
<pre><code>@PUT
@Path ("/foobars/undelete")
@Consumes (MediaType.APPLICATION_JSON)
@Produces (MediaType.APPLICATION_JSON)
@ApiOperation (value = "Undelete foobars")
@ApiResponses (value =
{
@ApiResponse (
code = 200,
message = "foobars undeleted",
response = FooBar.class,
responseContainer = "List"
) , @ApiResponse (
code = 206,
message = "Not all foobars undeleted",
response = FooBar.class,
responseContainer = "List"
) , @ApiResponse (
code = 410,
message = "Not found"
) , @ApiResponse (
code = 500,
message = "Server Error"
)
})
public Response undeleteFooBars (@ApiParam (value = "FooBar ID List") List<UUID> entityIds)
</code></pre>
<p>generates this swagger snippet:</p>
<pre><code>"/foobars/undelete":
put:
tags:
- foo
summary: Undelete FooBars
description: ''
operationId: undeleteFooBars
consumes:
- application/json
produces:
- application/json
parameters:
- in: body
name: body
description: FooBar ID List
required: false
schema:
type: array
items:
type: string
format: uuid
responses:
'200':
description: Foo Bars undeleted
schema:
type: array
items:
"$ref": "#/definitions/FooBar"
'206':
description: Not all FooBars undeleted
schema:
type: array
items:
"$ref": "#/definitions/FooBar"
'410':
description: Not found
'500':
description: Server Error
</code></pre>
| <p>I have had the exact same problem with Endpoints, where it does not seem to think that passing an array of objects is valid as a body parameter. I worked around this by just using a generic object and a decent description. The description will not programatically fix anything, but using a generic object allows Endpoints to work and the description gives information to the consumer of the API for what is expected.</p>
<pre><code>parameters:
- in: body
name: body
description: Array of FooBar objects
required: false
schema:
type: object
</code></pre>
<p>This seems like an oversight on the part of the Endpoints team IMHO as using an array of objects in the body fits fine within the OpenApi spec and works with tools like <a href="http://editor.swagger.io/" rel="noreferrer">http://editor.swagger.io/</a></p>
<p><strong>Edit</strong>: I should also add that it is generally bad practice to use just a raw array as a request body or response body as it can cause a contract breaking change if additional properties are desired in the future, like say a count or pagination information.<br>
If this is an existing API and you are just documenting the existing contract, then this solution will work to get the job done, but if you are designing a new API, then a better definition would be: </p>
<pre><code>parameters:
- in: body
name: body
description: All the FooBar objects
required: false
schema:
type: object
properties:
items:
type: array
items:
$ref: '#/definitions/FooBarResource'
</code></pre>
<p>Since this could later be extended to add additional properties like </p>
<pre><code>parameters:
- in: body
name: body
description: All the FooBar objects
required: false
schema:
type: object
properties:
count:
type: integer
description: The total count of resources
callbackUrl:
type: string
description: The URL to trigger once creation is complete
items:
type: array
items:
$ref: '#/definitions/FooBarResource'
description: The resources to create
</code></pre>
|
<p>Say I have my-namespace -> my-pod -> my-container,
and a file located at my-container:/opt/tomcat/logs/catalina.2017-05-02.log. I have run the command below to copy the file, but it isn't working:</p>
<pre><code>kubectl cp my-namespace/my-pod:/opt/tomcat/logs/catalina.2017-05-02.log -c my-container .
</code></pre>
<p>Note: I have the tar binary on my container</p>
<p>Error:</p>
<pre><code>tar: Removing leading `/' from member names
error: open .: is a directory
</code></pre>
| <p>What you are asking <code>kubectl</code> to do is copy the file catalina.2017-05-02.log to <code>.</code>, the current directory, but <code>kubectl cp</code> expects a destination file path here. The error is stating that it cannot write the file because the given destination is a directory.</p>
<p>Try giving the copied version of the file a name:</p>
<p><code>kubectl cp my-namepace/my-pod:/opt/tomcat/logs/catalina.2017-05-02.log -c my-container ./catalina.2017-05-02.log</code>.</p>
|
<p>I'm a newbie with Spring Cloud, and I'm a little confused about it. Kubernetes and Spring Cloud are both microservices frameworks. If I have Kubernetes (or Mesos) already, why do I need to use Spring Cloud? I notice that many projects use them both.
What's the difference between Kubernetes and Spring Cloud? They can both provide service discovery, load balancing and so on.
I'm really confused.</p>
| <p>Kubernetes and Spring Cloud address a lot of the same concerns with Microservices, but with different approaches and technologies. Redhat wrote a great article explaining this. Here the main takeaways:</p>
<blockquote>
<p>Spring Cloud has a rich set of well integrated Java libraries to
address all runtime concerns as part of the application stack. As a
result, the Microservices themselves have libraries and runtime agents
to do client side service discovery, load balancing, configuration
update, metrics tracking, etc. Patterns such as singleton clustered
services, batch jobs are managed in the JVM too.</p>
<p>Kubernetes is
polyglot, doesn’t target only the Java platform, and addresses the
distributed computing challenges in a generic for all languages way.
It provides services for configuration management, service discovery,
load balancing, tracing, metrics, singletons, scheduled jobs on the
platform level, outside of the application stack. The application
doesn’t need any library or agents for client side logic and it can be
written in any language.</p>
<p>In some areas both platforms rely on similar
third party tools. For example the ELK and EFK stacks, tracing
libraries, etc.</p>
<p>Some libraries such as Hystrix, Spring Boot are useful
equally well on both environments. There are areas where both
platforms are complementary and can be combined together to create a
more powerful solution (KubeFlix and Spring Cloud Kubernetes are such
examples).</p>
<p><em>Source: <a href="https://developers.redhat.com/blog/2016/12/09/spring-cloud-for-microservices-compared-to-kubernetes/" rel="nofollow noreferrer">https://developers.redhat.com/blog/2016/12/09/spring-cloud-for-microservices-compared-to-kubernetes/</a></em></p>
</blockquote>
<p>To understand the differences and similarities in more detail I would recommend to the read the full <a href="https://developers.redhat.com/blog/2016/12/09/spring-cloud-for-microservices-compared-to-kubernetes/" rel="nofollow noreferrer">article</a>.</p>
|
<h3>Main question:</h3>
<p>Can we exclude a path from the cloud endpoint statistics/monitoring while still allowing traffic to our actual backend?</p>
<h3>Explanation:</h3>
<p>We have a backend running on Kubernetes and are now trying out Google Cloud Endpoints. We added the EPS container to the pod in front of the backend container. As we do everywhere else, we also use health checks in Kubernetes and from the Google (L7) LoadBalancer in front. In order to have the health check reach our backend, it has to be defined in the openapi yaml file used by the EPS container, e.g.:</p>
<pre><code>...
paths:
"/_ah/health":
get:
operationId: "OkStatus"
security: []
responses:
200:
description: "Ok message"
...
</code></pre>
<p>The issue with this is that these requests muddle the monitoring/tracing/statistics for our actual API. The latency numbers registered by the cloud endpoint are useless: they show a 50th percentile of 2ms, and then a 95th percentile of 20s because of the high fraction of health-check traffic. The actual requests taking 20+ seconds are shown as a marginal fraction of requests since the health checks do requests multiple times each second, each taking 2ms. Since these health checks are steady traffic being 90% of all requests, the actual relevant requests are shown as the 'exceptions' in the margin.</p>
<p>Therefore, we'd like to exclude this health traffic from the endpoint statistics, but keep the health check functional.</p>
<p>I have not found anything for this in the documentation, nor any solution on the web somewhere else.</p>
<h3>Possible alternate solution</h3>
<p>We can add an extra service to our Kubernetes setup reaching directly our backend only used for the health check. Problems with this are:</p>
<ul>
<li>Extra k8s service, configuration, firewall rules ... required</li>
<li>We do not health check the actual setup. If the EPS container fails to direct traffic to our backend, this will go unnoticed.</li>
<li>We encrypt traffic between the loadbalancer and backends with SSL, but our actual backend would now need an extra SSL-aware webserver in between for this. For a health check without actual data this is a minor issue, but it would still mean an exception to the rule.</li>
<li><p>We could add an additional health check for the EPS container as well. But since this should not show up in the stats, it should be like doing a request for a non-defined path and checking that the reponse is the EPS reponse for that case:</p>
<pre><code>{"code": 5,
"message": "Method does not exist.",
"details": [{
"@type": "type.googleapis.com/google.rpc.DebugInfo",
"stackEntries": [],
"detail": "service_control"
}]
}
</code></pre>
<p>This is not ideal either. It does check if the container is running at the very least, but it's more of a 'it's not down' rather than a 'it's working' approach, so a lot of other issues will go unnoticed.</p></li>
</ul>
| <p>Google Cloud Endpoints doesn't support excluding a path from reporting statistics/monitoring yet. It's something that is on the radar and being actively looked at.</p>
<p>In the meantime, your alternate solution would work as a stop-gap, but with the downsides that you posted.</p>
|
<p>I try to preserve the client IP with proxy protocol. Unfortunately it does not work.</p>
<p>Azure LB => nginx Ingress => Service</p>
<p>I end up with the Ingress Service Pod IP.</p>
<p>Ingress Controller Deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
annotations:
prometheus.io/port: '10254'
prometheus.io/scrape: 'true'
spec:
# hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
# however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
# that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
# like with kubeadm
# hostNetwork: true
terminationGracePeriodSeconds: 60
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.5
name: nginx-ingress-controller
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
ports:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=default/nginx-ingress-controller
</code></pre>
<p>Ingress Controller Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: kube-system
annotations:
service.beta.kubernetes.io/external-traffic: "OnlyLocal"
spec:
type: LoadBalancer
ports:
- port: 80
name: http
- port: 443
name: https
selector:
k8s-app: nginx-ingress-lb
</code></pre>
<p>nginx config map:</p>
<pre><code>apiVersion: v1
metadata:
name: nginx-ingress-controller
data:
use-proxy-protocol: "true"
kind: ConfigMap
</code></pre>
| <p>Got it to work.</p>
<p>In <code>Ingress Controller Deployment</code> I changed the image to </p>
<pre><code>gcr.io/google_containers/nginx-ingress-controller:0.8.3
</code></pre>
<p>and removed the <code>configmap</code>.</p>
<p>I am using ingress to forward to a pod with a dotnet core api.</p>
<p>Adding</p>
<pre><code> var options = new ForwardedHeadersOptions()
{
ForwardedHeaders = Microsoft.AspNetCore.HttpOverrides.ForwardedHeaders.All,
RequireHeaderSymmetry = false,
ForwardLimit = null
};
//add known proxy network(s) here
    options.KnownNetworks.Add(network);
app.UseForwardedHeaders(options);
</code></pre>
<p>to <code>Startup</code> did the trick</p>
|
<p>I'm currently exploring more and more of Google Cloud Platform and try to evaluate how I can change my default stack. I like the idea of using all the built-in tooling instead of having to go to various Saas products. One of the more open questions has been how to make use of Stackdriver Monitoring. I understand that the built-in monitoring uses Heapster with the Google Cloud Sink. However, how do I expose my own custom metrics to Stackdriver for use? I don't mean the logs-based metrics, but the real thing?</p>
<p>I know I could always install the agent myself onto the cluster, but it appears to me that as part of Google Container Engine, it is already running, so how could I push metrics to it?</p>
| <p>The fluentd pod pre-installed in GKE is in the kube-system namespace and managed by k8s, so you shouldn't mess with it. Currently it seems to only proxy stdout/stderr to Stackdriver. But according to <a href="https://stackoverflow.com/a/41875676/5039117">this post</a>, the k8s team is working on providing an accessible node-wide fluentd port out of the box.</p>
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">This document</a> has details about setting up custom logging.</p>
|
<p>Add label to nodes: </p>
<pre><code>$ kubectl label nodes 10.xx.xx.xx key1=val1
</code></pre>
<p>If I want to delete <code>label(key1=val1)</code> on node(10.xx.xx.xx), how can I delete by kubectl command and API?</p>
| <p><strong><em>create labels for the nodes:</em></strong></p>
<p><code>kubectl label node <nodename> <labelname>=allow</code></p>
<p><strong><em>delete the above label from its respective node:</em></strong></p>
<p><code>kubectl label node <nodename> <labelname>-</code></p>
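<p>For the example from the question that would be <code>kubectl label nodes 10.xx.xx.xx key1-</code>. Via the API you can PATCH the node and set the label to <code>null</code>; a rough sketch (API server address and authentication are placeholders):</p>
<pre><code>curl -k -X PATCH \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"metadata":{"labels":{"key1":null}}}' \
  https://<apiserver>:6443/api/v1/nodes/10.xx.xx.xx
</code></pre>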
|
<p>For some reason Kubernetes 1.6.2 does not trigger autoscaling on Google Container Engine.</p>
<p>I have a <code>someservice</code> definition with the following resources and rolling update:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: someservice
labels:
layer: backend
spec:
minReadySeconds: 160
replicas: 1
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
name: someservice
layer: backend
spec:
containers:
- name: someservice
image: eu.gcr.io/XXXXXX/someservice:v1
imagePullPolicy: Always
resources:
limits:
cpu: 2
memory: 20Gi
requests:
cpu: 400m
memory: 18Gi
<.....>
</code></pre>
<p>After changing image version, the new instance cannot start:</p>
<pre><code>$ kubectl -n dev get pods -l name=someservice
NAME READY STATUS RESTARTS AGE
someservice-2595684989-h8c5d 0/1 Pending 0 42m
someservice-804061866-f2trc 1/1 Running 0 1h
$ kubectl -n dev describe pod someservice-2595684989-h8c5d
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
43m 43m 4 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (4), Insufficient memory (3).
43m 42m 6 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3), Insufficient memory (3).
41m 41m 2 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (2), Insufficient memory (3).
40m 36s 136 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (1), Insufficient memory (3).
43m 2s 243 cluster-autoscaler Normal NotTriggerScaleUp pod didn't trigger scale-up (it wouldn't fit if a new node is added)
</code></pre>
<p>My node pool is set to autoscale with <code>min: 2</code>, <code>max: 5</code>. And machines (<code>n1-highmem-8</code>) in node pool are large enough (52GB) to accommodate this service. But somehow nothing happens:</p>
<pre><code>$ kubectl get nodes
NAME STATUS AGE VERSION
gke-dev-default-pool-efca0068-4qq1 Ready 2d v1.6.2
gke-dev-default-pool-efca0068-597s Ready 2d v1.6.2
gke-dev-default-pool-efca0068-6srl Ready 2d v1.6.2
gke-dev-default-pool-efca0068-hb1z Ready 2d v1.6.2
$ kubectl describe nodes | grep -A 4 'Allocated resources'
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
7060m (88%) 15510m (193%) 39238591744 (71%) 48582818048 (88%)
--
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
6330m (79%) 22200m (277%) 48930Mi (93%) 66344Mi (126%)
--
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
7360m (92%) 13200m (165%) 49046Mi (93%) 44518Mi (85%)
--
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
7988m (99%) 11538m (144%) 32967256Ki (61%) 21690968Ki (40%)
$ gcloud container node-pools describe default-pool --cluster=dev
autoscaling:
enabled: true
maxNodeCount: 5
minNodeCount: 2
config:
diskSizeGb: 100
imageType: COS
machineType: n1-highmem-8
oauthScopes:
- https://www.googleapis.com/auth/compute
- https://www.googleapis.com/auth/datastore
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/devstorage.read_write
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/sqlservice
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
serviceAccount: default
initialNodeCount: 2
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/XXXXXX/zones/europe-west1-b/instanceGroupManagers/gke-dev-default-pool-efca0068-grp
management:
autoRepair: true
name: default-pool
selfLink: https://container.googleapis.com/v1/projects/XXXXXX/zones/europe-west1-b/clusters/dev/nodePools/default-pool
status: RUNNING
version: 1.6.2
$ kubectl -n dev get pods -l name=someservice
NAME READY STATUS RESTARTS AGE
someservice-2595684989-h8c5d 0/1 Pending 0 42m
someservice-804061866-f2trc 1/1 Running 0 1h
$ kubectl -n dev describe pod someservice-2595684989-h8c5d
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
43m 43m 4 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (4), Insufficient memory (3).
43m 42m 6 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3), Insufficient memory (3).
41m 41m 2 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (2), Insufficient memory (3).
40m 36s 136 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (1), Insufficient memory (3).
43m 2s 243 cluster-autoscaler Normal NotTriggerScaleUp pod didn't trigger scale-up (it wouldn't fit if a new node is added)
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>So it seems that this is a bug with Kubernetes 1.6.2. According to GKE support engineer:</p>
<blockquote>
<p>From the messages "No nodes are available that match all of the
following predicates", this seems to be a known issue and the
engineers managed to track down the root cause. It was an issue in
cluster autoscaler version 0.5.1 that is currently used in GKE 1.6 (up
to 1.6.2). This issue had been fixed already in cluster autoscaler
0.5.2, which is included in head for the 1.6 branch.</p>
</blockquote>
|
<p><strong>What I Have:</strong> A Kubernetes Cluster on GCE with three Nodes. Lets suppose the master has the IP <code><MasterIP></code>. Additionally I have a Service within the cluster of Type NodePort which listens to the port <code><PORT></code>. I can access the service using <code><NodeIP>:<PORT></code></p>
<p><strong>What I would like to do:</strong> Access the service with <code><MasterIP>:<PORT></code> How can I forward the port from <code><MasterIP></code> to within the cluster? in other words: <code><MasterIP>:<PORT> --> <NodeIP>:<PORT></code></p>
<p>The reason why I would like to do this is simply I don't want to rely on a specific <code>NodeIP</code> since the Pod can be rescheduled to another Node.</p>
<p>Thank you</p>
| <p>A Kubernetes <code>Service</code> with <code>type: NodePort</code> opens the same port on every Node and sends the traffic to wherever the pod is currently scheduled using internal IP routing. You should be able to access the service at <code><AnyNodeIP>:<PORT></code>.</p>
<p>In Google's Kubernetes cluster, the master machines are not available for routing traffic. The way to reliably expose your services is to use a <code>Service</code> with <code>type: LoadBalancer</code> which will provide a single IP that resolves to your service regardless of which Nodes are up or what their IPs are.</p>
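<p>A hedged sketch of such a Service (selector and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>Once the cloud load balancer has been provisioned, <code>kubectl get service my-service</code> shows the external IP you can use regardless of which node the pod lands on.</p>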
|
<p>I'm trying to execute a command in a container (in a Kubernetes pod on GKE with Kubernetes 1.1.2).</p>
<p>Reading the documentation I understood that I can use a GET or POST request to open a websocket connection on the API endpoint to execute a command. When I use GET, it does not work at all and returns an error. When I try to use POST, something like the following could probably work (but it doesn't):</p>
<pre><code>curl 'https://admin:xxx@IP/api/v1/namespaces/default/pods/hello-whue1/exec?stdout=1&stderr=1&command=ls' -H "Connection: upgrade" -k -X POST -H 'Upgrade: websocket'
</code></pre>
<p>repsponse for that is</p>
<pre><code>unable to upgrade: missing upgrade headers in request: http.Header{"User-Agent":[]string{"curl/7.44.0"}, "Content-Length":[]string{"0"}, "Accept":[]string{"*/*"}, "Authorization":[]string{"Basic xxx=="}, "Connection":[]string{"upgrade"}, "Upgrade":[]string{"websocket"}}
</code></pre>
<p>Looks like that should be enough to upgrade the POST request and start using websocket streams, right? What am I missing?</p>
<p>I was also told that opening a websocket with POST is probably a violation of the websocket protocol (only GET should work?).</p>
<p>Also</p>
| <p>Using a websocket client works.</p>
<p>In my local kubernetes cluster, the connection metadata looks like this:</p>
<pre><code>ApiServer = "172.21.1.11:8080"
Namespace = "default"
PodName = "my-nginx-3855515330-l1uqk"
ContainerName = "my-nginx"
Commands = "/bin/bash"
</code></pre>
<p>the connect url:</p>
<pre><code>"ws://172.21.1.11:8080/api/v1/namespaces/default/pods/my-nginx-3855515330-l1uqk/exec?container=my-nginx&stdin=1&stdout=1&stderr=1&tty=1&command=%2Fbin%2Fbash"
</code></pre>
<p>On macOS, there is a websocket client CLI tool, <a href="https://blog.grandcentrix.net/a-command-line-websocket-client/" rel="nofollow noreferrer">wscat</a>, that you can use as a test tool:</p>
<p><a href="https://i.stack.imgur.com/Q3NOz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q3NOz.jpg" alt="enter image description here"></a></p>
<p>You can access the websocket example: "<a href="https://github.com/lth2015/container-terminal" rel="nofollow noreferrer">https://github.com/lth2015/container-terminal</a>"</p>
|
<p>When starting up a Kubernetes cluster, I load etcd plus the core kubernetes processes - kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler - as static pods from a private registry. This has worked in the past by ensuring that the $HOME environment variable is set to "/root" for kubelet, and then having /root/.docker/config.json defined with the credentials for the private docker registry.</p>
<p>When attempting to run Kubernetes 1.6, with CRI enabled, I get errors in the kubelet log saying it cannot pull the pause:3.0 container from my private docker registry due to no authentication.</p>
<p>Setting --enable-cri=false on the kubelet command line works, but when CRI is enabled, it doesn't seem to use the /root/.docker/config file for authentication.</p>
<p>Is there some new way to provide the docker credentials needed to load static pods when running with CRI enabled?</p>
| <p>In 1.6, I managed to make it work with the following recipe in <a href="https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod</a></p>
<pre><code>$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
</code></pre>
<p>You need to specify the newly created <strong>myregistrykey</strong> as the credential under the <strong>imagePullSecrets</strong> field in the pod spec.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: foo
namespace: awesomeapps
spec:
containers:
- name: foo
image: janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey
</code></pre>
|
<p>I have following service configuration:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: web-srv
spec:
type: NodePort
selector:
app: userapp
tier: web
ports:
- protocol: TCP
port: 8090
targetPort: 80
nodePort: 31000
</code></pre>
<p>and an nginx container is behind this service. Although I can access to the service via <code>nodePort</code>, service is not accessible via <code>port</code> field. I'm able to see the configs with <code>kubectl</code> and Kubernetes dashboard but <code>curl</code>ing to that port (e.g. <code>curl http://192.168.0.100:8090</code>) raises a <strong>Connection Refused</strong> error.</p>
<p>I'm not sure what is the problem here. Do I need to make sure any proxy services is running inside the Node or Container?</p>
| <p>Get the IP of the kubernetes service and then hit 8090; it will work.
nodePort implies that the service is bound to the node at port 31000.</p>
<p>These are the 3 things that will work:</p>
<pre><code>curl <node-ip>:<node-port> # curl <node-ip>:31000
curl <service-ip>:<service-port> # curl <svc-ip>:8090
curl <pod-ip>:<target-port> # curl <pod-ip>:80
</code></pre>
<p>So now, let's look at 3 situations:</p>
<p><strong>1. You are inside the kubernetes cluster (you are a pod)</strong></p>
<p><code><service-ip></code> and <code><pod-ip></code> and <code><node-ip></code> will work.</p>
<p><strong>2. You are on the node</strong></p>
<p><code><service-ip></code> and <code><pod-ip></code> and <code><node-ip></code> will work.</p>
<p><strong>3. You are outside the node</strong></p>
<p>Only <code><node-ip></code> will work assuming that <code><node-ip></code> is reachable.</p>
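<p>To look up those values for the service in the question, something like this should work (service and pod names are taken from the question):</p>
<pre><code>kubectl get nodes -o wide                 # node IPs
kubectl get svc web-srv                   # cluster IP, port 8090 and nodePort 31000
kubectl get pods -o wide -l app=userapp   # pod IPs
</code></pre>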
|
<p>I am trying to create a service using the following yaml. As you can see, I am trying to restrict access to the service to the 10.0.0.0/8 range.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
ports:
# the port that this service should serve on
- port: 443
targetPort: 443
# label keys and values that must match in order to receive traffic for this service
selector:
name: nginx
type: LoadBalancer
loadBalancerSourceRanges:
- 10.0.0.0/8
</code></pre>
<p>There are a few Kubernetes documents (listed below) that discuss how to use loadBalancerSourceRanges annotation to control service access. </p>
<p><a href="http://kubernetes.io/docs/user-guide/services-firewalls/" rel="noreferrer">http://kubernetes.io/docs/user-guide/services-firewalls/</a></p>
<p>However when I try to create this service, I get an error as follows</p>
<blockquote>
<p>error validating "sdp-cluster.yaml": error validating data: found
invalid field loadBalancerSourceRanges for v1.ServiceSpec; if you
choose to ignore these errors, turn validation off with
--validate=false</p>
</blockquote>
<p>I looked at the v1.ServiceSpec and could not find it there too. </p>
<p>Am I missing something? How can I restrict traffic to a service in Kubernetes?</p>
| <p>This is now supported on GCE, GKE and AWS. If the provider does not support it, it'll be ignored. See the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/" rel="noreferrer">Kubernetes docs</a>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
ports:
- port: 8765
targetPort: 9376
selector:
app: example
type: LoadBalancer
loadBalancerSourceRanges:
- 10.0.0.0/8
</code></pre>
|
<p>I set up <strong><a href="https://github.com/kubernetes/kubernetes" rel="nofollow noreferrer">Kubernetes</a></strong> cluster on <strong>aws</strong> using <strong><a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a></strong> using <strong><a href="https://github.com/kubernetes/kops/blob/master/docs/aws.md" rel="nofollow noreferrer">this tutorial</a></strong> on official kubernetes github branch.</p>
<p>The cluster set up successfully on AWS, but when I try to run</p>
<pre><code>kubectl get nodes
</code></pre>
<p>or</p>
<pre><code>kops validate cluster
</code></pre>
<p>it says </p>
<pre><code>[user@ip-70-0-0-7 ~]$ kubectl cluster-info
Unable to connect to the server: x509: certificate is valid for *.secure.hosting prod.com, not api.subdomain.mydomain.com
</code></pre>
<p>This is definitely a problem with my <strong>x509 certificates</strong>. I just need a gentle push in the right direction. Thank you for your precious time and help!</p>
<p><code>NOTE: I am running these commands from outside the cluster from a machine from where I did set up of cluster.</code></p>
| <blockquote>
<p>Unable to connect to the server: x509: certificate is valid for *.secure.hosting prod.com, not api.subdomain.mydomain.com</p>
</blockquote>
<p>I can't tell if those names you listed are examples, or the actual values that <code>kubectl</code> is giving you, but I'll use them as you've written just to keep things simple.</p>
<p>If the kubernetes cluster you installed is really accessible via <code>api.secure.hostingprod.com</code>, then updating your <code>$HOME/.kube/config</code> to say <code>https://api.secure.hostingprod.com</code> where it currently says <code>https://api.subdomain.mydomain.com</code> should get things back in order.</p>
<p>Alternatively, if <code>api.secure.hosting prod.com</code> is not an actual domain you can use (for example, if your certificate really does have a space in the hostname), then you have a couple of options.</p>
<p>The cheapest, but least correct, might be to just tell <code>kubectl</code> "I know what I'm doing, don't check the cert" by setting the <code>insecure-skip-tls-verify</code> option under the <code>cluster</code> entry in the <code>$HOME/.kube/config</code> file:</p>
<pre><code>- cluster:
insecure-skip-tls-verify: true
</code></pre>
<p>The more bothersome, but also most correct, would be to re-issue the certificate for the API server using the actual hostname (apparently <code>api.subdomain.mydomain.com</code>). If you know how, extra-super-best is to also add "Subject Alternate Names" (abbreviated as "SAN") to the certificate, so that in-cluster members can refer to it as <code>https://kubernetes</code> and/or <code>https://kubernetes.default.svc.cluster.local</code>, along with the <code>Service</code> IP address assigned to the <code>kubernetes</code> <code>Service</code> in the <code>default</code> namespace. It's extremely likely that your current certificate has those values, which <code>openssl x509 -in /path/to/your.crt -noout -text</code> will show you what they currently are. If you need help with the openssl bits, CoreOS Kubernetes has <a href="https://github.com/coreos/coreos-kubernetes/blob/v0.8.6/lib/init-ssl" rel="nofollow noreferrer">a shell script</a> they use, which might work as written or if nothing else provide very concrete guidance.</p>
<p>I do recognize that's a lot of words, and also a lot of work, but certificates are super important so getting them as correct as possible will really save everyone heartache down the line.</p>
|
<p>I understand I can use docker images, but do I need Kubernetes to create a cluster? There are instructions available for model serving, but what about model training on Kubernetes?</p>
| <p>You can use Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> to run batch compute tasks. But currently (circa v1.6) it's not easy to set up data pipelines in Kubernetes.</p>
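<p>As a minimal sketch, a training run could be wrapped in a Job like this (the image and command are placeholders for your own training container):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: model-training
spec:
  template:
    spec:
      containers:
      - name: train
        image: my-registry/my-training-image:latest   # placeholder
        command: ["python", "train.py"]                # placeholder
      restartPolicy: Never
</code></pre>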
<p>You might want to look at <a href="http://pachyderm.io/" rel="nofollow noreferrer">Pachyderm</a>, which is a data processing framework built on top of Kubernetes. It adds some nice data packing/versioning tools.</p>
|
<p>I have 4 Kubernetes/Helm deployments (web, emailworker, jobworker, sync) which all need to share exactly the same <code>spec.template.spec.containers[].env</code> key. The env keys are quite large and I'd like to avoid copy/pasting it in each deployment, e.g.:</p>
<pre><code># ...
env:
- name: NODE_ENV
value: "{{ .Values.node_env }}"
- name: BASEURL
value: "{{ .Values.base_url }}"
- name: REDIS_HOST
valueFrom:
secretKeyRef:
name: secret-redis
key: host
- name: KUE_PREFIX
value: "{{ .Values.kue_prefix }}"
- name: DATABASE_NAME
value: "{{ .Values.database_name }}"
- name: DATABASE_HOST
valueFrom:
secretKeyRef:
name: secret-postgres
key: host
- name: DATABASE_USER
valueFrom:
secretKeyRef:
name: secret-postgres
key: username
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: secret-postgres
key: password
- name: AWS_KEY
valueFrom:
secretKeyRef:
name: secret-bucket
key: key
- name: AWS_SECRET
valueFrom:
secretKeyRef:
name: secret-bucket
key: secret
- name: AWS_S3_BUCKET
valueFrom:
secretKeyRef:
name: secret-bucket
key: bucket
- name: AWS_S3_ENDPOINT
value: "{{ .Values.s3_endpoint }}"
- name: INSTAGRAM_CLIENT_ID
valueFrom:
secretKeyRef:
name: secret-instagram
key: clientID
# ...
</code></pre>
<p>Is this possible to achieve with either yaml, Helm or Kubernetes?</p>
| <p>So I found a solution with Helm named templates: <a href="https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/named_templates.md" rel="nofollow noreferrer">https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/named_templates.md</a></p>
<p>I created a file <code>templates/_env.yaml</code> with the following content:</p>
<pre><code>{{ define "env" }}
- name: NODE_ENV
value: "{{ .Values.node_env }}"
- name: BASEURL
value: "{{ .Values.base_url }}"
- name: REDIS_HOST
valueFrom:
secretKeyRef:
name: secret-redis
key: host
- name: KUE_PREFIX
value: "{{ .Values.kue_prefix }}"
- name: DATABASE_NAME
value: "{{ .Values.database_name }}"
- name: DATABASE_HOST
valueFrom:
secretKeyRef:
name: secret-postgres
key: host
- name: DATABASE_USER
valueFrom:
secretKeyRef:
name: secret-postgres
key: username
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: secret-postgres
key: password
- name: AWS_KEY
valueFrom:
secretKeyRef:
name: secret-bucket
key: key
- name: AWS_SECRET
valueFrom:
secretKeyRef:
name: secret-bucket
key: secret
- name: AWS_S3_BUCKET
valueFrom:
secretKeyRef:
name: secret-bucket
key: bucket
- name: AWS_S3_ENDPOINT
value: "{{ .Values.s3_endpoint }}"
- name: INSTAGRAM_CLIENT_ID
valueFrom:
secretKeyRef:
name: secret-instagram
key: clientID
{{ end }}
</code></pre>
<p>And here's how I use it in a <code>templates/deployment.yaml</code> files:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: somedeployment
# ...
spec:
template:
# ...
metadata:
name: somedeployment
spec:
# ...
containers:
- name: container-name
image: someimage
# ...
env:
{{- template "env" . }}
</code></pre>
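<p>Note: if the rendered list does not line up with the surrounding indentation in your manifest, a common variation is to use <code>include</code> together with the <code>indent</code> function instead of <code>template</code>; the indent width below is an assumption that depends on how deeply <code>env:</code> is nested:</p>
<pre><code>        env:
{{ include "env" . | indent 8 }}
</code></pre>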
|
<p>I have set up a single node K8S cluster using kubeadm by following the instructions <a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="nofollow noreferrer">here</a>:</p>
<p>The cluster is up and all system pods are running fine:</p>
<pre><code>[root@umeshworkstation hostpath-provisioner]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-etcd-n988r 1/1 Running 10 6h
calico-node-n1wmk 2/2 Running 10 6h
calico-policy-controller-1777954159-bd8rn 1/1 Running 0 6h
etcd-umeshworkstation 1/1 Running 1 6h
kube-apiserver-umeshworkstation 1/1 Running 1 6h
kube-controller-manager-umeshworkstation 1/1 Running 1 6h
kube-dns-3913472980-2ptjj 0/3 Pending 0 6h
kube-proxy-1d84l 1/1 Running 1 6h
kube-scheduler-umeshworkstation 1/1 Running 1 6h
</code></pre>
<p>I then downloaded the Hostpath external provisioner code from <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/docs/demo/hostpath-provisioner" rel="nofollow noreferrer">kubernetes-incubator</a> and built it locally on the same node. The docker image for the provisioner was built successfully and I could even instantiate the provisioner pod using the pod.yaml from the same location. The pod is running fine:</p>
<pre><code>[root@umeshworkstation hostpath-provisioner]# kubectl describe pod hostpath-provisioner
Name: hostpath-provisioner
Namespace: default
Node: umeshworkstation/172.17.24.123
Start Time: Tue, 09 May 2017 23:44:41 -0400
Labels: <none>
Annotations: <none>
Status: Running
IP: 192.168.8.65
Controllers: <none>
Containers:
hostpath-provisioner:
Container ID: docker://c600cfa7a2f5f958ad24e83372a1276a91b41cb67773b9605af4a0ae021ec914
Image: hostpath-provisioner:latest
Image ID: docker://sha256:f6def41ba7c096701c65bf0c0aba6ff31e030573e1a900e378432491ecc5c556
Port:
State: Running
Started: Tue, 09 May 2017 23:44:45 -0400
Ready: True
Restart Count: 0
Environment:
NODE_NAME: (v1:spec.nodeName)
Mounts:
/tmp/hostpath-provisioner from pv-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7wwvj (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
pv-volume:
Type: HostPath (bare host directory volume)
Path: /tmp/hostpath-provisioner
default-token-7wwvj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7wwvj
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events: <none>
</code></pre>
<p>I then created the storage class as per the instructions on the project home page, and the storage class was created fine:</p>
<pre><code>[root@umeshworkstation hostpath-provisioner]# kubectl describe sc example-hostpath
Name: example-hostpath
IsDefaultClass: No
Annotations: <none>
Provisioner: example.com/hostpath
Parameters: <none>
Events: <none>
</code></pre>
<p>The next step was to create a PVC using the claim.yaml from the same location, but the PVC remains in the Pending state, and describe shows it's not able to locate the provisioner example.com/hostpath:</p>
<pre><code>[root@umeshworkstation hostpath-provisioner]# kubectl describe pvc
Name: hostpath
Namespace: default
StorageClass: example-hostpath
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-class=example-hostpath
volume.beta.kubernetes.io/storage-provisioner=example.com/hostpath
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2h 11s 874 persistentvolume-controller Normal ExternalProvisioning cannot find provisioner "example.com/hostpath", expecting that a volume for the claim is provisioned either manually or via external software
</code></pre>
<p>The PVC has remained forever in Pending state because of this.</p>
<p>Am I missing something?</p>
| <p>I have figured out the issue. Thanks @jaxxstorm for helping me move in the right direction.</p>
<p>When I inspected the provisioner pod logs I could see that it was unable to access the API server to list StorageClass, PVC or PV objects, as it was created with the default service account, which does not have the privileges to access these APIs.</p>
<p>The solution was to create a separate service account, pod security policy, cluster role and cluster role binding, as explained for the <a href="https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/docs/authorization.md" rel="nofollow noreferrer">NFS external provisioner here</a>.</p>
<p>After this I could see my PVC getting bound to the volume and the hostpath showing the mount:</p>
<pre><code>[root@umeshworkstation hostpath-provisioner]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
hostpath Bound pvc-8179c8d6-36db-11e7-9ed4-005056a21a50 1Mi RWX example-hostpath 1m
[root@umeshworkstation hostpath-provisioner]# ls /tmp/hostpath-provisioner/
pvc-8179c8d6-36db-11e7-9ed4-005056a21a50
</code></pre>
|
<p>Where does GKE log RBAC permission events?</p>
<p>Google Container Engine (GKE) clusters with kubernetes version v1.6 enable RBAC authorization by default. Apparently ABAC is enabled as a fallback authorization as well, in order to ease the transition of existing clusters to the new authorization scheme. The idea is that RBAC is tried first to authorize an action. <em>If that fails</em>, this should be logged <em>somewhere</em> and then ABAC is consulted to allow the action. This should enable cluster admins to inspect the logs for missed RBAC permissions before finally switching off ABAC.</p>
<p>We have some clusters that disable GCP logging/monitoring and instead use our own ELK stack. Just to be sure I've created a test cluster with GCP's cloud logging and monitoring, but I still can't find any RBAC events anywhere. The test pod is a prometheus server that discovers and scrapes other pods and nodes. </p>
| <p>To make this more comprehensive, from <a href="https://kubernetes.io/docs/admin/authorization/rbac/#parallel-authorizers" rel="noreferrer">Using RBAC Authorization</a>:</p>
<blockquote>
<p>When run with a log level of 2 or higher (--v=2), you can see RBAC denials in the apiserver log (prefixed with RBAC DENY:).</p>
</blockquote>
<p>In GKE the apiservers logs can be accessed via HTTP like:</p>
<pre><code>kubectl proxy &
curl -s http://localhost:8001/logs/kube-apiserver.log
</code></pre>
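<p>To filter for just the denials, something along these lines should work:</p>
<pre><code>kubectl proxy &
curl -s http://localhost:8001/logs/kube-apiserver.log | grep "RBAC DENY"
</code></pre>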
|
<p>I have a parent helm chart with some child chart defined as a dependency in requirements.yaml. Child chart is packaged and uploaded to some-repo. </p>
<p>Currently I'm doing: </p>
<ul>
<li>run <code>helm package parent-chart</code></li>
<li>upload parent-chart.tgz to some-repo</li>
</ul>
<p>And when I try to install via <code>helm install some-repo/parent-chart</code>
I get only the parent chart installed but not the child chart.</p>
<p>How do I need to package parent chart to be able to install it with the child chart together?</p>
| <p>One additional step should be added:</p>
<pre><code>helm dep update parent-chart
</code></pre>
<p>It places child-chart.tgz into the <code>charts/</code> folder of the parent chart, and then packaging works.</p>
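<p>For reference, this assumes a requirements.yaml in the parent chart along these lines (the names, version and repository URL are placeholders):</p>
<pre><code># parent-chart/requirements.yaml
dependencies:
- name: child-chart
  version: 0.1.0
  repository: "https://some-repo.example.com/charts"
</code></pre>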
|
<p>We are using Kubernetes to deploy our application docker images.</p>
<p>We would like to be able to take the application logs and push it to cloudwatch.</p>
<p>The application logs are generated using log4j or log4js, depending on what language the microservice was built in.</p>
<p>What is the right way to do this?</p>
| <p>Build containers with the Cloudwatch Agent installed; <a href="https://docs.docker.com/engine/reference/builder/" rel="nofollow noreferrer">to do this you will need a <code>Dockerfile</code></a>. <a href="http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_cloudwatch_logs.html" rel="nofollow noreferrer">Amazon even has docs specifically for this</a>.</p>
<p>You will need to make sure your base container is either Debian or RHEL based (Amazon docs seem to only support these types of distros with the agent); for example, Debian based systems will have the agent installed with:</p>
<pre><code>curl https://s3.amazonaws.com//aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
</code></pre>
<p>So, you will need to execute the above when you build the container.</p>
<p>Details for installation are <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html" rel="nofollow noreferrer">here</a>.</p>
<p>You mentioned IAM policy concerns; <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/EC2NewInstanceCWL.html" rel="nofollow noreferrer">Amazons example policy is below</a>; you will need to make sure that your containers have access.</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::myawsbucket/*"
]
}
]
}
</code></pre>
<p>Someone on <a href="https://github.com/SergeyZh/docker-awslogs/blob/master/Dockerfile#L5-L9" rel="nofollow noreferrer">GitHub has done this already</a>:</p>
<pre><code>FROM ubuntu:latest
MAINTAINER Ryuta Otaki <[email protected]>, Sergey Zhukov <[email protected]>
...
RUN apt-get install -q -y python python-pip wget
RUN cd / ; wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py
</code></pre>
<p>I highly suggest you follow their lead; use Ubuntu and follow the docs. Don't re-invent the wheel.</p>
|
<p>I made a fresh install of gcloud for ubuntu as instructed <a href="https://cloud.google.com/sdk/" rel="noreferrer">here</a>. I want to use the additional components offered by gcloud like <strong>kubectl</strong> and <strong>docker</strong>.</p>
<p>So, when I tried typing <code>gcloud components install kubectl</code>, I get an error saying that <strong>The component manager is disabled for this installation</strong>. Here is the full error message:</p>
<p><a href="https://i.stack.imgur.com/k8boq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/k8boq.png" alt="gcloud component install error"></a></p>
| <p>This is because you installed <code>google-cloud-sdk</code> with a package manager like <code>apt-get</code> or <code>yum</code>. </p>
<p><strong>kubectl</strong>:
If you look <a href="https://cloud.google.com/sdk/downloads#apt-get" rel="noreferrer">here</a> you can see how to install additional components. Basically <code>sudo apt-get install kubectl</code>. </p>
<p>If by <strong>docker</strong> you mean the <code>docker-credential-gcr</code> then I don't know if there's a way to install it using a package manager, can't seem to find it. Perhaps you can try the <a href="https://github.com/GoogleCloudPlatform/docker-credential-gcr" rel="noreferrer">github repo</a>. Mind you, you don't need this for commands like <code>gcloud docker -- push gcr.io/your-project/your-image:version</code>.<br>
<em>If you mean actual docker for building images and running them locally, that's standalone software which you need to install separately, <a href="https://docs.docker.com/engine/installation/" rel="noreferrer">instructions here</a>.</em> </p>
<p>Alternatively, you can uninstall <code>google-cloud-sdk</code> with <code>apt-get</code> and then reinstall with <a href="https://cloud.google.com/sdk/downloads#interactive" rel="noreferrer">interactive installer</a>, which will support the suggested <code>gcloud components install *</code></p>
|
<p>I'm a bit confused about what version of kubernetes I need and what method to use to deploy it. </p>
<p>I have deployed 1.5 the manual way. But there is a fix we need (PR-41597). This fix doesn't seem to have been merge in 1.5 but it is in 1.6.</p>
<p>But I can't find any way to install 1.6 without kubeadm. The documentation clearly states that kubeadm should not be used in production. And the kubeadm way does not allow for upgrades anyway. So I would prefer to stay away from kubeadm. </p>
<p>So I either have to get that fix merged in 1.5 or find a way to install 1.6 without kubeadm. Am I missing something here? Any help would be much appreciated. Thanks.</p>
| <p>There are plenty of ways to install Kubernetes 1.6:</p>
<p><a href="https://kubernetes.io/docs/getting-started-guides" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides</a></p>
<p>For example, CoreOS's CloudFormation installer supports 1.6: <a href="https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html#announcement-to-regular-users-of-kube-aws" rel="nofollow noreferrer">https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html#announcement-to-regular-users-of-kube-aws</a></p>
<p>As does Canonical's Juju templates: <a href="https://jujucharms.com/canonical-kubernetes/" rel="nofollow noreferrer">https://jujucharms.com/canonical-kubernetes/</a></p>
<p>If you need more specific assistance, please share more about your target environment (cloud/bare metal, OS, etc.).</p>
<p>A fairly low-level set of instructions can be found in <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way</a>; this may help you to tailor your own setup process.</p>
<p>For CentOS specifically, your best bet might be <a href="https://github.com/kubernetes-incubator/kargo" rel="nofollow noreferrer">Kargo</a>. This doesn't yet support 1.6, but it is active, so it should receive a 1.6 patch soon.</p>
|
<p>I installed <code>minikube</code> on local.</p>
<p>Dashboard is 192.168.99.100:30000</p>
<p>I installed Jenkins by helm:</p>
<pre><code>$ helm install stable/jenkins
</code></pre>
<p>Then the service always pending:</p>
<pre><code>$ kubectl get services --namespace=default -w wandering-buffoon-jenkins
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wandering-buffoon-jenkins 10.0.0.153 <pending> 8080:31326/TCP,50000:31090/TCP 26m
</code></pre>
<p>Why? Because of this I can't use an external IP to access it.</p>
| <p>I'm guessing that you didn't update the parameters to use <code>NodePort</code> instead of the default <code>LoadBalancer</code>. The minikube cluster doesn't support the <code>LoadBalancer</code> type so Kubernetes is looping trying to create a load balancer to get an external IP.</p>
<p>Use helm to see the options for the stable/jenkins chart:</p>
<pre><code>$ helm inspect values stable/jenkins
# Default values for jenkins.
...
# For minikube, set this to NodePort, elsewhere use LoadBalancer
# Use ClusterIP if your setup includes ingress controller
ServiceType: LoadBalancer
...
</code></pre>
<p>You can set this by doing something like this:</p>
<pre><code>$ echo $'Master:\n ServiceType: NodePort' > config.yaml
$ helm install -f config.yaml stable/jenkins
</code></pre>
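<p>Alternatively, assuming the chart nests the option under <code>Master</code> as in the config above, the same value can be passed inline with <code>--set</code>:</p>
<pre><code>$ helm install --set Master.ServiceType=NodePort stable/jenkins
</code></pre>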
|
<p>I am trying to deploy my app to <em>Kubernetes</em> running in <em>Google Container
Engine</em>.</p>
<p>The app can be found at: <a href="https://github.com/Industrial/docker-znc" rel="noreferrer">https://github.com/Industrial/docker-znc</a>.</p>
<p>The <em>Dockerfile</em> is built into an image on <em>Google Container Registry</em>.</p>
<p>I have deployed the app in <em>Kubernetes</em> via the + button. I don't have the YAML
for this.</p>
<p>I have inserted a <em>Secret</em> in <em>Kubernetes</em> for the PEM file required by the app.</p>
<ol>
<li>How do I get the YAML for the <em>Deployment</em>, <em>Service</em> and <em>Pod</em> created by
Kubernetes by filling in the form?</li>
<li>How do I get the <em>Secret</em> into my <em>Pod</em> for usage?</li>
</ol>
| <p>To get the yaml for a deployment (service, pod, secret, etc):</p>
<pre><code>kubectl get deploy deploymentname -o yaml
</code></pre>
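<p>For the second question, a <em>Secret</em> can be consumed by a pod either as environment variables or as a mounted volume. A minimal sketch of the volume variant (the secret name, key and mount path are placeholders for your PEM secret):</p>
<pre><code>spec:
  containers:
  - name: znc
    image: eu.gcr.io/your-project/your-image
    volumeMounts:
    - name: pem
      mountPath: /znc-secret    # the PEM file appears under this path
      readOnly: true
  volumes:
  - name: pem
    secret:
      secretName: znc-pem       # placeholder secret name
</code></pre>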
|
<p>How can I allow only one pod of a given type on a node in Kubernetes? DaemonSets don't fit this use case.</p>
<p>E.g. restricting scheduling to only one Elasticsearch pod per node, to prevent data loss in case the node goes down.</p>
<p>It can be achieved by carefully planning the CPU/memory resources of the pods and the machine type of the cluster.</p>
<p>Is there any other way to do so?</p>
| <p>Kubernetes 1.4 introduced <code>Inter-pod affinity and anti-affinity</code>. From the <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature" rel="noreferrer">documentation</a>: <code>Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to schedule on based on labels on pods that are already running on the node</code>.</p>
<p>With the "preferred" form this won't strictly prevent a pod from being scheduled onto a node that already runs one, but the pod will only land there if the scheduler has no other choice.</p>
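<p>A rough sketch of that "soft" variant in a pod template (the <code>app: elasticsearch</code> label is an assumption about how your pods are labelled; swapping <code>preferredDuringSchedulingIgnoredDuringExecution</code> for <code>requiredDuringSchedulingIgnoredDuringExecution</code> turns it into a hard rule):</p>
<pre><code>spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - elasticsearch
          topologyKey: kubernetes.io/hostname
</code></pre>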
|
<p>Is there a way to specify a custom <strong>NodePort</strong> port in a kubernetes service YAML definition?
I need to be able to define the port explicitly in my configuration file.</p>
| <p>You can set the type <code>NodePort</code> in your <code>Service</code> definition. Note that there is a <code>Node Port Range</code> configured for your API server with the option <code>--service-node-port-range</code> (by default <code>30000-32767</code>). You can also specify a port in that range specifically by setting the <code>nodePort</code> attribute under the <code>Port</code> object, or the system will choose a port in that range for you.</p>
<p>So a <code>Service</code> example with specified <code>NodePort</code> would look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
name: nginx
spec:
type: NodePort
ports:
- port: 80
nodePort: 30080
name: http
- port: 443
nodePort: 30443
name: https
selector:
name: nginx
</code></pre>
<p>For more information on NodePort, see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer">this doc</a>. For configuring API Server Node Port range please see <a href="https://kubernetes.io/docs/admin/kube-apiserver/" rel="noreferrer">this</a>.</p>
|
<p>I have a Docker container that expose a health check that is protected by a basic authentication. I've read the documentation on liveness probes <a href="http://kubernetes.io/v1.0/docs/user-guide/liveness/README.html" rel="noreferrer">here</a> but I cannot find any details of how to specify basic auth credentials. Is this not supported by Kubernetes? Are there any workarounds?</p>
| <p>It is now possible to add headers for liveness probes:</p>
<pre><code>livenessProbe:
httpGet:
path: /healthz
port: 8080
httpHeaders:
- name: Authorization
value: Basic aGE6aGE=
</code></pre>
<p>It may be worth noting that:</p>
<blockquote>
<p>if the browser uses Aladdin as the username and OpenSesame as the
password, then the field's value is the base64-encoding of
Aladdin:OpenSesame, or QWxhZGRpbjpPcGVuU2VzYW1l. Then the
Authorization header will appear as:</p>
<p>Authorization: Basic QWxhZGRpbjpPcGVuU2VzYW1l</p>
</blockquote>
<p>Source: <a href="https://en.wikipedia.org/wiki/Basic_access_authentication" rel="noreferrer">https://en.wikipedia.org/wiki/Basic_access_authentication</a></p>
<p>You can use the command <code>base64</code> in your shell to create this string:</p>
<pre><code>echo -n "Aladdin:OpenSesame" | base64
</code></pre>
|
<p>I am trying to run a small app in a g1 GKE instance (g1 instance has 1 vCPU, or 1000 millicores), and having issues with CPU request limits when scheduling pods. There are 4 pods, each being a different part of the app: Django web application, SQL service, and two helper Python processes.</p>
<p>The pods have been set up in the default namespace, so 100m are allocated for each by default. Turns out that Kube-system takes up 730 millicores on the node, so I have 270m left to distribute between the pods, and that's why only two pods start up and others are left hanging in the pending state. To get all the pods started I need to reduce each of their CPU quota (or reconsider the design).</p>
<p>I can guess roughly which pod would require more or less CPU. What would be a reasonable way to estimate the minimal millicore requirement for each of the pods?</p>
| <p>If you have <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer">Heapster</a> deployed in Kubernetes then you should be able to issue <code>kubectl top pods</code> straight after launching a pod. Append <code>-n kube-system</code> to view pods in the kube-system namespace.</p>
<p>This displays pod metrics in the following format:</p>
<pre><code>NAME CPU(cores) MEMORY(bytes)
------------15186790-1swfm 0m 44Mi
------------88929288-0nqb1 0m 12Mi
------------22666682-c6cb5 0m 43Mi
------------85400619-k5vhh 6m 74Mi
</code></pre>
<p>However, do remember that these metrics will change depending on the load and may vary quite a bit.</p>
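<p>Once you have a rough baseline from <code>kubectl top pods</code>, you can translate it into requests in each pod spec and leave some headroom; the numbers below are placeholders to be replaced with your observed usage:</p>
<pre><code>resources:
  requests:
    cpu: 50m       # placeholder, observed usage plus headroom
    memory: 100Mi  # placeholder
</code></pre>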
|
<ol>
<li>I have Kafka deployed and running in Kubernetes cluster. I am using this image from docker hub - <a href="https://hub.docker.com/r/cloudtrackinc/kubernetes-kafka/" rel="nofollow noreferrer">https://hub.docker.com/r/cloudtrackinc/kubernetes-kafka/</a></li>
<li>I have 3 kube-nodes in my kubernetes cluster. I have 3 Kafka and 3 zookeeper applications running and I have services zoo1,zoo2,zoo3 and kafka-1, kafka-2 and kafka-3 running corresponding to them. I am able to publish/consume from inside kubernetes cluster but I am not able to publish/consume from outside of kubernetes cluster i.e., from external machine not part of kubernetes cluster. </li>
<li>I am able to reach the kube-nodes from external machine - basically I can ping them using name/ip. </li>
<li>I am not using any external load balancer but I have a DNS that can resolve both my external machine and kube-nodes. </li>
<li>Using NodePort or ExternalIP to expose the Kafka service does not work in this case.</li>
<li>Setting <code>KAFKA_ADVERTISED_HOST_NAME</code> or <code>KAFKA_ADVERTISED_LISTENERS</code> in the Kafka RC YML, which ultimately sets the <code>ADVERTISED_HOST_NAME</code>/<code>ADVERTISED_LISTENERS</code> properties in <code>server.properties</code>, does not help with accessing Kafka from outside of the kubernetes cluster either. </li>
</ol>
<p>Please suggest how can I publish/consume from outside of kubernetes cluster. Thanks much! </p>
| <p>I had the same problem with accessing Kafka from outside of a k8s cluster on AWS. I managed to solve this issue by using the Kafka listeners feature, which supports multiple interfaces from version 0.10.2 onwards. </p>
<p>Here is how I configured the Kafka container:</p>
<pre><code> ports:
- containerPort: 9092
- containerPort: 9093
env:
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper:2181"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "INTERNAL_PLAINTEXT:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT"
- name: KAFKA_ADVERTISED_LISTENERS
value: "INTERNAL_PLAINTEXT://kafka-internal-service:9092,EXTERNAL_PLAINTEXT://123.us-east-2.elb.amazonaws.com:9093"
- name: KAFKA_LISTENERS
value: "INTERNAL_PLAINTEXT://0.0.0.0:9092,EXTERNAL_PLAINTEXT://0.0.0.0:9093"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "INTERNAL_PLAINTEXT"
</code></pre>
<p>Apart from that I configured two Services: one for internal (headless) and one for external (LoadBalancer) communication.</p>
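<p>A sketch of those two Services (the internal name and ports mirror the listener config above; the external name and the <code>app: kafka</code> selector are assumptions):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-internal-service
spec:
  clusterIP: None        # headless, internal traffic on 9092
  selector:
    app: kafka
  ports:
  - port: 9092
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-external-service
spec:
  type: LoadBalancer     # external traffic on 9093
  selector:
    app: kafka
  ports:
  - port: 9093
</code></pre>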
<p>Hopefully this will save people's time.</p>
|
<p>I'm running a jenkins instance on GCE inside a docker container and would like to execute a multibranch pipeline from this Jenkinsfile and Github. I'm using the <a href="https://cloud.google.com/solutions/continuous-delivery-jenkins-container-engine" rel="nofollow noreferrer">GCE jenkins</a> tutorial for this. Here is my <code>Jenkinsfile</code> </p>
<pre><code>node {
def project = 'xxxxxx'
def appName = 'gceme'
def feSvcName = "${appName}-frontend"
def imageTag = "eu.gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"
checkout scm
sh("echo Build image")
stage 'Build image'
sh("docker build -t ${imageTag} .")
sh("echo Run Go tests")
stage 'Run Go tests'
sh("docker run ${imageTag} go test")
sh("echo Push image to registry")
stage 'Push image to registry'
sh("gcloud docker push ${imageTag}")
sh("echo Deploy Application")
stage "Deploy Application"
switch (env.BRANCH_NAME) {
// Roll out to canary environment
case "canary":
// Change deployed image in canary to the one we just built
sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/canary/*.yaml")
sh("kubectl --namespace=production apply -f k8s/services/")
sh("kubectl --namespace=production apply -f k8s/canary/")
sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
break
// Roll out to production
case "master":
// Change deployed image in canary to the one we just built
sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/production/*.yaml")
sh("kubectl --namespace=production apply -f k8s/services/")
sh("kubectl --namespace=production apply -f k8s/production/")
sh("echo http://`kubectl --namespace=production get service/${feSvcName} --output=json | jq -r '.status.loadBalancer.ingress[0].ip'` > ${feSvcName}")
break
// Roll out a dev environment
default:
// Create namespace if it doesn't exist
sh("kubectl get ns ${env.BRANCH_NAME} || kubectl create ns ${env.BRANCH_NAME}")
// Don't use public load balancing for development branches
sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml")
sh("sed -i.bak 's#eu.gcr.io/cloud-solutions-images/gceme:1.0.0#${imageTag}#' ./k8s/dev/*.yaml")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/services/")
sh("kubectl --namespace=${env.BRANCH_NAME} apply -f k8s/dev/")
echo 'To access your environment run `kubectl proxy`'
echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${feSvcName}:80/"
}
}
</code></pre>
<p>I always get an error <code>docker not found</code>:</p>
<pre><code>[apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ] Running shell script
+ docker build -t eu.gcr.io/xxxxx/apiservice:master.1 .
/var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ@tmp/durable-b4503ecc/script.sh: 2: /var/jenkins_home/workspace/apiservice_master-GJCRJX6ZJPDVVSEUHIS6VBX7OYMFS5WKRVRKCSF4PSO76ZGZPKFQ@tmp/durable-b4503ecc/script.sh: docker: not found
</code></pre>
<p>What do I have to change to make docker work inside jenkins?</p>
| <p>That looks like DinD (Docker in Docker), which this <a href="https://forums.docker.com/t/docker-not-found-in-jenkins-pipeline/31683" rel="nofollow noreferrer">recent issue</a> points out as problematic.<br>
See "<a href="http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/" rel="nofollow noreferrer">Using Docker-in-Docker for your CI or testing environment? Think twice.</a>"</p>
<p>That same issue recommends running in privileged mode.<br>
Also make sure that the container in which you are executing actually has the docker CLI installed.</p>
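<p>If you would rather avoid Docker-in-Docker entirely, a common alternative is to mount the host's Docker socket into the Jenkins agent pod so that the <code>docker</code> CLI inside the container talks to the node's daemon (the CLI binary still has to be present in the image); a hedged sketch of the relevant part of the pod spec:</p>
<pre><code>spec:
  containers:
  - name: jenkins-agent
    image: your-jenkins-agent-image   # placeholder, must contain the docker CLI
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
</code></pre>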
|
<p>Could we host Java and PHP runtimes (2 applications) on the same server using Docker or Kubernetes?</p>
| <p>What you're asking doesn't quite make sense.</p>
<p>Kubernetes is a production management and orchestration system for Docker containers.</p>
|
<p>I am creating a Grafana dashboard to show metrics.
Is there any keyword which I can use in my query to check whether my service is running or not?
I am using Prometheus to retrieve data from my API for the metrics.</p>
| <p>You can create a query like so </p>
<pre><code>count_scalar(container_last_seen{name=<container_name>})
</code></pre>
<p>That will give you the count of how many containers are running with that name</p>
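<p>If the service itself is a Prometheus scrape target, the built-in <code>up</code> metric is another option; the <code>job</code> label here is an assumption matching your scrape config:</p>
<pre><code>up{job="my-api"}
</code></pre>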
|
<p>I am using Kubernetes to run a python script as a cron job. The issue is that I am not seeing the output of the script (which can take a while to run) until after the job finishes. I suspect this is due to the logging level (--v option) but I cannot for the life of me find the documentation for it (it defaults to --v=0). If I want to increase the verbosity of what is outputted, does anyone know the value of 'INFO' or 'TRACE' (or what the values are/where they are defined)? Thanks for any help in advance.</p>
<p>Edit: has anyone successfully gotten a python file to log to a Kubernetes pod while the pod was running? If so did you use print() or a different logging framework?</p>
| <p>According to Kubernetes <a href="https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ" rel="nofollow noreferrer">docs</a>, </p>
<pre><code>If you don't see much useful in the logs, you could try turning on
verbose logging on the Kubernetes component you suspect has a problem
using --v or --vmodule, to at least level 4. See
https://github.com/golang/glog for more details.
</code></pre>
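<p>Regarding the Python edit specifically: output that only appears after the job finishes is often caused by stdout buffering rather than by kubelet verbosity, so as an assumption worth checking, run the interpreter unbuffered (or flush after each print):</p>
<pre><code># run the script unbuffered so prints reach the pod logs immediately
python -u /path/to/script.py
# or set PYTHONUNBUFFERED=1 in the container environment
</code></pre>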
|
<p>I'm trying to setup auto deploy with Kubernetes on GitLab. I've successfully enabled Kubernetes integration in my project settings. </p>
<p>Well, the integration icon is green and when I click "Test Settings" I see "We sent a request to the provided URL":</p>
<p><a href="https://i.stack.imgur.com/5jjm1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5jjm1.png" alt="Kubernetes Integration"></a></p>
<p>My deployment environment is the Google Container Engine.</p>
<p>Here's the auto deploy section in my <code>gitlab-ci.yml</code> config:</p>
<pre><code>deploy:
image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
stage: deploy
script:
- export
- echo CI_PROJECT_ID=$CI_PROJECT_ID
- echo KUBE_URL=$KUBE_URL
- echo KUBE_CA_PEM_FILE=$KUBE_CA_PEM_FILE
- echo KUBE_TOKEN=$KUBE_TOKEN
- echo KUBE_NAMESPACE=$KUBE_NAMESPACE
- kubectl config set-cluster "$CI_PROJECT_ID" --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM_FILE"
- kubectl config set-credentials "$CI_PROJECT_ID" --token="$KUBE_TOKEN"
- kubectl config set-context "$CI_PROJECT_ID" --cluster="$CI_PROJECT_ID" --user="$CI_PROJECT_ID" --namespace="$KUBE_NAMESPACE"
- kubectl config use-context "$CI_PROJECT_ID"
</code></pre>
<p>When I look at the results, the deploy phase fails. This is because all the <code>KUBE</code> variables are empty. </p>
<p>I'm not having much luck with the Kubernetes services beyond this point. Am I missing something?</p>
| <p>As it turns out, the Deployment Variables will not materialise unless you have configured and referenced an Environment.</p>
<p>Here's what the <code>.gitlab-ci.yaml</code> file looks like with the <code>environment</code> keyword:</p>
<pre><code>deploy:
image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
stage: deploy
environment: production
script:
- export
- echo CI_PROJECT_ID=$CI_PROJECT_ID
- echo KUBE_URL=$KUBE_URL
- echo KUBE_CA_PEM_FILE=$KUBE_CA_PEM_FILE
- echo KUBE_TOKEN=$KUBE_TOKEN
- echo KUBE_NAMESPACE=$KUBE_NAMESPACE
- kubectl config set-cluster "$CI_PROJECT_ID" --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM_FILE"
- kubectl config set-credentials "$CI_PROJECT_ID" --token="$KUBE_TOKEN"
- kubectl config set-context "$CI_PROJECT_ID" --cluster="$CI_PROJECT_ID" --user="$CI_PROJECT_ID" --namespace="$KUBE_NAMESPACE"
- kubectl config use-context "$CI_PROJECT_ID"
</code></pre>
|
<p>In kubernetes, Deployments can have labels. But in my situation, I need the actual containers in a pod to have certain labels that trigger some action to be carried out by an agent on the node.</p>
<p>Is there a way to add specific labels to containers in Kubernetes?</p>
| <p>You can't add a label to individual containers, only to the pod in which they are running. Rather than using labels (which are for identifying info), I'd suggest leveraging pod <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/" rel="nofollow noreferrer">annotations</a>, potentially along with the <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">downward API</a>, for triggering actions.</p>
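<p>A sketch of what per-container information could look like as pod annotations (the keys and values are purely illustrative):</p>
<pre><code>metadata:
  annotations:
    agent.example.com/main-container: "needs-backup"
    agent.example.com/sidecar: "skip"
</code></pre>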
|
<p>In a default openshift install, there is an unused project titled <code>kube-system</code>. It seems like <code>openshift-infra</code> is for things like metrics, <code>default</code> is for the router and registry, and <code>openshift</code> is for global templates. </p>
<p>What is the <code>kube-system</code> project used for though? I can't find any docs on it.</p>
| <p><code>kube-system</code> is the namespace for objects created by the Kubernetes system. </p>
<p>Typically, this would contain pods like <code>kube-dns</code>, <code>kube-proxy</code>, <code>kubernetes-dashboard</code> and stuff like fluentd, heapster, ingresses and so on.</p>
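<p>You can list what is running there with:</p>
<pre><code>kubectl get pods -n kube-system
# or, with the OpenShift CLI
oc get pods -n kube-system
</code></pre>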
|
<p>I am installing Kubernetes in a non-internet environment. I want to use Helm and want to set up a custom chart repository.</p>
<p><code>helm init</code> barfs after creating <code>~/.helm/repository/repositories.yaml</code> as it can't reach the default Google repo, so I will end up installing manually via kubectl - what is the format of this chart repository if I want to set up my own?</p>
<p>I will run <code>helm init --dry-run --debug</code> in order to get the manifest and amend this to point at a Docker registry that I have access to then install via <code>kubectl</code>.</p>
| <p>I had initially missed this section in the docs: <a href="https://github.com/kubernetes/helm/blob/master/docs/chart_repository.md" rel="nofollow noreferrer">https://github.com/kubernetes/helm/blob/master/docs/chart_repository.md</a></p>
<p>A chart repository is just an HTTP web server that serves an <code>index.yaml</code> file plus the packaged chart archives (.tgz files).</p>
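<p>So generating a repository from a directory of packaged charts and pointing helm at it is roughly this (the chart name and URL are placeholders for your internal server):</p>
<pre><code>helm package mychart/                                       # produces mychart-0.1.0.tgz
helm repo index . --url http://charts.internal.example.com  # writes index.yaml
# serve the directory with any web server, then:
helm repo add internal http://charts.internal.example.com
</code></pre>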
|
<p>I have two coreos machines with CoreOS beta (1185.2.0).</p>
<p>I installed kubernetes with rkt containers using a modified script; the original script is at <a href="https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic" rel="nofollow noreferrer">https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic</a>. The modified version is at <a href="https://github.com/kfirufk/coreos-kubernetes-multi-node-generic-install-script" rel="nofollow noreferrer">https://github.com/kfirufk/coreos-kubernetes-multi-node-generic-install-script</a>.</p>
<p>the environment variables that I set for the script are:</p>
<pre><code>ADVERTISE_IP=10.79.218.2
ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
K8S_VER=v1.4.3_coreos.0
HYPERKUBE_IMAGE_REPO=quay.io/coreos/hyperkube
POD_NETWORK=10.2.0.0/16
SERVICE_IP_RANGE=10.3.0.0/24
K8S_SERVICE_IP=10.3.0.1
DNS_SERVICE_IP=10.3.0.10
USE_CALICO=true
CONTAINER_RUNTIME=rkt
ETCD_CERT_FILE="/etc/ssl/etcd/etcd1.pem"
ETCD_KEY_FILE="/etc/ssl/etcd/etcd1-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/ssl/etcd/ca.pem"
ETCD_CLIENT_CERT_AUTH=true
OVERWRITE_ALL_FILES=true
CONTROLLER_HOSTNAME="coreos-2.tux-in.com"
ETCD_CERT_ROOT_DIR="/etc/ssl/etcd"
ETCD_SCHEME="https"
ETCD_AUTHORITY="coreos-2.tux-in.com:2379"
IS_MASK_UPDATE_ENGINE=false
</code></pre>
<p>The most notable changes are added support for etcd2 TLS certificates and the use of a kubeconfig yaml instead of the deprecated <code>--api-server</code> flag.</p>
<p>Currently I'm trying to install using the controller script for coreos-2.tux-in.com.</p>
<p>The kubeconfig yaml for the controller node contains:</p>
<pre><code>current-context: tuxin-coreos-context
apiVersion: v1
clusters:
- cluster:
certificate-authority: /etc/kubernetes/ssl/ca.pem
server: https://coreos-2.tux-in.com:443
name: tuxin-coreos-cluster
contexts:
- context:
cluster: tuxin-coreos-cluster
name: tuxin-coreos-context
kind: Config
preferences:
colors: true
users:
- name: kubelet
user:
client-certificate: /etc/kubernetes/ssl/apiserver.pem
client-key: /etc/kubernetes/ssl/apiserver-key.pem
</code></pre>
<p>The generated <code>kubelet.service</code> file contains:</p>
<pre><code>[Service]
Environment=KUBELET_VERSION=v1.4.3_coreos.0
Environment=KUBELET_ACI=quay.io/coreos/hyperkube
Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf --mount volume=dns,target=/etc/resolv.conf --volume rkt,kind=host,source=/opt/bin/host-rkt --mount volume=rkt,target=/usr/bin/rkt --volume var-lib-rkt,kind=host,source=/var/lib/rkt --mount volume=var-lib-rkt,target=/var/lib/rkt --volume stage,kind=host,source=/tmp --mount volume=stage,target=/tmp --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStart=/usr/lib/coreos/kubelet-wrapper --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml --register-schedulable=false --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin=cni --container-runtime=rkt --rkt-path=/usr/bin/rkt --rkt-stage1-image=coreos.com/rkt/stage1-coreos --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=10.79.218.2 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
</code></pre>
<p>Now, I'm pretty sure it's related to using <code>--kubeconfig</code> instead of <code>--api-server</code>, because I started getting this error only after this change.</p>
<p>The kubelet log output is at <a href="http://pastebin.com/eD8TrMJJ" rel="nofollow noreferrer">http://pastebin.com/eD8TrMJJ</a></p>
<p>Kubelet is not installed properly now; on my desktop, when I run <code>kubectl get nodes</code>, it returns an empty list.</p>
<p>Any ideas?</p>
<h1>update</h1>
<p>output of <code>kubectl get nodes --v=8</code> at <a href="http://pastebin.com/gDBbn0rn" rel="nofollow noreferrer">http://pastebin.com/gDBbn0rn</a></p>
<h1>update</h1>
<p><code>etcdctl ls /registry/minions</code> output:</p>
<pre><code>Error: 100: Key not found (/registry/minions) [42662]
</code></pre>
<p><code>ps -aef | grep kubelet</code> on controller</p>
<pre><code>root 2054 1 3 12:49 ? 00:18:06 /kubelet --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml --register-schedulable=false --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin=cni --container-runtime=rkt --rkt-path=/usr/bin/rkt --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=10.79.218.2 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local
root 2605 1 0 12:51 ? 00:00:00 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --register=true --link-journal=try-guest --keep-unit --quiet --uuid=b7008337-7b90-4fd7-8f1f-7bc45f056685 --machine=rkt-b7008337-7b90-4fd7-8f1f-7bc45f056685 --directory=stage1/rootfs --bind=/var/lib/kubelet/pods/52646008312b398ac0d3031ad8b9e280/containers/kube-scheduler/ce639294-9f68-11e6-a3bd-1c6f653e6f72:/opt/stage2/kube-scheduler/rootfs/dev/termination-log --bind=/var/lib/kubelet/pods/52646008312b398ac0d3031ad8b9e280/containers/kube-scheduler/etc-hosts:/opt/stage2/kube-scheduler/rootfs/etc/hosts --bind=/var/lib/kubelet/pods/52646008312b398ac0d3031ad8b9e280/containers/kube-scheduler/etc-resolv-conf:/opt/stage2/kube-scheduler/rootfs/etc/resolv.conf --capability=CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FSETID,CAP_FOWNER,CAP_KILL,CAP_MKNOD,CAP_NET_RAW,CAP_NET_BIND_SERVICE,CAP_SETUID,CAP_SETGID,CAP_SETPCAP,CAP_SETFCAP,CAP_SYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0
root 2734 1 0 12:51 ? 00:00:00 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --register=true --link-journal=try-guest --keep-unit --quiet --uuid=ee6be263-c4ed-4a70-879c-57e2dde4ab7a --machine=rkt-ee6be263-c4ed-4a70-879c-57e2dde4ab7a --directory=stage1/rootfs --bind=/var/lib/kubelet/pods/0c997ab29f8d032a29a952f578d9014c/containers/kube-apiserver/ceb3598e-9f68-11e6-a3bd-1c6f653e6f72:/opt/stage2/kube-apiserver/rootfs/dev/termination-log --bind=/var/lib/kubelet/pods/0c997ab29f8d032a29a952f578d9014c/containers/kube-apiserver/etc-hosts:/opt/stage2/kube-apiserver/rootfs/etc/hosts --bind=/var/lib/kubelet/pods/0c997ab29f8d032a29a952f578d9014c/containers/kube-apiserver/etc-resolv-conf:/opt/stage2/kube-apiserver/rootfs/etc/resolv.conf --bind-ro=/etc/ssl/etcd:/opt/stage2/kube-apiserver/rootfs/etc/ssl/etcd --bind-ro=/etc/kubernetes/ssl:/opt/stage2/kube-apiserver/rootfs/etc/kubernetes/ssl --bind-ro=/usr/share/ca-certificates:/opt/stage2/kube-apiserver/rootfs/etc/ssl/certs --capability=CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FSETID,CAP_FOWNER,CAP_KILL,CAP_MKNOD,CAP_NET_RAW,CAP_NET_BIND_SERVICE,CAP_SETUID,CAP_SETGID,CAP_SETPCAP,CAP_SETFCAP,CAP_SYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0
root 2760 1 0 12:51 ? 00:00:00 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --register=true --link-journal=try-guest --keep-unit --quiet --uuid=6a9e6598-3c1d-4563-bbdf-4ca1774f8f83 --machine=rkt-6a9e6598-3c1d-4563-bbdf-4ca1774f8f83 --directory=stage1/rootfs --bind-ro=/etc/kubernetes/ssl:/opt/stage2/kube-controller-manager/rootfs/etc/kubernetes/ssl --bind-ro=/usr/share/ca-certificates:/opt/stage2/kube-controller-manager/rootfs/etc/ssl/certs --bind=/var/lib/kubelet/pods/11d558df35524947fb7ed66cf7bed0eb/containers/kube-controller-manager/cebd2d3d-9f68-11e6-a3bd-1c6f653e6f72:/opt/stage2/kube-controller-manager/rootfs/dev/termination-log --bind=/var/lib/kubelet/pods/11d558df35524947fb7ed66cf7bed0eb/containers/kube-controller-manager/etc-hosts:/opt/stage2/kube-controller-manager/rootfs/etc/hosts --bind=/var/lib/kubelet/pods/11d558df35524947fb7ed66cf7bed0eb/containers/kube-controller-manager/etc-resolv-conf:/opt/stage2/kube-controller-manager/rootfs/etc/resolv.conf --capability=CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FSETID,CAP_FOWNER,CAP_KILL,CAP_MKNOD,CAP_NET_RAW,CAP_NET_BIND_SERVICE,CAP_SETUID,CAP_SETGID,CAP_SETPCAP,CAP_SETFCAP,CAP_SYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0
root 3861 1 0 12:53 ? 00:00:00 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --register=true --link-journal=try-guest --keep-unit --quiet --uuid=3dad014c-b31f-4e11-afb7-59214a7a4de9 --machine=rkt-3dad014c-b31f-4e11-afb7-59214a7a4de9 --directory=stage1/rootfs --bind=/var/lib/kubelet/pods/7889fbb0a1c86d9bfdb12908938dee20/containers/kube-policy-controller/etc-hosts:/opt/stage2/kube-policy-controller/rootfs/etc/hosts --bind=/var/lib/kubelet/pods/7889fbb0a1c86d9bfdb12908938dee20/containers/kube-policy-controller/etc-resolv-conf:/opt/stage2/kube-policy-controller/rootfs/etc/resolv.conf --bind-ro=/etc/ssl/etcd:/opt/stage2/kube-policy-controller/rootfs/etc/ssl/etcd --bind=/var/lib/kubelet/pods/7889fbb0a1c86d9bfdb12908938dee20/containers/kube-policy-controller/dfd7a7dc-9f68-11e6-a3bd-1c6f653e6f72:/opt/stage2/kube-policy-controller/rootfs/dev/termination-log --capability=CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FSETID,CAP_FOWNER,CAP_KILL,CAP_MKNOD,CAP_NET_RAW,CAP_NET_BIND_SERVICE,CAP_SETUID,CAP_SETGID,CAP_SETPCAP,CAP_SETFCAP,CAP_SYS_CHROOT --bind=/var/lib/kubelet/pods/7889fbb0a1c86d9bfdb12908938dee20/containers/leader-elector/etc-hosts:/opt/stage2/leader-elector/rootfs/etc/hosts --bind=/var/lib/kubelet/pods/7889fbb0a1c86d9bfdb12908938dee20/containers/leader-elector/etc-resolv-conf:/opt/stage2/leader-elector/rootfs/etc/resolv.conf --bind=/var/lib/kubelet/pods/7889fbb0a1c86d9bfdb12908938dee20/containers/leader-elector/f9e65e21-9f68-11e6-a3bd-1c6f653e6f72:/opt/stage2/leader-elector/rootfs/dev/termination-log --capability=CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FSETID,CAP_FOWNER,CAP_KILL,CAP_MKNOD,CAP_NET_RAW,CAP_NET_BIND_SERVICE,CAP_SETUID,CAP_SETGID,CAP_SETPCAP,CAP_SETFCAP,CAP_SYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0
</code></pre>
<p><code>ps -aef | grep kubelet</code> on worker</p>
<pre><code>root 2092 1 0 12:56 ? 00:03:56 /kubelet --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin=cni --container-runtime=rkt --rkt-path=/usr/bin/rkt --register-node=true --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=10.79.218.3 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml --tls-cert-file=/etc/kubernetes/ssl/worker.pem --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
</code></pre>
<h1>update</h1>
<p>When I run <code>journalctl -f -u kubelet</code> I notice that every 10 seconds I get the following message:</p>
<pre><code>Nov 02 13:01:54 coreos-2.tux-in.com kubelet-wrapper[1751]: I1102 13:01:54.360929 1751 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
</code></pre>
<p>To which service is this message related? Maybe something is restarting itself every 10 seconds because of some sort of failure.</p>
| <p>The option <code>--require-kubeconfig</code> for <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">kubelet</a> should help.</p>
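<p>For illustration only, this is roughly where the flag could go, reusing the kubelet invocation you posted from the worker (the systemd unit name is an assumption — adjust it to however your kubelet is actually started):</p>
<pre><code># excerpt of the existing kubelet invocation, with the extra flag added
/kubelet \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --require-kubeconfig \
  ...                                # keep the rest of your existing flags

# if the kubelet runs as a systemd unit, reload and restart it afterwards
sudo systemctl daemon-reload && sudo systemctl restart kubelet
</code></pre>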
|
<p>I am trying to add --admission-control=ServiceAccount to my kube-apiserver call to be able to host an HTTPS connection from the kubernetes-ui to the apiserver. I am getting this on the controller manager.</p>
<pre><code>Mar 25 18:39:51 master kube-controller-manager[1388]: I0325 18:39:51.425556    1388 event.go:211] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"nginx4-3088538572", UID:"aefae1a6-f2b8-11e5-8269-0401bd450a01", APIVersion:"extensions", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "nginx4-3088538572-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
</code></pre>
<p>Right now my default serviceaccount looks like this:</p>
<pre><code>cesco@desktop: ~/code/go/src/bitbucket.org/cescoferraro/cluster/terraform on master [+!?]
$ kubectl get serviceaccount default -o wide
NAME SECRETS AGE
default 0 2m
cesco@desktop: ~/code/go/src/bitbucket.org/cescoferraro/cluster/terraform on master [+!?]
$ kubectl get serviceaccount default -o json
{
"kind": "ServiceAccount",
"apiVersion": "v1",
"metadata": {
"name": "default",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/serviceaccounts/default",
"uid": "eaa3c6e1-f2cd-11e5-973f-0401bd52ec01",
"resourceVersion": "30",
"creationTimestamp": "2016-03-25T21:09:52Z"
}
}
</code></pre>
<p>I am using a token to authenticate to Kubernetes, and the full cluster works on HTTPS.</p>
<h3>CONTROLLER-MANAGER</h3>
<pre><code>ExecStart=/opt/bin/kube-controller-manager \
--address=0.0.0.0 \
--root-ca-file=/home/core/ssl/ca.pem \
--service-account-private-key-file=/home/core/ssl/kube-key.pem \
--master=https://${COREOS_PRIVATE_IPV4}:6443 \
--logtostderr=true \
--kubeconfig=/home/core/.kube/config \
--cluster-cidr=10.132.0.0/16 \
--register-retry-count 100
</code></pre>
<h3>APISERVER</h3>
<pre><code>ExecStart=/opt/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--logtostderr=true \
--insecure-bind-address=${MASTER_PRIVATE} \
--insecure-port=8080 \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--runtime-config=api/v1 \
--allow-privileged=true \
--service-cluster-ip-range=10.100.0.0/16 \
--advertise-address=${MASTER_PUBLIC} \
--token-auth-file=/data/kubernetes/token.csv \
--etcd-cafile=/home/core/ssl/ca.pem \
--etcd-certfile=/home/core/ssl/etcd1.pem \
--etcd-keyfile=/home/core/ssl/etcd1-key.pem \
--etcd-servers=https://${MASTER_PRIVATE}:2379,https://${DATABASE_PRIVATE}:2379 \
--cert-dir=/home/core/ssl \
--client-ca-file=/home/core/ssl/ca.pem \
--tls-cert-file=/home/core/ssl/kubelet.pem \
--tls-private-key-file=/home/core/ssl/kubelet-key.pem \
--kubelet-certificate-authority=/home/core/ssl/ca.pem \
--kubelet-client-certificate=/home/core/ssl/kubelet.pem \
--kubelet-client-key=/home/core/ssl/kubelet-key.pem \
--kubelet-https=true
</code></pre>
<h3>.kube/config</h3>
<pre><code>ExecStart=/opt/bin/kubectl config set-cluster CLUSTER \
--server=https://${MASTER_PRIVATE}:6443 \
--certificate-authority=/home/core/ssl/ca.pem
ExecStart=/opt/bin/kubectl config set-credentials admin \
--token=elezxaMiqXVcXXU7lRYZ4akrlAtxY5Za \
--certificate-authority=/home/core/ssl/ca.pem \
--client-key=/home/core/ssl/kubelet-key.pem \
--client-certificate=/home/core/ssl/kubelet.pem
ExecStart=/opt/bin/kubectl config set-context default-system \
--cluster=CLUSTER \
--user=admin
ExecStart=/opt/bin/kubectl config use-context default-system
</code></pre>
<h1>UPDATE 1</h1>
<p>Per @Jordan Liggitt's answer, I added --service-account-key-file=/home/core/ssl/kubelet-key.pem to the apiserver call, but now I am getting </p>
<pre><code>Mar 26 11:19:30 master kube-apiserver[1874]: F0326 11:19:30.556591 1874 server.go:410] Invalid Authentication Config: asn1: structure error: tags don't match (16 vs {class:0 tag:2 length:1 isCompound:false}) {optional:false explicit:false application:false defaultValue:<nil> tag:<nil> stringType:0 set:false omitEmpty:false} tbsCertificate @2
</code></pre>
<p>With version 1.6, you can auto-mount the token if you specify it while creating the service account, like this:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: sysdig
automountServiceAccountToken: true
</code></pre>
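<p>As a quick check (a sketch; the file name is an assumption), apply the manifest and confirm that a token secret gets attached to the account:</p>
<pre><code>kubectl apply -f serviceaccount.yaml
kubectl get serviceaccount sysdig -o yaml   # the generated token should now be listed under 'secrets:'
</code></pre>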
|
<p>I am trying to install Helm (v2.4.1) Tiller into a Kubernetes cluster (v1.5.7). This needs to work in an environment without internet access, so I want to get the manifest for the Tiller deployment from <code>helm init --dry-run --debug</code>. However, when I copy the manifest into a file called <code>tiller.yaml</code> and then run <code>kubectl create -f tiller.yaml</code> I get the validation error shown below. What's wrong with the file please?</p>
<pre><code>error validating "tiller.yaml": error validating data: [found invalid field labels for v1beta1.Deployment, found invalid field name for v1beta1.Deployment, found invalid field namespace for v1beta1.Deployment, found invalid field Spec for v1beta1.Deployment, found invalid field Status for v1beta1.Deployment, found invalid field creationTimestamp for v1beta1.Deployment]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>tiller.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
Spec:
MinReadySeconds: 0
Paused: false
ProgressDeadlineSeconds: null
Replicas: 1
RevisionHistoryLimit: null
RollbackTo: null
Selector: null
Strategy:
RollingUpdate: null
Type: ""
Template:
Spec:
ActiveDeadlineSeconds: null
Affinity: null
AutomountServiceAccountToken: null
Containers:
- Args: null
Command: null
Env:
- Name: TILLER_NAMESPACE
Value: kube-system
ValueFrom: null
EnvFrom: null
Image: gcr.io/kubernetes-helm/tiller:v2.4.1
ImagePullPolicy: IfNotPresent
Lifecycle: null
LivenessProbe:
Exec: null
FailureThreshold: 0
HTTPGet:
HTTPHeaders: null
Host: ""
Path: /liveness
Port: 44135
Scheme: ""
InitialDelaySeconds: 1
PeriodSeconds: 0
SuccessThreshold: 0
TCPSocket: null
TimeoutSeconds: 1
Name: tiller
Ports:
- ContainerPort: 44134
HostIP: ""
HostPort: 0
Name: tiller
Protocol: ""
ReadinessProbe:
Exec: null
FailureThreshold: 0
HTTPGet:
HTTPHeaders: null
Host: ""
Path: /readiness
Port: 44135
Scheme: ""
InitialDelaySeconds: 1
PeriodSeconds: 0
SuccessThreshold: 0
TCPSocket: null
TimeoutSeconds: 1
Resources:
Limits: null
Requests: null
SecurityContext: null
Stdin: false
StdinOnce: false
TTY: false
TerminationMessagePath: ""
TerminationMessagePolicy: ""
VolumeMounts: null
WorkingDir: ""
DNSPolicy: ""
Hostname: ""
ImagePullSecrets: null
InitContainers: null
NodeName: ""
NodeSelector: null
RestartPolicy: ""
SchedulerName: ""
SecurityContext:
FSGroup: null
HostIPC: false
HostNetwork: false
HostPID: false
RunAsNonRoot: null
RunAsUser: null
SELinuxOptions: null
SupplementalGroups: null
ServiceAccountName: ""
Subdomain: ""
TerminationGracePeriodSeconds: null
Tolerations: null
Volumes: null
creationTimestamp: null
labels:
app: helm
name: tiller
Status:
AvailableReplicas: 0
Conditions: null
ObservedGeneration: 0
ReadyReplicas: 0
Replicas: 0
UnavailableReplicas: 0
UpdatedReplicas: 0
creationTimestamp: null
labels:
app: helm
name: tiller
name: tiller-deploy
namespace: kube-system
---
apiVersion: v1
kind: Service
Spec:
ClusterIP: ""
ExternalIPs: null
ExternalName: ""
LoadBalancerIP: ""
LoadBalancerSourceRanges: null
Ports:
- Name: tiller
NodePort: 0
Port: 44134
Protocol: ""
TargetPort: tiller
Selector:
app: helm
name: tiller
SessionAffinity: ""
Type: ClusterIP
Status:
LoadBalancer:
Ingress: null
creationTimestamp: null
labels:
app: helm
name: tiller
name: tiller-deploy
namespace: kube-system
</code></pre>
| <p>All your attributes should start with a lowercase letter, and creationTimestamp/labels/etc should all appear within a "metadata" stanza. How was this manifest formed?</p>
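<p>For comparison, here is a minimal hand-written sketch of the same Deployment with lowercase keys and a proper <code>metadata</code> stanza. It only keeps the fields visible in your dump and is meant to illustrate the expected shape, not to be a drop-in replacement for the generated Tiller manifest:</p>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tiller-deploy
  namespace: kube-system
  labels:
    app: helm
    name: tiller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helm
        name: tiller
    spec:
      containers:
      - name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.4.1
        env:
        - name: TILLER_NAMESPACE
          value: kube-system
        ports:
        - containerPort: 44134
          name: tiller
EOF
</code></pre>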
|
<p>I am trying to connect Google Container Engine from my local machine using <code>gcloud</code> sdk but i am getting below error.</p>
<pre><code>C:\Program Files (x86)\Google\Cloud SDK>gcloud container clusters get-credentials cluster-2 --zone us-central1-a --project myapp-00000
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) environment variable HOME or KUBECONFIG must be set to store credentials for kubectl
</code></pre>
<p>I checked the HOME location: there is no .kube folder created and no environment variable set by default, so I created the <code>KUBECONFIG</code> environment variable myself. After that I am getting the error below:</p>
<pre><code>ERROR: gcloud crashed (OSError): [Errno 13] Permission denied: 'C:\\Tool\\config'
</code></pre>
<p>I started the <code>gcloud</code> SDK as admin and it has all the correct permissions.</p>
<p><strong>EDIT</strong></p>
<p>I am using the versions below (which are the latest as of today):</p>
<pre><code>Google Cloud SDK 129.0.0
kubectl
kubectl-windows-x86_64 1.4.0
C:\Program Files (x86)\Google\Cloud SDK>kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0",
</code></pre>
| <p>I had the same issue.
It turns out that KUBECONFIG refers to a file and not to a directory.</p>
<p>So if you set KUBECONFIG to 'C:\Tool\config\kubectl.cfg', it should work fine.</p>
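<p>For example, from a Windows Command Prompt you could point the variable at a writable path before re-running the command (the path below is just a placeholder — use whatever location suits you):</p>
<pre><code>REM current session only
set KUBECONFIG=%USERPROFILE%\.kube\config

REM persist for future sessions
setx KUBECONFIG "%USERPROFILE%\.kube\config"

gcloud container clusters get-credentials cluster-2 --zone us-central1-a --project myapp-00000
</code></pre>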
|
<p>I have Kubernetes set up and running a grpc service in a pod. I am successfully hitting an endpoint on the service, which has a print() statement in it, but I see no logs in the log file. I have seen this before when I was running a (cron) job in Kubernetes and the logs only appeared after the job was done (as opposed to when the job was running). Is there a way to make kubernetes write to the log file right away? Any setting that I can put (either cluster-level or just for the pod)? Thanks for any help in advance!</p>
<p>Found the root cause. Specifically, found it at <a href="https://stackoverflow.com/questions/29663459/python-app-does-not-print-anything-when-running-detached-in-docker">Python app does not print anything when running detached in docker</a>. The solution is to set the <code>PYTHONUNBUFFERED</code> environment variable to a non-empty value (conventionally <code>PYTHONUNBUFFERED=1</code>; the linked answer uses <code>0</code>, which also works because Python only checks that the variable is non-empty). It was not that the print statement was failing; its output was being buffered. Doing the above will solve the issue.</p>
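<p>If you want this applied to every replica without rebuilding the image, one option is to patch the environment variable into the pod template (a sketch — the deployment and container names below are placeholders for your own):</p>
<pre><code>kubectl patch deployment my-grpc-service -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"my-grpc-service","env":[{"name":"PYTHONUNBUFFERED","value":"1"}]}]}}}}'
</code></pre>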
|
<p>Based on this document <a href="https://github.com/kubernetes/ingress/tree/master/examples/deployment/nginx/kubeadm" rel="nofollow noreferrer">https://github.com/kubernetes/ingress/tree/master/examples/deployment/nginx/kubeadm</a>
I am creating the nginx controller, but the controller is not starting. It gives the error message below.</p>
<pre><code>2017-05-21T17:15:45.274300000Z I0521 17:15:45.259441 1 launch.go:101] &{NGINX 0.9.0-beta.5 git-83cb03b5 [email protected]:ixdy/kubernetes-ingress.git}
2017-05-21T17:15:45.274448000Z I0521 17:15:45.259460 1 launch.go:104] Watching for ingress class: nginx
2017-05-21T17:15:45.274563000Z I0521 17:15:45.259620 1 launch.go:257] Creating API server client for https://10.96.0.1:443
2017-05-21T17:15:45.274670000Z I0521 17:15:45.258931 1 nginx.go:180] starting NGINX process...
2017-05-21T17:15:45.310531000Z F0521 17:15:45.303209 1 launch.go:118] no service with name kube-system/default-http-backend found: User "system:serviceaccount:kube-system:default" cannot get services in the namespace "kube-system". (get services default-http-backend)
</code></pre>
<p>I see default backend service running.</p>
<pre><code>$ kubectl --kubeconfig=/c/software/k612_centos/admin.conf -n kube-system get po
NAME READY STATUS RESTARTS AGE
default-http-backend-2198840601-zt8gt 1/1 Running 0 6m
nginx-ingress-controller-4108150732-q2rb2 0/1 CrashLoopBackOff 6 6m
</code></pre>
<p>How can I clear this error message?</p>
<p>Thanks
SR</p>
<p>Which Kubernetes version are you using?</p>
<p>If you are using Kubernetes 1.6.x you need to define RBAC rules for the controller to access the default-http-backend service and other required components.</p>
<p>Please refer to this issue</p>
<p><a href="https://github.com/kubernetes/ingress/issues/575" rel="nofollow noreferrer">https://github.com/kubernetes/ingress/issues/575</a></p>
<p>The manifest file in the first comment worked fine for me.</p>
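<p>As a quick (and deliberately over-permissive) workaround while testing, you could grant the service account named in the error full cluster rights — for anything beyond a test cluster you should instead apply the scoped RBAC rules from the manifest linked above:</p>
<pre><code>kubectl create clusterrolebinding nginx-ingress-temp-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
</code></pre>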
|
<p>I am trying to deploy nginx on Kubernetes (version v1.5.2). I have deployed nginx with 3 replicas; the YAML file is below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: deployment-example
spec:
replicas: 3
revisionHistoryLimit: 2
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.10
ports:
- containerPort: 80
</code></pre>
<p>Now I want to expose its port 80 on port 30062 of the node. For that I created the service below:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: nginx-ils-service
spec:
ports:
- name: http
port: 80
nodePort: 30062
selector:
app: nginx
type: LoadBalancer
</code></pre>
<p>This service is working as it should, but it shows as pending both on the Kubernetes dashboard and in the terminal.
<a href="https://i.stack.imgur.com/ix5v1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ix5v1.png" alt="Terminal output" /></a><a href="https://i.stack.imgur.com/TUMOB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TUMOB.png" alt="Dash board status" /></a></p>
| <p>It looks like you are using a custom Kubernetes Cluster (using <code>minikube</code>, <code>kubeadm</code> or the like). In this case, there is no LoadBalancer integrated (unlike AWS or Google Cloud). With this default setup, you can only use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer"><code>NodePort</code></a> or an Ingress Controller.</p>
<p>With the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class" rel="noreferrer">Ingress Controller</a> you can setup a domain name which maps to your pod; you don't need to give your Service the <code>LoadBalancer</code> type if you use an Ingress Controller.</p>
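<p>In practice that means reaching the service through a node's own IP plus the allocated NodePort (30062 in your manifest), for example:</p>
<pre><code>kubectl get nodes -o wide         # note a node's IP
kubectl get svc nginx-ils-service

curl http://<node-ip>:30062
</code></pre>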
|
<p>I'm following <a href="https://medium.com/@SystemMining/setup-kubenetes-cluster-on-ubuntu-16-04-with-kubeadm-336f4061d929" rel="noreferrer">a blog post</a> to setup a kubernetes cluster with kubeadm. So I have a Virtualbox created with bridge network and simply followed the instructions.</p>
<p>I initially just did <code>kubeadm init</code> and it didn't work (master NotReady). So I figured maybe an older version might work better.</p>
<p>So I did</p>
<pre><code>kubeadm init --kubernetes-version v1.6.2
</code></pre>
<p>It finished quite quickly. But <code>kubeadm get nodes</code> always returns:</p>
<pre><code>master NotReady 4m v1.6.3
</code></pre>
<p>I checked the docker images that was downloaded and they're like this:</p>
<pre><code>gcr.io/google_containers/kube-proxy-amd64 v1.6.2 7a1b61b8f5d4 4 weeks ago 109.2 MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.6.2 c7ad09fe3b82 4 weeks ago 132.7 MB
gcr.io/google_containers/kube-apiserver-amd64 v1.6.2 e14b1d5ee474 4 weeks ago 150.5 MB
gcr.io/google_containers/kube-scheduler-amd64 v1.6.2 b55f2a2481b9 4 weeks ago 76.76 MB
gcr.io/google_containers/etcd-amd64 3.0.17 243830dae7dd 12 weeks ago 168.9 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 12 months ago 746.9 kB
</code></pre>
<p>and output of <code>docker ps</code> shows that the following containers are using <code>pause</code></p>
<pre><code>k8s_POD_kube-scheduler-master_kube-system_e4c05f51e4e89758e2fb58eb9c457e66_2
k8s_POD_kube-controller-manager-master_kube-system_14cceb4cae4afafe5d2872cedc46b03f_2
k8s_POD_etcd-master_kube-system_7075157cfd4524dbe0951e00a8e3129e_2
k8s_POD_kube-apiserver-master_kube-system_bd46883c0ce86694060fb2924470cfa7_2
</code></pre>
<p>I'm a bit confused:</p>
<ol>
<li><p>Under what situation would the master node get into <code>Ready</code> state?</p></li>
<li><p>Why the difference between the version I asked (and the docker images tag) and the version reported by <code>kubeadm</code>?</p></li>
</ol>
<p>It looks like you did not set up a network overlay. For example, for flannel it would be like this:</p>
<pre><code> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
</code></pre>
<p>After this, your node should change to Ready. </p>
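<p>You can then watch the flannel pods come up and the node flip to Ready, roughly like this:</p>
<pre><code>kubectl get pods -n kube-system -w   # wait for the kube-flannel-* pods to reach Running
kubectl get nodes                    # the master should now report Ready
</code></pre>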
|
<p>I already installed minikube, the single-node Kubernetes cluster. I just want some help on how to deploy a multi-node Hadoop cluster inside this Kubernetes node. I need a starting point, please!</p>
| <p>For clarification, do you want hadoop to leverage k8s components to run jobs or do you just want it to run as a k8s pod?</p>
<p>Unfortunately I could not find an example of hadoop built as a Kubernetes scheduler. You can probably still run it similar to the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/spark" rel="nofollow noreferrer">spark example</a>.</p>
<p>Update: Spark now ships with better integration for Kubernetes. Information can be found <a href="https://apache-spark-on-k8s.github.io/userdocs/running-on-kubernetes.html" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have a cluster of 3 nodes running Kubernetes 1.6.1, each has 2 CPU and 4G RAM.</p>
<p>I am constantly redeploying my application with the same Docker tag by updating the pod template hash, i.e. by replacing an environment variable value that is passed to the container.</p>
<p><code>sed "s/THIS_STRING_IS_REPLACED_DURING_BUILD/$(date)/g" nginx-deployment.yml | kubectl replace -f -</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
env:
- name: FOR_GODS_SAKE_PLEASE_REDEPLOY
value: 'THIS_STRING_IS_REPLACED_DURING_BUILD'
</code></pre>
<p>If I do this a few hundred times, I can't redeploy any more - new pods are stuck in the Pending state.
<code>kubectl get events</code> produces the following:
</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 50s 379 default-scheduler Warning
FailedScheduling No nodes are available that match all of the following predicates:: Insufficient pods (3).
</code></pre>
<p>At the same time I can see about 200 Exited nginx containers on every Kube node.</p>
<p>Looking in kube-controller-manager logs I can see that PodGC is trying to delete some pods, but they are not found.
</p>
<pre><code>I0516 12:53:41.137311 1 gc_controller.go:175] Found unscheduled terminating Pod nginx-deployment-2927112463-qczvv not assigned to any Node. Deleting.
I0516 12:53:41.137320 1 gc_controller.go:62] PodGC is force deleting Pod: default:nginx-deployment-2927112463-qczvv
E0516 12:53:41.190592 1 gc_controller.go:177] pods "nginx-deployment-2927112463-qczvv" not found
I0516 12:53:41.195020 1 gc_controller.go:175] Found unscheduled terminating Pod nginx-deployment-3265736979-jrpzb not assigned to any Node. Deleting.
I0516 12:53:41.195048 1 gc_controller.go:62] PodGC is force deleting Pod: default:nginx-deployment-3265736979-jrpzb
E0516 12:53:41.238307 1 gc_controller.go:177] pods "nginx-deployment-3265736979-jrpzb" not found
</code></pre>
<p>Is there anything I can do to prevent that from happening?</p>
| <p>Kubernetes allows you to tweak the garbage collection flags of kubelet. This can be done via changing the flags <code>--maximum-dead-containers</code> or <code>--maximum-dead-containers-per-container</code>. Read more about it in docs here:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/" rel="nofollow noreferrer">Configuring kubelet Garbage Collection</a></li>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">Configuring Out Of Resource Handling</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/" rel="nofollow noreferrer">Garbage Collection</a></li>
</ul>
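<p>As an illustration only (the values are arbitrary — tune them for your workload), the kubelet on each node could be started with container GC tightened like this:</p>
<pre><code># excerpt of a kubelet invocation with stricter container garbage collection
/kubelet \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=1 \
  --maximum-dead-containers=100 \
  ...                                # your existing flags
</code></pre>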
|
<p>I have a GKE cluster running in GCE, I was able to build + tag an image derived from ubuntu:16.04:</p>
<pre><code>/ # docker images
REPOSITORY TAG IMAGE ID
CREATED SIZE
eu.gcr.io/my-project/ubuntu-gcloud latest a723e43228ae 7 minutes ago 347MB
ubuntu 16.04 ebcd9d4fca80 7 days ago 118MB
</code></pre>
<p>First I try to log in to the registry (as documented in the GKE docs):</p>
<pre><code>docker login -u oauth2accesstoken -p `curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google"|awk -F\" "{ print \$4 }"` eu.gcr.io
</code></pre>
<p>And then the <code>docker push</code> command fails:</p>
<pre><code># docker push eu.gcr.io/my-project/ubuntu-gcloud
The push refers to a repository [eu.gcr.io/my-project/ubuntu-gcloud]
a3a6893ab23f: Preparing
6e390fa7d62c: Preparing
22b8fccbaf84: Preparing
085eeae7a10b: Preparing
b29983dd2306: Preparing
33f1a94ed7fc: Waiting
b27287a6dbce: Waiting
47c2386f248c: Waiting
2be95f0d8a0c: Waiting
2df9b8def18a: Waiting
denied: Unable to create the repository, please check that you have access to do so.
</code></pre>
<p>The token should be valid; on another instance I'm able to run <code>gcloud</code> commands with it, and the service account has the 'Editor' role on the project.</p>
<p>The weirdest part is when I do <code>docker login</code> with obviously invalid credentials</p>
<pre><code>misko@MacBook ~ $ docker login -u oauth2accesstoken -p somethingverystupidthatisreallynotmypasswordortoken123 eu.gcr.io
Login Succeeded
</code></pre>
<p>login always succeeds.</p>
<p>What shall I do to successfully <code>docker push</code> to gcr.io?</p>
| <p>Try this:</p>
<pre><code>gcloud docker -- push eu.gcr.io/my-project/ubuntu-gcloud
</code></pre>
<p>If you want to use regular docker commands, update your docker configuration with GCR credentials:</p>
<pre><code>gcloud docker -a
</code></pre>
<p>Then you can build and push docker images like this:</p>
<pre><code>docker build -t eu.gcr.io/my-project/ubuntu-gcloud .
docker push eu.gcr.io/my-project/ubuntu-gcloud
</code></pre>
|
<p>We make use of Ingress to create HTTPS load balancers that forward directly to our (typically nodejs) services. However, recently we have wanted more control of traffic in front of nodejs which the Google load balancer doesn't provide.</p>
<ul>
<li>Standardised, custom error pages</li>
<li>Standard rewrite rules (e.g redirect http to https)</li>
<li>Decouple pod readinessProbes from load balancer health checks (so we can still serve custom error pages when there are no healthy pods).</li>
</ul>
<p>We use nginx in other parts of our stack so this seems like a good choice, and I have seen several examples of nginx being used to front services in Kubernetes, typically in one of two configurations.</p>
<ul>
<li>An nginx container in every pod forwarding traffic directly to the application on localhost.</li>
<li>A separate nginx Deployment & Service, scaled independently and forwarding traffic to the appropriate Kubernetes Service.</li>
</ul>
<p>What are the pros/cons of each method and how should I determine which one is most appropriate for our use case?</p>
| <p>Following on from Vincent H, I'd suggest pipelining the Google HTTPS Load Balancer to an nginx ingress controller.</p>
<p>As you've mentioned, this can scale independently, has its own health checks, and lets you standardise your error pages. </p>
<p>We've achieved this by having a single <code>kubernetes.io/ingress.class: "gce"</code> ingress object, which has a default backend of our nginx ingress controller. All our other ingress objects are annotated with <code>kubernetes.io/ingress.class: "nginx"</code>. </p>
<p>We're using the controller documented here: <a href="https://github.com/kubernetes/ingress/tree/master/controllers/nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress/tree/master/controllers/nginx</a>. With a <a href="https://github.com/kubernetes/ingress/blob/dd7f8b4a972d9c7346da2b37419ff3529d3b38d0/controllers/nginx/configuration.md" rel="nofollow noreferrer">custom <code>/etc/nginx/template/nginx.tmpl</code></a> allowing complete control over the ingress.</p>
<blockquote>
<p>For complete transparency, we haven't (yet) set up custom error pages in the nginx controller, however the <a href="https://github.com/kubernetes/ingress/tree/dd7f8b4a972d9c7346da2b37419ff3529d3b38d0/examples/customization/custom-errors/nginx" rel="nofollow noreferrer">documentation</a> appears straightforward.</p>
</blockquote>
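<p>For illustration, an Ingress routed through the nginx controller then only needs the class annotation — the host and service names below are made up:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
EOF
</code></pre>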
|
<p>I am fairly new to Kubernetes and I have recently exposed a service on Minikube using the <code>NodePort</code> type. I want to test that my application is running, but I don't see any external IP, only the port. Here is the output of my commands:</p>
<pre><code>$kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 1h
kubernetes-bootcamp 10.0.0.253 <nodes> 8080:31180/TCP 20m
$kubectl describe services/kubernetes-bootcamp
Name: kubernetes-bootcamp
Namespace: default
Labels: run=kubernetes-bootcamp
Annotations: <none>
Selector: run=kubernetes-bootcamp
Type: NodePort
IP: 10.0.0.253
Port: <unset> 8080/TCP
NodePort: <unset> 31180/TCP
Endpoints: 172.17.0.2:8080
Session Affinity: None
Events: <none>
</code></pre>
<p>What is the <code>External IP</code> in this case, so that I can use <code>curl</code> to reach my exposed app? I followed this tutorial while working on my laptop: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-interactive/" rel="noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-interactive/</a>.</p>
<p>P.S.: What does that <code><nodes></code> mean in the output of the <code>get service</code> command under <code>External-IP</code>?</p>
| <p>As you are using <code>minikube</code>, the command <code>minikube ip</code> will return the IP you are looking for. </p>
<p>In case you are not using <code>minikube</code>, <code>kubectl get nodes -o yaml</code> will show you, amongst other data, the IP address of the node. </p>
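<p>Putting the first suggestion together with the NodePort from your service description, a quick check could look like this:</p>
<pre><code>curl http://$(minikube ip):31180

# or let minikube resolve the URL for you
minikube service kubernetes-bootcamp --url
</code></pre>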
|
<p>I am using this guide to install Kubernetes on a Vagrant cluster:</p>
<p><a href="https://kubernetes.io/docs/getting-started-guides/kubeadm/" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/kubeadm/</a></p>
<p>At <code>(2/4) Initializing your master</code>, there came some errors:</p>
<pre><code>[root@localhost ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
</code></pre>
<p>I checked the content of the <code>/proc/sys/net/bridge/bridge-nf-call-iptables</code> file; it contains only a single <code>0</code>.</p>
<p>At <code>(3/4) Installing a pod network</code>, I downloaded <code>kube-flannel</code> file:</p>
<p><a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="noreferrer">https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</a></p>
<p>And run <code>kubectl apply -f kube-flannel.yml</code>, got error:</p>
<pre><code>[root@localhost ~]# kubectl apply -f kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>At this point, I don't know how to go on.</p>
<p>My <code>Vagrantfile</code>:</p>
<pre><code> # Master Server
config.vm.define "master", primary: true do |master|
master.vm.network :private_network, ip: "192.168.33.200"
master.vm.network :forwarded_port, guest: 22, host: 1234, id: 'ssh'
end
</code></pre>
<p>You can set <code>/proc/sys/net/bridge/bridge-nf-call-iptables</code> by editing <code>/etc/sysctl.conf</code>. There you can add [1]:</p>
<pre><code>net.bridge.bridge-nf-call-iptables = 1
</code></pre>
<p>Then execute </p>
<pre><code>sudo sysctl -p
</code></pre>
<p>And the changes will be applied. With this the pre-flight check should pass.</p>
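<p>If you only want to flip that one key without reloading the whole file, something like this should work as well:</p>
<pre><code>sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
# or, equivalently
echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
</code></pre>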
<hr>
<p>[1] <a href="http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf" rel="noreferrer">http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf</a></p>
|
<p>In my Kubernetes cluster I couldn't connect to the internet from my containers. After searching, I found a possible solution, which is to turn off "IP Masquerade". But I have had no luck turning this off; whatever I did, I could not get it disabled.</p>
<p>First I changed the following:</p>
<pre><code>/etc/kubernetes/cni/docker_opts_cni.env
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ="false"
</code></pre>
<p>Then tried </p>
<pre><code>/etc/kubernetes/cni/docker_opts_cni.env
DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ="--ip-masq=false"
</code></pre>
<p>I'm using the latest Kubernetes version (v1.6.3) and I have followed <a href="https://coreos.com/kubernetes/docs/latest/getting-started.html" rel="nofollow noreferrer">this</a> to configure my cluster. I'm using flannel without Calico. Can someone guide me on how I can get this disabled? Thanks in advance.</p>
<p>This is an example configuration for creating a new flannel network using CNI for Docker.</p>
<pre><code>/etc/kubernetes/cni/net.d/10-flannel.conf
{
"cniVersion": "0.2.0",
"name": "mybridge",
"type": "bridge",
"bridge": "cni_bridge1",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.15.30.0/24",
"routes": [
{ "dst": "0.0.0.0/0" },
{ "dst": "1.1.1.1/32", "gw":"10.15.30.1"}
],
"rangeStart": "10.15.30.100",
"rangeEnd": "10.15.30.200",
"gateway": "10.15.30.99"
}
}
</code></pre>
<p>In your configuration file, changing the <code>ipMasq</code> value from <code>true</code> to <code>false</code> (or adding the option if it is not present) should turn off "IP Masquerade".</p>
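<p>After restarting the kubelet/CNI you can roughly verify whether masquerade rules are still being installed by inspecting the NAT table on the node (rule names vary between CNI versions):</p>
<pre><code>sudo iptables -t nat -L POSTROUTING -n -v | grep -i masquerade
</code></pre>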
|
<p>Hi, I am running a Kubernetes cluster where I run a MailHog container.</p>
<p>But I need to run it with my own docker run parameter. If I ran it in Docker directly, I would use the command:</p>
<pre><code>docker run mailhog/mailhog -auth-file=./auth.file
</code></pre>
<p>But I need to run it via Kubernetes pod. My pod looks like:</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mailhog
spec:
replicas: 1
revisionHistoryLimit: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: mailhog
spec:
containers:
- name: mailhog
image: us.gcr.io/com/mailhog:1.0.0
ports:
- containerPort: 8025
</code></pre>
<p>How can I run the Docker container with the parameter -auth-file=./auth.file via Kubernetes? Thanks.</p>
<p>I tried adding under <code>containers</code></p>
<pre><code> command: ["-auth-file", "/data/mailhog/auth.file"]
</code></pre>
<p>but then I get </p>
<pre><code> Failed to start container with docker id 7565654 with error: Error response from daemon: Container command '-auth-file' not found or does not exist.
</code></pre>
| <p>thanks to @lang2</p>
<p>here is my deployment.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mailhog
spec:
replicas: 1
revisionHistoryLimit: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: mailhog
spec:
volumes:
- name: secrets-volume
secret:
secretName: mailhog-login
containers:
- name: mailhog
image: us.gcr.io/com/mailhog:1.0.0
resources:
limits:
cpu: 70m
memory: 30Mi
requests:
cpu: 50m
memory: 20Mi
volumeMounts:
- name: secrets-volume
mountPath: /data/mailhog
readOnly: true
ports:
- containerPort: 8025
- containerPort: 1025
args:
- "-auth-file=/data/mailhog/auth.file"
</code></pre>
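<p>For completeness, the <code>mailhog-login</code> secret referenced above could be created from the local auth file like this (a sketch; adjust the path to where your file lives):</p>
<pre><code>kubectl create secret generic mailhog-login --from-file=auth.file=./auth.file
</code></pre>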
|
<p><em>Foreword:</em> My question is somewhat related to <a href="https://stackoverflow.com/questions/26705201/whats-the-difference-between-apaches-mesos-and-googles-kubernetes">this one</a>, but I'd like to go deeper on the particular aspect of scheduling.</p>
<p>Apart from the fact that Kubernetes's scheduling is <em>centralized</em> and Mesos's scheduling is <em>decentralized</em>, following a two-step process, what are the differences between the scheduling algorithms of both projects? </p>
<p>I've been using Kubernetes for half a year now, and I've never used Mesos in practice. I understand the concept of <em>resource offerings</em> but I cannot establish a comparison between Mesos and Kubernetes scheduling algorithms, mainly because I don't have deep knowledge of both tools' implementations.</p>
<p>I'm not sure if these are directly comparable. Kubernetes can be run as a Mesos framework. Its scheduler is described <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/scheduler_algorithm.md" rel="nofollow noreferrer">here</a>; it is based on filtering and then ranking the nodes.</p>
<p>Mesos's two-step scheduling depends more on the framework's algorithm:</p>
<ol>
<li>Mesos presents the offers to the framework based on the <a href="https://cs.stanford.edu/%7Ematei/papers/2011/nsdi_drf.pdf" rel="nofollow noreferrer">DRF algorithm</a>. Frameworks can also be prioritized by using roles and weights.</li>
<li>Frameworks decide which task to run based on the given offer. Every framework can implement its own algorithm for matching tasks with offers. <a href="https://stackoverflow.com/a/43448790/1387612">This is an NP-hard problem</a>.</li>
</ol>
<p><strong>Appendix</strong> Quote from <a href="https://medium.com/@ArmandGrillet/comparison-of-container-schedulers-c427f4f7421" rel="nofollow noreferrer">https://medium.com/@ArmandGrillet/comparison-of-container-schedulers-c427f4f7421</a></p>
<p><a href="https://i.stack.imgur.com/JZiRp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JZiRp.png" alt="enter image description here" /></a></p>
<blockquote>
<p><strong>Monolithic scheduling</strong></p>
<p>Monolithic schedulers are composed of a single
scheduling agent handling all the requests, they are commonly used in
high-performance computing. A monolithic scheduler generally applies a
single-algorithm implementation for all incoming jobs thus running
different scheduling logic depending on job types is difficult. Apache
Hadoop YARN [55], a popular architecture for Hadoop that delegates
many scheduling functions to per-application components, is a
monolithic scheduler architecture due to the fact that the resource
requests from application masters have to be sent to a single global
scheduler in the resource master.</p>
<p><strong>Two-level scheduling</strong></p>
<p>A two-level
scheduler adjusts the allocation of resources to each scheduler
dynamically using a central coordinator to decide how many resources
each sub-cluster can have, it is used in Mesos [50] and was used for
Hadoop-on-Demand (now replaced by YARN). With this architecture, the
allocator avoids conflicts by offering a given resource to only one
framework at a time and attempts to achieve dominant resource fairness
by choosing the order and the sizes of the resources it offers. Only
one framework is examining a resource at a time thus the concurrency
control is called pessimistic, a strategy that is less error-prone but
slower compared to an optimistic concurrency control offering a
resource to many frameworks at the same time.</p>
<p><strong>Shared-state scheduling</strong></p>
<p>Omega grants each scheduler full access to the entire cluster,
allowing them to compete in a free-for-all manner. There is no central
resource allocator as all of the resource-allocation decisions take
place in the schedulers. There is no central policy-enforcement
engine, individual schedulers are taking decisions in this variant of
the two-level scheme. By supporting independent scheduler
implementations and exposing the entire allocation state of the
schedulers, Omega can scale to many schedulers and works with
different workloads with their own scheduling policies [54].</p>
</blockquote>
|
<p>I would like to run a specific command after the deployment has initialized successfully.</p>
<p>This is my yaml file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: auth
spec:
replicas: 1
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: {{my-service-image}}
env:
- name: NODE_ENV
value: "docker-dev"
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 3000
</code></pre>
<p>However, I would like to run a command for DB migration after (not before) the deployment is successfully initialized and the pods are running. </p>
<p>I can do it manually for every pod (with kubectl exec), but this is not very scalable.</p>
| <p>I resolved it using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="noreferrer">lifecycles</a>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: auth
spec:
replicas: 1
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: {{my-service-image}}
env:
- name: NODE_ENV
value: "docker-dev"
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 3000
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", {{cmd}}]
</code></pre>
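<p>One caveat worth noting: <code>postStart</code> is not guaranteed to run after your application's entrypoint has finished starting, and it runs for every container start, so for one-off database migrations a separate <code>Job</code> is often a cleaner fit. A rough sketch (image and command are placeholders):</p>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: auth-db-migrate
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: my-service-image              # placeholder
        command: ["npm", "run", "migrate"]   # placeholder
EOF
</code></pre>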
|
<p>I created a persistent volume using NFS and a PVC for the same volume. However, the PVC always provisions EBS disk storage instead of binding to the PV. Please see the log below:</p>
<pre><code>> kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
mynfspv 100Gi RWX Retain Available 7s
</code></pre>
<p>create PVC now</p>
<pre><code>> kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
mynfspvc Bound pvc-a081c470-3f23-11e7-9d30-024e42ef6b60 100Gi RWX default 4s
> kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
mynfspv 100Gi RWX Retain Available 50s
pvc-a081c470-3f23-11e7-9d30-024e42ef6b60 100Gi RWX Delete Bound default/mynfspvc default 17s
</code></pre>
<p>nfs-pv.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mynfspv
labels:
name: nfs2
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
server: dbc56.efs.us-west-2.amazonaws.com
path: /
</code></pre>
<p>nfs-pvc.yaml</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mynfspvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
</code></pre>
| <p>It looks like you have dynamic provisioning and default storageclass feature enabled, and the default class is AWS ebs. You can check your default class with following command:</p>
<pre><code>$ kubectl get storageclasses
NAME TYPE
standard (default) kubernetes.io/aws-ebs
</code></pre>
<p>If this is correct, then I think you'll need to specify storage class to solve you problem.</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: nfs-class
provisioner: kubernetes.io/fake-nfs
</code></pre>
<p>Add <code>storageClassName</code> to both your PVC</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mynfspvc
spec:
storageClassName: nfs-class
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
</code></pre>
<p>and your PV</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mynfspv
labels:
name: nfs2
spec:
storageClassName: nfs-class
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
server: dbc56.efs.us-west-2.amazonaws.com
path: /
</code></pre>
<p>You can check out <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1</a> for details.</p>
|