<p>i followed the instructions to install <a href="https://github.com/NVIDIA/nvidia-docker" rel="nofollow noreferrer">nvidia-docker 2</a> and then installed kubernetes 1.10 via kubeadm (on rhel7): i did the following:</p> <pre><code>curl -s -L https://nvidia.github.io/nvidia-docker/rhel7.4/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo yum update yum install docker yum install -y nvidia-container-runtime-hook yum install --downloadonly --downloaddir=/tmp/ nvidia-docker2-2.0.3-1.docker1.13.1.noarch nvidia-container-runtime-2.0.0-1.docker1.13.1.x86_64 rpm -Uhv --replacefiles /tmp/nvidia-container-runtime-2.0.0-1.docker1.13.1.x86_64.rpm /tmp/nvidia-docker2-2.0.3-1.docker1.13.1.noarch.rpm mkdir -p /etc/systemd/system/docker.service.d/ cat &lt;&lt;EOF &gt; /etc/systemd/system/docker.service.d/override.conf [Service] ExecStart= ExecStart=/usr/bin/dockerd-current --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES EOF cat &lt;&lt;EOF &gt; /etc/docker/daemon.json { "default-runtime": "nvidia", "runtimes": { "nvidia": { "path": "/usr/bin/nvidia-container-runtime", "runtimeArgs": [] } } } EOF systemctl restart docker docker run --rm nvidia/cuda nvidia-smi # success! </code></pre> <p>i can even schedule gpu'd containers and see all gpus from within the container.</p> <p>however, when i deploy a container with:</p> <pre><code>resources: limits: nvidia.com/gpu: 1 </code></pre> <p>the pods remain as:</p> <pre><code>jupyter jupyterlab-gpu 0/1 Pending 0 1m &lt;none&gt; &lt;none&gt; </code></pre> <p>describe shows:</p> <pre><code>Name: jupyterlab-gpu Namespace: jupyter Node: &lt;none&gt; Labels: app=jupyterhub component=singleuser-server heritage=jupyterhub hub.jupyter.org/username=me Annotations: &lt;none&gt; Status: Pending IP: Containers: notebook: Image: slaclab/slac-jupyterlab-gpu Port: 8888/TCP Host Port: 0/TCP Limits: cpu: 2 memory: 2147483648 nvidia.com/gpu: 1 Requests: cpu: 500m memory: 536870912 nvidia.com/gpu: 1 Environment: JUPYTERHUB_USER: me JUPYTERLAB_IDLE_TIMEOUT: 43200 JPY_API_TOKEN: 1fca7b3d716e4d54a98d8054d17b16fb CPU_LIMIT: 2.0 JUPYTERHUB_SERVICE_PREFIX: /user/me/ MEM_GUARANTEE: 536870912 JUPYTERHUB_API_URL: http://10.103.19.59:8081/hub/api JUPYTERHUB_OAUTH_CALLBACK_URL: /user/me/oauth_callback JUPYTERHUB_BASE_URL: / JUPYTERHUB_API_TOKEN: 1fca7b3d716e4d54a98d8054d17b16fb CPU_GUARANTEE: 0.5 JUPYTERHUB_CLIENT_ID: user-me MEM_LIMIT: 2147483648 JUPYTERHUB_HOST: Mounts: /home/ from generic-user-home (rw) /var/run/secrets/kubernetes.io/serviceaccount from no-api-access-please (ro) Conditions: Type Status PodScheduled False Volumes: generic-user-home: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: generic-user-home ReadOnly: false no-api-access-please: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: QoS Class: Burstable Node-Selectors: group=gpu Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 14s (x13 over 2m) default-scheduler 0/8 nodes are available: 1 node(s) were not ready, 6 node(s) didn't match node selector, 7 Insufficient nvidia.com/gpu. 
</code></pre> <p>I am able to schedule containers on the node without the GPU resource limit without issues.</p> <p>Is there a way I can validate that kubectl (?) can 'see' the GPUs?</p>
<p>You can view node details via <code>kubectl get nodes -o yaml</code>; the <code>nvidia.com/gpu</code> resource will be listed under <code>status.allocatable</code> and <code>status.capacity</code> alongside CPU and memory.</p>
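<p>As a quick check for that specific resource, something like the following should work (the node name is a placeholder):</p> <pre><code># show what the scheduler sees as allocatable on one node
kubectl describe node &lt;gpu-node-name&gt; | grep -A 8 Allocatable

# list the nvidia.com/gpu allocatable count per node
kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
</code></pre> <p>If the GPU column shows <code>&lt;none&gt;</code> for your GPU node, the GPUs have not been registered/advertised on that node, which would explain the "Insufficient nvidia.com/gpu" scheduling events.</p>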
<p>recently I've been unable to connect to the bash on my kubernetes cluster. And I'm at a loss as to why. Have anyone else experienced this?</p> <p><strong>What happened</strong>: I can no longer connect to /bin/bash in my running pods. It simply hangs when trying to exec the command. I've verified that bash is installed (<code>/bin/bash --version</code>). I've tried both locally and from the google cloud console</p> <p><strong>What you expected to happen</strong>: That my local terminal successfully connects to the pods bash terminal</p> <p><strong>How to reproduce it (as minimally and precisely as possible)</strong>: I've only tested it on my cluster, but the command I'm running is:</p> <p><code>kubectl exec -i POD_ID --namespace=NAMESPACE -c CONTAINER -- /bin/bash</code></p> <p>I've also run it with DEBUG=1 which results in the following:</p> <pre><code>DEBUG=1 kubectl exec -i POD_ID --namespace=NAMESPACE -c CONTAINER -- /bin/bash I0412 10:52:14.560443 2675 logs.go:41] (0xc4200aed10) (0xc420242140) Create stream I0412 10:52:14.560486 2675 logs.go:41] (0xc4200aed10) (0xc420242140) Stream added, broadcasting: 1 I0412 10:52:14.611561 2675 logs.go:41] (0xc4200aed10) Reply frame received for 1 I0412 10:52:14.611658 2675 logs.go:41] (0xc4200aed10) (0xc4203e26e0) Create stream I0412 10:52:14.611692 2675 logs.go:41] (0xc4200aed10) (0xc4203e26e0) Stream added, broadcasting: 3 I0412 10:52:14.656684 2675 logs.go:41] (0xc4200aed10) Reply frame received for 3 I0412 10:52:14.656725 2675 logs.go:41] (0xc4200aed10) (0xc4202425a0) Create stream I0412 10:52:14.656737 2675 logs.go:41] (0xc4200aed10) (0xc4202425a0) Stream added, broadcasting: 5 I0412 10:52:14.702100 2675 logs.go:41] (0xc4200aed10) Reply frame received for 5 I0412 10:52:14.702140 2675 logs.go:41] (0xc4200aed10) (0xc420659680) Create stream I0412 10:52:14.702151 2675 logs.go:41] (0xc4200aed10) (0xc420659680) Stream added, broadcasting: 7 I0412 10:52:14.746707 2675 logs.go:41] (0xc4200aed10) Reply frame received for 7 </code></pre> <p><strong>Anything else we need to know?</strong>:</p> <p><strong>Environment</strong>: - Kubernetes version</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.6-gke.0", GitCommit:"cb151369f60073317da686a6ce7de36abe2bda8d", GitTreeState:"clean", BuildDate:"2018-03-21T19:01:20Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <ul> <li>Cloud provider or hardware configuration: <code>Google Cloud</code></li> <li><p>OS (e.g. from <code>/etc/os-release</code>):</p> <p>PRETTY_NAME="Debian GNU/Linux 9 (stretch)" NAME="Debian GNU/Linux" VERSION_ID="9" VERSION="9 (stretch)" ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/"</p></li> <li><p>Kernel (e.g. <code>uname -a</code>): <code>Linux bounce-deployment-f95687cbc-mfgtg 4.4.111+ #1 SMP Thu Feb 1 22:06:37 PST 2018 x86_64 GNU/Linux</code></p></li> <li>Install tools:</li> <li>Others:</li> </ul>
<p>You have to pass <code>-t</code> to the <code>kubectl exec</code> command as well, just like <code>docker exec -i -t &lt;container_name&gt; bash</code>.</p> <p>From the help output:</p> <blockquote> <p>-t, --tty=false: Stdin is a TTY</p> </blockquote>
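<p>So, with both flags, the command from the question becomes:</p> <pre><code>kubectl exec -it POD_ID --namespace=NAMESPACE -c CONTAINER -- /bin/bash
</code></pre> <p><code>-i</code> keeps stdin open and <code>-t</code> allocates a pseudo-terminal; with <code>-i</code> alone the shell starts but never shows a prompt, which is why the session appears to hang.</p>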
<p>I want to install kube-prometheus on my existing Kubernetes cluster (v1.10). Before that, the doc says I need to change the IP address of the controller/scheduler from <code>127.0.0.1</code> to <code>0.0.0.0</code>. It also recommends using <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/" rel="nofollow noreferrer">kubeadm config upgrade</a> to change these settings:</p> <pre><code>controllerManagerExtraArgs: address: 0.0.0.0 schedulerExtraArgs: address: 0.0.0.0 </code></pre> <p>After reading the doc, I tried the command below, but it didn't work:</p> <pre><code>kubeadm upgrade --feature-gates controllerManagerExtraArgs.address=0.0.0.0 </code></pre> <p>I know I can use <code>kubectl -n kube-system edit cm kubeadm-config</code> to modify the ConfigMap directly; I just want to know how to do it via <code>kubeadm upgrade</code>.</p>
<p>The only way I know of is to use the <code>--config</code> option.</p> <p>Generate a yaml file that looks like this:</p> <pre><code>controllerManagerExtraArgs: address: 0.0.0.0 schedulerExtraArgs: address: 0.0.0.0 </code></pre> <p>and then run:</p> <pre><code>kubeadm upgrade apply --config /etc/kubeadm.yaml </code></pre>
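<p>For reference, on a kubeadm 1.10 cluster the <code>--config</code> file is, as far as I recall, expected to be a full <code>MasterConfiguration</code> object, so the snippet above would be embedded in something like this (the <code>kubernetesVersion</code> value is only an example, adjust it to your target version):</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.0
controllerManagerExtraArgs:
  address: 0.0.0.0
schedulerExtraArgs:
  address: 0.0.0.0
</code></pre>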
<p>I have a spring boot application with some properties as below in my application.properties.</p> <pre><code>server.ssl.keyStore=/users/admin/certs/appcert.jks server.ssl.keyStorePassword=certpwd server.ssl.trustStore=/users/admin/certs/cacerts server.ssl.trustStorePassword=trustpwd </code></pre> <p>Here the cert paths are hardcoded to some path. But, I dont want to hard code this as the path will not be known in Mesos or Kubernetes world.</p> <p>I have a docker file as follows.</p> <pre><code>FROM docker.com/base/jdk1.8:latest MAINTAINER Application Engineering [ https://docker.com/ ] RUN mkdir -p /opt/docker/svc COPY application/weather-service.war /opt/docker/svc/ CMD java -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml </code></pre> <p>Here, I can use the volume mount option in kubernetes so as to place the application.proerties.</p> <p>How can i achieve the same thing for cert files in the application.properties?</p> <p>Here, the cert props are optional for few applications and mandatory for few applications.</p> <p>I need options to integrate within the docker images and having the cert files outside the docker image.</p> <p><strong>Approach 1</strong>. Within the docker image</p> <p>Remove the property "server.ssl.keyStore" from application.properties and pass it as a environment variable like the below one.</p> <blockquote> <p>CMD java -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks</p> </blockquote> <p>Now, the cert should be places in secrets and use the volume mount options with kubernetes.</p> <p><strong>Approach 2</strong>. No need to have -Dserver.ssl.keyStore=/certs/appcert.jks in the docker file, but still remove the property "server.ssl.keyStore" from application.properties and do as follows.</p> <p>a. Create secret</p> <blockquote> <p>kubectl create secret generic svc-truststore-cert --from-file=./cacerts</p> </blockquote> <p>b. Create one env variable as below.</p> <blockquote> <p>{ "name": "JAVA_OPTS", "value": "-Dserver.ssl.trustStore=/certs/truststore/cacerts" }</p> </blockquote> <p>c. Create Volume mounts under container for pod.</p> <blockquote> <p>"volumeMounts": [ { "name": "truststore-cert", "mountPath": "/certs/truststore" } ]</p> </blockquote> <p>d. Create a volume under spec.</p> <blockquote> <p>{ "name": "truststore-cert", "secret": { "secretName": "svc-truststore-cert", "items": [ { "key": "cacerts", "path": "cacerts" } ] } }</p> </blockquote> <p><strong>Approach 3</strong>. </p> <p>Using Kubernetes Persistent Volume.</p> <p>Created a persistent volume on Kubernetes .</p> <p>Mount the volume to the Pod of each microservice (changes in the pod script file). Mounted file system accessible via - '/shared/folder/certs' path.</p> <blockquote> <p>CMD java -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml -Dserver.ssl.keyStore=/certs/appcert.jks</p> </blockquote> <p>I have taken the second approach. Is this correct? Is there any other better approach?</p> <p>Thanks</p>
<p>Yes, the second approach is the best one, and it is the only way if you are storing some sensitive data like certificates, keys, etc. That topic is covered in the <a href="https://kubernetes.io/docs/concepts/configuration/secret/#overview-of-secrets" rel="nofollow noreferrer">documentation</a>.</p> <p>Moreover, you can <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="nofollow noreferrer">encrypt</a> your secrets to add another level of protection.</p>
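<p>For completeness, a minimal sketch of what approach 2 looks like for the keystore as well (the secret name, container name and mount path below are only examples):</p> <pre><code># put the keystore into a Secret
kubectl create secret generic svc-keystore-cert --from-file=appcert.jks=/users/admin/certs/appcert.jks
</code></pre> <pre><code>spec:
  containers:
  - name: weather-service
    env:
    - name: JAVA_OPTS
      value: "-Dserver.ssl.keyStore=/certs/keystore/appcert.jks"
    volumeMounts:
    - name: keystore-cert
      mountPath: /certs/keystore
      readOnly: true
  volumes:
  - name: keystore-cert
    secret:
      secretName: svc-keystore-cert
</code></pre> <p>The keystore and truststore passwords can be kept out of the image in the same way, as keys in a Secret exposed via environment variables.</p>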
<p>I have a Kubernetes cluster running. All pods are running. This is a windows machine with minikube on it.</p> <p>However <code>helm ls --debug</code> gives following error</p> <pre><code>helm ls --debug [debug] Created tunnel using local port: '57209' [debug] SERVER: "127.0.0.1:57209" Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 127.0.0.1:8080: connect: connection refused </code></pre> <p>Cluster information</p> <pre><code>kubectl.exe cluster-info Kubernetes master is running at https://135.250.128.98:8443 To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. </code></pre> <p>kubectl service</p> <pre><code>kubectl.exe get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3h </code></pre> <p>Dashboard is accessible at <code>http://135.250.128.98:30000</code></p> <p>kube configuration:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority: C:\Users\abc\.minikube\ca.crt server: https://135.250.128.98:8443 name: minikube contexts: - context: cluster: minikube user: minikube name: minikube current-context: minikube kind: Config preferences: {} users: - name: minikube user: as-user-extra: {} client-certificate: C:\Users\abc\.minikube\client.crt client-key: C:\Users\abc\.minikube\client.key </code></pre> <p>Is there a solution? Most online resource says cluster is misconfigured. But not sure what is misconfigured and how to solve this error?</p>
<p>What worked for me when I was facing the same issue was changing <code>automountServiceAccountToken</code> to <code>true</code>.</p> <p>Use the following command to edit the tiller-deploy</p> <pre><code>kubectl --namespace=kube-system edit deployment/tiller-deploy </code></pre> <p>And change <code>automountServiceAccountToken</code> to <code>true</code></p>
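<p>The same change can also be applied non-interactively, which is handy in scripts; this should be equivalent to the edit above:</p> <pre><code>kubectl --namespace=kube-system patch deployment tiller-deploy \
  -p '{"spec":{"template":{"spec":{"automountServiceAccountToken":true}}}}'
</code></pre>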
<p>I used the below command to generate a key locally.</p> <blockquote> <p>openssl genrsa -out testsvc.testns.ing.lb.xyz.io.key.pem 2048</p> </blockquote> <p>And then used the below command to generate the CSR (certificate signing request).</p> <blockquote> <p>openssl req -new -sha256 -key testsvc.testns.ing.lb.xyz.io.key.pem -subj "/CN=testsvc.testns.ing.lb.xyz.io"</p> </blockquote> <p>I generated the certificate chain file using the above CSR file and finally got the below file.</p> <blockquote> <p>testsvc.testns.ing.lb.xyz.io.chain.pem</p> </blockquote> <p>I am trying to use them for ingress TLS, and below is the command for ingress TLS.</p> <blockquote> <p>kubectl create secret tls custom-tls-cert --key /path/to/tls.key --cert /path/to/tls.crt</p> </blockquote> <p>I am not sure how I can use the chain.pem file and key.pem file with the above command. I tried generating a crt from the chain.pem and am getting an error on kubectl create secret.</p> <pre><code>"error: failed to load key pair tls: failed to find any PEM data in certificate input" </code></pre> <p>I would like to create the below secret.</p> <pre><code>apiVersion: v1 data: tls.crt: base64 encoded cert tls.key: base64 encoded key kind: Secret metadata: name: testsecret namespace: default type: Opaque </code></pre> <p>Not sure how to generate the .crt and .key file from the chain.pem file.</p> <p>Thanks</p>
<p>First, let's clarify what the key, CSR, and certificate are. </p> <p><code>key</code> - locally generated secret file shown/sent to noone (key.pem)<br> <code>csr</code> - file (request.pem) generated by key.pem, need to be sent to CA (certificate authority). (You can have your own CA, but usually, it is managed by someone else).<br> <code>cert</code> - file (cert.pem) created by CA based on request.pem and its own CA private key </p> <p>Now you can use these two files - <code>key.pem</code> and <code>cert.pem</code> - to create a secure connection between your service and a client. </p> <p>I suppose you have only created a key and a request. So, you need to go one step further and get a certificate from CA. </p> <p>For testing purpose, you can create a new key and a self-signed certificate with one command: </p> <blockquote> <p>openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes -subj "/C=US/ST=Florida/L=Miami/O=SomeCompany/OU=ITdepartment/CN=www.mydomain.com"</p> </blockquote> <p>(adjust subject to your needs) </p> <p>There are different types of keys and certificates, and it's easy to find the way to convert one format into another. </p> <p>Using certificate and key in PEM format when creating a Secret should work fine. </p> <p>Just insert the key and the certificate into that command as follows: </p> <pre><code>kubectl create secret tls testsecret --key key.pem --cert cert.pem </code></pre> <p>This command creates a Secret object and encodes <code>key.pem</code> and <code>cert.pem</code> content with base64. </p> <p>You can check the content of the created object with the commands: </p> <pre><code>kubectl get secret testsecret -o yaml echo "tls.crt: content" | base64 --decode </code></pre> <p>for example: </p> <pre><code>echo "LS0t...tLS0tLQo=" | base64 --decode </code></pre> <p>Read more about using and generating certificates here:<br> <a href="https://www.sslshopper.com/article-most-common-openssl-commands.html" rel="noreferrer">https://www.sslshopper.com/article-most-common-openssl-commands.html</a> </p> <p><a href="https://stackoverflow.com/questions/10175812/how-to-create-a-self-signed-certificate-with-openssl">How to create a self-signed certificate with openssl?</a> </p> <p><a href="https://docs.bitnami.com/kubernetes/how-to/secure-kubernetes-services-with-ingress-tls-letsencrypt/" rel="noreferrer">https://docs.bitnami.com/kubernetes/how-to/secure-kubernetes-services-with-ingress-tls-letsencrypt/</a> </p> <p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a> </p>
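<p>Also, if your CA already gave you a full chain, you normally don't need to convert anything: <code>kubectl create secret tls</code> accepts a PEM file that contains the server certificate first, followed by the intermediates. Assuming your chain file is PEM-encoded, something like this should work with the files from the question:</p> <pre><code>kubectl create secret tls custom-tls-cert \
  --key testsvc.testns.ing.lb.xyz.io.key.pem \
  --cert testsvc.testns.ing.lb.xyz.io.chain.pem
</code></pre> <p>If kubectl still reports "failed to find any PEM data", check that the file really contains <code>-----BEGIN CERTIFICATE-----</code> blocks (for example with <code>openssl x509 -in testsvc.testns.ing.lb.xyz.io.chain.pem -noout -text</code>) and is not DER-encoded.</p>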
<p>I have a question about a Helm upgrade. I'm working on a chart foo-1.0.0 which deploys a pod with a docker image bar:4.5.1.</p> <p>I have a release "myrelease" based on this chart foo in version 1.0.0 (with a bar:4.5.1 running inside).</p> <p>Now, I make a fix on bar, rebuild the image <strong>bar:4.5.2</strong>, change the image in the chart but I did not bump its version. It is still foo-1.0.0</p> <p>I launch:</p> <pre><code>$ helm upgrade myrelease repo/foo --version 1.0.0 </code></pre> <p>My problem is that after the upgrade, my pod is still running the bar:4.5.1 instead the 4.5.2</p> <p>Is the a "cache" in tiller? It seems that tiller did not download foo-1.0.0 again. Is there a way to force it to download?</p>
<p>You need to change the <code>tag</code> version in the image section of <code>values.yaml</code>:</p> <pre><code>image: repository: bar tag: 4.5.2 pullPolicy: Always </code></pre> <p>and then run the following command:</p> <pre><code>helm upgrade myrelease repo/foo </code></pre> <p>or just set the applicable image version directly:</p> <pre><code>helm upgrade myrelease repo/foo --set=image.tag=4.5.2 </code></pre>
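<p>To verify which image actually got rolled out after the upgrade, you can check the release values and the running pod, for example:</p> <pre><code>helm get values myrelease                  # values (including image.tag) the release was rendered with
kubectl describe pod &lt;pod-name&gt; | grep Image:
</code></pre>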
<p>We have a set of deployments (sets of pods) that are all using same docker image. Examples:</p> <ul> <li>web api</li> <li>web admin</li> <li>web tasks worker nodes</li> <li>data tasks worker nodes</li> <li>...</li> </ul> <p>They all require a set of environment variables that are common, for example location of the database host, secret keys to external services, etc. They also have a set of environment variables that are not common.</p> <p>Is there anyway where one could either:</p> <ol> <li>Reuse a template where environment variables are defined</li> <li>Load environment variables from file and set them on the pods</li> </ol> <p>The optimal solution would be one that is namespace aware, as we separate the test, stage and prod environment using kubernetes namespaces.</p> <p>Something similar to dockers env_file would be nice. But I cannot find any examples or reference related to this. The only thing I can find is setting env via secrets, but that is not clean, way to verbose, as I still need to write all environment variables for each deployment.</p>
<p>You can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap" rel="nofollow noreferrer">create a ConfigMap</a> with all the common <code>key:value</code> pairs of env variables.</p> <p>Then you can reuse the configmap to declare all the values of <code>configMap</code> as environment in <code>Deployment</code>.</p> <p>Here is an example taken from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-pod-environment-variables" rel="nofollow noreferrer">kubernetes official docs</a>.</p> <p>Create a ConfigMap containing multiple key-value pairs.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: SPECIAL_LEVEL: very SPECIAL_TYPE: charm </code></pre> <p>Use envFrom to define all of the ConfigMap’s data as Pod environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: k8s.gcr.io/busybox command: [ &quot;/bin/sh&quot;, &quot;-c&quot;, &quot;env&quot; ] envFrom: - configMapRef: name: special-config # All the key-value pair will be taken as environment key-value pair env: - name: uncommon value: &quot;uncommon value&quot; restartPolicy: Never </code></pre> <p>You can specify uncommon env variables in <code>env</code> field.</p> <p>Now, to verify if the environment variables are actually available, see the logs.</p> <pre><code>$ kubectl logs -f test-pod KUBERNETES_PORT=tcp://10.96.0.1:443 SPECIAL_LEVEL=very uncommon=uncommon value SPECIAL_TYPE=charm ... </code></pre> <p>Here, it is visible that all the provided environments are available.</p>
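<p>Regarding the namespace separation: a ConfigMap is itself namespaced, so you can create a ConfigMap with the same name in each of your test/stage/prod namespaces holding the environment-specific values, and keep the Deployment manifests identical. Something close to Docker's <code>env_file</code> can be achieved with <code>--from-env-file</code> (the file names below are hypothetical, each file containing plain <code>KEY=value</code> lines):</p> <pre><code>kubectl --namespace=test  create configmap special-config --from-env-file=common-test.env
kubectl --namespace=stage create configmap special-config --from-env-file=common-stage.env
kubectl --namespace=prod  create configmap special-config --from-env-file=common-prod.env
</code></pre>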
<p>Today I confronted with a misunderstanding about <code>servicePort</code>. </p> <p>I expected that service can be linked with ingress specifying only <code>servicePort: 80</code>, but both <code>servicePort: 80</code> and <code>servicePort: 8080</code> works.</p> <p>Can someone help me to understand why both ports <code>port</code> and <code>targetPort</code> are exposed by Service, not only <code>port</code>?</p> <p><em>Service (app)</em></p> <pre><code>spec: type: ClusterIP ports: - port: 80 targetPort: 8080 </code></pre> <p><em>Ingress (ingress-nginx)</em></p> <pre><code>spec: rules: - host: example.com http: paths: - path: / backend: serviceName: app servicePort: 8080 </code></pre>
<p>As I understand it:</p> <p>Each Pod backing the Service has an endpoint, which is the <strong>IP</strong> and <strong>targetPort</strong> of the Pod.</p> <p>You can list the endpoints with the following command:</p> <pre><code>kubectl get endpoints </code></pre> <p>If you use a Service to expose your Pods, then it has a <strong>cluster IP</strong> and a <strong>Service port</strong>:</p> <p><code>kubectl get services</code></p> <p>You can write ingress rules that expose your Pods via either the <strong>endpoints</strong> or the <strong>cluster IP</strong>. However, only a few ingress controllers route via endpoints, for example the <strong>nginx-ingress-controller</strong>.</p> <p>Why route to <strong>endpoints</strong> instead of the <strong>cluster IP / Service</strong>?</p> <blockquote> <p>The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.</p> </blockquote> <p>Here is the link for further reading: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#why-endpoints-and-not-services" rel="noreferrer">Why endpoints and not services</a>.</p>
<p>I have a Kubernetes cluster with 2 Slaves. I have 4 docker containers which all use a tomcat image and expose port 8080 and 8443. When I now put each container into a separate pod I get an issue with the ports since I only have 2 worker nodes. What would be the best strategy for my scenario?</p> <p>Current error message is: 1 PodToleratesNodeTaints, 2 PodFitsHostPorts.</p> <p>Put all containers into one pod? This is my current setup (times 4)</p> <pre><code>kind: Deployment apiVersion: apps/v1beta2 metadata: name: myApp1 namespace: appNS labels: app: myApp1 spec: replicas: 1 selector: matchLabels: app: myApp1 template: metadata: labels: app: myApp1 spec: dnsPolicy: ClusterFirstWithHostNet hostNetwork: true containers: - image: myregistry:5000/myApp1:v1 name: myApp1 ports: - name: http-port containerPort: 8080 - name: https-port containerPort: 8443 readinessProbe: httpGet: path: /health port: 8080 initialDelaySeconds: 30 periodSeconds: 10 failureThreshold: 6 --- kind: Service apiVersion: v1 metadata: name: myApp1-srv namespace: appNS labels: version: "v1" app: "myApp1" spec: type: NodePort selector: app: "myApp1" ports: - protocol: TCP name: http-port port: 8080 - protocol: TCP name: https-port port: 8443 </code></pre>
<p>You should not use hostNetwork unless absolutely necessary. Without host network you can have multiple pods listening on the same port number as each will have its own, dedicated network namespace.</p>
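<p>In your manifest that means dropping <code>hostNetwork: true</code> (and reverting <code>dnsPolicy</code> to the default); a trimmed sketch of the relevant part:</p> <pre><code>spec:
  template:
    spec:
      # hostNetwork and ClusterFirstWithHostNet removed - the pod network
      # and the default dnsPolicy (ClusterFirst) are used instead
      containers:
      - name: myApp1
        image: myregistry:5000/myApp1:v1
        ports:
        - name: http-port
          containerPort: 8080
        - name: https-port
          containerPort: 8443
</code></pre> <p>All four deployments can then bind 8080/8443 inside their own pods, and the NodePort services will still expose each of them on distinct node ports.</p>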
<p>If I ssh into a Kubernetes node, how do I figure out the UUID for the node so I can query the master API for information specific to the node?</p> <p>Tried this so far</p> <pre><code>root 13020 2.5 1.0 410112 41660 ? Ssl Jan25 26:04 /usr/bin/kubelet --logtostderr=true --v=0 --api_servers=http://10.32.140.181:8080 --address=0.0.0.0 --port=10250 --allow_privileged=false --maximum-dead-containers=1 --max-pods=14 [achang@p3dlwsbkn50d51 ~]$ curl -Gs http://localhost:10255/pods/ 404 page not found </code></pre>
<p>I found that current Kubernetes nodes (I checked minikube and a GKE node pool) have the file <code>/etc/machine-id</code>. The value of <code>cat /etc/machine-id</code> is unique per node and matches the corresponding entry in <code>kubectl get nodes -o json | jq -r .items[].status.nodeInfo.machineID</code>.</p> <p>Because reading it does not need an API invocation, this way is easier to combine with a shell script or a container.</p>
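<p>A small sketch that turns this into a node-name lookup from inside the node (assumes <code>jq</code> is installed and the kubeconfig in use may list nodes):</p> <pre><code>MACHINE_ID=$(cat /etc/machine-id)
kubectl get nodes -o json \
  | jq -r --arg id "$MACHINE_ID" '.items[] | select(.status.nodeInfo.machineID == $id) | .metadata.name'
</code></pre>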
<p>Lets say I have a Python code on my local machine that listens on localhost and port 8000 as below:</p> <pre><code>import waitress app = hug.API(__name__) app.http.add_middleware(CORSMiddleware(app)) waitress.serve(__hug_wsgi__, host='127.0.0.1', port=8000) </code></pre> <p>This code accepts requests on <code>127.0.0.1:8000</code> and send back some response.</p> <p>Now I want to move this application (with two more related apps) into Docker, and use Kubernetes to orchestrate the communication between them.</p> <p>But for simplicity, I will take this Python node (app) only.</p> <p>First I built the docker image using:</p> <pre><code>docker build -t gcr.io/${PROJECT_ID}/python-app:v1 . </code></pre> <p>Then I pushed it into gcloud docker ( I am using google cloud not docker hub):</p> <pre><code>gcloud docker -- push gcr.io/${PROJECT_ID}/python-app:v1 </code></pre> <p>Now I created the container cluster:</p> <pre><code>gcloud container clusters create my-cluster </code></pre> <p>Deployed the app into kubernates:</p> <pre><code>kubectl run python-app --image=gcr.io/${PROJECT_ID}/python-app:v1 --port 8000 </code></pre> <p>And finally exposed it to internet via:</p> <pre><code>kubectl expose deployment python-app --type=LoadBalancer --port 80 --target-port 8000 </code></pre> <p>Now the output of the command <code>kubectl get services</code> is: </p> <p><a href="https://i.stack.imgur.com/otL5Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/otL5Y.png" alt="kubectl get services"></a></p> <p>Ok, my question is, I want to send a request from another application (lets say a node js app). </p> <ol> <li>How can I do that externally? i.e. from any machine.</li> <li>How can I do that internally? i.e. from another container pod. </li> </ol> <p>How can I let my Python app use those IP addresses and listen on them?</p> <p>This is the Dockerfile of the Python app:</p> <pre><code>FROM python:3 WORKDIR /usr/src/app COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt COPY . . CMD [ "python", "./app.py" ] </code></pre> <p>Thanks in advance!</p>
<p><strong>Externally</strong></p> <p>By running:</p> <pre><code>kubectl run python-app --image=gcr.io/${PROJECT_ID}/python-app:v1 --port 8000 </code></pre> <p>you are specifying that your pod listens on port 8000.</p> <p>By running:</p> <pre><code>kubectl expose deployment python-app --type=LoadBalancer --port 80 --target-port 8000 </code></pre> <p>you are specifying that your service listens on port 80 and sends traffic to targetPort 8000 (the port the pod listens on).</p> <p>In summary, with your configuration, traffic follows this path:</p> <pre><code>traffic (port 80) &gt; Load Balancer &gt; Service (port 80) &gt; targetPort/Pod (port 8000) </code></pre> <p>Using a service of type LoadBalancer (rather than the alternative, an Ingress, which creates a service of type NodePort and an HTTP(S) load balancer rather than a TCP load balancer), you are specifying that traffic targeting the pods should arrive at the load balancer on port 80, and the service then directs this traffic to port 8000 on your app. So if you want to direct traffic to your app from an external source, based on the addresses in your screenshot, you would send traffic to 35.197.94.202:80.</p> <p><strong>Internally</strong></p> <p>As others have pointed out in the comments, the cluster IP can be used to target pods internally. The port you specify as the service port (in your case 80, although this could be any number you choose for the service) can be used alongside the cluster IP to target the pods selected by the service. For example, you could target:</p> <p>10.3.254.16:80</p> <p>However, to target specific pods, you can use the pod IP address and the port the pod listens on. You can discover these by either running a describe command on the pod:</p> <pre><code>kubectl describe pod </code></pre> <p>or by running:</p> <pre><code>kubectl get endpoints </code></pre> <p>which shows the pod IP and the port it is listening on.</p>
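<p>To make that concrete (the external IP is taken from your <code>kubectl get services</code> output, and the DNS name comes from the service created by the expose command):</p> <pre><code># externally, from any machine
curl http://35.197.94.202/

# internally, from another pod in the same cluster - the service is also
# resolvable by its DNS name, so you don't need the cluster IP
curl http://python-app/
curl http://python-app.default.svc.cluster.local/
</code></pre> <p>Your Python app itself does not need to know any of these addresses; for it to be reachable it should just bind to <code>0.0.0.0:8000</code> inside the container rather than <code>127.0.0.1:8000</code>, since 127.0.0.1 is only reachable from within the pod itself.</p>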
<p>I have three clusters in <code>Google Kubernetes Engine</code>, and I am trying to see Kubernetes dashboard but I get the same <code>access-token</code> for two different clusters.</p> <p>Using <code>kubectl config view</code> command I get:</p> <pre><code>- name: gke_PROJECT_ZONE_A_NAME_A user: auth-provider: config: access-token: TOKEN-A - name: gke_PROJECT_ZONE_B_NAME_B user: auth-provider: config: access-token: TOKEN-B - name: gke_PROJECT_ZONE_C_NAME_C user: auth-provider: config: access-token: TOKEN-B </code></pre> <p>when gke_PROJECT_ZONE_B_NAME_B and gke_PROJECT_ZONE_C_NAME_C share the same access token, hence when I connect via <code>kubectl proxy</code> and insert the token I get the same the dashboard.</p> <p>How I can refresh the access token for cluster B or C so i'll get the desired dashboard?</p> <p>i've tried to use <code>gcloud container clusters get-credentials CLUSTER-C --zone ZONE-C --project MY_PROJECT</code>, which returns </p> <blockquote> <p>Fetching cluster endpoint and auth data. kubeconfig entry generated for CLUSTER-C.</p> </blockquote> <p>and afterwards I don't get any access token for CLUSTER-C</p> <p>thank you</p>
<p>Re-running <code>kubectl proxy</code>, entering the UI via <code>http://localhost:8001/ui</code>, and refreshing the page causes the access token to be refreshed.</p>
<p>I have got NGINX in place as of now which acts as a load balancer. When i was creating the Ingress for my Nginx Controller, the details that i had provided in Ingress file were updated in the containers nginx.conf file.</p> <p>For example: </p> <pre><code>upstream default-hello-8123 { # Load balance algorithm; empty for round robin, which is the default least_conn; keepalive 32; server x.x.x.x:xx max_fails=0 fail_timeout=0; } </code></pre> <p>I had above details in the Ingress file. Once I deployed my Ingress service / Controller / Ingress. nginx.conf was updated automatically.</p> <p>I was trying to configure JWT for authentication now. But i could not figure if there is a way to that configuration as well such as below to be updated automatically in the nginx.conf instead of writing it manually.</p> <pre><code>server { listen 80; location /products/ { auth_jwt "Products API"; auth_jwt_key_file conf/api_secret.jwk; proxy_pass http://api_server; } } </code></pre>
<p>I assume you have configured your ingress via annotations. If not, read here:</p> <p><a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/jwt" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/jwt</a></p> <p><strong>IMPORTANT</strong>: this whole answer is only applicable by using the <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">NGINX Ingress Controller</a> as opposed to the Kubernetes Ingress Controller. Read about the key differences <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md" rel="nofollow noreferrer">here</a>.</p> <p>Basically, there are two minimum requirements:</p> <p>Start by creating a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secret</a> first. There are multiple ways to do it but here is an example using <code>kubectl</code>:</p> <pre><code>kubectl create secret generic my-jwk --from-file=/path/to/your/key </code></pre> <p>where the key file looks like</p> <pre><code># Example from: https://www.nginx.com/blog/authenticating-api-clients-jwt-nginx-plus/#page-7 {&quot;keys&quot;: [{ &quot;k&quot;:&quot;YOUR_BASE64_ENCODED_SECRET&quot;, &quot;kty&quot;:&quot;oct&quot;, &quot;kid&quot;:&quot;0001&quot; }] } </code></pre> <p>In this case <strong>it is important your jwk file is named <code>jwk</code></strong> because the NGINX Ingress Controller expects the keys to exist in the <code>jwk</code> field and generating Secrets using the <code>--from-file</code> attribute assigns the file contents to a key matching the filename.</p> <p>This fulfills the first hard requirement and the other requirement is to ensure that your authenticating app uses the expected JWT delivery. By default, a JWT is expected in the <code>Authorization</code> header as a Bearer Token.</p> <p>Next, You should reference to that Secret, named <code>my-jwk</code> to the Ingress Controller via annotation:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress annotations: nginx.com/jwt-key: &quot;my-jwk&quot; spec: # .. fill in the rest of your spec here. </code></pre> <p>That is the only annotation required.</p> <p>There are other annotations you can use. However, they are optional. Taken from the first README link on this answer:</p> <blockquote> <ul> <li>Optional: nginx.com/jwt-realm: &quot;realm&quot; -- specifies a realm.</li> <li>Optional: nginx.com/jwt-token: &quot;token&quot; -- specifies a variable that<br /> contains JSON Web Token. By default, a JWT is expected in the<br /> Authorization header as a Bearer Token.</li> <li>Optional: nginx.com/jwt-login-url: &quot;url&quot; -- specifies a URL to which a client is redirected in case of an invalid or missing JWT.</li> </ul> </blockquote>
<p>Issue: I am trying to reach a vault cluster which is hosted on my k8's cluster using ingress.Currently using nginx ingress controller <code>0.10.2</code> version.</p> <p>I am using custom generated TLS certs with Ingress which is pointing to the Vault cluster.I have the TLS certs in the same namespace as ingress.</p> <p>Problem: Unable to reach the backend by providing the <code>vault status</code> command with the ca.crt for ingress.</p> <p>Env variables set are</p> <pre><code> VAULT_ADDR=https://vault.ingress.staging.k8s.com VAULT_SKIP_VERIFY=true </code></pre> <p>Unable to get the status i.e the traffic is being stopped at the ingress itself. When I check the logs for the ingress controller it says</p> <p><code>7 backend_ssl.go:146] unexpected error generating SSL certificate with full intermediate chain CA certs: Invalid certificate.</code></p> <p>I have generated the custom TLS certs matching the Common Name of the Ingress resource. So unable to figure out why is this happening. Thought might be due to the wrong ingress annotations usage.</p> <p>My question is there anything going wrong with <code>ingress.kubernetes.io/secure-backends: 'true'</code>, if yes can you provide info about how to use it? </p> <p><strong>Notes:</strong></p> <ul> <li><p>I am using the appropriate ingress class and know that there is no problem with that, for sure.</p></li> <li><p>I have deployed few examples to check, if there is any problem with ingress. Even that is working fine.</p></li> </ul> <p><strong>* Can anyone provide a working example for <code>nginx.ingress.kubernetes.io/secure-backends</code> *</strong></p> <p>Any solution related to this issue would be appreciated!!!</p> <p><strong><em>Ingress.yaml</em></strong></p> <pre><code>kind: Ingress apiVersion: extensions/v1beta1 metadata: name: vault namespace: default annotations: ingress.kubernetes.io/secure-backends: 'true' kubernetes.io/ingress.class: "k8s" spec: tls: - hosts: - vault.ingress.staging.k8s.com secretName: vault-server-ingress-tls rules: - host: vault.ingress.staging.k8s.com http: paths: - path: / backend: serviceName: example servicePort: 8200 </code></pre>
<p>You could try these annotations:</p> <pre><code>kubernetes.io/ingress.class: "&lt;your_class&gt;" nginx.ingress.kubernetes.io/secure-backends: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" </code></pre> <p>Please keep in mind that <code>ssl-passthrough</code> requires an additional command-line flag on the nginx-ingress-controller deployment (it is disabled by default):</p> <pre><code>--enable-ssl-passthrough </code></pre> <p>Ref. <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#ssl-passthrough" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#ssl-passthrough</a></p>
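<p>An illustrative fragment of where that flag goes (the container name and the other args depend on how your controller was deployed):</p> <pre><code>spec:
  containers:
  - name: nginx-ingress-controller
    args:
    - /nginx-ingress-controller
    - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
    - --enable-ssl-passthrough
</code></pre> <p>Note that with <code>ssl-passthrough</code> the TLS connection is terminated by Vault itself rather than by the ingress, so the certificate clients see is the one Vault serves, not the one in <code>vault-server-ingress-tls</code>.</p>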
<p>While creating the kubernetes cluster using kubeadm in Centos 7, its creating one year kubeapi certificate. For me this is short time for the cluster. How can I create 5 year certificate during cluster setup?</p> <pre><code>* SSL connection using TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 * Server certificate: * subject: CN=kube-apiserver * start date: Dec 20 14:32:00 2017 GMT * expire date: Dec 20 14:32:00 2018 GMT * common name: kube-apiserver * issuer: CN=kubernetes </code></pre> <p>I tried this didn't work:</p> <pre><code>openssl genrsa -out ca.key 2048 export MASTER_IP=192.168.16.171 openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt kubeadm reset rm -rf /etc/kubernetes mkdir -p /etc/kubernetes/ca/pki cp ca.key ca.crt /etc/kubernetes/ca/pki/ kubeadm init curl -k -v https://localhost:6443 Server certificate: * subject: CN=kube-apiserver * start date: Apr 15 21:07:24 2018 GMT * expire date: Apr 15 21:07:25 2019 GMT * common name: kube-apiserver * issuer: CN=kubernetes </code></pre> <p>Thanks SR</p>
<p>After looking at the code, there is no kubeadm option to change the API server certificate expiry date; the lifetimes are hard-coded. The signed serving certificates use a fixed one-year duration, while the self-signed CA created by the helper below gets ten years (<code>duration365d * 10</code>).</p> <p><a href="https://github.com/kubernetes/client-go/blob/master/util/cert/cert.go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/master/util/cert/cert.go</a></p> <pre><code>// NewSelfSignedCACert creates a CA certificate func NewSelfSignedCACert(cfg Config, key *rsa.PrivateKey) (*x509.Certificate, error) { now := time.Now() tmpl := x509.Certificate{ SerialNumber: new(big.Int).SetInt64(0), Subject: pkix.Name{ CommonName: cfg.CommonName, Organization: cfg.Organization, }, NotBefore: now.UTC(), NotAfter: now.Add(duration365d * 10).UTC(), KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign, BasicConstraintsValid: true, IsCA: true, } certDERBytes, err := x509.CreateCertificate(cryptorand.Reader, &amp;tmpl, &amp;tmpl, key.Public(), key) if err != nil { return nil, err } return x509.ParseCertificate(certDERBytes) } </code></pre>
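<p>You can confirm what kubeadm generated by inspecting the certificate on disk (default kubeadm location shown; adjust the path if yours differs):</p> <pre><code>openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates
</code></pre>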
<p>I have an ansible script that I am using to build a kubernetes cluster. If I run <code>kubectl</code> via the shell module:</p> <pre><code>- name: Ensure kube-apiserver-to-kubelet ClusterRole is applied shell: "kubectl apply -f kube-apiserver-to-kubelet.yaml" delegate_to: controller1 run_once: true </code></pre> <p>I get the following error "The connection to the server localhost:8080 was refused - did you specify the right host or port?":</p> <pre><code>fatal: [controller1 -&gt; 10.240.0.11]: FAILED! =&gt; {"changed": true, "cmd": "kubectl apply -f kube-apiserver-to-kubelet.yaml", "delta": "0:00:00.116446", "end": "2018-04-15 21:42:51.023786", "msg": "non-zero return code", "rc": 1, "start": "2018-04-15 21:42:50.907340", "stderr": "The connection to the server localhost:8080 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server localhost:8080 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []} </code></pre> <p>However, if I log into <code>controller1</code> (I use a bastion host as a proxy to the nodes.) and execute the command it runs without issue:</p> <pre><code>kubectl apply -f kube-apiserver-to-kubelet.yaml clusterrole.rbac.authorization.k8s.io "system:kube-apiserver-to-kubelet" configured clusterrolebinding.rbac.authorization.k8s.io "system:kube-apiserver" configured </code></pre> <p>Why does this work on the node directly but not via ansible and what do I need to be doing to make this run without failing?</p>
<p>It's likely not picking up your kubeconfig (localhost:8080 is the unconfigured default)</p> <p>I would either use the builtin "k8s_raw" module or specify the --kubeconfig parameter in your shell script</p>
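<p>For example, assuming the admin kubeconfig on the controller lives at <code>/etc/kubernetes/admin.conf</code> (adjust the path to wherever your bootstrap placed it), either of these sketches should work:</p> <pre><code>- name: Ensure kube-apiserver-to-kubelet ClusterRole is applied
  shell: "kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f kube-apiserver-to-kubelet.yaml"
  delegate_to: controller1
  run_once: true

# or set KUBECONFIG for the task instead of passing the flag
- name: Ensure kube-apiserver-to-kubelet ClusterRole is applied
  shell: "kubectl apply -f kube-apiserver-to-kubelet.yaml"
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  delegate_to: controller1
  run_once: true
</code></pre> <p>It most likely works when you SSH in because your interactive login picks up <code>~/.kube/config</code> (or an exported <code>KUBECONFIG</code>), while the Ansible task runs in a non-login shell, possibly as a different user, that doesn't have that environment.</p>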
<p>I am configuring a StatefulSet where I want the number of replicas (<strong>spec.replicas</strong> as shown below) available to somehow pass as a parameter into the application instance. My application needs <strong>spec.replicas</strong> to determine the numer of replicas so it knows what rows to load from a MySQL table. I don't want to hard-code the number of replicas in both <strong>spec.replicas</strong> and the application parameter as that will not work when scaling the number of replicas up or down, since the application parameter needs to adjust when scaling. </p> <p>Here is my StatefulSet config:</p> <pre> apiVersion: apps/v1beta1 kind: StatefulSet metadata: labels: run: my-app name: my-app namespace: my-ns spec: replicas: 3 selector: matchLabels: run: my-app serviceName: my-app podManagementPolicy: Parallel template: metadata: labels: run: my-app spec: containers: - name: my-app image: my-app:latest command: - /bin/sh - /bin/start.sh - dev - 2000m - "0" - "3" **Needs to be replaced with # replicas** - 127.0.0.1 - "32990" imagePullPolicy: Always livenessProbe: httpGet: path: /health port: 8081 initialDelaySeconds: 180 periodSeconds: 10 timeoutSeconds: 3 readinessProbe: failureThreshold: 10 httpGet: path: /ready port: 8081 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 15 successThreshold: 1 timeoutSeconds: 3 ports: - containerPort: 8080 protocol: TCP resources: limits: memory: 2500Mi imagePullSecrets: - name: snapshot-pull restartPolicy: Always </pre> <p>I have read the Kubernetes docs and the <strong>spec.replicas</strong> field is scoped at the pod or container level, never the StatefulSet, at least as far as I have seen.</p> <p>Thanks in advance.</p>
<p>You could use a yaml anchor to do this:</p> <p>Check out: <a href="https://helm.sh/docs/chart_template_guide/yaml_techniques/#yaml-anchors" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/yaml_techniques/#yaml-anchors</a></p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: labels: run: my-app name: my-app namespace: my-ns spec: replicas: &amp;numReplicas 3 selector: matchLabels: run: my-app serviceName: my-app podManagementPolicy: Parallel template: metadata: labels: run: my-app spec: containers: - name: my-app image: my-app:latest command: - /bin/sh - /bin/start.sh - dev - 2000m - &quot;0&quot; - *numReplicas - 127.0.0.1 - &quot;32990&quot; imagePullPolicy: Always livenessProbe: httpGet: path: /health port: 8081 initialDelaySeconds: 180 periodSeconds: 10 timeoutSeconds: 3 readinessProbe: failureThreshold: 10 httpGet: path: /ready port: 8081 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 15 successThreshold: 1 timeoutSeconds: 3 ports: - containerPort: 8080 protocol: TCP resources: limits: memory: 2500Mi imagePullSecrets: - name: snapshot-pull restartPolicy: Always </code></pre>
<p>I am trying to create a Kubernetes cluster using the kubeadm tool. While exploring, I found that with kubeadm we can create our Kubernetes clusters, and that kops is a more advanced tool. When reading about Kubernetes, I keep seeing comments like "kubeadm is a cluster setup tool, not a production runtime tool."</p> <p>I found this information in my previous discussion (see the answer to point 5 there):</p> <p><a href="https://stackoverflow.com/questions/49516075/kubernetes-architecture-kubernetes-cluster-management-and-initializing-nodes/49518782#49518782">Kubernetes Architecture - Kubernetes Cluster Management And Initializing Nodes</a></p> <p>I am totally confused about Kubernetes architecture now. Let me list my confusions here:</p> <ol> <li>What is the difference between a setup tool and a production runtime tool in Kubernetes?</li> <li>If I am using kubeadm for setup, what are my options for a production runtime tool for Kubernetes? (According to the comments I found, kubeadm is not a production runtime tool.)</li> <li>If I am using kops instead of kubeadm, is the strategy the same? Do I need to use a separate production runtime tool?</li> <li>What is the best practice for doing this?</li> </ol> <p>When I read each link, I don't properly understand the concept.</p>
<blockquote> <p>What is the difference between setup tool and production runtime tool in kubernetes?</p> </blockquote> <p>Setup tool is a special tool for creating and initializing a Kubernetes cluster. It is not clear for me what you mean by "production runtime tool," and I've never heard about that in Kubernetes, but Kubeadm and Kops are specially designed for the same functions - create, initialize and upgrade a cluster, so they are setup tools.</p> <blockquote> <p>If I am using kubeadm for setup, what are my options for production runtime tool for kubernetes ? (According to the comments that I found that kubeadm is not a production runtime tool)</p> </blockquote> <p>You don't need any other tools except Kubernetes itself to manage cluster after you created it. Moreover, you can create cluster manually without any special setup tools.</p> <blockquote> <p>If I am using Kops instead of kubeadm, Is this strategy is same? Do I need to use separate production running tool?</p> </blockquote> <p>Yes, the strategy is the same. No, you don't need any other tools.</p> <blockquote> <p>Which is the best practice for doing this?</p> </blockquote> <p>Just use a proper tool to create and initialize a Kubernetes cluster on your platform and start to use it. For AWS - I can recommend Kops (or Amazon EKS), for in-premise - Kubeadm, in Google Cloud you can use GKE.</p> <p><strong>UPD</strong></p> <p>Ok, I got what you meant by "production runtime tool." In that context, the "production runtime tool" is a part of Kubernetes or something which is always running as a part of a cluster (like flannel, etc.). <code>Kubeadm</code> and <code>Kops</code> is not a part of it, those are just optional tools.</p> <blockquote> <p>Can I use kubeadm in aws Ec2 machine instead of going EKS service? Can I choose? Is that also a feasibility ?</p> </blockquote> <p>Yes, you can use it, it's for you to choose. EKS is just a "Kubernetes as a service", but you can create your own installation.</p> <blockquote> <p>If I am choosing kubeadm in AWS EC2 ( I have one Ubuntu 16.04 LTS VM), Can I create another 3 for my 4 node + master cluster requirement?</p> </blockquote> <p>If I understood you right, then yes. You can create as many nodes as you want.</p>
<p>I am running <code>kubernetes 1.9.4</code> on my <code>gke</code> cluster</p> <p>I have two pods , <code>gate</code> which is trying to connect to <code>coolapp</code>, both written in <code>elixir</code></p> <p>I am using <a href="https://github.com/bitwalker/libcluster/issues/54" rel="nofollow noreferrer">libcluster</a> to connect my nodes I get the following error:</p> <p><code>[libcluster:app_name] cannot query kubernetes (unauthorized): endpoints is forbidden: User "system:serviceaccount:staging:default" cannot list endpoints in the namespace "staging": Unknown user "system:serviceaccount:staging:default"</code></p> <p>here is my config in <strong>gate</strong> under <code>config/prod</code>:</p> <pre><code> config :libcluster, topologies: [ app_name: [ strategy: Cluster.Strategy.Kubernetes, config: [ kubernetes_selector: "tier=backend", kubernetes_node_basename: System.get_env("MY_POD_NAMESPACE") || "${MY_POD_NAMESPACE}"]]] </code></pre> <p>here is my configuration:</p> <p><strong>vm-args</strong></p> <pre><code>## Name of the node -name ${MY_POD_NAMESPACE}@${MY_POD_IP} ## Cookie for distributed erlang -setcookie ${ERLANG_COOKIE} # Enable SMP automatically based on availability -smp auto </code></pre> <p><strong>creating the secrets:</strong></p> <pre><code>kubectl create secret generic erlang-config --namespace staging --from-literal=erlang-cookie=xxxxxx kubectl create configmap vm-config --namespace staging --from-file=vm.args </code></pre> <p><strong>gate/deployment.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: gate namespace: staging spec: replicas: 1 revisionHistoryLimit: 1 strategy: type: RollingUpdate template: metadata: labels: app: gate tier: backend spec: securityContext: runAsUser: 0 runAsNonRoot: false containers: - name: gate image: gcr.io/development/gate:0.1.7 args: - foreground ports: - containerPort: 80 volumeMounts: - name: config-volume mountPath: /beamconfig env: - name: MY_POD_NAMESPACE value: staging - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: RELEASE_CONFIG_DIR value: /beamconfig - name: ERLANG_COOKIE valueFrom: secretKeyRef: name: erlang-config key: erlang-cookie volumes: - name: config-volume configMap: name: vm-config </code></pre> <p><strong>coolapp/deployment.yaml:</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: coolapp namespace: staging spec: replicas: 1 revisionHistoryLimit: 1 strategy: type: RollingUpdate template: metadata: labels: app: coolapp tier: backend spec: securityContext: runAsUser: 0 runAsNonRoot: false # volumes volumes: - name: config-volume configMap: name: vm-config containers: - name: coolapp image: gcr.io/development/coolapp:1.0.3 volumeMounts: - name: secrets-volume mountPath: /secrets readOnly: true - name: config-volume mountPath: /beamconfig ports: - containerPort: 80 args: - "foreground" env: - name: MY_POD_NAMESPACE value: staging - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: REPLACE_OS_VARS value: "true" - name: RELEASE_CONFIG_DIR value: /beamconfig - name: ERLANG_COOKIE valueFrom: secretKeyRef: name: erlang-config key: erlang-cookie # proxy_container - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "--dir=/cloudsql", "-instances=staging:us-central1:com-staging=tcp:5432", 
"-credential_file=/secrets/cloudsql/credentials.json"] volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true - name: cloudsql mountPath: /cloudsql </code></pre>
<p><strong>The default <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">service account</a> for the <code>staging</code> namespace (in which apparently your Pods using libcluster are running) lacks <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="noreferrer">RBAC</a> permissions to <em>get endpoints</em> in that namespace.</strong></p> <p>Likely your application requires a number of other permissions (that are not mentioned in the above error message) to work correctly; identifying all such permissions is out of scope for SO.</p> <p>A way to resolve this issue is to grant <strong>superuser permissions</strong> that service account. This is not a secure solution but a stop gap fix.</p> <pre><code>$ kubectl create clusterrolebinding make-staging-sa-cluster-admin \ --serviceaccount=staging:default \ --clusterrole=cluster-admin clusterrolebinding "make-staging-sa-cluster-admin" created </code></pre> <p>To grant the <strong>specific permission only</strong> (get endpoints in the staging namespace) you would need to create a <a href="https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole" rel="noreferrer">Role</a> first:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: some-permissions namespace: staging rules: - apiGroups: [""] resources: ["endpoints"] verbs: ["get", "list", "watch"] </code></pre> <p>And create a <a href="https://kubernetes.io/docs/admin/authorization/rbac/#rolebinding-and-clusterrolebinding" rel="noreferrer">RoleBinding</a> for the default service account in the staging namespace:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: give-default-sa-some-permissions namespace: staging subjects: - kind: ServiceAccount name: default namespace: staging roleRef: kind: Role name: some-permissions apiGroup: rbac.authorization.k8s.io </code></pre>
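<p>After applying the Role and RoleBinding (save the two manifests above under any file names you like), you can verify the permission without redeploying the app, assuming your own kubeconfig user is allowed to impersonate service accounts:</p> <pre><code>kubectl apply -f role.yaml -f rolebinding.yaml
kubectl auth can-i list endpoints \
  --namespace staging \
  --as system:serviceaccount:staging:default
</code></pre> <p>This should print <code>yes</code> once the binding is in place.</p>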
<p>I'm new to Kubernetes and recently I am using it to deploy hadoop. I want to do a thing that set a specific hostname to pod which is created using <code>statefulSets</code> with <code>hostNetwork = true</code>.</p> <p>Here is my yaml config file.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: test-bbox labels: app: test-bbox spec: clusterIP: None selector: app: test-bbox ports: - name: foo port: 1234 targetPort: 1234 --- apiVersion: apps/v1 kind: StatefulSet metadata: name: test-bbox spec: serviceName: "test-bbox" replicas: 1 selector: matchLabels: app: test-bbox template: metadata: labels: app: test-bbox spec: hostNetwork: true hostname: test-hostname containers: - image: busybox name: busybox env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name command: - sleep - "7200" </code></pre> <p>As the yaml says, the attribute <code>hostnetwork</code> is set to <code>true</code>. Then the pod <code>test-bbox-0</code>'s hostname is the hostname of the Node where it is created.</p> <p>If I set <code>hostnetwork</code> to false, the hostname is auto-generated by <code>statefulSets</code> as a format such as <code>test-bbox-0.test-bbox.default.svc.cluster.local</code>, which is just I need.</p> <p>But in my case I need to set <code>hostnetwork</code> to <code>true</code> and at the same time to customize the hostname to the format mentioned above rather than the Node's hostname. </p> <p>So the question is, <strong>is there any way to customize hostname for Pod ?</strong></p> <p>Kubernetes version used : <code>1.9</code></p>
<p><strong>It is not possible</strong> to set the hostname for a Pod that is using <code>hostNetwork</code>. If you try to change the hostname in such a Pod you'll see that you are changing the hostname of the node too; this is because they are sharing the <a href="https://en.wikipedia.org/wiki/Linux_namespaces#UTS" rel="nofollow noreferrer">UTS namespace</a>, not only the <a href="https://en.wikipedia.org/wiki/Linux_namespaces#Network_(net)" rel="nofollow noreferrer">networking</a> one.</p> <p>Pods managed by a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">StatefulSet</a> are a special case and their hostname is set my StatefulSet and it can't be configured directly. The hostname can be influenced by the name of StatefulSet itself while domainname by naming the Service appropriately:</p> <blockquote> <p>Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is <code>$(statefulset name)-$(ordinal)</code>. [...] A StatefulSet can use a Headless Service to control the domain of its Pods. The domain managed by this Service takes the form: <code>$(service name).$(namespace).svc.cluster.local</code>, where <code>cluster.local</code> is the cluster domain. As each Pod is created, it gets a matching DNS subdomain, taking the form: <code>$(podname).$(governing service domain)</code>, where the governing service is defined by the <code>serviceName</code> field on the StatefulSet.</p> </blockquote>
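<p>With <code>hostNetwork: false</code> you can confirm the generated names from inside the cluster, for example with a quick check using the busybox container from your own manifest:</p> <pre><code>kubectl exec test-bbox-0 -- hostname
kubectl exec test-bbox-0 -- nslookup test-bbox-0.test-bbox.default.svc.cluster.local
</code></pre>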
<p>I am trying to create a Kubernetes cluster with several nodes using a single machine. I want to create separate VMs and run one node in each of those VMs. I am currently exploring kubeadm and minikube for this task.</p> <p>While exploring I ran into the following questions:</p> <ol> <li>I need to create 4 nodes, each in a different VM. Can I use kubeadm for this requirement?</li> <li>I also found that Minikube is used to create a single-node setup and can also create the VM itself. What is the difference between kubeadm and minikube?</li> <li>If I want to create nodes in different VMs, which tool should I use along with the installation of the Kubernetes cluster master?</li> <li>If I am using VMs, can I directly install VMware Workstation / VirtualBox on my Ubuntu 16.04?</li> <li>In AWS EC2, Ubuntu is already provided as a virtual machine. So is it possible to install VMware Workstation on that Ubuntu, since it would be VMs running on another VM?</li> </ol>
<p>Kubeadm should be a good choice for you. It is quite easy to use by just following the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/" rel="nofollow noreferrer">documentation</a>. <strike>Minikube would give you only single node Kubernetes.</strike> As of minikube 1.10.1, it is possible to <a href="https://minikube.sigs.k8s.io/docs/tutorials/multi_node/" rel="nofollow noreferrer">use multi-node clusters</a>.</p> <ol> <li>Kubeadm is a tool to get Kubernetes up and running on already existing machine. It will basically configure and start all required Kubernetes components (for minimum viable cluster). Kubeadm is the right tool to bootstrap the Kubernetes cluster on your virtual machines. But you need to prepare the machines your self (install OS + required software, networking, ...). kubeadm will not do it for you.</li> <li>Minikube is a tool which will allow you to start <strike>locally single node</strike> Kubernetes cluster. This is usually done in a VM - minikube supports VirtualBox KVM and others. It will start for you the virtual machine and take care of everything. But it will not do a 4 node cluster for you.</li> <li>Kubeadm takes care of both. You first setup the master and then use kubeadm on the worker nodes to join the master.</li> <li>When you use Kubeadm, it doesn't really care what do you use for the virtualization. You can choose whatever you want.</li> <li>Why do you want to run virtual machines on top of your EC2 machine? Why not just create more (perhaps smaller) EC2 machines for the cluster? You can use this as an inspiration: <a href="https://github.com/scholzj/terraform-aws-kubernetes" rel="nofollow noreferrer">https://github.com/scholzj/terraform-aws-kubernetes</a>. There are also some more advanced tools for setting up the whole cluster such as (for example) <a href="https://github.com/kubernetes/kops/" rel="nofollow noreferrer">Kops</a>.</li> </ol>
<p>I am using the following snippet to create the deployment </p> <pre><code>oc create -f nginx-deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx annotations: openshift.io/scc: privileged spec: securityContext: priviledged: false runAsUser: 0 volumes: - name: static-web-volume hostPath: path: /home/testFolder type: Directory containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 volumeMounts: - mountPath: /usr/share/nginx/html name: static-web-volume </code></pre> <p>I am getting permission denied issue when i try to go inside the html folder</p> <pre><code>$ cd /usr/share/nginx/html $ ls ls: cannot open directory .: Permission denied </code></pre> <p>This is easiest sample code as i have similar requirement where i have to read the files from the mounted drives, but that one is failing as well. </p> <p>I am using kubernetes 1.5 as this is only one available. I am not sure whether the volumes have been mounted or not.<br> all my dir permissions are set to root as well. </p> <p>content of /home/testfolder <code> 0 drwxrwxrwx. 3 root root 52 Apr 15 23:06 . 4 dr-xr-x---. 11 root root 4096 Apr 15 22:58 .. 0 drwxrwxrwx. 2 root root 6 Apr 15 19:56 ind 4 -rwxrwxrwx. 1 root root 14 Apr 15 19:22 index.html 4 -rwxrwxrwx. 1 root root 694 Apr 15 23:06 ordr.yam </code></p>
<p>I remember hitting this one in OpenShift some time back. It has something to do with the SELinux configuration on the host.</p> <p>Try this on the host server for the directory that you mount into your container at /usr/share/nginx/html:</p> <pre><code>sudo chcon -Rt svirt_sandbox_file_t /
</code></pre>
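<p>The path at the end of that command looks truncated. Assuming the intent is to relabel the host directory that backs the <code>hostPath</code> volume (in the question that is <code>/home/testFolder</code>), the command would look like this sketch; adjust the path to your own mount source:</p> <pre><code># relabel only the directory that is bind-mounted into the container,
# so the container is allowed to read it under SELinux
sudo chcon -Rt svirt_sandbox_file_t /home/testFolder
</code></pre>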
<p>So, I have built my SonarQube server in Kubernetes (We're using it for GoLang code quality). We have everything working and we're happy with it, but in order to have some resiliency, we want to put the Plugins directory on an NFS Share so they are preserved if/when the pod restarts. As it is now, we have to re-install and re-add them every time (not all plugins we have are available from the marketplace so we have to copy them to the plugins directory every time).</p> <p>You can see samples of my manifest files here:</p> <p><a href="https://github.com/Talderon/k8s-sonarqube-golang/tree/master/advanced" rel="nofollow noreferrer">https://github.com/Talderon/k8s-sonarqube-golang/tree/master/advanced</a></p> <p>Here is the issue I am running into:</p> <ol> <li>I copy the .jar files to the share, so they are there.</li> <li>I start up the SQ Pod (via k8s Deployment) and it mounts and the server works.</li> </ol> <p>HOWEVER, it's not picking up any of the plugins.</p> <p>When I open a shell to the pod:</p> <pre><code>kubectl exec -it &lt;POD_NAME&gt; --namespace &lt;POD_NAMESPACE&gt; -- /bin/bash </code></pre> <p>I change to the /opt/sonarqube/extensions/plugins/ directory and the .jar files are there.</p> <p>However, when I log into the server and look in the marketplace, NONE of the plugins are loaded. Even after a server restart. I have torn down the directories and rebuilt them (even changed names) and still nothing.</p> <p>Anyone have any ideas? Does the share folder need certain permissions? Is there another way to preserve everything?</p> <p>Thanks!</p>
<p>It seems that this was an issue caused by something on the NFS Server side so can be closed/removed.</p> <p>Once we fully nail down the issue, I will post the solution on the NFS Side.</p> <p>Thanks!</p>
<p>I ran into an issue with expired certificates on a k8s cluster. I have been running version 1.6.1 for over a year now, meaning that my certificates have expired and I have to renew them. In newer versions this is already done automatically, but I currently cannot upgrade my cluster to a higher version, so I have to create the certificates manually.</p> <p>I came across the following <a href="https://medium.com/@AnushkaSandaru1/how-to-change-expired-certificates-in-kubernetes-cluster-5d3feb9d9838" rel="nofollow noreferrer">link</a>, where it is described step by step, but I am actually already stuck on creating the openssl.cnf file, as I am missing parameters. At the same time, this option uses a .pem key, while the cluster currently uses .crt and .key pairs.</p> <p>Any suggestion how to move forward with this? I have also tried running the <code>kubeadm alpha phase certs selfsign</code> command, which created new certificates, yet I am still running into the issue that the api-server is refusing the TLS handshake.</p> <pre><code>http: TLS handshake error from IP:port: remote error: tls: bad certificate
</code></pre> <p>Thank you and best regards,</p> <p>Bostjan</p>
<p><strong>There is a detailed <a href="https://kubernetes.io/docs/concepts/cluster-administration/certificates/" rel="nofollow noreferrer">guide on how to generate certificates</a>.</strong></p> <p>While you are following that guide look out for a few gotchas:</p> <ul> <li>Make sure your CA certificate is valid for the period you are trying to extend the other certificates to. The validity of any certificate signed by the CA certificate is <em>also</em> limited by the expiration date of the CA certificate.</li> <li>If the validity period of the CA certificate itself is too short you are in a pickle. Replacing that certificate will require modifying all kubeconfigs (operators, cluster components).</li> <li>For the same reason as above, make very sure you don't overwrite the CA key/certificate accidentally.</li> <li>When replacing the certificate for the apiserver you will need to restart the apiserver. The apiserver does not reread the certificate automatically.</li> </ul>
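<p>To see which certificates are actually about to expire (the CA included), <code>openssl</code> can print the expiry dates; the paths below assume the default kubeadm layout under <code>/etc/kubernetes/pki</code> and may differ on your cluster:</p> <pre><code># check the CA first, then the serving certificate of the apiserver
openssl x509 -noout -enddate -in /etc/kubernetes/pki/ca.crt
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
</code></pre>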
<p>I installed Kubernetes on two CentOS 7 VMs using kubeadm.</p> <p>I am trying to follow the <a href="https://kubernetes.io/docs/tutorials/stateful-application/cassandra/" rel="nofollow noreferrer">Example: Deploying Cassandra with Stateful Sets</a> or <a href="https://github.com/IBM/Scalable-Cassandra-deployment-on-Kubernetes" rel="nofollow noreferrer">Scalable-Cassandra-deployment-on-kubernetes</a> samples.</p> <p>Creating the local volumes works but <code>kubectl get pvc</code> always results in a status of <strong>Pending</strong>. <code>kubectl describe pvc &lt;*pvc name*&gt;</code> results in the following warning:</p> <pre><code>Events:
  Type     Reason              Age                   From                         Message
  ----     ------              ----                  ----                         -------
  Warning  ProvisioningFailed  54s (x16854 over 2d)  persistentvolume-controller  storageclass.storage.k8s.io "fast" not found
</code></pre> <p>I'm uncertain how to create the "fast" storage class to enable the volume to be successfully created and complete the samples.</p>
<p>When you create a PersistentVolumeClaim you have to make sure that the corresponding <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">storage class</a> exists.</p> <blockquote> <p>A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called “profiles” in other storage systems.</p> </blockquote> <p>For example, at the bottom of the yaml file in the guide you linked you find:</p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
</code></pre> <p>This is the definition of the storage class. The apiVersion and the provisioner determine how it gets mapped to the actual storage, and that depends on the Kubernetes implementation and where it is running.</p> <p>Therefore you should double check whether you declared the storage class:</p> <pre><code>$ kubectl get storageclasses --all-namespaces
</code></pre> <p>If you do not have the storage class you should create it, specifying the correct provisioner; or, if this is merely a test, you can consider creating the volume claim against a storage class you already have.</p> <h2>Example</h2> <p>For example, running on Google Kubernetes Engine I have a <code>standard</code> class by default. Trying to deploy the claim I also get a pending error message.</p> <p>Deploying the following yaml file (and note that the provisioner changed) I am able to successfully create the persistent volume claim, since now Kubernetes knows what I mean with type "fast":</p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zones: us-central1-a, us-central1-b
</code></pre>
<p>In docker-compose I was used to create volumes in this way:</p> <pre><code>volumes: - ./server:/home/app - /home/app/node_modules </code></pre> <p>in order to solve the problem of <code>node_modules</code>.</p> <p>How could I approach the problem in kubernetes?</p> <p>I've created the following config</p> <pre><code> spec: containers: - env: [] image: "image" volumeMounts: - mountPath: /home/app name: vol-app - mountPath: /home/app/node_modules name: vol-node-modules name: demo volumes: - name: vol-app hostPath: path: /Users/me/server - name: vol-node-modules emptyDir: {} </code></pre> <p>but it doesn't work. the node_modules is empty</p>
<p>I just transitioned from Docker Swarm to Kubernetes and had the same question. The solution I settled on makes use of init containers (<a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a>).</p> <p>The general idea is that init containers allow you to copy the <code>node_modules</code> directory from your image into an empty volume, then mount that volume in your application's pod.</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: ... spec: ... template: ... spec: initContainers: - name: frontend-clone image: YOUR_REGISTRY/YOUR_IMAGE command: - cp - -a - /app/node_modules/. - /node_modules/ volumeMounts: - name: node-modules-volume mountPath: /node_modules containers: - name: frontend image: YOUR_REGISTRY/YOUR_IMAGE volumeMounts: - name: source-volume mountPath: /app - name: node-modules-volume mountPath: /app/node_modules volumes: - name: source-volume hostPath: path: /YOUR_HOST_CODE_DIRECTORY/frontend - name: node-modules-volume emptyDir: {} </code></pre> <p>Note that the command to copy the <code>node_modules</code> directory is <code>cp -a /SOURCE/. /DEST/</code> and not the more common <code>cp -r /SOURCE/* /DEST/</code>. The asterisk in the latter command would be interpreted as a literal <code>*</code> instead of a logical <code>*</code> and it wouldn't match every file/directory as intended.</p> <p>You can do whatever you want with an init container, even create the <code>node_modules</code> directory from a script at initialization (though it'd add a significant delay).</p>
<p>The goal is to orchestrate both production and local development environments using Kubernetes. The problem is that <code>hostPath</code> doesn't work with relative path values. This results in slightly differing configuration files on each developer's machine to accommodate for the different project locations (i.e. <code>"/my/absolute/path/to/the/project"</code>): </p> <pre><code>apiVersion: v1 kind: Service metadata: name: some-service labels: app: app spec: type: LoadBalancer ports: - port: 80 selector: app: app --- apiVersion: apps/v1 kind: Deployment metadata: name: some-deploy spec: selector: matchLabels: app: app replicas: 1 template: metadata: labels: app: app spec: containers: - name: app image: nginx:1.13.12-alpine ports: - containerPort: 80 volumeMounts: - name: vol_example mountPath: /var/www/html volumes: - name: vol_example hostPath: path: "/my/absolute/path/to/the/project" type: Directory </code></pre> <p>How can relative paths be used in Kubernetes config files? Variable replacements (such as <code>$(PWD)/project</code>) have been tried, but didn't seem to work. If config variables can work with volumes, this might help but unsure of how to achieve this.</p>
<p>As mentioned <a href="https://github.com/kubernetes/kubernetes/issues/52787" rel="noreferrer">here</a> <code>kubectl</code> will never support variable substitution. </p> <p>You can create a <a href="https://helm.sh/" rel="noreferrer">helm</a> chart for your app (yaml). It supports yaml template variables (among various other features). So you'll be able to pass <code>hostPath</code> parameter based on development or production. </p>
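<p>A minimal sketch of what that could look like with Helm (the chart name, value key and paths are assumptions, not something from your setup): the absolute path is kept out of the template and supplied at install time.</p> <pre><code># values.yaml
hostPath: /my/absolute/path/to/the/project
</code></pre> <pre><code># templates/deployment.yaml (only the volumes section shown)
      volumes:
        - name: vol_example
          hostPath:
            path: {{ .Values.hostPath }}
            type: Directory
</code></pre> <p>Each developer can then override the value for their own checkout, e.g. <code>helm install ./mychart --set hostPath=$(pwd)/project</code>, while production uses its own fixed value.</p>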
<p>I'm having problems setting up AWS Integration on a Kubernetes Cluster. I've already set the <em>kubernetes.io/cluster/clustername = owned</em> tag on all Instances, Subnets, VPC, and in a Single SG. I've also passed the <em>--cloud-provider=aws</em> flag to both API Server and Controller Manager, but the Controller Manager does not start.</p> <p>Controller Manager Logs:</p> <pre><code>I0411 21:03:48.360194 1 aws.go:1026] Building AWS cloudprovider I0411 21:03:48.360237 1 aws.go:988] Zone not specified in configuration file; querying AWS metadata service F0411 21:03:48.363067 1 controllermanager.go:159] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-0442e20b4a28b2274: "error listing AWS instances: \"NoCredentialProviders: no valid providers in chain. Deprecated.\\n\\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\"" </code></pre> <p>The Policy Attached to the Master Nodes is:</p> <pre><code>{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:*" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "elasticloadbalancing:*" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "route53:*" ], "Resource": [ "*" ] } ] } </code></pre> <p>Querying the AWS Metadata Service from a master via cURL returns proper credentials</p> <p>Any help will be much appreciated!</p> <p>P.S: I'm not using Kops or anything of that kind. I've set up the control components plane by myself.</p>
<p>I was able to fix this by passing the <strong>--cloud-provider=aws</strong> flag to the kubelets. I thought that wasn't needed on Master nodes.</p> <p>Thanks!</p>
<p>I'm running a Kubernetes cluster on bare metal and I'm writing an Ansible task to get the join command from a master node:</p> <pre><code>- name: Get join command from master shell: kubeadm token create --print-join-command when: role == "master" run_once: true register: join_command </code></pre> <p>When I run the playbook, I got the following error: "unable to create bootstrap token after 5 attempts []".</p> <p>If I run the exact same command (<code>kubeadm token create --print-join-command</code>) directly on the master host or remotely using <code>ssh kube-master kubeadm token create --print-join-command</code>, it outputs the join command correctly.</p> <p>I've ran out of options here... any ideas?</p>
<p>You can get that error if your "kubeadm" cannot connect to the Kubernetes cluster using credentials from the configuration file. You can reproduce it by stopping the <code>docker</code> service on your master node.</p> <p>There is no difference between running the command using Ansible or shell in your case, so it should work.</p> <p>So, the only things I can suggest are:</p> <ol> <li>Verify that the Ansible role <code>master</code> is attached to the right host.</li> <li>Check if the Ansible user has access to the <code>kubeadm</code> configuration, its default path is <code>/etc/kubernetes/admin.conf</code>, and make sure that configuration is right. You might try to run the command as root using the <code>become: true</code> option.</li> </ol>
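<p>A sketch of the task with those two suggestions applied (running as root and pointing explicitly at the admin kubeconfig; the path is the kubeadm default and may differ on your hosts):</p> <pre><code>- name: Get join command from master
  shell: kubeadm token create --print-join-command --kubeconfig /etc/kubernetes/admin.conf
  become: true
  when: role == "master"
  run_once: true
  register: join_command
</code></pre>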
<p>I am trying to whitelist IPs that can access my application. I created http-balancer by following this tutorial. <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a></p> <p>After creating the service with <code>NodePort</code> I created an <code>ingress.yaml</code> file that looks like the one below. I have created a global static ip and setup a domain name.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.global-static-ip-name: &lt;global-static-ip&gt; spec: rules: - host: &lt;domain_name&gt; - http: paths: - path: /* backend: serviceName: nginx servicePort: 80 </code></pre> <p>This above yaml file works fine and I am able to access the "Welcome to Nginx" page. </p> <p>But when I add the IPs to be whitelisted it does not seem to work and still allows other IPs that are not whitelisted.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.global-static-ip-name: &lt;global-static-ip&gt; ingress.kubernetes.io/whitelist-source-range: "xx.xx.xx.xxx/32" spec: rules: - host: &lt;domain_name&gt; - http: paths: - path: /* backend: serviceName: nginx servicePort: 80 </code></pre> <p>Reference: <a href="http://container-solutions.com/kubernetes-quick-tip/" rel="nofollow noreferrer">http://container-solutions.com/kubernetes-quick-tip/</a> <a href="https://docs.giantswarm.io/guides/advanced-ingress-configuration/" rel="nofollow noreferrer">https://docs.giantswarm.io/guides/advanced-ingress-configuration/</a></p>
<p>I have not worked with Ingress, but as per normal nginx rules you need to allow the whitelisted IPs and then deny everything else:</p> <pre><code>location / {
    allow xx.xx.xx.xxx/32;
    deny all;
    proxy_pass https://xxx.xx.xx.xx:8080;
}
</code></pre> <p>Which in turn won't allow your non-whitelisted IPs.</p>
<p>We are stuck with a big problem of <code>kubernetes</code> Queuing.</p> <p>We are submitting jobs through Workflow manager i.e. Airflow to cluster manager i.e. AWS Batch. Since there are limitations in AWS batch, irrespective of a number of jobs submitted to the queue, the batch was concurrently executing the number of jobs equal to vCPUs available in the cluster. To overcome this limitation of AWS batch, we are planning to migrate to Kubernetes over AWS batch. </p> <p>But, we are not sure, how <code>kubernetes</code> handle this problem, On exploring, we are getting examples of the queue in following links.</p> <ol> <li><p><a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/</a></p></li> <li><p><a href="https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/</a></p></li> </ol> <p>In these examples, it is required from us to write tasks to queue and read from the queue by our code. But, the case explained here, is not what we are looking for. In our case, Apache airflow's Kubernetes_Pod_operator will submit pod/job to K8 cluster, and we are expecting K8 will put these jobs in its internal queue and will pick up and will execute them on cluster based on the available cluster capacity.</p> <ul> <li><p>We want to know, does k8 internally support queue and put the jobs/ pods in the queue, and picks up and executes them on cluster based on the available cluster capacity?</p></li> <li><p>Any solution to this problem? OR is it the limitation in k8, and we should develop our own solution to this?</p></li> </ul>
<p>You can configure container resources in your job yamls. Read the below link on how to achieve this: <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled</a></p> <p>This configuration will make sure that the jobs remain in the 'Pending' state until the requirements are met. Kubernetes scheduler uses an internal queue where all 'Pending' and 'scheduler failed' jobs are stored.</p>
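<p>A minimal sketch of a Job with such resource requests (image name and sizes are placeholders; the scheduler keeps the Pod <code>Pending</code> until a node has this much unreserved capacity):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-batch-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: your-registry/your-job-image:latest   # placeholder image
        resources:
          requests:            # what the scheduler reserves
            cpu: "500m"
            memory: "512Mi"
          limits:              # hard cap enforced at runtime
            cpu: "1"
            memory: "1Gi"
</code></pre>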
<p>Given is a microservice that after some time needs to quit itself. This is no error condition but (in that case) normal behavior. After exiting it should be automatically restarted.</p> <p>So currently I have a script <code>run_app.sh</code>:</p> <pre><code>#!/usr/bin/env bash while true; do ./app ; done </code></pre> <p>And in the <code>Dockerfile</code> (inheriting <code>FROM ubuntu:16.04</code>) I run it like this:</p> <pre><code>CMD ["./run_app.sh"] </code></pre> <p>It "works" but since <code>app</code> does not have PID 1, it does not receive SIGTERM etc. coming from Kubernetes, which is needed for a graceful shutdown during rolling updates etc.</p> <p>Using <code>while true; do exec ./app ; done</code> in <code>run_app.sh</code> does not solve the problem since the loop no longer exists when <code>app</code> finished and no restart is performed.</p> <p>How can I restart the app automatically inside the container without restarting the container / pod every time it quits, but still have the advantages of PID 1?</p>
<p>Well, your loop <em>does</em> restart your app, so this is not your problem. Your problem is that the signal sent to the Docker container is not propagated to the app inside the container. Docker just isn't (AFAIK) meant to be used like this, so the signal only reaches the container's PID 1 (your loop script), not the app.</p> <p>You have two ways of handling this:</p> <ol> <li><p>You can teach the signal sender (Kubernetes or whatever), instead of sending a signal to the Docker container, to do something more elaborate to get the information to the app inside the container. I guess this is not easy (if possible at all).</p></li> <li><p>You can migrate the looping shell script outside of the container and let Kubernetes (or whatever) run this script instead. <em>Within</em> the loop you then can start the Docker container with your app inside. In this case you will need to catch the SIGTERM in the outside looping shell script (<code>help trap</code>) and either send a SIGTERM to the Docker container directly or to the app within the Docker container (using <code>docker exec</code> or similar).</p></li> </ol>
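<p>If you prefer to keep the loop inside the container, the same <code>trap</code> idea can be applied to the wrapper script itself, so it forwards SIGTERM to the app and stops looping. A minimal sketch, assuming the app is started as <code>./app</code>:</p> <pre><code>#!/usr/bin/env bash
# run_app.sh - restart ./app forever, but shut down gracefully on SIGTERM

keep_running=true
app_pid=""

term_handler() {
  keep_running=false
  # forward the termination signal to the app if it is still running
  if [ -n "$app_pid" ] &amp;&amp; kill -0 "$app_pid" 2&gt;/dev/null; then
    kill -TERM "$app_pid"
  fi
}
trap term_handler SIGTERM SIGINT

while $keep_running; do
  ./app &amp;
  app_pid=$!
  wait "$app_pid"   # returns early when a trapped signal arrives
done

# give the app a chance to finish its graceful shutdown
wait "$app_pid" 2&gt;/dev/null
</code></pre>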
<p>I am creating a Helm chart. When doing a dry run I get a error:</p> <pre><code>Error: YAML parse error on vstsagent/templates/vsts-buildrelease-agent.yaml: error converting YAML to JSON: yaml: line 28: found character that cannot start any token </code></pre> <p>The dry run also outputs the secret and deployment YAML file which I created. The part where it goes wrong in the deployment shows:</p> <pre><code> - name: ACCOUNT valueFrom: secretKeyRef: name: %!s(&lt;nil&gt;)-%!s(&lt;nil&gt;) key: ACCOUNT - name: TOKEN valueFrom: secretKeyRef: name: %!s(&lt;nil&gt;)-%!s(&lt;nil&gt;) key: TOKEN </code></pre> <p>The output from the dry run for the secret looks fine.</p> <p>The templates I created:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: {{ template "chart.fullname" . }} type: Opaque data: ACCOUNT: {{ .Values.chart.secret.account }} TOKEN: {{ .Values.chart.secret.token }} apiVersion: apps/v1 kind: Deployment metadata: name: {{ template "chart.fullname" . }} labels: app: {{ template "chart.name" . }} chart: {{ .Chart.Name }}-{{ .Chart.Version }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: updateStrategy: type: RollingUpdate template: metadata: labels: release: {{ .Release.Name }} app: {{ template "chart.name" . }} annotations: agentVersion: {{ .Values.chart.image.tag }} spec: containers: - name: {{ template "chart.name" . }} image: {{ .Values.chart.image.name }} imagePullPolicy: {{ .Values.chart.image.pullPolicy }} env: - name: ACCOUNT valueFrom: secretKeyRef: name: {{ template "chart.fullname" }} key: ACCOUNT - name: TOKEN valueFrom: secretKeyRef: name: {{ template "chart.fullname" }} key: TOKEN </code></pre> <p>The _helper.tpl looks like this:</p> <pre><code>{{/* vim: set filetype=mustache: */}} {{/* Expand the name of the chart. */}} {{- define "chart.name" -}} {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} {{- end -}} {{/* Create a default fully qualified app name. We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). */}} {{- define "chart.fullname" -}} {{- $name := default .Chart.Name .Values.nameOverride -}} {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} {{- end -}} </code></pre> <p>Where am I going wrong in this?</p>
<p>I missed 2 dots....</p> <pre><code> - name: ACCOUNT valueFrom: secretKeyRef: name: {{ template "chart.fullname" . }} key: ACCOUNT - name: TOKEN valueFrom: secretKeyRef: name: {{ template "chart.fullname" . }} key: TOKEN </code></pre>
<p>I'm trying to deploy a scalable web application on google cloud. I have kubernetes deployment which creates multiple replicas of apache+php pods. These have cpu/memory resources/limits set.</p> <p>Lets say that memory limit per replica is 2GB. How do I properly configure apache to respect this limit?</p> <p>I can modify maximum process count and/or maximum memory per process to prevent memory overflow (thus the replicas will not be killed because of OOM). But this does create new problem, this settings will also limit maximum number of requests that my replica could handle. In case of DDOS attack (or just more traffic) the bottleneck could be the maximum process limit, not memory/cpu limit. I think that this could happen pretty often, as these limits are set to worst case scenario, not based on average traffic.</p> <p>I want to configure autoscaler so that it will create multiple replicas in case of such event, not only when the cpu/memory usage is near limit.</p> <p>How do I properly solve this problem? Thanks for help!</p>
<p>I would recommend doing the following instead of trying to configure apache to limit itself internally:</p> <ul> <li>Enforce resource limits on pods, i.e. let them OOM. (but see NOTE*)</li> <li>Define an autoscaling metric for your deployment based on your load.</li> <li>Set up a namespace-wide resource-quota. This enforces a cluster-wide limit on the resources pods in that namespace can use.</li> </ul> <p>This way you can let your Apache+PHP pods handle as many requests as possible until they OOM, at which point they respawn and join the pool again, which is fine* (because hopefully they're stateless), and at no point does your overall resource utilization exceed the resource limits (quotas) enforced on the namespace.</p> <hr> <p>* NOTE: This is only true if you're not doing fancy stuff like websockets or stream-based HTTP, in which case an OOMing Apache instance takes down other clients that are holding an open socket to the instance. If you want, you should always be able to enforce limits on apache in terms of the number of threads/processes it runs anyway, but it's best not to unless you have a solid need for it. With this kind of setup, no matter what you do, you'll not be able to evade DDoS attacks of large magnitudes. You're either going to have broken sockets (in the case of OOM) or request timeouts (not enough threads to handle requests). You'd need far more sophisticated networking/filtering gear to prevent "good" traffic from taking a hit.</p>
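<p>A minimal sketch of the second and third bullets (names, namespace and numbers are placeholders to adapt): an HPA scaling the deployment on CPU, plus a quota capping the namespace as a whole.</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: apache-php-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: apache-php          # assumed deployment name
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: web-quota
  namespace: web              # assumed namespace
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.memory: 32Gi
</code></pre>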
<p>When loading a Pod with a container that has many/large layers, it can take more than 2 minutes on my cluster's machines (slower single thread performance coupled with 7200rpm spinning rust means slow untar/ungzip speeds).</p> <p>This means Kubernetes will give up on that container, saying "context deadline exceeded", then retry. Allowed to run overnight (on accident), it will run out of disk as the attempts pile up more and more.</p> <p>Example pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-large-container-1 spec: containers: - name: X image: X:latest stdin: true tty: true command: ["bash"] </code></pre> <p>Is there a field in the PodSpec I missed or a configuration for kubelet itself?</p> <p>Events seen:</p> <pre><code>2018-04-10 13:01:22 -0700 PDT 2018-04-10 13:01:22 -0700 PDT 1 test-large-container-1.15242b927c24ec40 Pod Normal Scheduled default-scheduler Successfully assigned test-large-container-1 to node1 2018-04-10 13:01:29 -0700 PDT 2018-04-10 13:01:29 -0700 PDT 1 test-large-container-1.15242b942c41e77f Pod spec.initContainers{map} Normal Pulling kubelet, node1 pulling image "X:latest" 2018-04-10 13:01:30 -0700 PDT 2018-04-10 13:01:30 -0700 PDT 1 test-large-container-1.15242b948764b21a Pod spec.initContainers{map} Normal Pulled kubelet, node1 Successfully pulled image "X:latest" 2018-04-10 13:03:30 -0700 PDT 2018-04-10 13:03:30 -0700 PDT 1 test-large-container-1.15242bb0780e06ee Pod spec.initContainers{map} Warning Failed kubelet, node1 Error: context deadline exceeded </code></pre>
<p>Thanks to <a href="https://stackoverflow.com/users/1382970/bits">bits</a>! It was the <code>--runtime-request-timeout</code> flag that I needed to change. Once I increased it enough, it started working!</p>
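<p>For anyone hitting the same thing: <code>--runtime-request-timeout</code> is a kubelet flag (its default is 2 minutes, which matches the behaviour above). One way to raise it on a kubeadm-provisioned node is via the kubelet's extra-args environment file; the exact file location is an assumption and varies by distribution:</p> <pre><code># e.g. /etc/sysconfig/kubelet (RHEL/CentOS) or /etc/default/kubelet (Debian/Ubuntu)
KUBELET_EXTRA_ARGS=--runtime-request-timeout=15m
</code></pre> <pre><code># then reload and restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>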
<p>I have a functional Ingress running with TLS setup and working correctly. I can access <a href="http://whoami.domain.com" rel="nofollow noreferrer">http://whoami.domain.com</a> and <a href="https://whoami.domain.com" rel="nofollow noreferrer">https://whoami.domain.com</a>, and correct certificate is used on the https domain.</p> <p>I'm running on Google, and I know that Googles Ingress controller does not allow setting force ssl to assure that the traffic goes over https. I know I can disable http with <strong>kubernetes.io/ingress.allow-http: "false"</strong> but we do not want the friction for the user to know that they need to use https://</p> <p>My thought of how to solve this would be to have a "redirect" backend that I define as default backend for all port=80 requests, that just 301 to https... However, I cannot find a way to define ingress rules that respects the incoming port.</p> <p>This is my current thought of how to do it, but of course it does not function :)</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: app-ingress spec: tls: - hosts: - whoami.domain.com secretName: tls-whoami rules: - host: whoami.domain.com port: 443 # my wish :) http: paths: - backend: serviceName: whoami-service servicePort: 80 - host: whoami.domain.com port: 80 # my wish :) http: paths: - backend: serviceName: http-redirect-service servicePort: 80 </code></pre> <p>I have been trying to find WHAT rule keys one can supply, but cannot find any list, just examples where they are all about host and path.</p>
<p>It is currently not possible to set up redirection from <code>http://</code> to <code>https://</code> in Google Cloud Load Balancers. Therefore you cannot do this in GKE Ingress. <a href="https://issuetracker.google.com/35904733" rel="nofollow noreferrer">https://issuetracker.google.com/35904733</a></p> <p>I personally recommend running a simple service like an nginx container that just rewrites the <code>http://</code> requests to <code>https://</code> and putting it behind the port 80 version of your app.</p> <p><strong>EDIT:</strong> I'm not sure how to achieve this. You may need two separate Ingress objects with the same hostname, but one with <code>tls:</code> and one without. BUT I'm still not sure if it will work, because the Ingress controller can create multiple forwarding-rules and likely you won't be able to achieve this.</p> <p>The best solution here might be just using a TCP/IP Load Balancer (Service type:LoadBalancer) listening on both :80 and :443 and terminating TLS yourself.</p> <p>Check out this question, it's very similar to yours: <a href="https://stackoverflow.com/questions/49667738/implementing-workaround-for-missing-http-https-redirection-in-ingress-gce-with?rq=1">Implementing workaround for missing http-&gt;https redirection in ingress-gce with GLBC</a></p>
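<p>For the nginx-based redirect idea, the configuration can be as small as this sketch (the hostname is taken from your Ingress; serve it from any minimal nginx image behind the port-80 path):</p> <pre><code>server {
    listen 80;
    server_name whoami.domain.com;
    return 301 https://$host$request_uri;
}
</code></pre>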
<p>I'm trying to setup my Kubernetescluster with a Ceph Cluster using a storageClass, so that with each PVC a new PV is created automatically inside the ceph cluster.</p> <p>But it doesn't work. I've tried a lot and read a lot of documentation and tutorials and can't figure out, what went wrong.</p> <p>I've created 2 secrets, for the ceph admin user and an other user <code>kube</code>, which I created with this command to grant access to a ceph osd pool.</p> <p>Creating the pool: <code>sudo ceph osd pool create kube 128</code></p> <p>Creating the user: <code>sudo ceph auth get-or-create client.kube mon 'allow r' \ osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' \ -o /etc/ceph/ceph.client.kube.keyring</code></p> <p>After that I exported both the keys and converted them to Base64 with: <code>sudo ceph auth get-key client.admin | base64</code> and <code>sudo ceph auth get-key client.kube | base64</code> I used those values inside my secret.yaml to create kubernetes secrets.</p> <pre><code>apiVersion: v1 kind: Secret type: "kubernetes.io/rbd" metadata: name: ceph-secret data: key: QVFCb3NxMVpiMVBITkJBQU5ucEEwOEZvM1JlWHBCNytvRmxIZmc9PQo= </code></pre> <p>And another one named ceph-user-secret.</p> <p>Then I created a storage class to use the ceph cluster</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: standard annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: kubernetes.io/rbd parameters: monitors: publicIpofCephMon1:6789,publicIpofCephMon2:6789 adminId: admin adminSecretName: ceph-secret pool: kube userId: kube userSecretName: ceph-kube-secret fsType: ext4 imageFormat: "2" imageFeatures: "layering" </code></pre> <p>To test my setup I created a PVC</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-eng spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi </code></pre> <p>But it gets stuck in the pending state:</p> <pre><code>#kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE pvc-eng Pending standard 25m </code></pre> <p>Also, no images are created inside the ceph kube pool. Do you have any recommendations how to debug this problem?</p> <p>I tried to install the ceph-common ubuntu package on all kubernetes nodes. I switched the kube-controller-manager docker image with an image provided by AT&amp;T which includes the ceph-common package.</p> <p><a href="https://github.com/att-comdev/dockerfiles/tree/master/kube-controller-manager" rel="nofollow noreferrer">https://github.com/att-comdev/dockerfiles/tree/master/kube-controller-manager</a> </p> <p>Network is fine, I can access my ceph cluster from inside a pod and from every kubernetes host.</p> <p>I would be glad if anyone has any ideas!</p>
<p>You must use the access mode <code>ReadWriteOnce</code>. As you can see in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a> (Persistent Volumes section), RBD volumes do not support the ReadWriteMany mode. Choose a different volume plugin (CephFS, for example) if you need to read and write data from a PV by several pods.</p>
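<p>Applied to the claim from the question, that would be (only <code>accessModes</code> changes):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-eng
spec:
  accessModes:
    - ReadWriteOnce   # RBD supports RWO (and ROX), but not RWX
  resources:
    requests:
      storage: 1Gi
</code></pre>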
<p>I am using Spring Cloud for Creating Microservice Architecture.</p> <p>I was using the below feature from the Spring Cloud</p> <ul> <li>Zuul – API gateway service that provides dynamic routing, monitoring, resiliency, security, and more - </li> <li>Ribbon – Client side load balancer</li> <li>Feign – Declarative REST client</li> <li>Eureka – Service registration and discovery</li> <li>Sleuth – Distributed tracing via logs</li> <li>Zipkin – Distributed tracing system with request visualization.</li> <li>Hystrix - Circuit Breaker, Fault Tolerance, Hystrix Dashboard for all API </li> </ul> <p>Now Lets say if I have 100 microservices, then we need 100 servers to maintain each microservices. So I thought of using Kubernetes to solve this issue by deploying each microservices in a separate docker container, so now since Kubernetes takes care of microserivice health check, autoscaling, load-balancing so do I need to again use Ribbon, Eureka and Zuul. </p> <p>Can anyone please help me on this</p>
<p>Even when you use Spring Cloud, 100 services do NOT mean 100 servers. In Spring Cloud the packaging unit is Spring Boot application and a single server may host many such Spring Boot applications. If you want, you can containerize the Spring Boot applications and other Spring Cloud infrastructure support components. But that is not Kubernetes.</p> <p>If you move to Kubernetes, you don't need the infrastructure support services like Zuul, Ribbon etc. because Kubernetes has its own components for service discovery, gateway, load balancer etc. In Kubernetes, the packaging unit is Docker images and one or more Docker containers can be put inside one pod which is the minimal scaling unit. So, Kubernetes has a different set of components to manage the Microservices.</p> <p>Kubernetes is a different platform than Spring cloud. Both have the same objectives. However, Kubernetes has some additional features like self healing, auto-scaling, rolling updates, compute resource management, deployments etc.</p>
<p>I create a deployment which results in 4 pods existing across 2 nodes.</p> <p>I then expose these pods via a service which results in the following cluster IP and pod endpoints:</p> <pre><code>Name: s-flask ...... IP: 10.110.201.8 Port: &lt;unset&gt; 9080/TCP TargetPort: 5000/TCP NodePort: &lt;unset&gt; 30817/TCP Endpoints: 192.168.251.131:5000,192.168.251.132:5000,192.168.251.134:5000 + 1 more... </code></pre> <p>If accessing the service internally via the cluster IP, the requests are balanced across both nodes and all pods, not just the pods on a single node (e.g. like access via a nodePort).</p> <p>I know kubernetes uses IP tables to balance requests across pods on a single node, but I can't find any documentation which explains how kubernetes balances internal service requests across multiple nodes (we are don't use load balancers or ingress for internal service load balancing).</p> <p>The cluster IP itself is virtual, the only way I think this can work, is if the cluster IP is round robin mapped to a service endpoint IP address, where the client would have to look up the cluster IP / service and select an endpoint IP?</p>
<p>Everything you need is explained in second paragraph "Virtual IPs and service proxies" of this documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service</a></p> <p>In nutshell: currently, depending on the proxy mode, for ClusterIP it's just round robin/random. It's done by kube-proxy, which runs on each nodes, proxies UDP and TCP and provides load balancing. </p> <p>It's better to think of kubernetes as a whole rather than specific nodes. Abstraction does its thing here.</p> <p>Hope it answers your question.</p>
<p>I am trying to run a docker image that I have build locally with Kubernetes.</p> <p>Here is my command line: </p> <pre><code>kubectl run myImage --image=myImage --port 3030 --image-pull-policy=IfNotPresent </code></pre> <p>I have seen many peoples saying that we need to add the <code>--image-pull-policy=IfNotPresent</code> flag so Kubernetes can look for the image locally instead of Docker Hub, but I still get this error (from the Minikube dashboard on the pod, the service and the deployment).</p> <blockquote> <p>Failed to pull image "myImage": rpc error: code = Unknown desc = Error response from daemon: pull access denied for myImage, repository does not exist or may require 'docker login'</p> </blockquote> <p>But it looks like there is another issue here, I also tried <code>--image-pull-policy=Never</code> and it doesn't work either.</p> <p>Any idea?</p>
<p>The <code>image</code> is not available in <code>minikube</code>.</p> <p>Minikube uses separate <code>docker daemon</code>. That's why, even though the image exists in your machine, it is still missing inside minikube.</p> <p>First, send the image to minikube by,</p> <pre><code>docker save myImage | (eval $(minikube docker-env) &amp;&amp; docker load) </code></pre> <p>This command will save the image as tar archive, then loads the image in minikube by itself.</p> <p>Next, use the image in your deployment with <code>image-pull-policy</code> set to <code>IfNotPresent</code></p> <pre><code>kubectl run myImage --image=myImage --port 3030 --image-pull-policy=IfNotPresent </code></pre>
<p>We're creating a kubernetes statefulset that is mounting a pre-existing NFS share.</p> <p>Here's a trimmed down example:</p> <pre><code>apiVersion: apps/v1beta2 kind: StatefulSet metadata: name: hostname spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: containers: - name: container image: 4730230466298.dkr.ecr.us-east-1.amazonaws.com/container:latest volumeMounts: - name: efs mountPath: /efs readOnly: true volumes: - name: efs nfs: path: / server: 10.33.1.90 readOnly: true </code></pre> <p>This works fine, and the nfs volume is correctly mounted into the container. But how can I specify the mount options on the the mount? I've tried setting the mountOptions parameter as shown here: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options</a></p> <p>on the volume and the volumeMount and it fails to validate. I don't need (or want) to create a PV or PVC because the nfs volume already exists outside of k8s and I just need to use it. </p> <p>Is there anyway to specify the mount options?</p>
<p>You are adding <code>PersistentVolume</code> spec fields to <code>template.spec.volumes</code> (the <code>Pod</code>'s volumes).</p> <p>These two are not the same thing. The correct reference for <code>template.spec.volumes</code> is <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/</a></p> <p>You can create a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options" rel="nofollow noreferrer"><code>PersistentVolume</code></a> and a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer"><code>PersistentVolumeClaim</code></a> with the proper <code>mountOptions</code>, and then use that <code>pvc</code> in the volume field of your above yaml.</p> <p>Here is an example of an <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer"><code>nfs volume</code></a> given by Kubernetes itself.</p>
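<p>A sketch of what that could look like for the NFS share from the question (the <code>mountOptions</code> values and the capacity are placeholders; NFS ignores the capacity, but the field is required):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.1
    - ro
  nfs:
    server: 10.33.1.90
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""      # bind to the pre-created PV, no dynamic provisioning
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>The StatefulSet then references the claim instead of the raw <code>nfs:</code> block:</p> <pre><code>      volumes:
      - name: efs
        persistentVolumeClaim:
          claimName: efs-pvc
          readOnly: true
</code></pre>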
<p>Our Kubernetes 1.6 cluster had certificates generated when the cluster was built on April 13th, 2017.</p> <p>On December 13th, 2017, our cluster was upgraded to version 1.8, and new certificates were generated [apparently, an incomplete set of certificates].</p> <p>On April 13th, 2018, we started seeing this message within our Kubernetes dashboard for api-server:</p> <p><code>[authentication.go:64] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]</code></p> <p>Tried pointing <strong>client-certificate</strong> &amp; <strong>client-key</strong> within <code>/etc/kubernetes/kubelet.conf</code> at the certificates generated on Dec 13th [<code>apiserver-kubelet-client.crt</code> and <code>apiserver-kubelet-client.crt</code>], but continue to see the above error.</p> <p>Tried pointing <strong>client-certificate</strong> &amp; <strong>client-key</strong> within <code>/etc/kubernetes/kubelet.conf</code> at <strong>different</strong> certificates generated on Dec 13th [<code>apiserver.crt</code> and <code>apiserver.crt</code>] (I honestly don't understand the difference between these 2 sets of certs/keys), but continue to see the above error.</p> <p>Tried pointing <strong>client-certificate</strong> &amp; <strong>client-key</strong> within <code>/etc/kubernetes/kubelet.conf</code> at non-existent files, and none of the kube* services would start, with <code>/var/log/syslog</code> complaining about this:</p> <p><code>Apr 17 17:50:08 kuber01 kubelet[2422]: W0417 17:50:08.181326 2422 server.go:381] invalid kubeconfig: invalid configuration: [unable to read client-cert /tmp/this/cert/does/not/exist.crt for system:node:node01 due to open /tmp/this/cert/does/not/exist.crt: no such file or directory, unable to read client-key /tmp/this/key/does/not/exist.key for system:node:node01 due to open /tmp/this/key/does/not/exist.key: no such file or directory]</code></p> <p>Any advice on how to overcome this error, or even troubleshoot it at a more granular level? Was considering regenerating certificates for api-server (<code>kubeadm alpha phase certs apiserver</code>), based on instructions within <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-certs" rel="noreferrer">https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-certs</a> ... but not sure if I'd be doing more damage.</p> <p>Relatively new to Kubernetes, and the gentleman who set this up is not available for consult ... any help is appreciated. Thanks.</p>
<p>I think you need to re-generate the apiserver certificate <code>/etc/kubernetes/pki/apiserver.crt</code>. You can view the current expiry date like this:</p> <pre><code>openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
            Not Before: Dec 20 14:32:00 2017 GMT
            Not After : Dec 20 14:32:00 2018 GMT
</code></pre> <p>Here are the steps I used to regenerate the certificates on a v1.11.5 cluster, compiled from <a href="https://github.com/kubernetes/kubeadm/issues/581" rel="noreferrer">https://github.com/kubernetes/kubeadm/issues/581</a>.</p> <hr /> <p><strong>To check all certificate expiry dates:</strong></p> <pre><code>find /etc/kubernetes/pki/ -type f -name &quot;*.crt&quot; -print|egrep -v 'ca.crt$'|xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {}|grep After'
</code></pre> <p><strong>Renew certificates on the master node.</strong></p> <p>*) Renew certificates</p> <pre><code>mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old

kubeadm alpha phase certs apiserver --config /root/kubeadm-kubetest.yaml
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client

mv /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/apiserver-etcd-client.crt.old
mv /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.key.old
kubeadm alpha phase certs apiserver-etcd-client

mv /etc/kubernetes/pki/etcd/server.crt /etc/kubernetes/pki/etcd/server.crt.old
mv /etc/kubernetes/pki/etcd/server.key /etc/kubernetes/pki/etcd/server.key.old
kubeadm alpha phase certs etcd-server --config /root/kubeadm-kubetest.yaml

mv /etc/kubernetes/pki/etcd/healthcheck-client.crt /etc/kubernetes/pki/etcd/healthcheck-client.crt.old
mv /etc/kubernetes/pki/etcd/healthcheck-client.key /etc/kubernetes/pki/etcd/healthcheck-client.key.old
kubeadm alpha phase certs etcd-healthcheck-client --config /root/kubeadm-kubetest.yaml

mv /etc/kubernetes/pki/etcd/peer.crt /etc/kubernetes/pki/etcd/peer.crt.old
mv /etc/kubernetes/pki/etcd/peer.key /etc/kubernetes/pki/etcd/peer.key.old
kubeadm alpha phase certs etcd-peer --config /root/kubeadm-kubetest.yaml
</code></pre> <p>*) Backup old configuration files</p> <pre><code>mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old

kubeadm alpha phase kubeconfig all --config /root/kubeadm-kubetest.yaml

mv $HOME/.kube/config $HOME/.kube/config.old
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
chmod 777 $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
</code></pre> <p>Reboot the node and check the logs for etcd, kubeapi and kubelet.</p> <p><strong>Note:</strong> Remember to update your CI/CD job kubeconfig file. If you’re using the helm command, test that also.</p>
<p>I am planning to introduce K8s/Istio into my infra. Right now I have plenty of services which communicate with each other using RabbitMQ.</p> <p>Service mesh concept seems to assume all services should be synchronous.</p> <p>I'd like to keep at least some of my services async and take advantage of Mutual TLS Authentication which istio provides and also aggregated monitoring.</p> <p>Is there any tools/methods which can help me to streamline management of both Sync and Async services?</p> <p>Thanks,</p>
<p>Please note that Istio can control non-HTTP protocols on top of TCP, including providing Mutual TLS Authentication. See this example of mTLS for HTTPS (treated as opaque TCP) - <a href="https://preliminary.istio.io/docs/tasks/security/https-overlay.html" rel="nofollow noreferrer">https://preliminary.istio.io/docs/tasks/security/https-overlay.html</a>.</p> <p>I am not familiar with AMQP, but since it is a <a href="https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol#Overview" rel="nofollow noreferrer">protocol on top TCP</a>, it should be possible to control it by Istio as opaque TCP.</p>
<p>I'm looking for a way to mount configs based on label selector.</p> <p>I will have new configmap's created over time, which I will label with "intendeedtarget"="MainApp". The configs are created from json file, with filename as key.</p> <p>E.g. a yml something like this (but working):</p> <pre><code> volumeMounts: - name: appconfigs mountPath: /appconfigs volumes: - name: appconfigs configMap: selector: matchLabels: intendeedtarget: MainApp </code></pre> <p>That is new configs will be auto mounted in the running pods for the "MainApp" over time.</p> <p>I have already gotten it to mount individual configs as files in the running pods with:</p> <pre><code>containers: - name: {{Name}} image: {{Image}} ports: - containerPort: 80 volumeMounts: - name: appconfigs mountPath: /appconfigs volumes: - name: appconfigs configMap: name: {{ConfigName}} </code></pre> <p>This mounts the json applied to the configmap, as a separate file in the /appconfigs folder.</p> <p>My goal now is to mount all configmaps with the label "intendeedtarget" = "MainApp" in /appconfigs.</p> <p>Any suggestions?</p> <p>A bonus question, does anyone know how to label a configmap as one command, i.e. when creating the configmap?</p> <p>BR</p>
<p>So the short version is no, one cannot specify a whole range of volumeMounts, but there is a work-around available to you: <code>initContainers:</code></p> <p>I believe it would work like this:</p> <ol> <li>The Pod would declare a <code>volume</code> of type <code>emptyDir</code>, named <code>appconfigs</code> <em>(just to make discussing it easy, the name isn't important)</em> mounted where you wish. It is of type <code>emptyDir</code> because we only care about having some shared disk space between the <code>initContainer</code> and the "main" <code>containers</code>, and that shared space should evaporate when the Pod does</li> <li>The <code>initContainer</code> would either have a <code>kubectl</code> binary in it, or download one, or use one of the many, many kubernetes api client libraries, or even just a copy of <code>curl</code> in it <ol> <li>using the <code>ServiceAccount</code> token, it queries for the <code>ConfigMap</code>s of your choosing. It is absolutely possible to put the label criteria into an environment variable of the <code>initContainer</code>, or the <code>initContainer</code> can even use the metadata of its own Pod for introspection, if you prefer that</li> <li>It extracts their payloads, massages them as you wish, and writes the files into the mount point of <code>appconfigs</code></li> </ol></li> <li>Now the <code>containers:</code> will start, but it will find the config files in the shared volume mount, just as you specified</li> </ol>
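<p>A minimal sketch of step 2 of that flow, assuming an image that ships <code>kubectl</code> (e.g. <code>bitnami/kubectl</code> — that choice is an assumption, any image with a kubectl binary works) and a ServiceAccount that is allowed to <code>get</code>/<code>list</code> ConfigMaps in the namespace:</p> <pre><code>      initContainers:
      - name: fetch-configs
        image: bitnami/kubectl:latest
        command:
        - sh
        - -c
        - |
          # write every data key of every matching ConfigMap as a file in /appconfigs
          for cm in $(kubectl get configmap -l intendeedtarget=MainApp -o name); do
            for key in $(kubectl get "$cm" -o go-template='{{range $k, $v := .data}}{{$k}} {{end}}'); do
              kubectl get "$cm" -o go-template="{{index .data \"$key\"}}" &gt; "/appconfigs/$key"
            done
          done
        volumeMounts:
        - name: appconfigs
          mountPath: /appconfigs
</code></pre> <p>For the bonus question: as far as I know <code>kubectl create configmap</code> has no flag to set labels directly, but you can chain the two commands, e.g. <code>kubectl create configmap myconf --from-file=conf.json &amp;&amp; kubectl label configmap myconf intendeedtarget=MainApp</code>.</p>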
<p>I'm trying to run confluent kafka image in kubernetes environment &amp; facing </p> <pre><code>FATAL [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) kafka.common.KafkaException: Found directory /var/lib/kafka/data, 'data' is not in the form of topic-partition or topic-partition.uniqueId-delete (if marked for deletion). Kafka's log directories (and children) should only contain Kafka topic data. </code></pre> <p>My deployment config:</p> <pre><code>apiVersion: apps/v1beta2 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: kafka-confluent labels: app: kafka-confluent spec: replicas: 1 selector: matchLabels: app: kafka-confluent template: metadata: labels: app: kafka-confluent spec: containers: - name: zookeeper-kafka image: zookeeper:3.5 ports: - containerPort: 2181 - name: kafka-confluent image: confluentinc/cp-kafka:4.0.0 ports: - containerPort: 9092 command: - sh - -c - "exec kafka-server-start /etc/kafka/server.properties \ --override reserved.broker.max.id=2147483647 \ --override zookeeper.connect=localhost:2181 \ --override listeners=PLAINTEXT://:9092 \ " </code></pre> <p>To solve this I've tried to mount some ephemeral volume like this.</p> <pre><code>volumes: - name: kafka-data emptyDir: {} ... volumeMounts: - mountPath: /var/lib/kafka/data name: kafka-data </code></pre> <p>And clear the <code>data</code> dir with init container:</p> <pre><code>containers: - name: cleaner image: busybox command: ['rm', '-rf', '/var/lib/kafka/data/*'] </code></pre> <p>Both tries failed with the same result.</p> <p>Also If I run the image &amp; list the data <code>/var/lib/kafka/data/</code> Looks like the directory is empty.</p> <pre><code>docker run --rm -it confluentinc/cp-kafka:4.0.0 bash root@35087653f43a:/# ls /var/lib/kafka/data/ -al total 8 drwxrwxrwx 2 root root 4096 Apr 16 09:59 . drwxr-xr-x 3 root root 4096 Jan 3 19:20 .. </code></pre>
<p>Is this your error? <a href="https://i.stack.imgur.com/al4yx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/al4yx.png" alt="enter image description here"></a></p> <p>To fix it, add this:</p> <pre><code>log.dirs=/var/lib/kafka/data </code></pre> <p>in <code>server.properties</code>.</p>
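<p>If you would rather keep the fix in the Deployment from the question instead of editing <code>server.properties</code> inside the image, the same property can be passed as one more <code>--override</code> flag. This is only a sketch that assumes the rest of the command stays exactly as posted:</p> <pre><code># assumed fix: point log.dirs at the mounted data directory instead of its parent
command:
  - sh
  - -c
  - "exec kafka-server-start /etc/kafka/server.properties \
     --override log.dirs=/var/lib/kafka/data \
     --override reserved.broker.max.id=2147483647 \
     --override zookeeper.connect=localhost:2181 \
     --override listeners=PLAINTEXT://:9092 \
     "
</code></pre>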
<p>Is there any way to make Kubernetes distribute pods as evenly as possible? I have "Requests" on all deployments and global Requests as well as HPA. All nodes are the same.</p> <p>I just had a situation where my ASG scaled down a node and one service became completely unavailable, as all 4 of its pods were on the same node that was scaled down.</p> <p>I would like to maintain a situation where each deployment must spread its containers across at least 2 nodes.</p>
<p>Here I build on <a href="https://stackoverflow.com/a/41169994/1024794">Anirudh's answer</a> by adding example code. </p> <p>My initial Kubernetes <a href="https://github.com/ypapax/say-grpc/blob/master/backend/kubernetes_external_ip.yaml" rel="noreferrer">yaml</a> looked like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: say-deployment spec: replicas: 6 template: metadata: labels: app: say spec: containers: - name: say image: gcr.io/hazel-champion-200108/say ports: - containerPort: 8080 --- kind: Service apiVersion: v1 metadata: name: say-service spec: selector: app: say ports: - protocol: TCP port: 8080 type: LoadBalancer externalIPs: - 192.168.0.112 </code></pre> <p>At this point, the Kubernetes scheduler somehow decides that all 6 replicas should be deployed on the same node.</p> <p>Then I <a href="https://github.com/ypapax/say-grpc/commit/d035e9c2e8195996cfbb2df1af7ca66f4c6ba586" rel="noreferrer">added</a> <code>requiredDuringSchedulingIgnoredDuringExecution</code> to force the pods to be deployed on different nodes:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: say-deployment spec: replicas: 6 template: metadata: labels: app: say spec: containers: - name: say image: gcr.io/hazel-champion-200108/say ports: - containerPort: 8080 affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app" operator: In values: - say topologyKey: "kubernetes.io/hostname" --- kind: Service apiVersion: v1 metadata: name: say-service spec: selector: app: say ports: - protocol: TCP port: 8080 type: LoadBalancer externalIPs: - 192.168.0.112 </code></pre> <p>Now all the running pods are on different nodes. And since I have 3 nodes and 6 pods, the other 3 pods (6 minus 3) can't run and stay pending. This is because I required it: <code>requiredDuringSchedulingIgnoredDuringExecution</code>.</p> <pre><code>kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE say-deployment-8b46845d8-4zdw2 1/1 Running 0 24s 10.244.2.80 night say-deployment-8b46845d8-699wg 0/1 Pending 0 24s &lt;none&gt; &lt;none&gt; say-deployment-8b46845d8-7nvqp 1/1 Running 0 24s 10.244.1.72 gray say-deployment-8b46845d8-bzw48 1/1 Running 0 24s 10.244.0.25 np3 say-deployment-8b46845d8-vwn8g 0/1 Pending 0 24s &lt;none&gt; &lt;none&gt; say-deployment-8b46845d8-ws8lr 0/1 Pending 0 24s &lt;none&gt; &lt;none&gt; </code></pre> <p>Now if I <a href="https://github.com/ypapax/say-grpc/commit/a37efdc26a1f610522990e1e9a77df7dc0b93fdd" rel="noreferrer">loosen</a> this requirement with <code>preferredDuringSchedulingIgnoredDuringExecution</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: say-deployment spec: replicas: 6 template: metadata: labels: app: say spec: containers: - name: say image: gcr.io/hazel-champion-200108/say ports: - containerPort: 8080 affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: "app" operator: In values: - say topologyKey: "kubernetes.io/hostname" --- kind: Service apiVersion: v1 metadata: name: say-service spec: selector: app: say ports: - protocol: TCP port: 8080 type: LoadBalancer externalIPs: - 192.168.0.112 </code></pre> <p>The first 3 pods are deployed on 3 different nodes, just like in the previous case. 
And the remaining 3 (6 pods minus 3 nodes) are deployed on various nodes according to Kubernetes' internal considerations.</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE say-deployment-57cf5fb49b-26nvl 1/1 Running 0 59s 10.244.2.81 night say-deployment-57cf5fb49b-2wnsc 1/1 Running 0 59s 10.244.0.27 np3 say-deployment-57cf5fb49b-6v24l 1/1 Running 0 59s 10.244.1.73 gray say-deployment-57cf5fb49b-cxkbz 1/1 Running 0 59s 10.244.0.26 np3 say-deployment-57cf5fb49b-dxpcf 1/1 Running 0 59s 10.244.1.75 gray say-deployment-57cf5fb49b-vv98p 1/1 Running 0 59s 10.244.1.74 gray </code></pre>
<p>I'm getting an error while running sonar-scanner on a (self-hosted) VSTS agent. The agent (Visual Studio Team Services) is running on a Kubernetes cluster (Linux).</p> <p>In VSTS I added the SonarQube prepare and run analysis tasks (retrieved via the VSTS marketplace). At the run analysis step I get the following error:</p> <pre><code>2018-04-17T13:41:17.7246126Z 13:41:17.718 ERROR: Error during SonarQube Scanner execution 2018-04-17T13:41:17.7257629Z 13:41:17.718 ERROR: Unable to load component class org.sonar.scanner.report.ActiveRulesPublisher 2018-04-17T13:41:17.7289820Z 13:41:17.719 ERROR: Caused by: Unable to load component interface org.sonar.api.batch.rule.ActiveRules </code></pre> <p>Full verbose logging of the sonar-scanner:</p> <pre><code> 2018-04-17T13:40:55.4491103Z Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8 2018-04-17T13:40:55.9843800Z INFO: Scanner configuration file: /vsts/agent/_work/_tasks/SonarQubeAnalyze_6d01813a-9589-4b15-8491-8164aeb38055/4.1.1/sonar-scanner/conf/sonar-scanner.properties 2018-04-17T13:40:55.9872383Z INFO: Project root configuration file: NONE 2018-04-17T13:40:56.0773880Z 13:40:56.071 INFO: SonarQube Scanner 3.1.0.1141 2018-04-17T13:40:56.0788506Z 13:40:56.076 INFO: Java 1.8.0_162 Oracle Corporation (64-bit) 2018-04-17T13:40:56.0805405Z 13:40:56.076 INFO: Linux 4.9.0-5-amd64 amd64 2018-04-17T13:40:56.4592573Z 13:40:56.458 DEBUG: keyStore is : 2018-04-17T13:40:56.4608365Z 13:40:56.459 DEBUG: keyStore type is : jks 2018-04-17T13:40:56.4620170Z 13:40:56.460 DEBUG: keyStore provider is : 2018-04-17T13:40:56.4631954Z 13:40:56.460 DEBUG: init keystore 2018-04-17T13:40:56.4643759Z 13:40:56.461 DEBUG: init keymanager of type SunX509 2018-04-17T13:40:56.5660597Z 13:40:56.564 DEBUG: Create: /root/.sonar/cache 2018-04-17T13:40:56.5696355Z 13:40:56.568 INFO: User cache: /root/.sonar/cache 2018-04-17T13:40:56.5709625Z 13:40:56.569 DEBUG: Create: /root/.sonar/cache/_tmp 2018-04-17T13:40:56.5752714Z 13:40:56.574 DEBUG: Extract sonar-scanner-api-batch in temp... 2018-04-17T13:40:56.5940510Z 13:40:56.592 DEBUG: Get bootstrap index... 2018-04-17T13:40:56.5952154Z 13:40:56.593 DEBUG: Download: https://&lt;url&gt;/batch/index 2018-04-17T13:40:56.7993378Z 13:40:56.798 DEBUG: Get bootstrap completed 2018-04-17T13:40:56.8215666Z 13:40:56.818 DEBUG: Download https://&lt;url&gt;/batch/file?name=sonar-scanner-engine-shaded-6.5.jar to /root/.sonar/cache/_tmp/fileCache5321971657904640201.tmp 2018-04-17T13:41:02.6399191Z 13:41:02.638 DEBUG: Create isolated classloader... 2018-04-17T13:41:02.6506592Z 13:41:02.649 DEBUG: Start temp cleaning... 
2018-04-17T13:41:02.6644082Z 13:41:02.663 DEBUG: Temp cleaning done 2018-04-17T13:41:02.6656506Z 13:41:02.663 DEBUG: Execution getVersion 2018-04-17T13:41:02.6669835Z 13:41:02.665 INFO: SonarQube server 6.5.0 2018-04-17T13:41:02.6684967Z 13:41:02.665 INFO: Default locale: "en_US", source code encoding: "UTF-8" (analysis is platform dependent) 2018-04-17T13:41:02.6701596Z 13:41:02.666 DEBUG: Work directory: /vsts/agent/_work/1/s/.scannerwork 2018-04-17T13:41:02.6713389Z 13:41:02.667 DEBUG: Execution execute 2018-04-17T13:41:02.9257781Z 13:41:02.924 DEBUG: Publish global mode 2018-04-17T13:41:03.0332419Z 13:41:03.032 INFO: Load global settings 2018-04-17T13:41:03.1301467Z 13:41:03.128 DEBUG: GET 200 https://&lt;url&gt;/api/settings/values.protobuf | time=89ms 2018-04-17T13:41:03.1423184Z 13:41:03.140 INFO: Load global settings (done) | time=109ms 2018-04-17T13:41:03.1546880Z 13:41:03.153 INFO: User cache: /root/.sonar/cache 2018-04-17T13:41:03.3769269Z 13:41:03.375 INFO: Load plugins index 2018-04-17T13:41:03.3867504Z 13:41:03.385 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/index.txt | time=9ms 2018-04-17T13:41:03.3882935Z 13:41:03.387 INFO: Load plugins index (done) | time=12ms 2018-04-17T13:41:03.3894980Z 13:41:03.388 DEBUG: Load plugins 2018-04-17T13:41:03.3919841Z 13:41:03.390 DEBUG: Download plugin sonar-csharp-plugin-5.10.1.1411.jar to /root/.sonar/cache/_tmp/fileCache5198949102678735871.tmp 2018-04-17T13:41:03.4017607Z 13:41:03.399 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/csharp/sonar-csharp-plugin-5.10.1.1411.jar | time=9ms 2018-04-17T13:41:06.7030760Z 13:41:06.699 DEBUG: Download plugin sonar-python-plugin-1.8.0.1496.jar to /root/.sonar/cache/_tmp/fileCache5837697557641973805.tmp 2018-04-17T13:41:06.7140570Z 13:41:06.712 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/python/sonar-python-plugin-1.8.0.1496.jar | time=11ms 2018-04-17T13:41:07.7937342Z 13:41:07.792 DEBUG: Download plugin sonar-java-plugin-4.12.0.11033.jar to /root/.sonar/cache/_tmp/fileCache3113521041013245867.tmp 2018-04-17T13:41:07.8036767Z 13:41:07.802 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/java/sonar-java-plugin-4.12.0.11033.jar | time=10ms 2018-04-17T13:41:09.1704132Z 13:41:09.169 DEBUG: Download plugin sonar-scm-git-plugin-1.2.jar to /root/.sonar/cache/_tmp/fileCache3652847485025121764.tmp 2018-04-17T13:41:09.1814559Z 13:41:09.179 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/scmgit/sonar-scm-git-plugin-1.2.jar | time=10ms 2018-04-17T13:41:10.1258417Z 13:41:10.124 DEBUG: Download plugin sonar-flex-plugin-2.3.jar to /root/.sonar/cache/_tmp/fileCache1763014158619511232.tmp 2018-04-17T13:41:10.1434478Z 13:41:10.141 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/flex/sonar-flex-plugin-2.3.jar | time=17ms 2018-04-17T13:41:10.5811390Z 13:41:10.579 DEBUG: Download plugin sonar-xml-plugin-1.4.3.1027.jar to /root/.sonar/cache/_tmp/fileCache4278096937563691973.tmp 2018-04-17T13:41:10.5931521Z 13:41:10.591 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/xml/sonar-xml-plugin-1.4.3.1027.jar | time=12ms 2018-04-17T13:41:13.0089908Z 13:41:13.006 DEBUG: Download plugin sonar-php-plugin-2.10.0.2087.jar to /root/.sonar/cache/_tmp/fileCache8869518949034818200.tmp 2018-04-17T13:41:13.0190680Z 13:41:13.017 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/php/sonar-php-plugin-2.10.0.2087.jar | time=11ms 2018-04-17T13:41:13.9587794Z 13:41:13.956 DEBUG: Download plugin sonar-scm-svn-plugin-1.5.0.715.jar to /root/.sonar/cache/_tmp/fileCache8353866177366095107.tmp 2018-04-17T13:41:13.9686573Z 
13:41:13.966 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/scmsvn/sonar-scm-svn-plugin-1.5.0.715.jar | time=10ms 2018-04-17T13:41:15.7441037Z 13:41:15.740 DEBUG: Download plugin sonar-javascript-plugin-3.1.1.5128.jar to /root/.sonar/cache/_tmp/fileCache1134031791761299423.tmp 2018-04-17T13:41:15.7552087Z 13:41:15.753 DEBUG: GET 200 https://&lt;url&gt;/deploy/plugins/javascript/sonar-javascript-plugin-3.1.1.5128.jar | time=10ms 2018-04-17T13:41:16.6007888Z 13:41:16.598 DEBUG: Load plugins (done) | time=13210ms 2018-04-17T13:41:16.6656267Z 13:41:16.664 DEBUG: API compatibility mode is enabled on plugin Git [scmgit] (built with API lower than 5.2) 2018-04-17T13:41:16.8080245Z 13:41:16.806 DEBUG: API compatibility mode is enabled on plugin SVN [scmsvn] (built with API lower than 5.2) 2018-04-17T13:41:16.8782356Z 13:41:16.877 DEBUG: Plugins: 2018-04-17T13:41:16.8803313Z 13:41:16.877 DEBUG: * C# 5.10.1.1411 (csharp) 2018-04-17T13:41:16.8814834Z 13:41:16.877 DEBUG: * SonarPython 1.8.0.1496 (python) 2018-04-17T13:41:16.8826606Z 13:41:16.878 DEBUG: * SonarJava 4.12.0.11033 (java) 2018-04-17T13:41:16.8838164Z 13:41:16.878 DEBUG: * Git 1.2 (scmgit) 2018-04-17T13:41:16.8849469Z 13:41:16.878 DEBUG: * Flex 2.3 (flex) 2018-04-17T13:41:16.8861132Z 13:41:16.878 DEBUG: * SonarXML 1.4.3.1027 (xml) 2018-04-17T13:41:16.8872441Z 13:41:16.878 DEBUG: * SonarPHP 2.10.0.2087 (php) 2018-04-17T13:41:16.8884157Z 13:41:16.878 DEBUG: * SVN 1.5.0.715 (scmsvn) 2018-04-17T13:41:16.8895519Z 13:41:16.879 DEBUG: * SonarJS 3.1.1.5128 (javascript) 2018-04-17T13:41:17.2954878Z 13:41:17.294 INFO: Process project properties 2018-04-17T13:41:17.3024512Z 13:41:17.301 DEBUG: Process project properties (done) | time=7ms 2018-04-17T13:41:17.3187780Z 13:41:17.317 INFO: Load project repositories 2018-04-17T13:41:17.3400387Z 13:41:17.339 DEBUG: GET 200 https://&lt;url&gt;/batch/project.protobuf?key=&lt;key&gt; | time=20ms 2018-04-17T13:41:17.3772015Z 13:41:17.376 INFO: Load project repositories (done) | time=59ms 2018-04-17T13:41:17.4391020Z 13:41:17.438 DEBUG: Available languages: 2018-04-17T13:41:17.4407025Z 13:41:17.438 DEBUG: * C# =&gt; "cs" 2018-04-17T13:41:17.4419498Z 13:41:17.439 DEBUG: * Python =&gt; "py" 2018-04-17T13:41:17.4431469Z 13:41:17.440 DEBUG: * Java =&gt; "java" 2018-04-17T13:41:17.4447051Z 13:41:17.440 DEBUG: * Flex =&gt; "flex" 2018-04-17T13:41:17.4459538Z 13:41:17.440 DEBUG: * XML =&gt; "xml" 2018-04-17T13:41:17.4471153Z 13:41:17.440 DEBUG: * PHP =&gt; "php" 2018-04-17T13:41:17.4483109Z 13:41:17.440 DEBUG: * JavaScript =&gt; "js" 2018-04-17T13:41:17.4494550Z 13:41:17.445 INFO: Load quality profiles 2018-04-17T13:41:17.4667036Z 13:41:17.465 DEBUG: GET 200 https://&lt;url&gt;/api/qualityprofiles/search.protobuf?projectKey=&lt;key&gt; | time=20ms 2018-04-17T13:41:17.4718216Z 13:41:17.471 INFO: Load quality profiles (done) | time=26ms 2018-04-17T13:41:17.4787808Z 13:41:17.478 INFO: Load active rules 2018-04-17T13:41:17.5539598Z 13:41:17.552 DEBUG: GET 200 https://&lt;url&gt;/api/rules/search.protobuf?f=repo,name,severity,lang,internalKey,templateKey,params,actives,createdAt&amp;activation=true&amp;qprofile=AWLTddaUW_zM7o53qFW6&amp;p=1&amp;ps=500 | time=73ms 2018-04-17T13:41:17.6794770Z 13:41:17.678 INFO: ------------------------------------------------------------------------ 2018-04-17T13:41:17.6808969Z 13:41:17.678 INFO: EXECUTION FAILURE 2018-04-17T13:41:17.6823237Z 13:41:17.678 INFO: ------------------------------------------------------------------------ 2018-04-17T13:41:17.6838138Z 13:41:17.678 INFO: Total 
time: 21.735s 2018-04-17T13:41:17.7207625Z 13:41:17.718 INFO: Final Memory: 8M/109M 2018-04-17T13:41:17.7229316Z 13:41:17.718 INFO: ------------------------------------------------------------------------ 2018-04-17T13:41:17.7246126Z 13:41:17.718 ERROR: Error during SonarQube Scanner execution 2018-04-17T13:41:17.7257629Z 13:41:17.718 ERROR: Unable to load component class org.sonar.scanner.report.ActiveRulesPublisher 2018-04-17T13:41:17.7289820Z 13:41:17.719 ERROR: Caused by: Unable to load component interface org.sonar.api.batch.rule.ActiveRules 2018-04-17T13:41:17.7301826Z 13:41:17.728 ERROR: 2018-04-17T13:41:17.7318487Z 13:41:17.728 ERROR: Re-run SonarQube Scanner using the -X switch to enable full debug logging. 2018-04-17T13:41:17.7530568Z ##[error]/vsts/agent/_work/_tasks/SonarQubeAnalyze_6d01813a-9589-4b15-8491-8164aeb38055/4.1.1/sonar-scanner/bin/sonar-scanner failed with return code: 1 </code></pre>
<p>The class <code>org.sonar.api.batch.rule.ActiveRules</code> is defined in <code>sonar-plugin-api</code>, so I think the bug is caused by a plugin.</p> <p>I see you use <code>C# 5.10.1.1411</code>. I found this issue: <a href="https://github.com/SonarSource/sonar-csharp/issues/307" rel="nofollow noreferrer">ActiveRulesPublisher issue for C# 5.10.1</a>. The reporter uses exactly the same version as you (<code>5.10.1.1411</code>). Steps which should solve your problem:</p> <ul> <li>stop the SonarQube server</li> <li>purge the <code>{SONAR_QUBE}/data/es</code> folder</li> <li>restart the SonarQube server</li> <li>relaunch the project analysis</li> </ul>
<p>I am trying to set up a conventional web app with a database in Kubernetes. I have accomplished it by configuring 2 services and 2 deployments - one for the app and one for the database. Now I would like to make my database accessible only from the app pods, i.e. not expose it to the outside world like a service. Is this possible using only Kubernetes configuration?</p>
<p>There are the following ways to expose pods.</p> <p><strong>Internally expose</strong><br> The purpose is inter-service communication.</p> <ul> <li>service <code>type=ClusterIP</code></li> <li><strong>Headless service</strong> (<code>clusterIP: None</code>) is typically used for database pods (see the sketch below)</li> </ul> <blockquote> <p>Sometimes you don’t need or want load-balancing and a single service IP. <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless-services</a></p> </blockquote> <p><strong>Externally expose</strong><br> Exposing the service to customers.</p> <ul> <li>service <code>type=NodePort</code> or <code>type=LoadBalancer</code></li> </ul>
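<p>For the database in the question, a plain ClusterIP (or headless) Service is enough to keep it reachable only from inside the cluster. A minimal sketch — the name, labels and port are placeholders, and the port assumes PostgreSQL:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mydb                 # placeholder name
spec:
  clusterIP: None            # headless; remove this line for a regular ClusterIP service
  selector:
    app: mydb                # must match the labels on the database pods
  ports:
    - port: 5432             # database port (PostgreSQL assumed here)
      targetPort: 5432
</code></pre> <p>Because no <code>NodePort</code> or <code>LoadBalancer</code> is involved, only workloads inside the cluster can resolve and reach it.</p>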
<p>Who has access to open pod terminals through the OpenShift web console? What permissions need to be given for this?</p>
<p>Any user that has the <code>edit</code> or <code>admin</code> role in the project in which the application is running can create a terminal session for pods running in that project.</p> <pre><code>oc adm policy add-role-to-user edit &lt;collaborator&gt; </code></pre> <p>The purposes of the different roles you can grant are:</p> <ul> <li>admin - A project manager. The user will have rights to view any resource in the project and modify any resource in the project except for quota. A user with this role for a project will be able to delete the project.</li> <li>edit - A user that can modify most objects in a project, but does not have the power to view or modify roles or bindings. A user with this role can create and delete applications in the project.</li> <li>view - A user who cannot make any modifications, but can see most objects in a project.</li> </ul> <p>A user with the <code>view</code> role cannot create a terminal session, as that would mean they could modify things.</p>
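<p>For reference, the <code>oc adm policy</code> command above is roughly equivalent to creating a RoleBinding like the sketch below (the project name and username are placeholders):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edit-for-collaborator        # placeholder name
  namespace: myproject               # the project the application runs in
subjects:
  - kind: User
    name: collaborator               # the user who should get terminal access
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                         # or admin; view is not enough to open a terminal
  apiGroup: rbac.authorization.k8s.io
</code></pre>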
<p>When I run the helm install command, the line below gives me an error:</p> <pre><code>args: [while [ 1 ]; do echo "hi" ; sleep 1; done;] </code></pre> <p>Error:</p> <pre><code>Error: YAML parse error converting YAML to JSON: yaml: line 27: did not find expected ',' or ']' </code></pre>
<p>Square brackets have special meaning in YAML (they indicate a flow sequence, i.e. an inline array). You need to quote that scalar (string):</p> <pre class="lang-yaml prettyprint-override"><code>args: [ 'while [ 1 ]; do echo "I am awake" ; sleep 1; done;' ] </code></pre> <p>...or make it a block scalar and use the literal indicator, <code>|</code>:</p> <pre class="lang-yaml prettyprint-override"><code>args: - | while [ 1 ]; do echo "I am awake" ; sleep 1; done; </code></pre> <p>Both of these <a href="http://yaml-online-parser.appspot.com/?yaml=-+args%3A+%5B+%27while+%5B+1+%5D%3B+do+echo+%22I+am+awake%22+%3B+sleep+1%3B+done%3B%27+%5D%0A-+args%3A%0A++++-+%7C%0A++++++while+%5B+1+%5D%3B+do+echo+%22I+am+awake%22+%3B+sleep+1%3B+done%3B&amp;type=json" rel="nofollow noreferrer">produce the same JSON</a>:</p> <pre class="lang-json prettyprint-override"><code>{ "args": [ "while [ 1 ]; do echo \"I am awake\" ; sleep 1; done;" ] } </code></pre>
<p>I'm looking to understand how to recreate my cluster. There's a cluster-level setting to specify the IP range for nodes created within it, which I want to use so I can set a decent firewall rule. However, it looks like that can't be changed once the cluster is created.</p> <p>I have a number of namespaces, deployments, services, secrets, persistent volumes and claims. If I wanted to transfer them all to a new cluster, should I just <code>kubectl get all --namespace=whatever -o yaml</code>, <code>kubectl delete -f</code>, and then <code>kubectl apply -f</code> on the new cluster?</p> <p>Would something so crude work for mapping to the same load balancers / public IPs, persistent volumes, secrets, etc?</p>
<p>As you can see, the backup and migration of whole clusters is quite a discussed matter and still an open issue on the Kubernetes GitHub as well:</p> <ul> <li><a href="https://github.com/kubernetes/kubernetes/issues/24229" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/24229</a></li> </ul> <p>Therefore I do not believe that the command you posted can be considered a solution, or that it will work. I think it will fail due to resources that are cluster-dependent, and due to IPs. Moreover, since this kind of use is not supported, it will lead to multiple issues.</p> <p>Let's say that you change the zone of the cluster: how would it be possible to move the PV if the disk cannot be attached to an instance in a different zone (or, possibly, if you migrate to a different cloud provider)?</p> <p>More importantly, I would not risk deleting my production environment to run a command that is not documented or indicated as a best practice. You could try it on a test namespace, but I would not suggest going further.</p> <p>You can check <a href="https://hackernoon.com/introducing-reshifter-for-kubernetes-backup-restore-migration-upgrade-ffaf78da36" rel="nofollow noreferrer">reshifter</a> and <a href="https://github.com/heptio/ark" rel="nofollow noreferrer">ark</a>, since they might cover your needs. I have never tested them, but they are mentioned in the thread, so they might be of interest to you.</p> <hr> <p>I tried this approach in one of my test clusters, obtaining:</p> <pre><code>Error from server (Conflict): Operation cannot be fulfilled Error from server (Conflict): Operation cannot be fulfilled Error from server (Forbidden): [...] </code></pre> <p>Honestly, I believe that for a limited subset of resources it might be possible (note that some resources were created correctly), but it cannot at all be considered a way to migrate.</p>
<p>I was able to follow the documentation and get a Kubernetes cluster up, but I would like to add a second master node. I tried this on the second node but I am seeing an error:</p> <pre><code>[root@kubemaster02 ~]# kubeadm init --apiserver-advertise-address=10.122.161.XX --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.10.0 [init] Using Kubernetes version: v1.10.0 [init] Using Authorization modes: [Node RBAC] [preflight] Running pre-flight checks. [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03 [WARNING FileExisting-crictl]: crictl not found in system path Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl [preflight] Some fatal errors occurred: [ERROR Port-10250]: Port 10250 is in use [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` </code></pre> <p>My question is whether this is the correct way to add the second master, by doing an init? Another question I have is how to tell whether a node is configured as a master or not; the following command is not showing the ROLES column for some reason (maybe due to the older version):</p> <pre><code>[root@master01 ~]# kubectl get nodes -o wide NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION kubemaster01 Ready 215d v1.8.1 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-693.5.2.el7.x86_64 kubemaster02 Ready 132d v1.8.4 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-693.5.2.el7.x86_64 kubenode01 Ready 215d v1.8.1 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-693.5.2.el7.x86_64 kubenode02 Ready 214d v1.8.1 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-693.5.2.el7.x86_64 </code></pre>
<p>In your case, please check what is running on port 10250:</p> <pre><code>netstat -nlp | grep 10250 </code></pre> <p>Because your error is:</p> <blockquote> <p>[ERROR Port-10250]: Port 10250 is in use</p> </blockquote> <p>In general, you can bootstrap an additional master and have 2 masters. Before running kubeadm on the other master, you need to first copy the K8s CA cert from <code>kubemaster01</code>. To do this, you have two options: </p> <p><strong>Option 1: Copy with scp</strong></p> <pre><code>scp root@&lt;kubemaster01-ip-address&gt;:/etc/kubernetes/pki/* /etc/kubernetes/pki </code></pre> <p><strong>Option 2: Copy paste</strong></p> <p>Copy the contents of <code>/etc/kubernetes/pki/ca.crt</code>, <code>/etc/kubernetes/pki/ca.key</code>, <code>/etc/kubernetes/pki/sa.key</code> and <code>/etc/kubernetes/pki/sa.pub</code> and create these files manually on <code>kubemaster02</code>.</p> <p><a href="https://kubernetes.io/docs/setup/independent/high-availability/#set-up-master-load-balancer" rel="nofollow noreferrer">The next step is to create a Load Balancer</a> that sits in front of your master nodes. How you do this depends on your environment; you could, for example, leverage a cloud provider Load Balancer, or set up your own using NGINX, keepalived, or HAproxy.</p> <p>For bootstrapping, use the following <code>config.yaml</code>:</p> <pre><code>cat &gt;config.yaml &lt;&lt;EOF apiVersion: kubeadm.k8s.io/v1alpha1 kind: MasterConfiguration api: advertiseAddress: &lt;private-ip&gt; etcd: endpoints: - https://&lt;your-ectd-ip&gt;:2379 caFile: /etc/kubernetes/pki/etcd/ca.pem certFile: /etc/kubernetes/pki/etcd/client.pem keyFile: /etc/kubernetes/pki/etcd/client-key.pem networking: podSubnet: &lt;podCIDR&gt; apiServerCertSANs: - &lt;load-balancer-ip&gt; apiServerExtraArgs: apiserver-count: "2" EOF </code></pre> <p>Ensure that the following placeholders are replaced:</p> <ul> <li><code>your-ectd-ip</code> - the IP address of your etcd</li> <li><code>private-ip</code> - the private IPv4 address of the master server</li> <li><code>&lt;podCIDR&gt;</code> - your Pod CIDR</li> <li><code>load-balancer-ip</code> - the endpoint used to connect to your masters</li> </ul> <p>Then you can run the command:</p> <pre><code>kubeadm init --config=config.yaml </code></pre> <p>and bootstrap the masters.</p> <p>But if you really want an HA cluster, please follow the documentation's minimal requirements and use 3 master nodes. These requirements exist because of etcd quorum: every master node runs an etcd member, which works closely with the master components.</p>
<p>I'm currently running on AWS and use <a href="https://github.com/kube-aws/kube-spot-termination-notice-handler" rel="noreferrer">kube-aws/kube-spot-termination-notice-handler</a> to intercept an AWS spot termination notice and gracefully evict the pods.</p> <p>I'm reading <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/preemptible-vm" rel="noreferrer">this GKE documentation page</a> and I see:</p> <blockquote> <p>Preemptible instances terminate after 30 seconds upon receiving a preemption notice.</p> </blockquote> <p>Going into the Compute Engine documentation, I see that an ACPI G2 Soft Off is sent 30 seconds before the termination happens, but <a href="https://github.com/kubernetes/kubernetes/issues/22494" rel="noreferrer">this issue</a> suggests that the kubelet itself doesn't handle it.</p> <p><strong>So, how does GKE handle preemption?</strong> Will the node do a drain/cordon operation or does it just do a hard shutdown?</p>
<p>Yes, you are right: so far there is no built-in way to handle <code>ACPI G2 Soft Off</code>.</p> <p>Notice that while <a href="https://cloud.google.com/compute/docs/instances/preemptible" rel="nofollow noreferrer">normal preemptible</a> instances support shutdown scripts (where you could introduce some kind of logic to perform a drain/cordon), this is not the case when they are Kubernetes nodes: </p> <blockquote> <p>Currently, preemptible VMs do not support shutdown scripts. </p> </blockquote> <p>You can perform some tests, but quoting again from the documentation:</p> <blockquote> <p>You can simulate an instance preemption by stopping the instance.</p> </blockquote> <p>And so far, if you stop the instance, even if it is a Kubernetes node, no action is taken to cordon/drain and gracefully remove the node from the cluster.</p> <p>However, this feature is still in beta, therefore it is at an early stage of its life, and at the moment it is a matter of discussion whether and how to introduce such handling.</p> <p><strong>Disclaimer: I work for Google Cloud Platform Support</strong></p>
<p>I'm trying to change the <code>client_max_body_size</code> value, so my NGINX ingress will not return the HTTP 413 Content Too Large error (as seen in the logs).</p> <p>I've tested a few solutions.<br /> Here is my config map:</p> <pre><code>kind: ConfigMap apiVersion: v1 data: proxy-connect-timeout: &quot;15&quot; proxy-read-timeout: &quot;600&quot; proxy-send-timeout: &quot;600&quot; proxy-body-size: &quot;8m&quot; hsts-include-subdomains: &quot;false&quot; body-size: &quot;64m&quot; server-name-hash-bucket-size: &quot;256&quot; client-max-body-size: &quot;50m&quot; metadata: name: nginx-configuration namespace: ingress-nginx labels: app: ingress-nginx </code></pre> <p>These changes had no effect at all: in NGINX controller's log I can see the information about reloading the config map, but the values in <code>nginx.conf</code> are the same:</p> <pre><code>$ cat /etc/nginx/nginx.conf | grep client_max client_max_body_size &quot;8m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; client_max_body_size &quot;1m&quot;; </code></pre> <p>My <code>nginx-controller</code> config uses this image: <code>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0</code></p> <p>How can I force NGINX to change this setting? I need to change it globally, for all my ingresses.</p>
<p>You can use the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#custom-max-body-size" rel="noreferrer">annotation</a> <code>nginx.ingress.kubernetes.io/proxy-body-size</code> to set the max-body-size option right in your Ingress object instead of changing a base ConfigMap.</p> <p>Here is the example of usage:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-app annotations: nginx.ingress.kubernetes.io/proxy-body-size: "50m" ... </code></pre>
<p>As far as I know: </p> <ul> <li>deploymentconfig &rarr; replicationcontroller &rarr; pod</li> </ul> <p>vs.</p> <ul> <li>deployment &rarr; replicaset &rarr; pod</li> </ul> <p>Otherwise, do these two resources have additional differences?</p> <p>The more detail the better.</p>
<p>A <a href="https://docs.openshift.com/container-platform/3.9/dev_guide/deployments/how_deployments_work.html" rel="noreferrer">DeploymentConfig</a> (DC) in OpenShift is more or less equivalent to a Kubernetes <code>Deployment</code>, nowadays. The main difference (besides one using a <code>ReplicationController</code> and the other a <code>ReplicaSet</code>, as you rightly pointed out) is that </p> <ol> <li><p>there are a few things you can do with a <code>DeploymentConfig</code> (around triggers) that you can't do with a <code>Deployment</code> (see the sketch below). </p></li> <li><p><code>DeploymentConfig</code>'s are first-class citizens in the Web console.</p></li> </ol> <p>The reason <code>DeploymentConfig</code>'s exist is because we (Red Hat) are innovating. In other words: <code>DeploymentConfig</code>'s predate <code>Deployment</code>'s and while we're always trying to propose these innovations upstream, they are not always accepted by the community as is. For example, in the case of RBAC, the stuff we had in OpenShift was accepted upstream and that's why you have the same RBAC resources etc. now in OpenShift and Kubernetes. With <code>DeploymentConfig</code>'s that was not the case. Over time one would expect that <code>DeploymentConfig</code>'s are phased out in favor of <code>Deployment</code>'s but I can't give you a timeline. If portability is your primary concern, I'd say, use <code>Deployment</code>'s.</p>
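<p>To make the trigger point above concrete, here is a rough sketch of a <code>DeploymentConfig</code> with <code>ConfigChange</code> and <code>ImageChange</code> triggers — something a plain <code>Deployment</code> has no direct equivalent for. The name, image stream and replica count are placeholders, not taken from the answer:</p> <pre><code>apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp                          # placeholder
spec:
  replicas: 2
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
  triggers:
    - type: ConfigChange               # roll out when the pod template changes
    - type: ImageChange                # roll out when the image stream tag is updated
      imageChangeParams:
        automatic: true
        containerNames:
          - myapp
        from:
          kind: ImageStreamTag
          name: myapp:latest
</code></pre> <p>With a <code>Deployment</code> you would typically need an external CI step (or an operator) to push a new image tag into the pod template to get the same behaviour.</p>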
<p>In Kubernetes, a ReplicationController uses label selectors to specify which pod(s) it will operate on.</p> <p>Now, my question is: if multiple ReplicationControllers have overlapping label selectors, what will be the behavior of the Kubernetes cluster? Or is it considered some kind of error that should be avoided?</p> <p>For example, I have two ReplicationControllers described below.</p> <p>rc1.yaml:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: frontend spec: replicas: 3 selector: app: frontend team: payment </code></pre> <p>rc2.yaml:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: backend spec: replicas: 2 selector: app: backend team: payment </code></pre> <p>You can see that they both have the label selector <code>team=payment</code>, but one specifies a replica count of 3 while the other specifies 2.</p> <p>Any explanations or references will be appreciated. Thanks.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#pod-selector" rel="nofollow noreferrer">Kubernetes docs</a> state the following about overlapping selectors (either from other pods, replication controllers or jobs):</p> <blockquote> <p>Also you should not normally create any pods whose labels match this selector, either directly, with another ReplicationController, or with another controller such as Job. If you do so, the ReplicationController thinks that it created the other pods. Kubernetes does not stop you from doing this. If you do end up with multiple controllers that have overlapping selectors, you will have to manage the deletion yourself</p> </blockquote> <p>In the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#selector" rel="nofollow noreferrer">docs about Deployments</a> there is another statement that confirms the above:</p> <blockquote> <p>If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won’t behave correctly.</p> </blockquote> <p>So, in summary, you should try to avoid overlapping selectors for replication controllers, replica sets (the next-generation replication controller, which I would advise you to use instead of replication controllers) and deployments, because it will most likely lead to severe problems, as confirmed by this <a href="https://github.com/kubernetes/kubernetes/issues/24152" rel="nofollow noreferrer">issue</a>. </p>
<p>I am running a 3-node bare metal cluster on version <code>1.9.5</code>.</p> <p>The IPs of the 3 nodes are:</p> <pre><code>[root@node1 new]# kubectl get nodes NAME STATUS ROLES AGE VERSION IP node1 Ready master,node 1d v1.9.5 172.16.16.1 node2 Ready node 1d v1.9.5 172.16.16.2 node3 Ready node 1d v1.9.5 172.16.16.3 </code></pre> <p>Everything I explain below is being done in one namespace, i.e. <code>ingress-nginx</code>.</p> <p>I have 2 apps deployed.</p> <pre><code>[root@node1 new]# kubectl get po -n ingress-nginx NAME READY STATUS RESTARTS AGE app1-5d4d466cc7-595lc 1/1 Running 0 25m app2-55cf97d86d-9v8gl 1/1 Running 0 25m </code></pre> <p>Their services are:</p> <pre><code>[root@node1 new]# kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE appsvc1 NodePort 10.233.60.145 &lt;none&gt; 80:32601/TCP 25m appsvc2 NodePort 10.233.46.230 &lt;none&gt; 80:30616/TCP 25m </code></pre> <p>So when I access them via <code>NodePort</code>, I get my desired result.</p> <pre><code>curl http://172.16.16.2:32601 &lt;h1&gt;Hello app1!&lt;/h1&gt; curl http://172.16.16.2:30616 &lt;h1&gt;Hello app2!&lt;/h1&gt; </code></pre> <p>Now my aim is to configure path-based routing using the nginx ingress controller so that, at the end of it, I can reach the apps using</p> <pre><code>curl http://172.16.16.2/app1 &lt;h1&gt;Hello app1!&lt;/h1&gt; &amp; curl http://172.16.16.2/app2 &lt;h1&gt;Hello app2!&lt;/h1&gt; </code></pre> <p>So now I have set up an ingress controller using <a href="https://github.com/kubernetes/ingress-nginx/tree/master/deploy" rel="nofollow noreferrer">ingress-nginx</a>.</p> <p>The nginx controller is also deployed in the same namespace as the apps, i.e. <code>ingress-nginx</code>.</p> <p>My ingress controller is successfully deployed, as the logs indicate:</p> <pre><code>[root@node1 new]# kubectl logs nginx-ingress-controller-9c7b694-bjn6h -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.12.0 Build: git-1df421a Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- I0415 16:08:18.736790 5 main.go:225] Running in Kubernetes Cluster version v1.9 (v1.9.5) - git (clean) commit f01a2bf98249a4db383560443a59bed0c13575df - platform linux/amd64 I0415 16:08:18.743855 5 main.go:84] validated ingress-nginx/default-http-backend as the default backend I0415 16:08:19.182913 5 stat_collector.go:77] starting new nginx stats collector for Ingress controller running in namespace (class nginx) I0415 16:08:19.182944 5 stat_collector.go:78] collector extracting information from port 18080 I0415 16:08:19.211749 5 nginx.go:281] starting Ingress controller I0415 16:08:20.325839 5 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"app-ingress", UID:"918405ac-410e-11e8-a473-080027917402", APIVersion:"extensions", ResourceVersion:"75539", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/app-ingress I0415 16:08:20.326503 5 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-nginx", Name:"nginx-ingress", UID:"8dc22500-410e-11e8-a473-080027917402", APIVersion:"extensions", ResourceVersion:"75538", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ingress-nginx/nginx-ingress I0415 16:08:20.413514 5 store.go:614] running initial sync of secrets I0415 16:08:20.413591 5 nginx.go:302] starting NGINX process... 
I0415 16:08:20.418356 5 leaderelection.go:174] attempting to acquire leader lease... W0415 16:08:20.422596 5 controller.go:777] service ingress-nginx/nginx-ingress does not have any active endpoints I0415 16:08:20.422686 5 controller.go:183] backend reload required I0415 16:08:20.422694 5 stat_collector.go:34] changing prometheus collector from to default I0415 16:08:20.439620 5 status.go:196] new leader elected: nginx-ingress-controller-9c7b694-h2n4b I0415 16:08:20.534277 5 controller.go:192] ingress backend successfully reloaded... W0415 16:08:28.768140 5 controller.go:777] service ingress-nginx/nginx-ingress does not have any active endpoints I0415 16:09:00.478068 5 leaderelection.go:184] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx I0415 16:09:00.478207 5 status.go:196] new leader elected: nginx-ingress-controller-9c7b694-bjn6h </code></pre> <p>Then I configured my ingress using:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: / name: app-ingress spec: rules: - host: my-test.com http: paths: - backend: serviceName: appsvc1 servicePort: 80 path: /app1 - backend: serviceName: appsvc2 servicePort: 80 path: /app2 </code></pre> <p>I created this using:</p> <pre><code>kubectl create -f app-ingress.yaml -n ingress-nginx </code></pre> <p>I then expose this using:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx-ingress spec: type: NodePort externalIps: - 172.16.16.1 - 172.16.16.2 - 172.16.16.3 ports: - port: 80 nodePort: 30000 name: http selector: app: nginx-ingress </code></pre> <p><code>app: nginx-ingress</code> points to the label on my ingress-controller pod.</p> <p>I deploy it using:</p> <pre><code>kubectl create -f nginx-ingress-controller-service.yaml -n=ingress </code></pre> <p>But when I try to access the apps using the URLs, I get:</p> <pre><code>curl http://172.16.16.2/app1 default backend - 404 &amp; curl http://172.16.16.2/app2 default backend - 404 </code></pre> <p>Even doing </p> <pre><code>curl http://my-test.com/app1 default backend - 404 &amp; curl http://my-test.com/app2 default backend - 404 </code></pre> <p>does not work.</p> <p>My /etc/hosts file is:</p> <pre><code>172.16.16.1 my-test.com </code></pre> <p>Is there something I have missed or am doing wrong? 
</p> <pre><code>[root@node1 new]# kubectl get all -n ingress-nginx NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/app1 2 2 2 2 48m deploy/app2 2 2 2 2 48m deploy/default-http-backend 1 1 1 1 3h deploy/nginx-ingress-controller 1 1 1 1 3h NAME DESIRED CURRENT READY AGE rs/app1-5d4d466cc7 2 2 2 48m rs/app2-55cf97d86d 2 2 2 48m rs/default-http-backend-55c6c69b88 1 1 1 3h rs/nginx-ingress-controller-9c7b694 1 1 1 3h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/app1 2 2 2 2 48m deploy/app2 2 2 2 2 48m deploy/default-http-backend 1 1 1 1 3h deploy/nginx-ingress-controller 1 1 1 1 3h NAME DESIRED CURRENT READY AGE rs/app1-5d4d466cc7 2 2 2 48m rs/app2-55cf97d86d 2 2 2 48m rs/default-http-backend-55c6c69b88 1 1 1 3h rs/nginx-ingress-controller-9c7b694 1 1 1 3h NAME READY STATUS RESTARTS AGE po/app1-5d4d466cc7-595lc 1/1 Running 0 48m po/app1-5d4d466cc7-5dn72 1/1 Running 0 48m po/app2-55cf97d86d-9v8gl 1/1 Running 0 48m po/app2-55cf97d86d-lckpn 1/1 Running 0 48m po/default-http-backend-55c6c69b88-8shkt 1/1 Running 0 3h po/nginx-ingress-controller-9c7b694-bjn6h 1/1 Running 0 52m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/appsvc1 NodePort 10.233.60.145 &lt;none&gt; 80:32601/TCP 48m svc/appsvc2 NodePort 10.233.46.230 &lt;none&gt; 80:30616/TCP 48m svc/default-http-backend ClusterIP 10.233.5.30 &lt;none&gt; 80/TCP 3h svc/ingress-nginx NodePort 10.233.6.186 &lt;none&gt; 80:31301/TCP,443:32103/TCP 3h svc/nginx-ingress NodePort 10.233.9.163 172.16.16.1,172.16.16.2,172.16.16.3 80:30000/TCP 2h </code></pre> <p>I even changed my ingress config to remove the host and be open for all hosts:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: / name: app-ingress spec: rules: - http: paths: - path: /app1 backend: serviceName: appsvc1 servicePort: 80 - path: /app2 backend: serviceName: appsvc2 servicePort: 80 </code></pre> <p>And now I get redirected.</p> <pre><code>[root@node1 ~]# curl http://172.16.16.1/app1 &lt;html&gt; &lt;head&gt;&lt;title&gt;308 Permanent Redirect&lt;/title&gt;&lt;/head&gt; &lt;body bgcolor="white"&gt; &lt;center&gt;&lt;h1&gt;308 Permanent Redirect&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.13.9&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; [root@node1 ~]# curl http://172.16.16.1/app2 &lt;html&gt; &lt;head&gt;&lt;title&gt;308 Permanent Redirect&lt;/title&gt;&lt;/head&gt; &lt;body bgcolor="white"&gt; &lt;center&gt;&lt;h1&gt;308 Permanent Redirect&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.13.9&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Ingress controller logs say :</p> <pre><code>10.233.102.128 - [10.233.102.128] - - [15/Apr/2018:17:35:52 +0000] "GET /app1 HTTP/1.1" 308 187 "-" "curl/7.29.0" 80 0.000 [ingress-nginx-appsvc1-80] - - - - 10.233.102.128 - [10.233.102.128] - - [15/Apr/2018:17:36:09 +0000] "GET /app1 HTTP/1.1" 308 187 "-" "curl/7.29.0" 79 0.000 [ingress-nginx-appsvc1-80] - - - - 10.233.102.128 - [10.233.102.128] - - [15/Apr/2018:17:36:11 +0000] "GET /app2 HTTP/1.1" 308 187 "-" "curl/7.29.0" 79 0.000 [ingress-nginx-appsvc2-80] - - - - 10.233.102.128 - [10.233.102.128] - - [15/Apr/2018:17:36:25 +0000] "GET /app2 HTTP/1.1" 308 187 "-" "curl/7.29.0" 85 0.000 [ingress-nginx-appsvc2-80] - - - - 10.233.102.128 - [10.233.102.128] - - [15/Apr/2018:17:36:51 +0000] "GET /app2 HTTP/1.1" 308 187 "-" "curl/7.29.0" 80 0.000 [ingress-nginx-appsvc2-80] - - - - </code></pre>
<p>All your configs are fine, except one thing related to the annotation:</p> <pre><code> annotations: nginx.ingress.kubernetes.io/rewrite-target: / </code></pre> <p>You don't need this annotation, because it is processed before routing, so you always end up with a request to <code>/</code>. You have no route for that path, so just remove the annotation and it should work.</p>
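<p>For completeness, here is the last Ingress manifest from the question with just that annotation removed (no other changes intended):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - http:
        paths:
          - path: /app1
            backend:
              serviceName: appsvc1
              servicePort: 80
          - path: /app2
            backend:
              serviceName: appsvc2
              servicePort: 80
</code></pre>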
<p>I am suffering from the problem below.</p> <p>I use Ubuntu 16.04. I am following the guide document, but it does not work for me.</p> <pre><code># cat /etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main # curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - OK # apt-get update Hit:1 http://kr.archive.ubuntu.com/ubuntu xenial InRelease Hit:2 http://kr.archive.ubuntu.com/ubuntu xenial-updates InRelease Hit:3 http://kr.archive.ubuntu.com/ubuntu xenial-backports InRelease Hit:4 http://security.ubuntu.com/ubuntu xenial-security InRelease Get:5 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8993 B] Ign:5 https://packages.cloud.google.com/apt kubernetes-xenial InRelease Fetched 8993 B in 1s (5258 B/s) Reading package lists... Done W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB W: The repository 'http://apt.kubernetes.io kubernetes-xenial InRelease' is not signed. N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use. N: See apt-secure(8) manpage for repository creation and user configuration details. </code></pre> <p>I want to upgrade the kubeadm version. How can I fix this?</p>
<p>You should try to update <a href="https://en.wikipedia.org/wiki/Pretty_Good_Privacy" rel="noreferrer">PGP</a> keys from the keyserver provided by Canonical:</p> <pre><code>sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 6A030B21BA07F4FB Executing: /tmp/apt-key-gpghome.F7OvCVWiqu/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 6A030B21BA07F4FB gpg: key 6A030B21BA07F4FB: public key "Google Cloud Packages Automatic Signing Key &lt;[email protected]&gt;" imported gpg: Total number processed: 1 gpg: imported: 1 </code></pre> <p>next:</p> <pre><code>sudo apt update &amp;&amp; sudo apt upgrade </code></pre> <p>will do the rest of the job.</p> <p>If problem still exists, binaries also can be downloaded directly:</p> <pre><code>wget https://storage.googleapis.com/kubernetes-release/release/v1.10.1/bin/linux/amd64/kubeadm wget https://storage.googleapis.com/kubernetes-release/release/v1.10.1/bin/linux/amd64/kubectl wget https://storage.googleapis.com/kubernetes-release/release/v1.10.1/bin/linux/amd64/kubelet </code></pre>
<pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Looking at the <a href="https://github.com/kubernetes/charts/tree/master/stable/sonarqube" rel="noreferrer">Sonarqube helm chart</a></p> <p><strong>requirements.yaml</strong></p> <pre><code>dependencies: - name: sonarqube version: 0.5.0 repository: https://kubernetes-charts.storage.googleapis.com/ </code></pre> <p>Trying to install the latest version of the java plugin:</p> <p><strong>values.yaml</strong></p> <pre><code>plugins: install: - "http://central.maven.org/maven2/org/sonarsource/java/sonar-java-plugin/5.3.0.13828/sonar-java-plugin-5.3.0.13828.jar" </code></pre> <p>However, I am getting an error on the init container:</p> <pre><code>$ kubectl logs sonarqube-sonarqube-7b5dfd84cf-sglk5 -c install-plugins sh: /opt/sonarqube/extensions/plugins/install_plugins.sh: Permission denied </code></pre> <hr> <pre><code>$ kubectl describe po sonarqube-sonarqube-7b5dfd84cf-sglk5 Name: sonarqube-sonarqube-7b5dfd84cf-sglk5 Namespace: default Node: docker-for-desktop/192.168.65.3 Start Time: Thu, 19 Apr 2018 15:22:04 -0500 Labels: app=sonarqube pod-template-hash=3618984079 release=sonarqube Annotations: &lt;none&gt; Status: Pending IP: 10.1.0.250 Controlled By: ReplicaSet/sonarqube-sonarqube-7b5dfd84cf Init Containers: install-plugins: Container ID: docker://b090f52b95d36e03b8af86de5a6729cec8590807fe23e27689b01e5506604463 Image: joosthofman/wget:1.0 Image ID: docker-pullable://joosthofman/wget@sha256:74ef45d9683b66b158a0acaf0b0d22f3c2a6e006c3ca25edbc6cf69b6ace8294 Port: &lt;none&gt; Command: sh -c /opt/sonarqube/extensions/plugins/install_plugins.sh State: Waiting Reason: CrashLoopBackOff </code></pre> <p><strong>Is there a way to <code>exec</code> into the into the init container?</strong></p> <p>My attempt:</p> <pre><code>$ kubectl exec -it sonarqube-sonarqube-7b5dfd84cf-sglk5 -c install-plugins sh error: unable to upgrade connection: container not found ("install-plugins") </code></pre> <hr> <p><strong>Update</strong></p> <p>With @WarrenStrange's suggestion:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE sonarqube-postgresql-59975458c6-mtfjj 1/1 Running 0 11m sonarqube-sonarqube-685bd67b8c-nmj2t 1/1 Running 0 11m $ kubectl get pods sonarqube-sonarqube-685bd67b8c-nmj2t -o yaml ... initContainers: - command: - sh - -c - 'mkdir -p /opt/sonarqube/extensions/plugins/ &amp;&amp; cp /tmp/scripts/install_plugins.sh /opt/sonarqube/extensions/plugins/install_plugins.sh &amp;&amp; chmod 0775 /opt/sonarqube/extensions/plugins/install_plugins.sh &amp;&amp; /opt/sonarqube/extensions/plugins/install_plugins.sh ' image: joosthofman/wget:1.0 imagePullPolicy: IfNotPresent name: install-plugins resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /opt/sonarqube/extensions name: sonarqube subPath: extensions - mountPath: /tmp/scripts/ name: install-plugins - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-89d9n readOnly: true ... 
</code></pre> <p>Create a new pod manifest extracted from the init container manifest. Replace the command with <code>sleep 6000</code> and then exec in to run the commands by hand. This allows you to poke around.</p>
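<p>A throwaway debug Pod along those lines might look like the sketch below. It reuses the init container's image; the volume and claim wiring is omitted and would need to be copied from the generated manifest if you also want to inspect the mounts:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: install-plugins-debug        # throwaway name
spec:
  restartPolicy: Never
  containers:
    - name: debug
      image: joosthofman/wget:1.0    # same image as the init container
      command: ["sh", "-c", "sleep 6000"]
      # copy the volumeMounts (and the pod-level volumes) from the generated
      # manifest here if you need to look at the mounted paths as well
</code></pre> <p>Then <code>kubectl apply -f</code> the manifest and run <code>kubectl exec -it install-plugins-debug sh</code> to get a shell to poke around in.</p>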
<p>The issue is that the container does not exist (see the CrashLoopBackOff).</p> <p>One of the things that I do with init containers (assuming you have the source) is to put a sleep 600 on failure in the entrypoint. At least for debugging. This lets you exec into the container to poke around to see the cause of the failure.</p>
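<p>Applied to this chart, the idea would look roughly like the snippet below — an assumption about how you could wire it in for debugging, not something the chart ships with:</p> <pre><code>initContainers:
  - name: install-plugins
    image: joosthofman/wget:1.0
    command:
      - sh
      - -c
      # keep the container alive for 10 minutes if the script fails, so that
      # `kubectl exec -c install-plugins &lt;pod&gt; sh` has something to attach to
      - "/opt/sonarqube/extensions/plugins/install_plugins.sh || sleep 600"
</code></pre> <p>Once inside, running the script by hand usually reveals the real cause (in this case the missing execute permission).</p>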
<p>I have my deployment.yaml file within the templates directory of my Helm chart, with several environment variables for the container I will be running using Helm.</p> <p>Now I want to be able to pull the environment variables locally from whatever machine helm is run on, so I can hide the secrets that way.</p> <p>How do I pass this in and have helm grab the environment variables locally when I use Helm to run the application?</p> <p>Here is part of my deployment.yaml file:</p> <pre><code>... ... spec: restartPolicy: Always containers: - name: sample-app image: "sample-app:latest" imagePullPolicy: Always env: - name: "USERNAME" value: "app-username" - name: "PASSWORD" value: "28sin47dsk9ik" ... ... </code></pre> <p>How can I pull the value of USERNAME and PASSWORD from local environment variables when I run helm?</p> <p>Is this possible? If yes, then how do I do this?</p>
<p>You can <code>export</code> the variable and use it while running <code>helm install</code>.</p> <p>Before that, you have to modify your chart so that the value can be <code>set</code> during installation. </p> <p>Skip this part if you already know how to set up template fields.</p> <hr> <p>As you don't want to expose the data, it's better to have it saved as a Secret in Kubernetes. </p> <p>First of all, add these two lines to your <code>Values</code> file, so that these two values can be set from outside.</p> <pre><code>username: root password: password </code></pre> <p>Now, add a <code>secret.yaml</code> file inside your <code>template</code> folder and copy this code snippet into that file. </p> <pre><code>apiVersion: v1 kind: Secret metadata: name: {{ .Release.Name }}-auth data: password: {{ .Values.password | b64enc }} username: {{ .Values.username | b64enc }} </code></pre> <p>Now tweak your deployment yaml template and make changes in the <code>env</code> section, like this:</p> <pre><code>... ... spec: restartPolicy: Always containers: - name: sample-app image: "sample-app:latest" imagePullPolicy: Always env: - name: "USERNAME" valueFrom: secretKeyRef: key: username name: {{ .Release.Name }}-auth - name: "PASSWORD" valueFrom: secretKeyRef: key: password name: {{ .Release.Name }}-auth ... ... </code></pre> <hr> <p>If you have modified your template correctly for the <code>--set</code> flag, you can set this using an environment variable.</p> <pre><code>$ export USERNAME=root-user </code></pre> <p>Now use this variable while running helm install:</p> <pre><code>$ helm install --set username=$USERNAME ./mychart </code></pre> <p>If you run this <code>helm install</code> in <code>dry-run</code> mode, you can verify the changes:</p> <pre><code>$ helm install --dry-run --set username=$USERNAME --debug ./mychart [debug] Created tunnel using local port: '44937' [debug] SERVER: "127.0.0.1:44937" [debug] Original chart version: "" [debug] CHART PATH: /home/maruf/go/src/github.com/the-redback/kubernetes-yaml-drafts/helm-charts/mychart NAME: irreverant-meerkat REVISION: 1 RELEASED: Fri Apr 20 03:29:11 2018 CHART: mychart-0.1.0 USER-SUPPLIED VALUES: username: root-user COMPUTED VALUES: password: password username: root-user HOOKS: MANIFEST: --- # Source: mychart/templates/secret.yaml apiVersion: v1 kind: Secret metadata: name: irreverant-meerkat-auth data: password: password username: root-user --- # Source: mychart/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: irreverant-meerkat labels: app: irreverant-meerkat spec: replicas: 1 template: metadata: name: irreverant-meerkat labels: app: irreverant-meerkat spec: containers: - name: irreverant-meerkat image: alpine env: - name: "USERNAME" valueFrom: secretKeyRef: key: username name: irreverant-meerkat-auth - name: "PASSWORD" valueFrom: secretKeyRef: key: password name: irreverant-meerkat-auth imagePullPolicy: IfNotPresent restartPolicy: Always selector: matchLabels: app: irreverant-meerkat </code></pre> <p>You can see that the username data in the secret has changed to <code>root-user</code>.</p> <p>I have added <a href="https://github.com/the-redback/kubernetes-yaml-drafts/tree/master/helm-charts/mychart" rel="noreferrer">this example</a> to a GitHub repo.</p> <p>There is also some discussion in the <a href="https://github.com/kubernetes/helm" rel="noreferrer">kubernetes/helm</a> repo regarding this. 
You can see <a href="https://github.com/kubernetes/helm/issues/944" rel="noreferrer">this issue</a> to learn about all the other ways to use environment variables.</p>
<p>I am currently trying to deploy the following on Minikube. I used the configuration files below to use a hostPath as persistent storage on the minikube node.</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: "pv-volume" spec: capacity: storage: "20Gi" accessModes: - "ReadWriteOnce" hostPath: path: /data --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: "orientdb-pv-claim" spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "20Gi" --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: orientdbservice spec: #replicas: 1 template: metadata: name: orientdbservice labels: run: orientdbservice test: orientdbservice spec: containers: - name: orientdbservice image: orientdb:latest env: - name: ORIENTDB_ROOT_PASSWORD value: "rootpwd" ports: - containerPort: 2480 name: orientdb volumeMounts: - name: orientdb-config mountPath: /data/orientdb/config - name: orientdb-databases mountPath: /data/orientdb/databases - name: orientdb-backup mountPath: /data/orientdb/backup volumes: - name: orientdb-config persistentVolumeClaim: claimName: orientdb-pv-claim - name: orientdb-databases persistentVolumeClaim: claimName: orientdb-pv-claim - name: orientdb-backup persistentVolumeClaim: claimName: orientdb-pv-claim --- apiVersion: v1 kind: Service metadata: name: orientdbservice labels: run: orientdbservice spec: type: NodePort selector: run: orientdbservice ports: - protocol: TCP port: 2480 name: http </code></pre> <p>which results in the following:</p> <pre><code>#kubectl get pv NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pv-volume 20Gi RWO Retain Available 4h pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO Delete Bound default/orientdb-pv-claim standard 4h #kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE orientdb-pv-claim Bound pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO #kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE orientdbservice 10.0.0.16 &lt;nodes&gt; 2480:30552/TCP 4h #kubectl get pods NAME READY STATUS RESTARTS AGE orientdbservice-458328598-zsmw5 0/1 ContainerCreating 0 4h #kubectl describe pod orientdbservice-458328598-zsmw5 Events: FirstSeen LastSeen Count From SubObjectPath TypeReason Message --------- -------- ----- ---- ------------- -------- ------ ------- 4h 1m 37 kubelet, minikube Warning FailedMount Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases] 4h 1m 37 kubelet, minikube Warning FailedSync Error syncing pod </code></pre> <p>I see the following error:</p> <pre><code>Unable to mount volumes for pod,timeout expired waiting for volumes to attach/mount for pod </code></pre> <p>Is there something incorrect in the way I am creating the PersistentVolume and PersistentVolumeClaim on my node?</p> <pre><code>minikube version: v0.20.0 </code></pre> <p>I appreciate all the help.</p>
<p><strong>Your configuration is fine.</strong></p> <p>Tested under <code>minikube v0.24.0</code>, <code>minikube v0.25.0</code> and <code>minikube v0.26.1</code> without any problem.</p> <p>Bear in mind that <strong>minikube is under active development</strong>, and, especially if you're on Windows, it is, as they say, <strong>experimental software</strong>.</p> <p>Update to a newer version of minikube and redeploy it. This should solve the problem.</p> <p>You can check for updates with the <code>minikube update-check</code> command, which results in something like this:</p> <pre><code>$ minikube update-check CurrentVersion: v0.25.0 LatestVersion: v0.26.1 </code></pre> <p>To upgrade minikube, simply type <code>minikube delete</code>, which deletes your current minikube installation, and then download the new release as described.</p> <pre><code>$ minikube delete There is a newer version of minikube available (v0.26.1). Download it here: https://github.com/kubernetes/minikube/releases/tag/v0.26.1 To disable this notification, run the following: minikube config set WantUpdateNotification false Deleting local Kubernetes cluster... Machine deleted. </code></pre>
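<p>For completeness, a minimal sketch of the upgrade path after <code>minikube delete</code> (assuming a Linux/macOS host; the version number and download URL here are illustrative, so pick the latest release from the page linked above):</p> <pre><code># remove the old cluster and its state
minikube delete

# fetch a newer minikube binary (example version shown, adjust as needed)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.26.1/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

# recreate the cluster and re-apply your manifests
minikube start
kubectl apply -f orientdb.yaml
</code></pre>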
<p>I am using the ingress-nginx system <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a>. I'm making extensive use of this project. Jenkins, Consul, Prometheus and more are working just fine using the exact same ingress config as appended to the end.</p> <p>I am able to access my vault pod directly through port-forwarding with kubectl. But when I attempt to access it over my nginx-ingress, I am returned a 503</p> <pre><code>kubectl port-forward vault-vault-f9778f86d-srr9n 8200:8200 -n vault curl 127.0.0.1:8200/v1/1 {"errors":["Vault is sealed"]} ➜ vault curl -L vault.me.com/v1/1 &lt;html&gt; &lt;head&gt;&lt;title&gt;503 Service Temporarily Unavailable&lt;/title&gt;&lt;/head&gt; &lt;body bgcolor="white"&gt; &lt;center&gt;&lt;h1&gt;503 Service Temporarily Unavailable&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.13.8&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Looking at the logs I see the following in response to the vault.me.com curl</p> <pre><code>10.233.104.128 - [10.233.104.128] - - [19/Apr/2018:20:42:56 +0000] "GET / HTTP/1.1" 308 187 "-" "curl/7.43.0" 77 0.000 [] - - - - 10.233.104.128 - [10.233.104.128] - - [19/Apr/2018:20:42:56 +0000] "GET / HTTP/1.1" 503 213 "-" "curl/7.43.0" 77 0.000 [] - - - - </code></pre> <p>Where as if I try to access my consul backend, I see the following.</p> <pre><code>10.233.104.128 - [10.233.104.128] - - [19/Apr/2018:20:43:34 +0000] "GET / HTTP/1.1" 308 187 "-" "curl/7.43.0" 78 0.000 [consul-consul-consul-8500] - - - - 10.233.104.128 - [10.233.104.128] - - [19/Apr/2018:20:43:39 +0000] "GET / HTTP/1.1" 308 187 "-" "curl/7.43.0" 78 0.000 [consul-consul-consul-8500] - - - - 10.233.104.128 - [10.233.104.128] - - [19/Apr/2018:20:43:39 +0000] "GET / HTTP/1.1" 301 39 "-" "curl/7.43.0" 78 0.002 [consul-consul-consul-8500] 10.233.114.4:8500 39 0.002 301 10.233.104.128 - [10.233.104.128] - - [19/Apr/2018:20:43:39 +0000] "GET /ui/ HTTP/1.1" 200 30178 "-" "curl/7.43.0" 81 0.001 [consul-consul-consul-8500] 10.233.82.19:8500 30178 0.001 200 </code></pre> <p>I'm not entire sure whats going on, nor am I clear on how to debug this. I've spent the two hours reading through the source on this tool, but I'm not seeing much of anything.</p> <p>I read somewhere that any response that isn't 2xx or 3xx fails nginx's backend health check and will be removed from the pool of backends to be routed to. That would manifest as the 503 that I am seeing. Its true that vault's / does return a non 2xx or 3xx so that would make sense.</p> <pre><code>vault curl -v 127.0.0.1:8200 * Rebuilt URL to: 127.0.0.1:8200/ * Trying 127.0.0.1... 
* Connected to 127.0.0.1 (127.0.0.1) port 8200 (#0) &gt; GET / HTTP/1.1 &gt; Host: 127.0.0.1:8200 &gt; User-Agent: curl/7.43.0 &gt; Accept: */* &gt; &lt; HTTP/1.1 404 Not Found &lt; Cache-Control: no-store &lt; Content-Type: text/plain; charset=utf-8 &lt; X-Content-Type-Options: nosniff &lt; Date: Thu, 19 Apr 2018 20:46:40 GMT &lt; Content-Length: 19 &lt; 404 page not found </code></pre> <p><strong>Is there a way to change the URL that nginx tries to healthcheck, or disable the healthchecking entirely</strong></p> <p><a href="https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/" rel="nofollow noreferrer">https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/</a> From nginx's docs it says if there is a single server in a backend group, it should never be getting marked as unavailable, regardless of return code.</p> <p>Whats going on?</p> <blockquote> <p>Note that if there is only a single server in a group, the fail_timeout and max_fails parameters are ignored and the server is never marked unavailable.</p> </blockquote> <pre><code>k get svc -n vault NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vault-vault ClusterIP 10.233.47.151 &lt;none&gt; 8200/TCP 1h apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx-internal name: vault namespace: vault spec: rules: - host: vault.me.com http: paths: - backend: serviceName: vault-vault servicePort: 8200 path: / tls: - hosts: - me.com - vault.me.com secretName: wildcard-secret </code></pre>
<p>Unsealing the vault through the port-forward method changes the state of the Kubernetes pod-level health check, which is what ingress-nginx queries to determine whether a backend is valid.</p> <p>Once the vault is unsealed, you can access the backend as you expect.</p> <p>At the time of writing, the vault CLI does not appear to respect HTTP 308 redirects, which gives you a very unhelpful error from golang's parser library. I wasted several hours on this. I hope this information helps someone.</p> <p><a href="https://github.com/hashicorp/vault/issues/4401" rel="nofollow noreferrer">https://github.com/hashicorp/vault/issues/4401</a></p>
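<p>A minimal sketch of that unseal flow over a port-forward (the pod name is taken from the question; the key values and the <code>vault operator unseal</code> spelling are assumptions — older vault CLIs use <code>vault unseal</code> instead):</p> <pre><code># forward the vault port from the pod to localhost
kubectl port-forward vault-vault-f9778f86d-srr9n 8200:8200 -n vault &amp;

# point the CLI at the forwarded port and unseal
export VAULT_ADDR=http://127.0.0.1:8200
vault status                      # shows "Sealed: true"
vault operator unseal &lt;unseal-key-1&gt;
vault operator unseal &lt;unseal-key-2&gt;
vault operator unseal &lt;unseal-key-3&gt;
vault status                      # should now show "Sealed: false"
</code></pre> <p>Once the pod reports healthy again, ingress-nginx should start routing to it.</p>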
<p>I would like the containers in my pod to share a volume for temporary (cached) data. I don't mind if the data is lost when the pod terminates (in fact, I want the data deleted and space reclaimed).</p> <p>The <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="noreferrer">kubernetes docs</a> make an <code>emptyDir</code> sound like what I want:</p> <blockquote> <p>An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node</p> </blockquote> <p>.. and </p> <blockquote> <p>By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead</p> </blockquote> <p>That sounds like the default behaviour is to store the volume on disk, unless I explicitly request in-memory.</p> <p>However, if I create the following pod on my GKE cluster:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: alpine spec: containers: - name: alpine image: alpine:3.7 command: ["/bin/sh", "-c", "sleep 60m"] volumeMounts: - name: foo mountPath: /foo volumes: - name: foo emptyDir: {} </code></pre> <p>.. and then open a shell on the pod and write a 2Gb file to the volume:</p> <pre><code>kubectl exec -it alpine -- /bin/sh $ cd foo/ $ dd if=/dev/zero of=file.txt count=2048 bs=1048576 </code></pre> <p>Then I can see in the GKE web console that the RAM usage of the container has increased by 2Gb:</p> <p><a href="https://i.stack.imgur.com/bryZy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bryZy.png" alt="memory increase in the alpine contianer"></a></p> <p>It looks to me like the GKE stores <code>emptyDir</code> volumes in memory by default. The workload I plan to run needs plenty of memory, so I'd like the <code>emptyDir</code> volume to be backed by disk - is that possible? The <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/volumes" rel="noreferrer">GKE storage docs</a> don't have much to say on the issue.</p> <p>An alternative approach might be to use a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/local-ssd" rel="noreferrer">local SSD</a> for my cached data, however if I mount them as recommended in the GKE docs they're shared by all pods running on the same node and the data isn't cleaned up on pod termination, which doesn't meet my goals of automatically managed resources.</p> <h2>Mounts</h2> <p>Here's the output of <code>df -h</code> inside the container:</p> <pre><code># df -h Filesystem Size Used Available Use% Mounted on overlay 96.9G 26.2G 70.7G 27% / overlay 96.9G 26.2G 70.7G 27% / tmpfs 7.3G 0 7.3G 0% /dev tmpfs 7.3G 0 7.3G 0% /sys/fs/cgroup /dev/sda1 96.9G 26.2G 70.7G 27% /foo /dev/sda1 96.9G 26.2G 70.7G 27% /dev/termination-log /dev/sda1 96.9G 26.2G 70.7G 27% /etc/resolv.conf /dev/sda1 96.9G 26.2G 70.7G 27% /etc/hostname /dev/sda1 96.9G 26.2G 70.7G 27% /etc/hosts shm 64.0M 0 64.0M 0% /dev/shm tmpfs 7.3G 12.0K 7.3G 0% /run/secrets/kubernetes.io/serviceaccount tmpfs 7.3G 0 7.3G 0% /proc/kcore tmpfs 7.3G 0 7.3G 0% /proc/timer_list tmpfs 7.3G 0 7.3G 0% /proc/sched_debug tmpfs 7.3G 0 7.3G 0% /sys/firmware </code></pre> <h2>The View from the Node</h2> <p>I discovered it's possible to ssh into the node instance, and I was able to find the 2Gb file on the node filesystem:</p> <pre><code>root@gke-cluster-foo-pool-b-22bb9925-xs5p:/# find . 
-name file.txt ./var/lib/kubelet/pods/79ad1aa4-4441-11e8-af32-42010a980039/volumes/kubernetes.io~empty-dir/foo/file.txt </code></pre> <p>Now that I can see it is being written to the underlying filesystem, I'm wondering if maybe the RAM usage I'm seeing in the GKE web UI is the linux filesystem cache or similar, rather than the file being stored in a RAM disk?</p>
<p>From the mount information you've supplied, the <code>emptyDir</code> volume is mounted on a drive partition, so it's working as intended, and isn't mounted in memory. It's likely that the memory usage you see is due to the filesystem buffer cache, so with sufficient memory pressure, it'd eventually get written to the disk. However, given that you have so much free memory, it's likely that the system saw no need to do so immediately.</p> <p>If you have more doubts, give <code>sync</code> or <code>echo 3 &gt; /proc/sys/vm/drop_caches</code> a go on the machines to flush filesystem information to disk. You should see a change in memory usage.</p>
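<p>If you want to confirm this empirically, one approach (a sketch — the node name below is the one from your own <code>find</code> session; use whatever <code>kubectl get pods -o wide</code> reports) is to flush the page cache on the node and watch the container's reported memory drop while the file stays on disk:</p> <pre><code># find which node the pod landed on
kubectl get pod alpine -o wide

# ssh to that node and drop the filesystem caches
gcloud compute ssh gke-cluster-foo-pool-b-22bb9925-xs5p
sudo sync
sudo sh -c 'echo 3 &gt; /proc/sys/vm/drop_caches'

# the 2Gb file is still there, but the cached memory is released
ls -lh /var/lib/kubelet/pods/*/volumes/kubernetes.io~empty-dir/foo/
free -m
</code></pre>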
<p>We've currently got a production application using Kubernetes on AWS. Everything's working very well, except I think we've made a configuration mistake. </p> <p>We expose different services from within the cluster on domain names, and we're now up to about 5 different services. Kubernetes' standard way to expose these services is through load balancers, but in our config we've created 6 load balancers. As you can imagine, running that many load balancers incurs substantial cost overheads. </p> <p>Is there any way to configure a single load balancer to route to Kubernetes targets based on domain names? So we can have one domain pointing at an ELB and have that route to the correct services internally? </p>
<p>You can use an Ingress controller. Ingress will set up a single AWS load balancer and can be used to expose many services. If your services are all HTTP based, it should work quite well. For more information about Ingress you can have a look at the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes docs</a> or at the default <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx based implementation</a>. If needed, there are also other implementations, for example ones based on the Envoy proxy.</p>
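<p>As an illustration, a host-based Ingress might look roughly like the sketch below (the host names, service names and ports are placeholders for your own five services; the exact annotations depend on which ingress controller you deploy):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: public-services
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 80
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api-service
          servicePort: 80
</code></pre> <p>With this in place, only the ingress controller itself needs a LoadBalancer Service (one ELB), and you point a wildcard or per-host DNS record at it.</p>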
<p>While running <code>minikube start</code> on Windows 10, I am getting the following error.</p> <blockquote> <p>Error : Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition</p> </blockquote> <p>Please help me resolve this issue; I have deleted minikube and restarted it many times.</p>
<p>I faced the same issue on Centos 7</p> <p><code>minikube delete</code></p> <p>then </p> <p><code>minikube start</code></p> <p>solved my issue</p>
<p>I was trying to build with Kubernetes, and I wanted to use a local image which I had pulled earlier to save time (so it does not have to pull an image every time a container is created). The problem is that Kubernetes just ignored my local images.</p> <p>For example, when I run <code>docker images</code>, I get this in the list </p> <pre><code>gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1 f8363dbf447b 7 months ago 52.36 MB </code></pre> <p>But when I tried to build a deployment with this config (just a part of the config file) </p> <pre><code>image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1 imagePullPolicy: Never </code></pre> <p>it said <code>Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1" is not present with pull policy of Never</code></p> <p>I wonder if there is anything I missed. Thanks for the help!</p>
<p>If you are using a Kubernetes cluster, you have to make sure the Docker image you need is available on all nodes, as you don't really know which one the Pod will get assigned to.</p> <p>So: pull/build the needed image on each cluster node.</p>
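<p>A rough sketch of pre-loading a locally built image onto every node (the node names and image tag are placeholders; on a single-node setup such as minikube you can instead build directly against the cluster's Docker daemon with <code>eval $(minikube docker-env)</code>):</p> <pre><code># export the image from your build machine
docker save myregistry/myapp:1.0 -o myapp.tar

# copy and load it on each node
for node in node1 node2 node3; do
  scp myapp.tar $node:/tmp/
  ssh $node 'docker load -i /tmp/myapp.tar'
done
</code></pre> <p>Keep <code>imagePullPolicy: Never</code> (or <code>IfNotPresent</code>) so the kubelet uses the loaded image instead of contacting a registry.</p>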
<p>I have an application which will be deployed in an OpenShift/Kubernetes cluster as a pod. I know this is against the principles of Kubernetes, but this pod should only be run once (so there shall not be parallel processing). There may be a second pod running for the case the first one crashes, to immediately take over. My question now is: how would you implement a "lock" and ensure that this lock is released when the container crashes?</p> <p>My first idea is to write a "locked" attribute to the database being used. As long as the attribute is set, the second pod won't do anything. Once the processing pod crashes, it should release this lock. But how to release it when the application has already crashed?</p> <p>Thanks for your ideas in advance!</p>
<p>This has to be done by the application, because Kubernetes deliberately does not provide these application-tier primitives out of the box. There are ways to orchestrate things such that Kubernetes will generally keep only one pod running, but the guarantees offered by that orchestration are limited. </p> <p>Using a durable datastore to coordinate ownership of a responsibility at the application tier is a good idea, and using a "locked" attribute or similar is also fine. The typical way the problem of ensuring "liveness" is handled is by having the responsible application periodically update a last-update timestamp in that datastore. Then the secondary application can keep tabs on the interval since the last update. </p> <p>In this kind of solution there also has to be a way of taking ownership of the responsibility, which can be done by having the "locked" attribute be an application instance ID. </p> <p>So, at periodic intervals, the responsible instance of the coordinated application does a SELECT FOR UPDATE to atomically update the timestamp for their application instance ID. The update only succeeds if the application still owns the responsibility. </p> <p>The backup instance periodically checks the last-updated timestamp. If the interval since the last update exceeds the timeout, then the backup instance attempts to do an atomic update to change the locked attribute to their application instance ID- again, only if the last-update timestamp is too old. </p> <p>One has to be a little careful about race conditions and use transactions and the datastore's atomicity appropriately. Also, when work fails or gets interrupted, there has to be a way to appropriately retry or rollback. </p> <p>But for many cases this kind of simple solution is fine. Hope that helps.</p>
<p>I'm currently deploying a K8S cluster through Rancher RKE using AWS EC2 virtual machines (with CentOS 7 and Docker 17.03.2-ce). Unfortunately after depolying K8S dashboard, I'm not been able to access it from external, through API SERVER (<a href="https://API-server-ip:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">https://API-server-ip:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a>). Service are up and running without problems:</p> <pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.43.0.1 &lt;none&gt; 443/TCP 1h ingress-nginx default-http-backend ClusterIP 10.43.76.101 &lt;none&gt; 80/TCP 1h kube-system kube-dns ClusterIP 10.43.0.10 &lt;none&gt; 53/UDP,53/TCP 1h kube-system kubernetes-dashboard ClusterIP 10.43.198.196 &lt;none&gt; 443/TCP 1h </code></pre> <p>I saw that PEM certificate have been already created within /etc/kubernetes/ssl of the API SERVER machine:</p> <pre><code>-rw-r--r--. 1 root root 1679 Apr 19 09:19 kube-apiserver-key.pem -rw-r--r--. 1 root root 1302 Apr 19 09:19 kube-apiserver.pem -rw-r--r--. 1 root root 1679 Apr 19 09:19 kube-ca-key.pem -rw-r--r--. 1 root root 1017 Apr 19 09:19 kube-ca.pem -rw-r--r--. 1 root root 493 Apr 19 09:19 kubecfg-kube-controller-manager.yaml -rw-r--r--. 1 root root 437 Apr 19 09:19 kubecfg-kube-node.yaml -rw-r--r--. 1 root root 441 Apr 19 09:19 kubecfg-kube-proxy.yaml -rw-r--r--. 1 root root 457 Apr 19 09:19 kubecfg-kube-scheduler.yaml -rw-r--r--. 1 root root 1675 Apr 19 09:19 kube-controller-manager-key.pem -rw-r--r--. 1 root root 1062 Apr 19 09:19 kube-controller-manager.pem -rw-r--r--. 1 root root 1679 Apr 19 09:19 kube-etcd-&lt;...&gt;-compute-amazonaws-com-key.pem -rw-r--r--. 1 root root 1298 Apr 19 09:19 kube-etcd-&lt;...&gt;-us-east-2-compute-amazonaws-com.pem -rw-r--r--. 1 root root 1679 Apr 19 09:19 kube-node-key.pem -rw-r--r--. 1 root root 1070 Apr 19 09:19 kube-node.pem -rw-r--r--. 1 root root 1675 Apr 19 09:19 kube-proxy-key.pem -rw-r--r--. 1 root root 1046 Apr 19 09:19 kube-proxy.pem -rw-r--r--. 1 root root 1675 Apr 19 09:19 kube-scheduler-key.pem -rw-r--r--. 1 root root 1050 Apr 19 09:19 kube-scheduler.pem </code></pre> <p>I tried to use <code>kube-apiserver-key.pem</code> as key to generate a client certificate <code>openssl req -new -key /etc/kubernetes/ssl/kube-apiserver-key.pem -out /tmp/user-cert.pem</code> and eventually use it to access. Unfortunately the generated certificate is resulted to be in invalid format (I tried both to install on MacOS X and on SSL online validator.</p> <p>Any help?</p>
<p>After some digging I managed to find a solution.</p> <p>In the kubeconfig file generated by RKE, both client-certificate-data and client-key-data are present as base64 encoded values for kube-admin.</p> <p>In order to use them in my client browser, I first had to decode them to obtain the respective certificate and key</p> <pre><code>echo '&lt;KUBE_ADMIN_CLIENT_CERTIFICATE_DATA&gt;' | base64 --decode &gt; kube-admin-cert.pem echo '&lt;KUBE_ADMIN_CLIENT_KEY_DATA&gt;' | base64 --decode &gt; kube-admin-cert-key.pem </code></pre> <p>Once the certificate and key have been extracted, it's possible to generate the corresponding .p12 certificate file</p> <pre><code>openssl pkcs12 -export -clcerts -inkey kube-admin-cert-key.pem -in kube-admin-cert.pem -out kube-admin-cert.p12 </code></pre> <p>Finally, once the p12 certificate has been installed in the local client browser, it's possible to authenticate successfully against the API server proxy.</p>
<p>I have a multi container pod deployment that exposes port 8080 the port inside the container is accessible through localhost but not the pod IP when I telnet on the pod local host I'm able to connect but when I telnet on the pod IP that's in /etc/hosts I get connection refused.</p> <p>deployment.yaml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: test namespace: yara labels: component: test-multi-container-pod spec: replicas: 1 template: metadata: labels: app: test spec: serviceAccountName: test containers: - name: container-1 image: "gcr.io/projectID/my-image1:v1.9.3" pullPolicy: "IfNotPresent" resources: limits: cpu: 1000m memory: 2Gi requests: cpu: 500m memory: 2Gi - name: container2 image: "gcr.io/projectID/my-image2:0.0.107" pullPolicy: "IfNotPresent" securityContext: runAsUser: 0 resources: limits: cpu: 1000m memory: 2Gi requests: cpu: 500m memory: 2Gi - name: "app-container" ## nodejs image that exposes ports 3000 &amp; 8080 image: "gcr.io/projectID/node:8.9.4_1804082101" workingDir: "/usr/src/app" pullPolicy: "Always" command: ["tail", "-f", "/dev/null"] ports: - name: http containerPort: 3000 - name: graphql containerPort: 8080 resources: limits: cpu: 1500m memory: 2Gi requests: cpu: 1500m memory: 2Gi </code></pre> <p>service.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: test-app namespace: "yara" labels: component: test-multi-container-pod spec: type: NodePort ports: - protocol: TCP name: http port: 3000 targetPort: http - protocol: TCP name: graphql port: 8080 targetPort: graphql selector: component: test-multi-container-pod </code></pre>
<p>The <code>command</code> option in Pod Spec overrides <code>Entrypoint</code> option in Docker Container, that's why you actually run tail instead of your application</p> <pre><code> - name: "app-container" ... command: ["tail", "-f", "/dev/null"] </code></pre> <p>According to the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">documentation</a>, <code>command</code> in kubernetes overrides docker containers <code>entrypoint</code> with the following rules:</p> <ul> <li>If you do not supply command or args for a Container, the defaults defined in the Docker image are used.</li> <li>If you supply a command but no args for a Container, only the supplied command is used. The default EntryPoint and the default Cmd defined in the Docker image are ignored.</li> <li>If you supply only args for a Container, the default Entrypoint defined in the Docker image is run with the args that you supplied.</li> <li>If you supply a command and args, the default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args.</li> </ul> <p>All containers in a Pod share the same network namespace. It looks similar, as if processes from containers in the Pod would run on the same host and be able to bind only to ports which aren't occupied by other processes in the same Pod. Practically, if you configure two containers that use the same port binding, one of them fails to start with error: "[emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)". </p> <p>If you need that particular pod container process to be found and accessed by other Pods and Services, you can describe it with <code>port:</code> directive in <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core" rel="nofollow noreferrer">Pod Spec</a>. It gives the system additional information about the network connections a container uses, but is primarily informational. <strong><em>Not specifying a port in Pod Spec does not prevent that port from being exposed.</em></strong> Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network by the Pod address and from other containers in the pod via <code>localhost</code>.</p> <p>So, the response you've received from localhost:8080 could be delivered from another container in the pod which binds to that port. </p> <p>You can find a good explanation of the Pod networking in this <a href="https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727" rel="nofollow noreferrer">article</a>. </p>
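<p>Concretely, a minimal fix for the <code>app-container</code> entry would be to drop the <code>tail</code> override, or replace it with the command that actually starts your Node.js server (the <code>npm start</code> shown here is an assumption about how your image is meant to be started):</p> <pre><code>      - name: "app-container"
        image: "gcr.io/projectID/node:8.9.4_1804082101"
        workingDir: "/usr/src/app"
        # either remove `command:` entirely so the image's own
        # ENTRYPOINT/CMD runs, or start the app explicitly:
        command: ["npm", "start"]
        ports:
        - name: http
          containerPort: 3000
        - name: graphql
          containerPort: 8080
</code></pre> <p>Once a process is actually listening on 0.0.0.0:8080 inside the pod, it will be reachable via the pod IP and the Service, not only via localhost.</p>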
<p>I have a simple k8s installation with a few nodes and ceph (kubernetes.io/rbd) as the storage class. I have a deployment with a single pod which uses a persistent volume from a persistent volume claim (ReadWriteOnce) of this storage class.</p> <p>The node with this pod has failed (NotReady in <code>get nodes</code> output for a long time, and it's physically dead).</p> <p>K8s could not create a new pod for my deployment because of 'Multi-Attach error for volume "pvc-..." Volume is already exclusively attached to one node and can't be attached to another'.</p> <p>I see that the pv is bound to the failed node: "Status: Bound".</p> <p>How can I force kubernetes to forget about the old pod to allow a new pod to bind to the persistent volume?</p>
<p>It is a complex problem.</p> <p>The Kubelet daemon, which manages Volume mounts, has to report the new status of the volume so that the Scheduler can spawn a Pod on another node. </p> <p>But your node has the 'NotReady' status, which means Kubernetes cannot communicate with the Kubelet to check the current status of its Volumes. In Kubernetes, the status of the Volume is the last one that was received - "Bound." It is not possible to reset that status without changing the state of the node.</p> <p>I see only 2 workarounds here:</p> <ol> <li>Use the PVC in <code>ReadWriteMany</code> mode instead of <code>ReadWriteOnce</code>. CephFS can work in that mode, but RBD can't. That mode allows Kubernetes to claim the same volume on several nodes at the same time.</li> <li>Delete the failed node from the cluster. That will remove all objects linked to the node, and the Scheduler will be able to claim your Volume again.</li> </ol>
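<p>For the second workaround, a minimal sketch of the commands involved (node and pod names are placeholders; forcing deletion is reasonable here only because the node is confirmed physically dead):</p> <pre><code># remove the dead node and everything bound to it
kubectl delete node &lt;failed-node-name&gt;

# if the old pod is stuck in Terminating/Unknown, force it away
kubectl delete pod &lt;old-pod-name&gt; --grace-period=0 --force

# the replacement pod should now be able to attach the RBD volume
kubectl get pods -w
</code></pre>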
<p>I'm in the process of trying to convert a Docker Cloud based setup to k8s.</p> <p>I've found the k8s kompose tool which can convert a docker compose file into a k8s file. Based on my results I'm wondering if it is possible to define a full stack in a single file.</p> <p>I'm running <code>kompose convert</code> but it ends up creating several different files:</p> <p><a href="https://i.stack.imgur.com/eIZMw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIZMw.png" alt="enter image description here"></a></p> <p>Next I'm wanting to upload my stack to k8s via the UI:</p> <p><a href="https://i.stack.imgur.com/vPiRw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vPiRw.png" alt="enter image description here"></a></p> <p>Is the only way, to upload each of the files that was generated separately? Or is it possible to create a single file to represent the stack like with compose?</p>
<p>I'm not sure about the UI, but via the command line Kompose gives you the ability to "up" them all as you are used to with docker-compose. I believe Kompose also has support for Helm charts, which might be your easiest route for utilizing all the files in an easy-ish way.</p>
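<p>For reference, a few kompose invocations that may help here (sketched from memory — check <code>kompose convert --help</code> for the flags your version actually supports):</p> <pre><code># deploy straight from the compose file to the current cluster context
kompose up

# write all generated manifests into a single file instead of many
kompose convert -o stack.yaml
kubectl apply -f stack.yaml

# or generate a Helm chart from the compose file
kompose convert -c
</code></pre>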
<p>I have created two node pools. A small one for all the google system jobs and a bigger one for my tasks. The bigger one should reduce its size to 0 after the job is done.</p> <blockquote> <p><strong>The problem is:</strong> Even if there are no cron jobs, the node pool do not reduce his size to 0.</p> </blockquote> <p><strong>Creating cluster:</strong></p> <pre><code>gcloud beta container --project "projectXY" clusters create "cluster" --zone "europe-west3-a" --username "admin" --cluster-version "1.9.6-gke.0" --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" --scopes "https://www.googleapis.com/auth/cloud-platform" --num-nodes "1" --network "default" --enable-cloud-logging --enable-cloud-monitoring --subnetwork "default" --enable-autoscaling --enable-autoupgrade --min-nodes "1" --max-nodes "1" </code></pre> <p><strong>Creating node pool:</strong></p> <p>The node pool should reduce its size to 0 after all tasks are done.</p> <pre><code>gcloud container node-pools create workerpool --cluster=cluster --machine-type="n1-highmem-8", -m "n1-highmem-8" --zone=europe-west3-a, -z europe-west3-a --disk-size=100 --enable-autoupgrade --num-nodes=0 --enable-autoscaling --max-nodes=2 --min-nodes=0 </code></pre> <p><strong>Create cron job:</strong></p> <pre><code>kubectl create -f cronjob.yaml </code></pre>
<p>Quoting from Google <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">Documentation</a>:</p> <blockquote> <p>&quot;Note: Beginning with Kubernetes version 1.7, you can specify a minimum size of zero for your node pool. This allows your node pool to scale down completely if the instances within aren't required to run your workloads. However, while a node pool can scale to a zero size, the overall cluster size does not scale down to zero nodes (as at least one node is always required to run system Pods).&quot;</p> </blockquote> <p>Notice also that:</p> <blockquote> <p>&quot;Cluster autoscaler also measures the usage of each node against the node pool's total demand for capacity. If a node has had no new Pods scheduled on it for a set period of time, and <strong>[this option does not work for you since it is the last node]</strong> all Pods running on that node can be scheduled onto other nodes in the pool, the autoscaler moves the Pods and deletes the node.</p> <p>Note that cluster autoscaler works based on Pod resource requests, that is, how many resources your Pods have requested. Cluster autoscaler does not take into account the resources your Pods are actively using. Essentially, cluster autoscaler trusts that the Pod resource requests you've provided are accurate and schedules Pods on nodes based on that assumption.&quot;</p> </blockquote> <p><strong>Therefore I would check:</strong></p> <ul> <li>that the version of your Kubernetes cluster is at least 1.7</li> <li>that there are no pods running on the last node (check every namespace; the pods that have to run on every node do not count: fluentd, kube-dns, kube-proxy), the fact that there are no cronjobs is not enough</li> <li>that the autoscaler is <strong>NOT</strong> enabled on the corresponding managed instance groups, since they are different tools</li> <li>that there are no pods stuck in any weird state still assigned to that node</li> <li>that there are no pods waiting to be scheduled in the cluster</li> </ul> <p>If everything above checks out, it is likely an issue with the autoscaler, and you can open a <a href="https://issuetracker.google.com/issues/new?component=187164" rel="nofollow noreferrer">private issue</a> with Google specifying your project ID, since there is not much the community can do.</p> <p>If you are interested, place the link to the issue tracker entry in the comments and I will take a look at your project (I work for Google Cloud Platform Support).</p>
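<p>A few commands that may help with the checklist above (the node and instance group names are placeholders; <code>--field-selector</code> requires a reasonably recent kubectl):</p> <pre><code># cluster version (needs to be &gt;= 1.7)
kubectl version --short

# everything still scheduled on the worker-pool node, across all namespaces
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=&lt;node-name&gt;

# pods stuck Pending anywhere in the cluster
kubectl get pods --all-namespaces --field-selector status.phase=Pending

# make sure GCE-level autoscaling is off on the pool's managed instance group
gcloud compute instance-groups managed describe &lt;mig-name&gt; --zone europe-west3-a
</code></pre>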
<p>I'm trying to get this go-micro greeter example working on Kubernetes <a href="https://github.com/micro/examples/tree/master/greeter" rel="nofollow noreferrer">https://github.com/micro/examples/tree/master/greeter</a></p> <p>I can run this locally in docker fine. However when I attempt to access the greeter api service via Kubernetes (http://{{external-ip}}/greeter/say/hello), I get the error: {"id":"go.micro.api","code":500,"detail":"not found","status":"Internal Server Error"}</p> <p>For the sake of troubleshooting I've simplified the scenario, I simply want to be able to make a call via the micro api to a go-micro api service. Below is my setup:</p> <p>api.go</p> <pre><code>package main import ( "github.com/micro/go-micro" api "github.com/micro/micro/api/proto" "log" k8s "github.com/micro/kubernetes/go/micro" "context" ) type Say struct { } //I just want to access this via the micro api on k8s via services external ip func (s *Say) Hello(ctx context.Context, req *api.Request, rsp *api.Response) error { rsp.StatusCode = 200 rsp.Body = "Hello" return nil } func main() { service := k8s.NewService( micro.Name("default.greeter-api"), ) service.Init() service.Server().Handle( service.Server().NewHandler( &amp;Say{}, ), ) if err := service.Run(); err != nil { log.Fatal(err) } } </code></pre> <p>micro-api-deployment.yml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: micro spec: replicas: 1 selector: matchLabels: app: micro template: metadata: labels: app: micro spec: containers: - name: micro image: microhq/micro:kubernetes args: - "api" - "--handler=rpc" - "--namespace=default" env: - name: MICRO_API_ADDRESS value: ":80" ports: - containerPort: 80 name: api-port </code></pre> <p>micro-api-svc.yml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: micro spec: type: LoadBalancer ports: - name: api-http port: 80 targetPort: "api-port" protocol: TCP selector: app: micro </code></pre> <p>greet-deployment.yml</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: greeter-api spec: replicas: 1 selector: matchLabels: app: greeter-api template: metadata: labels: app: greeter-api spec: containers: - name: greeter-api-service image: greeter-api:latest imagePullPolicy: Always command: [ "./greet", "--selector=static", "--server_address=:8080", ] ports: - containerPort: 8080 name: greet-port </code></pre> <p>greet-svc.yml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: greet labels: app: greet spec: ports: - port: 8080 protocol: TCP selector: app: greet </code></pre>
<p>Everything is fine with your configs.</p> <blockquote> <p>http://{{external-ip}}/greeter/say/hello), I get the error: {"id":"go.micro.api","code":500,"detail":"not found","status":"Internal Server Error"}</p> </blockquote> <p>You just missed the port number 8080 in your request, so you were calling the rpc service instead of <code>greeter-api</code>.</p>
<p>I created a pod with an api and web docker container in kuberneters using a yml file (see below).</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test labels: purpose: test spec: containers: - name: api image: gcr.io/test-1/api:latest ports: - containerPort: 8085 name: http protocol: TCP - name: web image: gcr.io/test-1/web:latest ports: - containerPort: 5000 name: http protocol: TCP </code></pre> <p>It show my pod is up and running</p> <pre><code>NAME READY STATUS RESTARTS AGE test 2/2 Running 0 5m </code></pre> <p>but I don't know how to expose it from here.</p> <p>it seems odd I would have to run kubectl run .... again as the pod is already running. It does not show a deployment though. </p> <p>if I try something like </p> <pre><code>kubectl expose deployment test --type="NodePort"--port 80 --target-port 5000 </code></pre> <p>it complains about deployments.extensions "test' not found. What is the cleanest way to deploy from here?</p>
<p>To expose a deployment to the public internet, you will want to use a Service. The service type LoadBalancer handles this nicely, as you can just use pod selectors in the yaml file.</p> <p>So if my deployment.yaml looks like this:</p> <pre><code>kind: Deployment apiVersion: apps/v1beta2 metadata: name: test-dply spec: selector: # Defines the selector that can be matched by a service for this deployment matchLabels: app: test_pod template: metadata: labels: # Puts the label on the pod, this must match the matchLabels selector app: test_pod spec: # Our containers for training each model containers: - name: mycontainer image: myimage imagePullPolicy: Always command: ["/bin/bash"] ports: - name: containerport containerPort: 8085 </code></pre> <p>Then the service that would link to it is:</p> <pre><code>kind: Service apiVersion: v1 metadata: # Name of our service name: prodigy-service spec: # LoadBalancer type to allow external access to multiple ports type: LoadBalancer selector: # Will deliver external traffic to the pod holding each of our containers app: test_pod ports: - name: sentiment protocol: TCP port: 80 targetPort: containerport </code></pre> <p>You can deploy these two items by using <code>kubectl create -f /path/to/dply.yaml</code> and <code>kubectl create -f /path/to/svc.yaml</code>. Quick note: The service will allocate a public IP address, which you can find using <code>kubectl get services</code> with the following output:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE carbon-relay ClusterIP *.*.*.* &lt;none&gt; 2003/TCP 78d comparison-api LoadBalancer *.*.*.* *.*.*.* 80:30920/TCP 15d </code></pre> <p>It can take several minutes to allocate the ip, just a forewarning. But the LoadBalancer's ip is fixed, and you can delete the pod that it points to and re-spin it without consequence. So if I want to edit my test.dply, I can without worrying about my service being impacted. You should rarely have to spin down services</p>
<p>On NixOS is is easy to set up Kubernetes by a single line of config:</p> <pre><code>services.kubernetes.roles = ["master" "node"]; </code></pre> <p>This installs both the master and node components on the local system and therefore creates a nice little working local kubernetes "cluster".</p> <p>If I want to set up a "real" cluster I need to install it over multiple hosts, but I'm not sure about the intended way to connect them.</p> <p>If I install only the master components on one host and only the node components on another node, how do I tell the node where to find its master?</p> <p>There are quite a few <a href="https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/cluster/kubernetes/default.nix" rel="noreferrer">configuration options</a>, but I'm not sure how to use them correctly. Is anyone aware of some example setup? </p>
<p>Have a look at <a href="https://youtu.be/XgZWbrBLP4I?t=29m16s" rel="nofollow noreferrer">the latter part</a> of <a href="https://stackoverflow.com/users/1612283/offlinehacker">Jaka Hudoklin/offlinehacker</a>'s NixCon '15 presentation about Kubernetes on NixOS at GateHub. It has an example configuration that configures docker to use a bridge interface. You can then use openvswitch to link the networks together.</p>
<p>I am seeing the following error in the container when deployed to the kubernetes. I have the line </p> <pre><code>spec: containers: - name: mysql image: mysql/mysql-server:latest args: - "--ignore-db-dir=lost+found" </code></pre> <p>in the config, do I need to include anything else</p> <pre><code>[Entrypoint] MySQL Docker Image 8.0.11-1.1.5 [Entrypoint] Starting MySQL 8.0.11-1.1.5 2018-04-20T15:20:54.975756Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.11) starting as process 1 mysqld: Table 'mysql.plugin' doesn't exist 2018-04-20T15:20:55.415098Z 0 [ERROR] [MY-010735] [Server] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 2018-04-20T15:20:55.495353Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened. 2018-04-20T15:20:55.499087Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed. 2018-04-20T15:20:55.505667Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables 2018-04-20T15:20:55.505905Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-001146 - Table 'mysql.component' doesn't exist 2018-04-20T15:20:55.505932Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-003543 - The mysql.component table is missing or has an incorrect definition. 2018-04-20T15:20:55.506040Z 0 [ERROR] [MY-011071] [Server] unknown variable 'ignore-db-dir=lost+found' 2018-04-20T15:20:55.506057Z 0 [Warning] [MY-010952] [Server] The privilege system failed to initialize correctly. If you have upgraded your server, make sure you're executing mysql_upgrade to correct the issue. 2018-04-20T15:20:55.506137Z 0 [ERROR] [MY-010119] [Server] Aborting 2018-04-20T15:20:57.198050Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.11) MySQL Community Server - GPL.nterval of mirrorId has elapsed or updates are forced -&gt; [Help 1] </code></pre>
<p>This specific error is because <code>--ignore-db-dir</code> is not specified correctly. <code>=</code> should not be used.</p> <p>From your logs, the following can be seen</p> <blockquote> <p>[Server] unknown variable 'ignore-db-dir=lost+found'</p> </blockquote> <p>Since this fails, the ignore database directory option is not handled correctly. </p> <p>Ideally, the additional options that I mentioned below should also be part of the yaml file. Parameters like root password should be set.</p> <pre><code> args: - "--ignore-db-dir" - "lost+found" env: - name: MYSQL_ROOT_PASSWORD # change this value: yourpassword ports: - containerPort: 3306 name: mysql volumeMounts: # name must match the volume name below - name: mysql-persistent-storage # mount path within the container mountPath: /var/lib/mysql </code></pre>
<p>We currently have around 20 jobs. These jobs create one pod each, but we want to make sure that only one of these pods can run at a time, keeping the rest of them in pending status. Increasing the resource limits makes them run one by one, but I want to be sure that this is always the behaviour.</p> <p>Is there any way of limiting this concurrency to 1, let's say per label or something similar?</p>
<p>Use <a href="https://kubernetes.io/docs/tasks/administer-cluster/quota-pod-namespace/" rel="nofollow noreferrer">ResourceQuota</a> resource:</p> <pre><code>apiVersion: v1 kind: ResourceQuota metadata: name: pod-demo spec: hard: pods: "5" </code></pre>
<p>I have added a new CRD <code>ApiGateway</code> to Kubernetes and I want to watch for new/changed resources of it.</p> <p>This works with a simple Rest Client as shown in the example below. </p> <p>But I´d like to watch for these resources with <code>k8s.io/client-go/kubernetes</code>. </p> <p>While it is simple to get the standard resources like in the client-go example below , I don´t get anything working for CRDs. Is it possible to get that done with client-go?</p> <p><strong>client-go example for standard resources</strong></p> <pre><code>import ( .... "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" ) func handleNewServices(clientset *kubernetes.Clientset) { for { serviceStreamWatcher, err := clientset.CoreV1().Services("").Watch(metav1.ListOptions{}) if err != nil { panic(err.Error()) } //fmt.Printf("%T\n", serviceStreamWatcher) for { select { case event := &lt;-serviceStreamWatcher.ResultChan(): service := event.Object.(*v1.Service) for key, value := range service.Labels { fmt.Printf("Key, VAlue: %s %s\n", key, value) } ... </code></pre> <p><strong>RestClient (working fine)</strong></p> <pre><code>package main import ( "net/http" .... ) func main() { for { // Url "cw.com" must match the config spec.group in api-gateway-crd.yaml // URL "apigateways" must match the config spec.names.plural in api-gateway-crd.yaml resp, err := http.Get("http://localhost:8001/apis/cw.com/v1/apigateways?watch=true") if err != nil { panic(err) } defer resp.Body.Close() decoder := json.NewDecoder(resp.Body) for { var event v1.ApiGatewayWatchEvent if err := decoder.Decode(&amp;event); err == io.EOF { break } else if err != nil { log.Fatal(err) } log.Printf("Received watch event: %s: %s: \n", event.Type, event.Object.Metadata.Name) } } } </code></pre> <p><strong>CRD</strong></p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: apigateways.cw.com spec: scope: Namespaced group: cw.com version: v1 names: kind: ApiGateway singular: apigateway plural: apigateways </code></pre>
<p>If you think about it, <code>client-go</code> knows about resources like <code>deployments</code>, <code>services</code>, <code>pods</code> etc. But it doesn't recognize your CRD <code>ApiGateway</code>.</p> <p>So, <code>client-go</code> can't be used as a client for your custom-made resources (wait for it), unless you make them recognizable to <code>client-go</code>!</p> <p><strong>How?!</strong></p> <p>You have to generate your own client for the CRDs. Kubernetes already has the tools to auto-generate the clients; all you need to do is specify the <code>structs</code> of the <code>API</code>. This is known as <code>code-generation</code>.</p> <p>Here is a <a href="https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/" rel="noreferrer">blog post about code generation</a> by <a href="https://github.com/sttts" rel="noreferrer">STEFAN SCHIMANSKI</a> (who is one of the top contributors to kubernetes).</p> <p><strong>Example Controller</strong></p> <p>Here is the <a href="https://github.com/kubernetes/sample-controller" rel="noreferrer">sample-controller</a> example given by kubernetes itself. The <code>pkg</code> folder contains all the <code>APIs</code> and the <code>Client</code>. The <code>main.go</code> and <code>controller.go</code> contain the sample code to watch for the CRD and do some task accordingly.</p> <p><strong>!!Update!!</strong></p> <p>It's easier now to generate client configs and controllers with kubebuilder (<a href="https://github.com/kubernetes-sigs/kubebuilder" rel="noreferrer">github repo</a>), which is maintained by kubernetes-sigs.</p>
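<p>As a rough sketch of the code-generation step from those references (the repository path, module name and group directory here are assumptions about your project layout — the script itself lives in <code>k8s.io/code-generator</code>):</p> <pre><code># assuming your types are defined under pkg/apis/apigateway/v1 in your repo
vendor/k8s.io/code-generator/generate-groups.sh all \
  github.com/yourorg/apigateway-controller/pkg/client \
  github.com/yourorg/apigateway-controller/pkg/apis \
  "apigateway:v1"
</code></pre> <p>The generated <code>clientset</code>, <code>informers</code> and <code>listers</code> under <code>pkg/client</code> can then be used to watch <code>ApiGateway</code> objects the same way the sample-controller watches its own CRD.</p>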
<p>Is it possible to enable <code>net.ipv4.ip_forward</code> on a container's network namespace?</p> <hr> <p><strong>Manual</strong></p> <p>From the host, I can enable it with manually with</p> <pre><code>sudo nsenter -t \ $(docker inspect --format '{{.State.Pid}}' $CONTAINER_NAME) \ -n sysctl -w net.ipv4.ip_forward=1 </code></pre> <p>and confirm that forwarding begins working within the container.</p> <p>Is there a way to do this automatically whilst avoiding privileged containers?</p>
<p>In case of some sysctl parameters yes; <code>net.*</code> <strong>is namespaced</strong>, so <code>net.ipv4.ip_forward</code> can be enabled per Pod (per container).</p> <p>Follow the <a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/" rel="noreferrer">Using Sysctls in a Kubernetes Cluster</a> guide for details and gotchas.</p> <h2>Longer answer</h2> <p>While <code>net</code> <strong>is namespaced</strong>, not all sysctl variables can be set in namespace. Some simply await for a <a href="https://github.com/torvalds/linux/commit/13b287e8d1cad951634389f85b8c9b816bd3bb1e" rel="noreferrer">"namespacify"</a> patch, but others will possibly never get implemented. In the specific example of <code>net.ipv4</code> one could browse <a href="https://github.com/torvalds/linux/blob/master/include/net/netns/ipv4.h#L44" rel="noreferrer"><code>include/net/netns/ipv4.h</code></a> to see what is supported at the moment. Such support of course depends on the actual kernel version.</p> <p>In case you wanted to "empirically" verify whether <em>sysctl</em> (the actual kernel facility, not the tool) supports a particular variable, you could do something like this (as root):</p> <pre><code># cat /proc/sys/net/ipv4/ip_forward 1 # unshare --net sysctl -w net.ipv4.ip_forward=0 net.ipv4.ip_forward = 0 # cat /proc/sys/net/ipv4/ip_forward 1 </code></pre> <p>As you can see <em>sysctl</em> (the tool) running in a new namespace could set <code>net.ipv4.ip_forward=0</code>; also that it did not affect the parent namespace.</p> <p>An example of a variable that <em>can't be set</em> in a namespace (no support for it at the moment):</p> <pre><code># cat /proc/sys/net/ipv4/icmp_msgs_burst 50 # unshare --net sysctl -w net.ipv4.icmp_msgs_burst=42 sysctl: cannot stat /proc/sys/net/ipv4/icmp_msgs_burst: No such file or directory </code></pre> <p>An example of a variable that is <em>not namespaced</em> would be <code>vm.nr_hugepages</code>. This variable exists in namespaces, but the <code>vm</code> subsystem itself is <a href="http://man7.org/linux/man-pages/man7/namespaces.7.html" rel="noreferrer"><strong>not namespaced</strong></a> (setting this variable will affect all processes):</p> <pre><code># sysctl vm.nr_hugepages vm.nr_hugepages = 0 # unshare sysctl vm.nr_hugepages=1 vm.nr_hugepages = 1 # sysctl vm.nr_hugepages vm.nr_hugepages = 1 </code></pre>
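<p>A sketch of what that looks like per Pod on recent Kubernetes versions (the field moved out of annotations around 1.11, so check the guide above for your version; <code>net.ipv4.ip_forward</code> is not in the default "safe" set, so the kubelet must be started with it listed in <code>--allowed-unsafe-sysctls</code>):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: forwarding-pod
spec:
  securityContext:
    sysctls:
    - name: net.ipv4.ip_forward
      value: "1"
  containers:
  - name: app
    image: alpine:3.7
    command: ["sleep", "3600"]
</code></pre> <p>This sets the parameter only inside the Pod's network namespace, without a privileged container and without the manual <code>nsenter</code> step.</p>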
<p>I'm trying to generate a certificate in my local (MacBook) environment which I can package in my Docker image and deploy into my AWS environment via Kubernetes.</p> <p>I've scoured sources online for a solution to this but I'm unable to find the details I need.</p> <p>From my macbook:</p> <pre><code>sudo certbot certonly -a standalone -d my.domain </code></pre> <p>Gives me this error:</p> <pre><code>Failed authorization procedure. my.domain (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://my.domain/.well-known/acme-challenge/T8jtGQswRuMgHKIhGvb- QD73kytTZnHfH5mK5lEZUJc: "{"timestamp":"2018-04-22T22:33:40.845+0000","status":404, "error":"Not Found","message":"No message available","path":"/.well-kno" </code></pre> <p>Clearly, I need a way to prove that I own my own domain. How can I do this locally?</p>
<p>In order to verify ownership of the domain from your macbook you have these two options as stated in the certbot docs:</p> <ul> <li>Use a DNS plugin - <a href="https://certbot.eff.org/docs/using.html#dns-plugins" rel="nofollow noreferrer">https://certbot.eff.org/docs/using.html#dns-plugins</a></li> <li>Use the manual method - <a href="https://certbot.eff.org/docs/using.html#manual" rel="nofollow noreferrer">https://certbot.eff.org/docs/using.html#manual</a></li> </ul> <p>While the standalone option does not require web server software it does require that it is run on the target web server - it is therefore not what you need to do and will result in the failure reported in your question.</p>
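<p>For example, the manual plugin with a DNS challenge proves ownership without needing the target web server at all (a sketch — you will be prompted to create a TXT record at <code>_acme-challenge.my.domain</code> before the certificate is issued):</p> <pre><code>sudo certbot certonly --manual --preferred-challenges dns -d my.domain
# add the displayed TXT record in your DNS zone, wait for it to propagate,
# then continue; the cert/key land under /etc/letsencrypt/live/my.domain/
</code></pre> <p>Those PEM files can then be packaged into a Kubernetes TLS secret (<code>kubectl create secret tls</code>) if you prefer that over baking them into the Docker image.</p>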
<p>I'm building a helm chart for my application, and I'm using <code>stable/nginx-ingress</code> as a subchart. I have a single <code>overrides.yml</code> file that contains (among other overrides):</p> <pre><code>nginx-ingress: controller: annotations: external-dns.alpha.kubernetes.io/hostname: "*.{{ .Release.Name }}.mydomain.com" </code></pre> <p>So, I'm trying to use the release name in the overrides file, and my command looks something like: <code>helm install mychart --values overrides.yml</code>, but the resulting annotation does not do the variable interpolation, and instead results in something like</p> <pre><code>Annotations: external-dns.alpha.kubernetes.io/hostname=*.{{ .Release.Name }}.mydomain.com </code></pre> <p>I installed the subchart by using <code>helm fetch</code>, and I'm under the (misguided?) impression that it would be best to leave the fetched thing as-is, and override values in it - however, if variable interpolation isn't available with that method, I will have to put my values in the subchart's <code>values.yaml</code>.</p> <p>Is there a best practice for this? Is it ok to put my own values in the fetched subchart's <code>values.yaml</code>? If I someday <code>helm fetch</code> this subchart again, I'll have to put those values back in by hand, instead of leaving them in an untouched overrides file...</p> <p>Thanks in advance for any feedback!</p>
<p>I found the issue on github -- it is not supported yet: <a href="https://github.com/kubernetes/helm/issues/2133" rel="nofollow noreferrer">https://github.com/kubernetes/helm/issues/2133</a></p>
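<p>Until that lands, one workaround (a sketch — note the backslash-escaped dots needed when a value key itself contains dots) is to inject the release name from the shell at install time instead of templating it inside the overrides file:</p> <pre><code>RELEASE=myrelease
helm install --name "$RELEASE" ./mychart \
  --values overrides.yml \
  --set nginx-ingress.controller.annotations."external-dns\.alpha\.kubernetes\.io/hostname"="*.$RELEASE.mydomain.com"
</code></pre>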
<p><code>kubectl</code> I am looking for a single command or maybe combination of commands for the following yaml file </p> <pre><code>--- # # Create a role, `pod-reader`, that can list pods and # bind the default service account in the `mynamespace` namespace # to that role. # kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: pod-reader namespace: mynamespace rules: - apiGroups: [""] # "" indicates the core API group resources: ["pods"] verbs: ["get", "watch", "list"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-pods namespace: mynamespace subjects: - kind: Group name: system:serviceaccounts:mynamespace apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: pod-reader apiGroup: rbac.authorization.k8s.io </code></pre> <p>So pretty much create a namespace that can get, watch and list pods</p>
<p>This assumes the namespace and service account are already created:</p> <pre><code>$ kubectl create namespace mynamespace $ kubectl create serviceaccount mysa -n mynamespace </code></pre> <p>Create the role with:</p> <pre><code>$ kubectl create role pod-reader --namespace=mynamespace \ --verb=get,list,watch \ --resource=pods \ </code></pre> <p>Create the rolebinding with:</p> <pre><code>$ kubectl create rolebinding read-pods --namespace=mynamespace \ --role=pod-reader \ --group=system:serviceaccounts:mynamespace </code></pre> <p>Tip: if you want to preview the generated YAML rather than actually creating the resource, append <code>--dry-run=true -o=yaml</code> to the commands.</p>
<p>I am trying to run <code>pg_dump</code> in a Docker container via <code>kubectl</code> and save the output to my local machine.</p> <p>Here's what I have so far:</p> <p><code>kubectl exec -it MY_POD_NAME -- pg_dump -h DB_HOST -U USER_NAME SCHEMA_NAME &gt; backup.sql</code></p> <p>However this just hangs currently. I am fairly certain it's due to the <code>--</code> ignoring the <code>&gt;</code></p> <p><code>kubectl exec -it MY_POD_NAME -- pg_dump -h DB_HOST -U USER_NAME SCHEMA_NAME</code> outputs to the console as expected.</p>
<p>Use <code>kubectl port-forward POD_NAME 6000:5432</code> to forward your pod port (assumed to be exposed on <code>5432</code>) onto <code>localhost:6000</code>. </p> <p>And then run <code>pg_dump</code> directly with the hostname as <code>localhost</code> and the port as <code>6000</code></p> <p><code>$ pg_dump -h localhost -p 6000 -U USER_NAME SCHEMA_NAME &gt; backup.sql</code></p>
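<p>Putting it together as one sequence (a sketch — the pod and database names are the placeholders from the question, and the port-forward is run in the background for the duration of the dump):</p> <pre><code>kubectl port-forward MY_POD_NAME 6000:5432 &amp;
PF_PID=$!

pg_dump -h localhost -p 6000 -U USER_NAME SCHEMA_NAME &gt; backup.sql

kill $PF_PID   # stop the port-forward once the dump is done
</code></pre> <p>This avoids piping the dump through <code>kubectl exec</code> entirely, so the shell redirection happens on your local machine as intended.</p>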
<p>I am new to kubernetes. I successfully created a headless service for aerospike-kubernetes. I logged into the docker container and verified that the mesh contains all the replicas. I have used <a href="https://github.com/aerospike/aerospike-kubernetes/blob/master/aerospike-statefulset.yaml" rel="nofollow noreferrer">https://github.com/aerospike/aerospike-kubernetes/blob/master/aerospike-statefulset.yaml</a> for the same. </p> <p>Now, since it's a headless service, the clusterIP is "none", and I am writing a golang program to connect to aerospike. I am puzzled as to what should go in the IP address field to connect to aerospike. What should I give in place of xxx-xxx-xxx-xxx? How can I get an internal IP so that I can connect to the entire mesh?</p> <pre><code>client, err := as.NewClient("xxx-xxx-xxx-xxx", 3000) if err != nil { log.Fatal(err) } </code></pre> <p>This golang project will be deployed as a pod, so an internal IP would suffice. </p>
<blockquote> <p>headless service clusterIP is "none" </p> </blockquote> <p>This simply means we don't use the load balancer/reverse proxy mode, but rather DNS Round-Robin mode for the service endpoint.</p> <p>By default, visibility of the Aerospike cluster is limited to within the Kubernetes environment. You can use the service name generated by kube-dns to connect. The address would take the form of: &lt;service&gt;.&lt;namespace&gt;.svc.cluster.local. In other words: <code>aerospike.aerospike-cluster-1.svc.cluster.local</code> if you've left it as defaults.</p>
<p>I was following <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">this kubernetes tutorial</a> in order to set up a <code>DNS service</code> and connect together two separate <code>kubernetes pods</code>. The one, which should serve as a gateway, is listening on port 80, the other one on port 90.</p> <p>When I use their Node IP, <code>curl 10.32.0.24</code> and <code>curl 10.32.0.25:90</code> I can reach them. Nevertheless I can't figure out, how to access them via my DNS service. What the <code>URL</code> will be?</p> <p>The <code>Namespace</code> is <code>default</code> and this is the result of <code>kubectl cluster-info:</code> <code> Kubernetes master is running at IP_OF_MY_SERVER:6443 KubeDNS is running at IP_OF_MY_SERVER:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code> My <code>deployment.yaml</code> is almost the same as in the tutorial:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: default-subdomain spec: selector: name: busybox clusterIP: None ports: - name: foo # Actually, no port is needed. port: 80 targetPort: 80 --- apiVersion: v1 kind: Pod metadata: name: busybox1 labels: name: busybox spec: hostname: busybox-1 subdomain: default-subdomain containers: - image: time-provider name: busybox --- apiVersion: v1 kind: Pod metadata: name: busybox2 labels: name: busybox spec: hostname: busybox-2 subdomain: default-subdomain containers: - image: gateway name: busybox </code></pre>
<p>The Kubernetes DNS service works inside a cluster and provides DNS names for pods and services there, not for external clients.</p> <p>Here is an extract from the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">instructions</a> you used:</p> <blockquote> <p>Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain. This is best illustrated by example:</p> <p>Assume a Service named <code>foo</code> in the Kubernetes namespace <code>bar</code>. A Pod running in namespace <code>bar</code> can look up this service by simply doing a DNS query for <code>foo</code>. A Pod running in namespace <code>quux</code> can look up this service by doing a DNS query for <code>foo.bar</code>.</p> </blockquote> <p>So, the DNS names of your resources exist only inside the cluster.</p> <p>You reach the services from the external network by the node IPs: <code>curl 10.32.0.24</code> and <code>curl 10.32.0.25:90</code>, and that is a correct way. If you want to use DNS names to connect to the cluster from outside, you should use some other DNS service to point the names at your cluster nodes or a LoadBalancer.</p> <p>I recommend you use a <code>Service</code> object to expose your application. Here are some articles about it: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#ways-to-connect" rel="nofollow noreferrer">ways to connect</a>, <a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/" rel="nofollow noreferrer">use a Service to access applications</a>.</p>
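<p>To see what names the cluster DNS actually serves for this setup, you can resolve them from inside any pod (a sketch; with the headless service above you should get the individual pod records rather than a single cluster IP):</p> <pre><code># from a throwaway pod inside the cluster
kubectl run -it --rm dns-test --image=busybox --restart=Never -- sh

nslookup default-subdomain.default.svc.cluster.local
nslookup busybox-1.default-subdomain.default.svc.cluster.local
nslookup busybox-2.default-subdomain.default.svc.cluster.local
</code></pre> <p>Those names work only from pods in the cluster; from your workstation you still need the node/pod IPs or an externally published Service.</p>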
<p>Is there a practical justification for labelling services besides the fact that these services can then be queried?</p> <p>Suppose I've got a service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx </code></pre> <p>Can I then use that label for something other than querying that service with <code>kubectl get svc -l app=nginx</code>? Are there any other common use-cases? Same for Deployments. As a k8s novice I only use labels so that services can match pods and, frankly, don't see much of a use elsewhere.</p>
<p>I would say the most practical use case is being able to easily distinguish between objects of the same kind (in your case Services) using labels. Say you have a cluster with different environments (dev, staging, production, etc.) and you need to find services based on the environment, or based on any other kind of infrastructure rule you apply to your Kubernetes cluster to differentiate between otherwise similar objects.</p>
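<p>As a quick illustration (the extra labels below are hypothetical, nothing Kubernetes requires), you could tag objects with environment and team labels and then select across them:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    environment: staging   # hypothetical labels, used purely for selection
    team: frontend
</code></pre>
<pre><code># list everything that belongs to one environment and team
kubectl get svc,deploy,pods -l environment=staging,team=frontend
</code></pre>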
<p>According to the documentation here: <a href="https://docs.docker.com/storage/volumes/" rel="nofollow noreferrer">https://docs.docker.com/storage/volumes/</a></p> <blockquote> <p>If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.</p> </blockquote> <p>In other words, the expectation is that, if I have files like so in /var/lib/mysql</p> <pre><code>root@d8fa9a8b305a:/var/lib/mysql# ls auto.cnf xtz ib_logfile0 ibdata1 mysql sys debian-5.7.flag ib_buffer_pool ib_logfile1 ibtmp1 performance_schema </code></pre> <p>then, when I mount a volume into <code>/var/lib/mysql</code>, all the files from the container should be copied into my volume.</p> <p>But I find this is not happening:</p> <pre><code>/var/lib/mysql/mysql # ls auto.cnf ib_buffer_pool ib_logfile0 ib_logfile1 ibdata1 </code></pre> <p>This is the content of the volume that I mounted into /var/lib/mysql, and as you can see the data is not the same as what is present in /var/lib/mysql of the docker image itself. So, as a result, there's a failure on startup.</p> <p><strong>Note</strong>: The volume in question is actually mounted by kubernetes. So, I'm making a major assumption here that </p> <pre><code>volumeMounts: - name: xtz-persistent-storage mountPath: "/var/lib/mysql/" </code></pre> <p>is the equivalent of doing this: <code>docker run -p 443:443 --rm -v mysql:/var/lib/mysql &lt;image&gt;</code></p>
<p>Kubernetes <code>Volumes</code> are not the same thing as Docker <code>Volumes</code>.</p> <p>From the Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>Docker also has a concept of volumes, though it is somewhat looser and less managed. In Docker, a volume is simply a directory on disk or in another container. Lifetimes are not managed and until very recently there were only local-disk-backed volumes. Docker now provides volume drivers, but the functionality is very limited for now (e.g. as of Docker 1.7 only one volume driver is allowed per container and there is no way to pass parameters to volumes).</p> <p>A Kubernetes volume, on the other hand, has an explicit lifetime - the same as the pod that encloses it. Consequently, a volume outlives any containers that run within the Pod, and data is preserved across Container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly than this, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.</p> <p>At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.</p> </blockquote> <p>So, although the name of the concept is the same, it is a different kind of <code>volume</code>.</p> <p>When Kubernetes mounts a volume, it overlays (hides) whatever the image already had at the destination directory.</p> <p>Unfortunately, for now, there is no built-in way to merge the content baked into the image with the content of the mounted volume. Here is one of the <a href="https://groups.google.com/forum/#!topic/kubernetes-dev/muq5KIwOcNo" rel="nofollow noreferrer">discussions</a> about it.</p>
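<p>A common workaround is to seed the volume yourself with an init container that copies the image's baked-in files into the (still empty) volume before the main container mounts it over <code>/var/lib/mysql</code>. The sketch below is only illustrative: the image name, claim name and the emptiness check are assumptions you would adapt to your setup.</p>
<pre><code>spec:
  volumes:
  - name: xtz-persistent-storage
    persistentVolumeClaim:
      claimName: xtz-pvc                # hypothetical claim name
  initContainers:
  - name: seed-mysql-data
    image: your-mysql-image:tag         # the image whose baked-in files you want to keep
    # Mount the volume at a different path so the image's /var/lib/mysql stays visible,
    # and copy its contents only if the volume is still empty.
    command: ["sh", "-c", "[ -z \"$(ls -A /seed)\" ] &amp;&amp; cp -a /var/lib/mysql/. /seed/ || true"]
    volumeMounts:
    - name: xtz-persistent-storage
      mountPath: /seed
  containers:
  - name: mysql
    image: your-mysql-image:tag
    volumeMounts:
    - name: xtz-persistent-storage
      mountPath: /var/lib/mysql
</code></pre>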
<p>After following the <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/" rel="noreferrer">hello-minikube guide</a> and installing minikube 0.26.1 the dashboard pod is not starting and also the <a href="https://kubernetes.io/docs/getting-started-guides/minikube/#quickstart" rel="noreferrer">hello-minikube</a> pod is not getting started. </p> <p>A <code>kubectl describe pod xxx</code> shows that the pod could not get scheduled.</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 3m (x3368 over 16h) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. </code></pre>
<p>This has to do with <a href="https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes" rel="noreferrer">taints and tolerations</a> in k8s versions starting from 1.6. By default the master node has a <code>NoSchedule</code> taint.</p> <pre><code># kubectl describe node minikube Name: minikube Roles: master [...] Taints: node-role.kubernetes.io/master:NoSchedule </code></pre> <p>You can add tolerations to the pods as described in <a href="https://stackoverflow.com/a/48496629/1326662">this answer</a> - but in my case I do not want to edit any pod specs as I want to test my deployments locally 1:1 as in a live k8s environment.</p> <p>The other option is to delete the taint on the master node. See the documentation <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">here</a> and <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">there</a>.</p> <pre><code>kubectl taint nodes --all node-role.kubernetes.io/master- </code></pre> <p>In the specific case of a local minikube setup with only one node and testing deployments locally without adding tolerations this works as well: </p> <pre><code>kubectl taint nodes minikube node-role.kubernetes.io/master:NoSchedule- </code></pre> <p>This should be part of the minikube getting started guide imho.</p>
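<p>For completeness, the per-pod alternative mentioned above is a toleration in the pod (or pod template) spec; a minimal sketch that simply mirrors the master taint shown earlier:</p>
<pre><code>spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
</code></pre>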
<p>When creating a cluster on GKE it's possible to create <a href="https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type#extendedmemory" rel="noreferrer">Custom Instance Types</a>. When adding <code>8GB</code> of memory to an <code>n1-standard-1</code>, Kubernetes only shows allocatable memory of <code>6.37GB</code>. Why is this?</p> <p>The requested memory includes all the pods in the <code>kube-system</code> namespace, so where is this extra memory going?</p>
<p>Quoting from the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture" rel="noreferrer">documentation</a>:</p> <blockquote> <p><strong>Node allocatable resources</strong> </p> <p><strong>Note</strong> that some of a node's resources are required to run the Kubernetes Engine and Kubernetes resources necessary to make that node function as part of your cluster. As such, you may notice a disparity between your node's total resources (as specified in the machine type documentation) and the node's allocatable resources in Kubernetes Engine</p> <p><strong>Note</strong>: As larger machine types tend to run more containers (and by extension, Kubernetes pods), the amount of resources that Kubernetes Engine reserves for cluster processes scales upward for larger machines.</p> <p><strong>Caution</strong>: In Kubernetes Engine node versions prior to 1.7.6, reserved resources were not counted against a node's total allocatable resources. If your nodes have recently upgraded to version 1.7.6, they might appear to have fewer resources available, as Kubernetes Engine now displays allocatable resources. This can potentially lead to your cluster's nodes appearing overcommitted, and you might want to resize your cluster as a result.</p> </blockquote> <p>For example, performing some tests, you can double-check:</p> <pre><code>Machine type             Memory(GB)   Allocatable(GB)   CPU(cores)   Allocatable(cores)
g1-small                 1.7          1.2               0.5          0.47
n1-standard-1 (default)  3.75         2.7               1            0.94
n1-standard-2            7.5          5.7               2            1.93
n1-standard-4            15           12                4            3.92
n1-standard-8            30           26.6              8            7.91
n1-standard-16           60           54.7              16           15.89
</code></pre> <blockquote> <p><strong>Note</strong>: The values listed for allocatable resources do not account for the resources used by kube-system pods, the amount of which varies with each Kubernetes release. These system pods generally occupy an additional 400m CPU and 400mi memory on each node (values are approximate). It is recommended that you directly inspect your cluster if you require an exact accounting of usable resources on each node.</p> </blockquote> <h2>UPDATE</h2> <p>There is also the official explanation from the Kubernetes Documentation regarding why these resources are reserved:</p> <blockquote> <p>kube-reserved is meant to capture resource reservation for kubernetes system daemons like the kubelet, container runtime, node problem detector, etc. <strong>It is not meant to reserve resources for system daemons</strong> that are run as pods. kube-reserved is typically a function of pod density on the nodes. This performance dashboard exposes cpu and memory usage profiles of kubelet and docker engine at multiple levels of pod density. This blog post explains how the dashboard can be interpreted to come up with a suitable kube-reserved reservation.</p> </blockquote> <p>I would suggest you go through <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable" rel="noreferrer">this</a> page if you are interested in learning more.</p>
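<p>You can inspect the capacity/allocatable gap on your own nodes directly; a couple of quick checks (use whatever node names <code>kubectl get nodes</code> shows in your cluster):</p>
<pre><code># capacity vs. allocatable for a single node
kubectl describe node &lt;node-name&gt; | grep -A 6 -E "Capacity|Allocatable"

# or a compact view across all nodes
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.memory}{"\t"}{.status.allocatable.memory}{"\n"}{end}'
</code></pre>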
<p>I am trying to deploy my sample microservice Docker image in a Kubernetes cluster with 2 nodes. I have explored Pods, Services, Deployments, StatefulSets, DaemonSets, etc.</p> <p>I am trying to create a sample Deployment and Service for it. I have seen how a Deployment provides scalability and load balancing, and how service discovery works by giving a Service a ClusterIP.</p> <p>I have two questions:</p> <p>My scenario is that I am deploying the microservice on my on-premise Ubuntu machine. The machine has the IP address 192.168.1.15. From what I have read about Kubernetes, the Service will also have a clusterIP. </p> <ol> <li><p>If my microservice endpoint is <code>/api/v1/loadCustomer</code>, how can I call this endpoint? Do I also need to use the clusterIP? Can I simply call 192.168.1.15:8080/api/v1/loadCustomers ?</p></li> <li><p>What is the role of the clusterIP when I am calling my endpoint? Can I use the port directly?</p></li> </ol> <p>I am referring to the following link for exploration:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/</a></p>
<p><strong>tldr:</strong></p> <p>You cannot access the application using the <strong>clusterIP</strong> from outside the cluster. You can access the application using either a load balancer's IP (<code>type=LoadBalancer</code>) or a node's IP (<code>type=NodePort</code>). </p> <p><strong>Benefit of the clusterIP:</strong> pods can be created and terminated during their life-cycle, so their endpoint IPs come and go with them. The clusterIP, on the other hand, is static and does not depend on the life-cycle of the pods. </p> <p><strong>Long Answer</strong></p> <p><strong>In a Kubernetes cluster</strong> an application or pod has the following abstractions. </p> <p><strong>Endpoint IP and port</strong>: provided by the CNI plugin, such as flannel or calico.</p> <ul> <li>Each pod has an IP and targetPort which is <em>unique</em>.</li> </ul> <p>You can list and watch the endpoints with the following command: </p> <pre><code>kubectl get endpoints --all-namespaces </code></pre> <p><strong>clusterIP and port</strong>: provided by the <strong><em>kube-proxy</em></strong> component. </p> <ul> <li><p>The replicated pods share a clusterIP and port. </p></li> <li><p>Requests are load-balanced across the replicated pods.</p></li> <li>The service is exposed internally so that other pods can discover it.</li> </ul> <p>You can list and watch clusterIPs and ports with the following command: </p> <pre><code>kubectl get services --all-namespaces </code></pre> <p><strong>externalIP and port</strong>: this can be a layer 3-4 load balancer's IP and port, or a node's IP and NodePort.</p> <p>If you want to use a load balancer's IP and port, use <code>type=LoadBalancer</code> in the service file.</p> <p>If you want to use a node's IP, use <code>type=NodePort</code> in the service file. </p>
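<p>To make the first question concrete, here is a sketch of a <code>NodePort</code> Service for your app (the name, labels and ports are assumptions; adjust them to match your deployment):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: customer-api            # hypothetical service name
spec:
  type: NodePort
  selector:
    app: customer-api           # must match your pod labels
  ports:
  - port: 8080                  # clusterIP port, used inside the cluster
    targetPort: 8080            # container port
    nodePort: 30080             # port opened on every node (30000-32767)
</code></pre>
<p>With that in place you would call the service from outside the cluster as <code>http://192.168.1.15:30080/api/v1/loadCustomers</code>, while other pods inside the cluster would keep using the clusterIP (or the service DNS name) on port 8080.</p>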
<p>I am trying to configure hostPath as the volume in kubernetes. I have logged into the VM server from which I usually run kubernetes commands such as kubectl.</p> <p>Below is the pod yaml:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: helloworldanilhostpath spec: replicas: 1 template: metadata: labels: run: helloworldanilhostpath spec: volumes: - name: task-pv-storage hostPath: path: /home/openapianil/samplePV type: Directory containers: - name: helloworldv1 image: ***/helloworldv1:v1 ports: - containerPort: 9123 volumeMounts: - name: task-pv-storage mountPath: /mnt/sample </code></pre> <p>On the VM server, I have created the "/home/openapianil/samplePV" folder and it has a sample.txt file in it.</p> <p>Once I try to create this deployment, it fails with the error:<br> Warning FailedMount 28s (x7 over 59s) kubelet, aks-nodepool1-39499429-1 MountVolume.SetUp failed for volume "task-pv-storage" : hostPath type check failed: /home/openapianil/samplePV is not a directory.</p> <p>Can anyone please help me understand the problem here?</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer"><code>hostPath</code></a> type volumes refer to directories on the Node (VM/machine) where your Pod is scheduled for running (<code>aks-nodepool1-39499429-1</code> in this case). So you'd need to create this directory at least on that Node. </p> <p>To make sure your Pod is consistently scheduled on that specific Node you need to set <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="noreferrer"><code>spec.nodeSelector</code></a> in the PodTemplate:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: helloworldanilhostpath spec: replicas: 1 template: metadata: labels: run: helloworldanilhostpath spec: nodeSelector: kubernetes.io/hostname: aks-nodepool1-39499429-1 volumes: - name: task-pv-storage hostPath: path: /home/openapianil/samplePV type: Directory containers: - name: helloworldv1 image: ***/helloworldv1:v1 ports: - containerPort: 9123 volumeMounts: - name: task-pv-storage mountPath: /mnt/sample </code></pre> <p><strong>In most cases it's a bad idea to use this type of volume; there are some special use cases, but chance are yours is not one them!</strong></p> <p>If you need local storage for some reason then a slightly better solution is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer"><code>local</code></a> PersistentVolumes.</p>
<p>I have probaly problem with kubernetes DNS as my service cannot communicate to outside world (bitbucker.org). Actually I found this page: <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a></p> <p>and validate it on my cluster (no minikube):</p> <pre><code>zordon@megazord:~$ kubectl exec busybox cat /etc/resolv.conf nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5 </code></pre> <p>and:</p> <pre><code>zordon@megazord:~$ kubectl exec -ti busybox -- nslookup kubernetes.default Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local nslookup: can't resolve 'kubernetes.default' command terminated with exit code 1 </code></pre> <p>Any idea how can I resolve to be able connect from inside pod to outside world ?</p> <p>This probably is related to Flannel, as connectivity from image run only by docker is avalible. Whant to mention that I have run my cluster with this example: <a href="https://blog.alexellis.io/kubernetes-in-10-minutes/" rel="nofollow noreferrer">https://blog.alexellis.io/kubernetes-in-10-minutes/</a></p> <p>I have also modify <a href="https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml</a> and pass argument --iface with my wifi card which has acces to internet but then kube-flannel-ds cannot start from:</p> <pre><code>args: - --ip-masq - --kube-subnet-mgr </code></pre> <p>to:</p> <pre><code>args: - --ip-masq - --kube-subnet-mgr - --iface=wlan0ec5 zordon@megazord:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE etcd-megazord 1/1 Running 1 21m kube-apiserver-megazord 1/1 Running 1 21m kube-controller-manager-megazord 1/1 Running 1 22m kube-dns-86f4d74b45-8gh6q 3/3 Running 5 22m kube-flannel-ds-2wqqr 1/1 Running 1 17m kube-flannel-ds-59txb 1/1 Running 1 15m kube-proxy-bdxb4 1/1 Running 1 15m kube-proxy-mg44x 1/1 Running 1 22m kube-scheduler-megazord 1/1 Running 1 22m zordon@megazord:~$ kubectl get svc -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 23m zordon@megazord:~$ kubectl describe service kube-dns -n kube-system Name: kube-dns Namespace: kube-system Labels: k8s-app=kube-dns kubernetes.io/cluster-service=true kubernetes.io/name=KubeDNS Annotations: &lt;none&gt; Selector: k8s-app=kube-dns Type: ClusterIP IP: 10.96.0.10 Port: dns 53/UDP TargetPort: 53/UDP Endpoints: 10.244.0.27:53 Port: dns-tcp 53/TCP TargetPort: 53/TCP Endpoints: 10.244.0.27:53 Session Affinity: None Events: &lt;none&gt; zordon@megazord:~$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns I0419 17:40:11.473047 1 dns.go:48] version: 1.14.8 I0419 17:40:11.473975 1 server.go:71] Using configuration read from directory: /kube-dns-config with period 10s I0419 17:40:11.474024 1 server.go:119] FLAG: --alsologtostderr="false" I0419 17:40:11.474032 1 server.go:119] FLAG: --config-dir="/kube-dns-config" I0419 17:40:11.474037 1 server.go:119] FLAG: --config-map="" I0419 17:40:11.474041 1 server.go:119] FLAG: --config-map-namespace="kube-system" I0419 17:40:11.474044 1 server.go:119] FLAG: --config-period="10s" I0419 17:40:11.474049 1 server.go:119] FLAG: --dns-bind-address="0.0.0.0" I0419 17:40:11.474053 1 server.go:119] 
FLAG: --dns-port="10053" I0419 17:40:11.474058 1 server.go:119] FLAG: --domain="cluster.local." I0419 17:40:11.474063 1 server.go:119] FLAG: --federations="" I0419 17:40:11.474067 1 server.go:119] FLAG: --healthz-port="8081" I0419 17:40:11.474071 1 server.go:119] FLAG: --initial-sync-timeout="1m0s" I0419 17:40:11.474074 1 server.go:119] FLAG: --kube-master-url="" I0419 17:40:11.474079 1 server.go:119] FLAG: --kubecfg-file="" I0419 17:40:11.474082 1 server.go:119] FLAG: --log-backtrace-at=":0" I0419 17:40:11.474087 1 server.go:119] FLAG: --log-dir="" I0419 17:40:11.474091 1 server.go:119] FLAG: --log-flush-frequency="5s" I0419 17:40:11.474094 1 server.go:119] FLAG: --logtostderr="true" I0419 17:40:11.474098 1 server.go:119] FLAG: --nameservers="" I0419 17:40:11.474101 1 server.go:119] FLAG: --stderrthreshold="2" I0419 17:40:11.474104 1 server.go:119] FLAG: --v="2" I0419 17:40:11.474107 1 server.go:119] FLAG: --version="false" I0419 17:40:11.474113 1 server.go:119] FLAG: --vmodule="" I0419 17:40:11.474190 1 server.go:201] Starting SkyDNS server (0.0.0.0:10053) I0419 17:40:11.488125 1 server.go:220] Skydns metrics enabled (/metrics:10055) I0419 17:40:11.488170 1 dns.go:146] Starting endpointsController I0419 17:40:11.488180 1 dns.go:149] Starting serviceController I0419 17:40:11.488348 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0] I0419 17:40:11.488407 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0] I0419 17:40:11.988549 1 dns.go:170] Initialized services and endpoints from apiserver I0419 17:40:11.988609 1 server.go:135] Setting up Healthz Handler (/readiness) I0419 17:40:11.988641 1 server.go:140] Setting up cache handler (/cache) I0419 17:40:11.988649 1 server.go:126] Status HTTP port 8081 zordon@megazord:~$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq I0419 17:44:35.785171 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000} I0419 17:44:35.785336 1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] I0419 17:44:35.876534 1 nanny.go:119] W0419 17:44:35.876572 1 nanny.go:120] Got EOF from stdout I0419 17:44:35.876578 1 nanny.go:116] dnsmasq[26]: started, version 2.78 cachesize 1000 I0419 17:44:35.876615 1 nanny.go:116] dnsmasq[26]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify I0419 17:44:35.876632 1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain ip6.arpa I0419 17:44:35.876642 1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I0419 17:44:35.876653 1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain cluster.local I0419 17:44:35.876666 1 nanny.go:116] dnsmasq[26]: reading /etc/resolv.conf I0419 17:44:35.876677 1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain ip6.arpa I0419 17:44:35.876691 1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa I0419 17:44:35.876701 1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.1#10053 for domain cluster.local I0419 
17:44:35.876709 1 nanny.go:116] dnsmasq[26]: using nameserver 127.0.0.53#53 I0419 17:44:35.876717 1 nanny.go:116] dnsmasq[26]: read /etc/hosts - 7 addresses **zordon@megazord:~$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar** I0419 17:45:06.726670 1 main.go:51] Version v1.14.8 I0419 17:45:06.726781 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns}) I0419 17:45:06.726842 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} I0419 17:45:06.726927 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} </code></pre> <p><strong>Master node:</strong></p> <pre><code>zordon@megazord:~$ ip -d route unicast default via 192.168.1.1 dev wlp32s0 proto static scope global metric 600 unicast 10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 unicast 10.244.1.0/24 via 10.244.1.0 dev flannel.1 proto boot scope global onlink unicast 169.254.0.0/16 dev wlp32s0 proto boot scope link metric 1000 unicast 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 unicast 192.168.1.0/24 dev wlp32s0 proto kernel scope link src 192.168.1.110 metric 600 zordon@megazord:~$ ip a 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp30s0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 link/ether 4c:cc:6a:f8:7e:4b brd ff:ff:ff:ff:ff:ff 3: wlp32s0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether ec:08:6b:0c:9c:27 brd ff:ff:ff:ff:ff:ff inet 192.168.1.110/24 brd 192.168.1.255 scope global wlp32s0 valid_lft forever preferred_lft forever inet6 fe80::f632:2f08:9caa:2c82/64 scope link valid_lft forever preferred_lft forever 4: docker0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default link/ether 02:42:32:19:f7:5a brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:32ff:fe19:f75a/64 scope link valid_lft forever preferred_lft forever 6: vethf9de74d@if5: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue master docker0 state UP group default link/ether ba:af:58:a0:4a:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 fe80::b8af:58ff:fea0:4a74/64 scope link valid_lft forever preferred_lft forever 7: flannel.1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1450 qdisc noqueue state UNKNOWN group default link/ether a6:d1:45:73:c3:31 brd ff:ff:ff:ff:ff:ff inet 10.244.0.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::a4d1:45ff:fe73:c331/64 scope link valid_lft forever preferred_lft forever 8: cni0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1450 qdisc noqueue state UP group default qlen 1000 link/ether 0a:58:0a:f4:00:01 brd 
ff:ff:ff:ff:ff:ff inet 10.244.0.1/24 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::24f5:4cff:fee9:a32d/64 scope link valid_lft forever preferred_lft forever 9: veth58367f89@if3: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1450 qdisc noqueue master cni0 state UP group default link/ether 7a:29:e9:c8:bf:3f brd ff:ff:ff:ff:ff:ff link-netnsid 1 inet6 fe80::7829:e9ff:fec8:bf3f/64 scope link valid_lft forever preferred_lft forever </code></pre> <p><strong>Node worker:</strong></p> <pre><code>zordon@k8s-minion-one:~$ ip -d route unicast default via 192.168.1.1 dev enp0s25 proto dhcp scope global src 192.168.1.111 metric 100 unicast 10.244.0.0/24 via 10.244.0.0 dev flannel.1 proto boot scope global onlink unicast 10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 linkdown unicast 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown unicast 192.168.1.0/24 dev enp0s25 proto kernel scope link src 192.168.1.111 unicast 192.168.1.1 dev enp0s25 proto dhcp scope link src 192.168.1.111 metric 100 zordon@k8s-minion-one:~$ ip a 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp0s25: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 18:03:73:45:75:71 brd ff:ff:ff:ff:ff:ff inet 192.168.1.111/24 brd 192.168.1.255 scope global enp0s25 valid_lft forever preferred_lft forever inet6 fe80::1a03:73ff:fe45:7571/64 scope link valid_lft forever preferred_lft forever 3: docker0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:38:3e:a3:94 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:38ff:fe3e:a394/64 scope link valid_lft forever preferred_lft forever 4: flannel.1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1450 qdisc noqueue state UNKNOWN group default link/ether 7a:d0:2a:b4:73:43 brd ff:ff:ff:ff:ff:ff inet 10.244.1.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::78d0:2aff:feb4:7343/64 scope link valid_lft forever preferred_lft forever 5: cni0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 0a:58:0a:f4:01:01 brd ff:ff:ff:ff:ff:ff inet 10.244.1.1/24 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::7440:12ff:fefa:f55/64 scope link valid_lft forever preferred_lft forever </code></pre>
<p>I found the problem. It occurs when I deploy busybox and try to ping an external server by name; it does not occur when I use an IP address. So the problem was in DNS name resolution. While pinging, I looked into the DNS logs and saw the issue. What helped me was this ConfigMap for kube-dns:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: kube-dns namespace: kube-system labels: addonmanager.kubernetes.io/mode: EnsureExists data: upstreamNameservers: |- ["8.8.8.8", "8.8.4.4"] </code></pre>
<p>Is there a practical justification for labelling services besides the fact that these services can then be queried?</p> <p>Suppose I've got a service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx </code></pre> <p>Can I then use that label for something other than querying that service with <code>kubectl get svc -l app=nginx</code>? Are there any other common use-cases? Same for Deployments. As a k8s novice I only use labels so that services can match pods and, frankly, don't see much of a use elsewhere.</p>
<p>One of the uses I've come across for labels is for tearing down an application end to end. So when I deploy to kubernetes, I make sure everything related to the same application is labeled as such. That way I just do <code>kubectl delete all --selector=app=foo</code> to delete all the foo related Services, Deployments, Crons etc.</p>
<p>I am trying to build a redis cluster with kubernetes on azure, and I am faced with the exact same problem when running different samples: <a href="https://sanderp.nl/running-redis-cluster-on-kubernetes-e451bda76cad" rel="nofollow noreferrer">sanderp.nl/running-redis-cluster-on-kubernetes</a> or <a href="https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md" rel="nofollow noreferrer">github.com/zuxqoj/kubernetes-redis-cluster</a></p> <p>Everything goes well until I try to have the different nodes join the cluster with the <code>redis-trib</code> command. At that point I face the infamous, never-ending "<strong>Waiting for the cluster to join ....</strong>" message. </p> <p>Trying to see what is happening, I set the loglevel of the redis pods to <code>debug</code>. I then noticed that the pods do not seem to announce their correct IP when communicating with each other. In fact it seems that the <strong>last byte of the IP is replaced by a zero</strong>. Say pod1 has IP address 10.1.34.<strong>9</strong>; I will then see in pod2's logs:</p> <p>Accepted clusternode 10.1.34.<strong>0</strong>:<em>someport</em></p> <p>So the pods do not seem to be able to communicate back, and the join-cluster process never ends.</p> <p>Now, if before running redis-trib <strong>I enforce the cluster-announce-ip</strong> by running on each pod:</p> <pre><code>redis-cli -h mypod-ip config set cluster-announce-ip mypod-ip </code></pre> <p><strong>the redis-trib command then completes successfully</strong> and the cluster is up and running.</p> <p>But this is not a viable solution: if a pod goes down and comes back, it may have a different IP, and I will face the same problem when it tries to join the cluster.</p> <p>Note that I do not encounter any problem when running the samples with minikube.</p> <p>I am using flannel for kubernetes networking. Could the problem come from an incorrect configuration of flannel? Has anyone encountered the same issue?</p>
<p>You can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> for deploying your replicas, so your pods will always have stable, unique names.</p> <p>Moreover, you will be able to use <code>service</code> DNS names as hosts. See the official doc <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a>. </p> <p>The second example you shared has another part for <a href="https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md" rel="nofollow noreferrer">redis cluster using statefulsets</a>. Try that out.</p>
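<p>To sketch the idea (names, replica count and image below are placeholders, not taken from the samples): a headless Service plus a StatefulSet gives each replica a stable DNS name such as <code>redis-0.redis.default.svc.cluster.local</code> that survives pod restarts, even though the pod IP itself may change.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None          # headless: yields redis-0.redis, redis-1.redis, ... DNS names
  selector:
    app: redis
  ports:
  - name: client
    port: 6379
  - name: gossip
    port: 16379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4.0          # placeholder image; real setups need cluster config too
        ports:
        - containerPort: 6379
        - containerPort: 16379
</code></pre>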
<p>I'm on google kubernetes engine, and I need to run the filebeat daemonset found (<a href="https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html</a>). I create the cluster with:</p> <pre><code>gcloud container clusters create test_cluster \ --cluster-version "1.9.6-gke.1" \ --node-version "1.9.6-gke.1" \ --zone "us-east1-c" \ --machine-type n1-standard-4 \ --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.full_control","https://www.googleapis.com/auth/sqlservice.admin","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/pubsub","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \ --num-nodes "1" \ --network "main-network" \ --subnetwork "main-subnetwork" \ --no-enable-cloud-monitoring \ --no-enable-cloud-logging \ --no-enable-legacy-authorization \ --disk-size "50" </code></pre> <p>When I have <code>--cluster-version</code> and <code>--node-version</code> set to <code>1.8.8-gke.0</code> it works but when I change it to <code>1.9.6-gke.1</code> the filebeat pod can't reach my GCE instance that's running logstash.</p> <p>Both the cluster and the GCE instance are running on the same network and I'm sure it's not a firewall issue with google cloud because if I <code>gcloud compute ssh</code> into the GKE instance and do <code>nc -vz -w 5 10.0.0.18 5044</code> it connects fine.</p> <p>When I have the cluster running <code>1.8.8-gke.0</code>, the filebeat pod connects fine to logstash and running <code>traceroute 10.0.0.18</code> shows it connecting fine. When I create the cluster with <code>1.9.6-gke.1</code> then <code>traceroute 10.0.0.18</code> shows the following:</p> <pre><code>[root@filebeat-56wtj filebeat]# traceroute 10.0.0.18 traceroute to 10.0.0.18 (10.0.0.18), 30 hops max, 60 byte packets 1 gateway (10.52.0.1) 0.063 ms 0.016 ms 0.012 ms 2 * * * 3 * * * 4 * * * 5 * * * 6 * * * </code></pre> <p>edit: Note this isn't specific to the filebeat container, I tried it with another container and it also can't reach a GCE instance.</p>
<p>As you can read here [1]: "Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.</p> <p>You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster."</p> <p>[1] <a href="https://cloud.google.com/kubernetes-engine/release-notes#known-issues" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/release-notes#known-issues</a></p>
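<p>The fix is to add a firewall rule that allows traffic from the cluster's pod address range to the other VMs on the network. A sketch with gcloud (the rule name is arbitrary; cluster name, zone and network are taken from the question):</p>
<pre><code># look up the cluster's pod address range
gcloud container clusters describe test_cluster --zone us-east1-c \
  --format='value(clusterIpv4Cidr)'

# allow that range to reach the other VMs on the same network
gcloud compute firewall-rules create allow-gke-pods \
  --network main-network \
  --source-ranges &lt;CLUSTER_IPV4_CIDR&gt; \
  --allow tcp,udp,icmp
</code></pre>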
<p>Creating a <a href="http://kubernetes.io/v1.1/docs/user-guide/services.html#type-loadbalancer" rel="noreferrer">Kubernetes LoadBalancer</a> returns immediately (ex: <code>kubectl create -f ...</code> or <code>kubectl expose svc NAME --name=load-balancer --port=80 --type=LoadBalancer</code>).</p> <p>I know a manual way to wait in shell:</p> <pre><code>external_ip="" while [ -z $external_ip ]; do sleep 10 external_ip=$(kubectl get svc load-balancer --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}") done </code></pre> <p>This is however not ideal:</p> <ul> <li>Requires at least 5 lines of Bash script.</li> <li>Infinite wait even in case of error (otherwise it requires a timeout, which increases the line count a lot).</li> <li>Probably not efficient; could use <code>--wait</code> or <code>--wait-once</code>, but with those the command never returns.</li> </ul> <p>Is there a better way to wait until a service <em>external IP</em> (aka <em>LoadBalancer Ingress IP</em>) is set or has failed to be set?</p>
<p>Just to add to the answers here, the best option right now is to use a bash script. For convenience, I've put it into a single line that includes exporting an environmental variable. </p> <p><strong>Command to wait and find Kubernetes service endpoint</strong></p> <pre><code>bash -c 'external_ip=""; while [ -z $external_ip ]; do echo "Waiting for end point..."; external_ip=$(kubectl get svc NAME_OF_YOUR_SERVICE --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}"); [ -z "$external_ip" ] &amp;&amp; sleep 10; done; echo "End point ready-" &amp;&amp; echo $external_ip; export endpoint=$external_ip' </code></pre> <p>I've also modified your script so it only executes a wait if the ip isn't available. The last bit will export an environment variable called "endpoint"</p> <p><strong>Bash Script to Check a Given Service</strong></p> <p>Save this as <code>check-endpoint.sh</code> and then you can execute <code>$sh check-endpoint.sh SERVICE_NAME</code></p> <pre><code>#!/bin/bash # Pass the name of a service to check ie: sh check-endpoint.sh staging-voting-app-vote # Will run forever... external_ip="" while [ -z $external_ip ]; do echo "Waiting for end point..." external_ip=$(kubectl get svc $1 --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}") [ -z "$external_ip" ] &amp;&amp; sleep 10 done echo 'End point ready:' &amp;&amp; echo $external_ip </code></pre> <p><strong>Using this in a Codefresh Step</strong></p> <p>I'm using this for a Codefresh pipeline and it passes a variable $endpoint when it's done. </p> <pre><code> GrabEndPoint: title: Waiting for endpoint to be ready image: codefresh/plugin-helm:2.8.0 commands: - bash -c 'external_ip=""; while [ -z $external_ip ]; do echo "Waiting for end point..."; external_ip=$(kubectl get svc staging-voting-app-vote --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}"); [ -z "$external_ip" ] &amp;&amp; sleep 10; done; echo "End point ready-" &amp;&amp; echo $external_ip; cf_export endpoint=$external_ip' </code></pre>
<p>I run <code>istio</code> on <code>Kubernetes</code>. I want to know how the envoy sidecar works. For example, after sidecar is injected into the pod, the original container cannot access the outer network without <code>EgressRule</code>. How does it work?</p>
<p>All the traffic inside the pod is captured by <a href="https://en.wikipedia.org/wiki/Iptables" rel="noreferrer">iptables commands</a> and directed to the sidecar proxy. Then the sidecar proxy performs routing, according to routing tables it receives from Istio Pilot (a part of the <a href="https://blog.envoyproxy.io/service-mesh-data-plane-vs-control-plane-2774e720f7fc" rel="noreferrer">Istio Control Plane</a>). The routing tables are based on the Kubernetes services and on the Istio <a href="https://istio.io/docs/reference/config/istio.routing.v1alpha1.html#RouteRule" rel="noreferrer">RouteRules</a>. Since Istio cannot know anything about the external services, it cannot route the traffic to the external services without an <a href="https://istio.io/docs/reference/config/istio.routing.v1alpha1.html#EgressRule" rel="noreferrer">EgressRule</a> defined. <code>EgressRules</code> define the routing tables for the external services.</p>
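<p>For illustration, an <code>EgressRule</code> in the Istio 0.x-era API looked roughly like the sketch below; treat the exact <code>apiVersion</code> and field names as version-dependent and check the reference linked above for your release (the host and rule name here are placeholders):</p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: external-httpbin        # arbitrary rule name
spec:
  destination:
    service: httpbin.org        # external host the sidecar is allowed to route to
  ports:
  - port: 443
    protocol: https
</code></pre>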
<p>I created a gossip cluster in AWS using <strong>kops</strong>, which means my cluster name ends with <strong>k8s.local</strong> (clusters.test.k8s.local, to be exact). All was working fine until I tried to create a deployment where the pod name needs to have a domain at the end (api-manager.iot.test.co.nz).</p> <p>I know that it is not allowed to create pods whose names do not meet the requirements of this regex:</p> <pre><code>'[a-z]([-a-z0-9]*[a-z0-9])?' </code></pre> <p>Is there a way I can do that?</p> <p>I tried adding hostname under template-&gt;spec but it has the same restrictions (the regex).</p> <p>This is my deployment YAML file:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: labels: name: api-manager spec: replicas: 1 template: metadata: labels: app: api-manager spec: volumes: - name: api-manager-efs persistentVolumeClaim: claimName: pvc-apim containers: - image: api-manager:2.1.0 name: api-manager.iot.test.co.nz ports: - name: porta containerPort: 9763 - name: portb containerPort: 9443 env: - name: SLEEP value: "30" volumeMounts: - name: api-manager-efs mountPath: /home/wso2carbon/wso2am-2.1.0/repository </code></pre>
<p>No, you cannot create that kind of <code>labels</code> by design.</p> <p>From the design <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/identifiers.md" rel="nofollow noreferrer">document</a>:</p> <blockquote> <p><a href="http://www.ietf.org/rfc/rfc1035.txt" rel="nofollow noreferrer">rfc1035</a>/<a href="http://www.ietf.org/rfc/rfc1123.txt" rel="nofollow noreferrer">rfc1123</a> <code>label</code> (DNS_LABEL): An alphanumeric (a-z, and 0-9) string, with a maximum length of 63 characters, with the '-' character allowed anywhere except the first or last character, suitable for use as a hostname or segment in a domain name.</p> </blockquote> <p>Here is the current <a href="https://github.com/kubernetes/kubernetes/blob/HEAD/staging/src/k8s.io/apimachinery/pkg/util/validation/validation.go#L145" rel="nofollow noreferrer">implementation</a>:</p> <pre><code>const dns1035LabelFmt string = "[a-z]([-a-z0-9]*[a-z0-9])?" const dns1035LabelErrMsg string = "a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character" const DNS1035LabelMaxLength int = 63 var dns1035LabelRegexp = regexp.MustCompile("^" + dns1035LabelFmt + "$") // IsDNS1035Label tests for a string that conforms to the definition of a label in // DNS (RFC 1035). func IsDNS1035Label(value string) []string { var errs []string if len(value) &gt; DNS1035LabelMaxLength { errs = append(errs, MaxLenError(DNS1035LabelMaxLength)) } if !dns1035LabelRegexp.MatchString(value) { errs = append(errs, RegexError(dns1035LabelErrMsg, dns1035LabelFmt, "my-name", "abc-123")) } return errs } </code></pre>
<p>I have multiple teams, each team has a bunch of applications, and each application has different environments (DEV, STAGE, PROD). I am looking for a way to limit access using namespaces.</p> <p>Say each team will have their own namespace.</p> <p>I don't want an application deployed in namespace A to access ConfigMaps from namespace B. But I do want applications deployed in namespace A to access REST applications deployed in namespace B (either through ingress or services).</p> <p>Also, dev applications should not have visibility of STAGE applications.</p> <p>However, there are a few applications which will serve requests for DEV &amp; STAGE based on a tenantId in the request header.</p> <p>What is the recommendation for creating namespaces here?</p>
<p>Namespaces automatically separate resources in the cluster. So if you create namespaces A and B, a ConfigMap created in namespace A will automatically be unavailable in namespace B.</p> <p>If you want to restrict what users can do in your cluster, for example to let developers create resources in development but only view things in staging or production, I would take a look at using <a href="https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions" rel="nofollow noreferrer">RBAC</a>. </p> <p>If you want to separate access to and from applications on the network layer, I would suggest taking a look at <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policies</a>. For that you need a networking solution that supports them, for example <a href="https://docs.projectcalico.org/v3.1/getting-started/kubernetes/" rel="nofollow noreferrer">Project Calico</a>.</p>
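<p>As a sketch of the NetworkPolicy idea (namespace names and labels below are hypothetical, and it assumes you label your namespaces, e.g. <code>team=a</code>): allow pods in team B's namespace to receive traffic only from team A's namespace.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-team-a       # hypothetical policy name
  namespace: team-b             # applies to pods in team B's namespace
spec:
  podSelector: {}               # all pods in team-b
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: a               # assumes the team-a namespace carries this label
</code></pre>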