<p>I have a cronjob that runs every 10 minutes. So every 10 minutes, a new pod is created. After a day, I have a lot of completed pods (not jobs, just one cronjob exists). Is there way to automatically get rid of them?</p>
<p>That's a job for labels.</p> <p>Use them on your <code>CronJob</code> and delete completed pods using a <code>selector</code> (<code>-l</code> flag).</p> <p>For example:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: my-cron spec: schedule: "*/1 * * * *" jobTemplate: spec: template: metadata: labels: app: periodic-batch-job is-cron: "true" spec: containers: - name: cron image: your_image imagePullPolicy: IfNotPresent restartPolicy: OnFailure </code></pre> <p>Delete all cron-labeled pods with:</p> <pre><code>kubectl delete pod -l is-cron </code></pre>
<p>I'm new to Kubernetes and Rancher, but have a cluster set up and a workload deployed. I'm looking at setting up an ingress, but am confused by what my DNS should look like.</p> <p>I'll keep it simple: I have a domain (example.com) and I want to be able to configure the DNS so that it's routed through to the correct IP in my 3 node cluster, then to the right ingress and load balancer, eventually to the workload.</p> <p>I'm not interested in this xip.io stuff as I need something real-world, not a sandbox, and there's no documentation on the Rancher site that points to what I should do.</p> <p>Should I run my own DNS via Kubernetes? I'm using DigitalOcean droplets and haven't found any way to get Rancher to set up DNS records for me (as it purports to do for other cloud providers).</p> <p>It's really frustrating as it's basically the first and only thing you need to do... "expose an application to the outside world", and this is somehow not trivial.</p> <p>Would love any help, or for someone to explain to me how fundamentally dumb I am and what I'm missing!</p> <p>Thanks.</p>
<p>You aren't dumb, man. This stuff gets complicated. Are you using AWS or GKE? Most methods of deploying kubernetes will deploy an internal DNS resolver by default for intra-cluster communication. These URLs are only useful inside the cluster. They take the form of <code>&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local</code> and have no meaning to the outside world.</p> <p>However, exposing a service to the outside world is a different story. On AWS you may do this by setting the service's ServiceType to LoadBalancer, where kubernetes will automatically spin up an AWS LoadBalancer, and along with it a public domain name, and configure it to point to the service inside the cluster. From here, you can then configure any domain name that you own to point to that loadbalancer.</p>
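<p>To make that last step concrete, here is a minimal sketch of a <code>LoadBalancer</code> Service; the name, selector and ports below are placeholders rather than values taken from the question:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  type: LoadBalancer      # asks the cloud provider to provision an external load balancer
  selector:
    app: my-app           # must match the labels on your pods
  ports:
  - port: 80              # port exposed by the load balancer
    targetPort: 8080      # port the container listens on
</code></pre> <p>Once the provider assigns an external IP or hostname (visible via <code>kubectl get svc my-app</code>), you point an A or CNAME record for your domain at it.</p>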
<p>I'm using k8s 1.11.2 to build my service, the YAML file looks like this:</p> <p><strong>Deployment</strong></p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: nginx-test namespace: default labels: - type: test spec: replicas: 1 selector: matchLabels: - type: test template: metadata: labels: - type: test spec: containers: - image: nginx:1.14 name: filebeat ports: - containerPort: 80 </code></pre> <p><strong>Service</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: - type:test spec: type: ExternalName externalName: my.nginx.com externalIPs: - 192.168.125.123 clusterIP: 10.240.20.1 ports: - port: 80 name: tcp selector: - type: test </code></pre> <hr> <p>and I get this error:</p> <blockquote> <p>error validating data: [ValidationError(Service.metadata.labels): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.labels: got "array", expected "map", ValidationError(Service.spec.selector): invalid type for io.k8s.api.core.v1.ServiceSpec.selector: got "array", expected "map"];</p> </blockquote> <p>I am sure the format of my YAML file is right, because I used the website <a href="http://www.yamllint.com/" rel="noreferrer">http://www.yamllint.com/</a> to validate it.</p> <p>Why am I getting this error?</p>
<p><a href="http://www.yamllint.com/" rel="noreferrer">yamllint.com</a> is a dubious service because it does not tell us which YAML version it is checking against and which implementation it is using. Avoid it.</p> <p>More importantly, while your input may be valid YAML, this does not mean that it is a valid input for kubernetes. YAML allows you to create any kind of structure, while kubernetes expects a certain structure from you. This is what the error is telling you:</p> <blockquote> <p>got "array", expected "map"</p> </blockquote> <p>This means that at a place where kubernetes expects a <em>mapping</em> you provided an array (<em>sequence</em> in proper YAML terms). The error message also gives you the path where this problem occurs:</p> <blockquote> <p>ValidationError(Service.metadata.labels):</p> </blockquote> <p>A quick check on metadata labels in kubernetes reveals <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">this documentation</a>, which states that labels need to be mappings, not arrays.</p> <p>So in your input, the last line here is the culprit:</p> <pre><code>metadata: name: nginx-test namespace: default labels: - type: test </code></pre> <p><code>-</code> is a YAML indicator for a sequence item, creating a sequence as value for the key <code>labels:</code>. Dropping it will make it a mapping instead:</p> <pre><code>metadata: name: nginx-test namespace: default labels: type: test </code></pre>
<p>I followed <a href="https://cloud.google.com/container-registry/docs/pushing-and-pulling" rel="nofollow noreferrer">this guide</a> to push a docker image to my Google Cloud Container Registry. However </p> <pre><code>docker push eu.grc.io/&lt;project-id&gt;/&lt;image&gt;:&lt;tag&gt; </code></pre> <p>only returns</p> <pre><code>The push refers to repository [eu.grc.io/&lt;project-id&gt;/&lt;image&gt;] Get https://eu.grc.io/v2/: dial tcp xx.xx.xxx.xx:443: connect: connection refused </code></pre> <p>Has anyone encountered this error before or knows how to get it working? I'm seriously out of ideas, there may be something wrong with my network configuration but I'm not familiar enough with that side of Ubuntu to know what I should be looking for. For context, I've set up a kubernetes cluster for hosting jupyterhub following <a href="https://zero-to-jupyterhub-with-kubernetes.readthedocs.io/en/latest/" rel="nofollow noreferrer">this guide</a>.</p> <p>I can run the image locally, <code>gcloud info</code> returns all the correct info and <code>gcloud auth configure-docker</code> returns <code>credential helpers already registered correctly.</code></p> <p>Additional Info:</p> <pre><code>Ubuntu 16.04 gcloud version Google Cloud SDK 214.0.0 bq 2.0.34 core 2018.08.24 gsutil 4.33 docker version Version: 18.06.1-ce API version: 1.38 Go version: go1.10.3 </code></pre>
<p>There is a typo in your "eu.grc.io". It should be "eu.gcr.io".</p>
<p>I have an app that logs to the file my_log/1.log, and I use Filebeat to collect the logs from that file.</p> <p>Now I use k8s to deploy it onto some nodes, and use hostPath type Volumes to mount the my_log file to the local file system, /home/my_log. Suddenly I found a subtle situation:</p> <p>What will happen if more than one pod is deployed on this machine, and they try to write the log at the same time?</p> <p>I know that in a normal situation, when multiple processes try to write to a file at the same time, the system will lock the file, so these processes can write one by one. BUT I am not sure whether different k8s pods share the same lock space; if they don't, it will be a disaster. I tried to test this, and it seems different pods still share the file lock; the log file looks normal.</p>
<blockquote> <p>how does Kubernetes deal with file write locking across multiple pods when hostPath Volumes are concerned</p> </blockquote> <p>It doesn't.</p> <p>The operating system and the file system handle that. As an example let's take <code>syslog</code>. It handles it by opening a socket, setting the socket to server mode, opening a log file in write mode, being notified of packets, parsing the message and finally writing it to the file.</p> <p>Logs can also be cached, and the process can be limited to 1 thread, so you should not have many pods writing to one file. That could lead to issues like missing logs or lines being cut.</p> <p>Your application should handle the file locking needed to push logs. Also, if you want to have many pods writing logs, you should have a separate log file for each pod, as sketched below.</p>
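<p>For the one-file-per-pod route, here is a minimal sketch of how a pod could derive a unique log path from its own name via the downward API (the image, command and paths are placeholders; <code>/home/my_log</code> is the host directory from the question):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: logger
spec:
  containers:
  - name: app
    image: busybox                 # placeholder image
    command: ["sh", "-c", "while true; do date &gt;&gt; /var/my_log/${POD_NAME}.log; sleep 5; done"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name   # injects the pod's own name
    volumeMounts:
    - name: logs
      mountPath: /var/my_log
  volumes:
  - name: logs
    hostPath:
      path: /home/my_log             # shared host directory
</code></pre> <p>Each pod then writes to its own file under the shared host path, and Filebeat can pick up <code>/home/my_log/*.log</code>.</p>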
<p>I'm trying to access a .NET Web API which I docker-ized and deployed to a Kubernetes cluster on Microsoft Azure.</p> <p>The application works fine on the local docker machine. The cluster is running, my deployment was correct and the pods were created. Everything I check is fine, but I cannot access my application through the external cluster IP (Load Balancer). This is my YAML deployment file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ohmioapi-deployment spec: selector: matchLabels: app: ohmioapi replicas: 1 template: metadata: labels: app: ohmioapi spec: containers: - name: ohmioapi image: ohmiocontainers.azurecr.io/ohmioapi:latest imagePullPolicy: Always ports: - containerPort: 15200 imagePullSecrets: - name: acr-auth --- apiVersion: v1 kind: Service metadata: name: ohmioapi labels: app: ohmioapi spec: selector: app: ohmioapi ports: - port: 15200 nodePort: 30200 protocol: TCP type: LoadBalancer </code></pre> <p>Can anyone give a hint of where to start looking? Thanks!</p>
<p>You can use the command <code>kubectl get service</code> to get all the information about your services and check your service <code>ohmioapi</code>; the result will look like this:</p> <p><a href="https://i.stack.imgur.com/C4gC1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C4gC1.png" alt="enter image description here"></a></p> <p>Or you can use the command <code>kubectl describe service serviceName</code> to get more details about your service; the result will look like this:</p> <p><a href="https://i.stack.imgur.com/sGKYH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sGKYH.png" alt="enter image description here"></a></p> <p>You can check the port mapping in the load balancer and access it from the browser via the external IP and port.</p> <p>You can also use the command <code>kubectl edit service serviceName</code> to edit and check the config file created by Kubernetes; the result will look like this:</p> <p><a href="https://i.stack.imgur.com/1W3yT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1W3yT.png" alt="enter image description here"></a></p>
<p>I have a huge pipeline with different developer groups with several permission levels (<strong>for using the Jenkins Kubernetes Plugin</strong>).</p> <p>For example, the <strong>QA</strong> teams and <strong>Developer</strong> teams have different service accounts in the Kubernetes cluster.</p> <p>So I need to create some connections to the Kubernetes cluster, but with every connection I change the cluster context with a namespace name.</p> <p>I want to use multiple namespaces in a Kubernetes context. This is my own Kubernetes context file:</p> <pre><code>- context: cluster: minikube namespace: user3 user: minikube </code></pre> <p>How can I handle this problem with a Kubernetes API call or in YAML files? This is my example service account YAML file:</p> <pre><code> apiVersion: v1 kind: ServiceAccount metadata: name: dev </code></pre> <hr> <pre><code> kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: dev rules: - apiGroups: [""] resources: ["pods"] verbs: ["create","delete","get","list","patch","update","watch"] - apiGroups: [""] resources: ["pods/exec"] verbs: ["create","delete","get","list","patch","update","watch"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get","list","watch"] - apiGroups: [""] resources: ["secrets"] verbs: ["get"] </code></pre> <hr> <pre><code> apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: dev roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: dev subjects: - kind: ServiceAccount name: dev </code></pre>
<p>If you want one Jenkins instance to talk to the Kubernetes API with different service accounts, you need to create multiple Jenkins "clouds" in the configuration, each with different credentials. Then in your pipeline you set the "cloud" option to choose the right one.</p>
<p>After some reading, it seems there is no sustainable solution for <strong>auto-scaling</strong> Redis on Kubernetes without adding a controller like <a href="https://github.com/adenda/maestro/wiki/Kubernetes-Redis-controller-for-autoscaling-a-Redis-cluster" rel="nofollow noreferrer">Maestro</a>. Unfortunately the project seems a bit dead.</p> <p>What are some alternatives for autoscaling Redis?</p> <p>Edit: Redis is a stateful app.</p>
<p>If you want to autoscale anything on Kubernetes, it requires some type of controller. For general autoscaling, the community is rallying around the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>. By default, you configure it to scale based on CPU utilization.</p> <p>If you want to scale based on metrics other than CPU utilization and you're using the <a href="https://github.com/helm/charts/tree/master/stable/redis" rel="nofollow noreferrer">Redis helm chart</a>, you can easily configure it to run a Prometheus metric sidecar and can set the autoscaler to scale based on one of those values.</p>
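<p>As a sketch of what the CPU-based variant looks like (the deployment name and the numbers below are placeholders):</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # the workload to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out above 70% average CPU
</code></pre> <p>Keep in mind this scales stateless replicas; scaling a stateful Redis topology additionally needs something cluster-aware, which is why the controller-based approaches exist.</p>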
<p>I'm running my workloads on the AWS EKS service in the cloud. I can see that there is no default Ingress Controller available (as there is for GKE); we have to pick a 3rd-party one.</p> <p>I decided to go with <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">Traefik</a>. After following the documentation and other resources (like <a href="https://github.com/pahud/amazon-eks-workshop/tree/master/03-creating-services/ingress/traefik-ingress" rel="nofollow noreferrer">this</a>), I feel that using Traefik as the Ingress Controller does not create a LoadBalancer in the cloud automatically. We have to set everything up manually.</p> <p>How can I use Traefik as the Kubernetes Ingress the same way other Ingress Controllers work (i.e. Nginx etc.) that create a LoadBalancer, register services etc.? Any working example would be appreciated.</p>
<p>Have you tried with annotations like in this example?</p> <pre><code>apiVersion: v1 kind: Service metadata: name: traefik-proxy annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNTID:certificate/CERT-ID" service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" spec: type: LoadBalancer selector: app: traefik-proxy tier: proxy ports: - port: 443 targetPort: 80 </code></pre>
<p>I have the following pods <code>hello-abc</code> and <code>hello-def</code>.</p> <p>And I want to send data from <code>hello-abc</code> to <code>hello-def</code>.</p> <p>How would pod <code>hello-abc</code> know the IP address of <code>hello-def</code>?</p> <p>And I want to do this programmatically.</p> <p>What's the easiest way for <code>hello-abc</code> to find where <code>hello-def</code>?</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: hello-abc-deployment spec: replicas: 1 template: metadata: labels: app: hello-abc spec: containers: - name: hello-abc image: hello-abc:v0.0.1 imagePullPolicy: Always args: ["/hello-abc"] ports: - containerPort: 5000 --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: hello-def-deployment spec: replicas: 1 template: metadata: labels: app: hello-def spec: containers: - name: hello-def image: hello-def:v0.0.1 imagePullPolicy: Always args: ["/hello-def"] ports: - containerPort: 5001 --- apiVersion: v1 kind: Service metadata: name: hello-abc-service spec: ports: - port: 80 targetPort: 5000 protocol: TCP selector: app: hello-abc type: NodePort --- apiVersion: v1 kind: Service metadata: name: hello-def-service spec: ports: - port: 80 targetPort: 5001 protocol: TCP selector: app: hello-def type: NodePort </code></pre>
<p><strong>Preface</strong></p> <p>Since you have defined a service that routes to each deployment, if you have deployed both services and deployments into the same namespace, you can in many modern kubernetes clusters take advantage of kube-dns and simply refer to the service by name. </p> <p>Unfortunately if <code>kube-dns</code> is not configured in your cluster (although it is unlikely) you cannot refer to it by name.</p> <p>You can read more about DNS records for services <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="noreferrer">here</a></p> <p>In addition Kubernetes features "Service Discovery" Which exposes the ports and ips of your services into any container which is deployed into the same namespace.</p> <p><strong>Solution</strong></p> <p>This means, to reach hello-def you can do so like this </p> <p><code>curl http://hello-def-service:${HELLO_DEF_SERVICE_PORT}</code> </p> <p>based on Service Discovery <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables</a></p> <p><strong>Caveat</strong>: Its very possible that if the Service port changes, only pods that are created after the change in the same namespace will receive the new environment variables.</p> <p><strong>External Access</strong></p> <p>In addition, you can also reach this your service externally since you are using the NodePort feature, as long as your NodePort range is accessible from outside.</p> <p>This would require you to access your service by node-ip:nodePort</p> <p>You can find out the NodePort which was randomly assigned to your service with <code>kubectl describe svc/hello-def-service</code></p> <p><strong>Ingress</strong></p> <p>To reach your service from outside you should implement an ingress service such as nginx-ingress</p> <p><a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a> <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">https://github.com/kubernetes/ingress-nginx</a></p> <p><strong>Sidecar</strong></p> <p>If your 2 services are tightly coupled, you can include both in the same pod using the Kubernetes Sidecar feature. In this case, both containers in the pod would share the same virtual network adapter and accessible via <code>localhost:$port</code></p> <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods</a></p> <hr> <p><strong>Service Discovery</strong></p> <blockquote> <p>When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It supports both Docker links compatible variables (see makeLinkVariables) and simpler {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores.</p> </blockquote> <p>Read more about service discovery here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables</a></p>
<p>Is it possible to directly modify a file mounted from a <code>configMap</code>? We have an application that reads a configuration file of <code>configMap</code> type; the application should be able to edit the file, and the changes should be persisted when the <code>configMap</code> is shared with other pods and when the pod restarts.</p> <p>If <code>configMap</code> is not meant for this, then should we rely on <code>consul</code> to save the configuration?</p>
<p>Yes, a configmap is <a href="https://github.com/kubernetes/kubernetes/issues/62099" rel="noreferrer">not intended to be writeable</a>. If you're interacting with files from a configmap then you could instead put the files in a writeable volume and <a href="https://stackoverflow.com/questions/45681206/what-would-be-an-ideal-way-to-share-writable-volume-across-containers-for-a-web">mount the volume</a> (see the sketch below). Or you could, as you suggest, use centralised configuration like consul. Given that the app is dynamically writing to this data, you could consider it state rather than configuration. Then it could be stored in a database. Another option could be a distributed cache <a href="https://redislabs.com/ebook/part-2-core-concepts/chapter-5-using-redis-for-application-support/5-4-service-discovery-and-configuration/5-4-1-using-redis-to-store-configuration-information/" rel="noreferrer">such as redis</a> or hazelcast.</p>
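<p>A minimal sketch of the writeable-volume option, here with an <code>emptyDir</code> volume and hypothetical names and paths (swap in a PersistentVolumeClaim if the data must survive pod restarts or be shared across pods):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:latest        # placeholder image
    volumeMounts:
    - name: config-data
      mountPath: /etc/my-app    # the app reads and writes its config here
  volumes:
  - name: config-data
    emptyDir: {}                # writeable, but lost when the pod is deleted
</code></pre>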
<p>I have deployed a k8s cluster using "kubeadm init" successfully before. I re-install the k8s when meet a problem. Now I re-deploy the k8s cluster failed!</p> <p>linux os</p> <pre><code>uname -a Linux kube-master 4.15.0-33-generic #36-Ubuntu SMP Wed Aug 15 16:00:05 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux </code></pre> <p>k8s env</p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} The connection to the server localhost:8080 was refused - did you specify the right host or port? kubeadm version kubeadm version: &amp;version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:14:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>My deploy command:</p> <pre><code>sudo kubeadm -v 10 init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.100 --kubernetes-version v1.11.2 [init] this might take a minute or longer if the control plane images have to be pulled I0828 17:12:47.780302 28675 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.2 (linux/amd64) kubernetes/bb9ffb1" 'https://192.168.0.100:6443/healthz?timeout=32s' I0828 17:12:47.780492 28675 round_trippers.go:405] GET https://192.168.0.100:6443/healthz?timeout=32s in 0 milliseconds I0828 17:12:47.780500 28675 round_trippers.go:411] Response Headers: I0828 17:12:48.280824 28675 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.2 (linux/amd64) kubernetes/bb9ffb1" 'https://192.168.0.100:6443/healthz?timeout=32s' I0828 17:12:48.281238 28675 round_trippers.go:405] GET https://192.168.0.100:6443/healthz?timeout=32s in 0 milliseconds I0828 17:12:48.281283 28675 round_trippers.go:411] Response Headers: I0828 17:12:48.780836 28675 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.2 (linux/amd64) kubernetes/bb9ffb1" 'https://192.168.0.100:6443/healthz?timeout=32s' I0828 17:12:48.781171 28675 round_trippers.go:405] GET https://192.168.0.100:6443/healthz?timeout=32s in 0 milliseconds I0828 17:12:48.781199 28675 round_trippers.go:411] Response Headers: I0828 17:12:49.281440 28675 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.2 (linux/amd64) kubernetes/bb9ffb1" 'https://192.168.0.100:6443/healthz?timeout=32s' </code></pre> <p>Who can help me?</p>
<p>After <code>kubeadm init</code> you should copy <em>admin.conf</em> file into the home directory of the user who will use the <code>kubectl</code> command and set the config path into the <code>KUBECONFIG</code> system variable:</p> <pre><code>sudo cp /etc/kubernetes/admin.conf $HOME/ sudo chown $(id -u):$(id -g) $HOME/admin.conf export KUBECONFIG=$HOME/admin.conf </code></pre> <p>It is a configuration file from where the <code>kubectl</code> reads details required to make a connection into your K8s cluster.</p>
<p>I am learning Kubernetes with Docker to launch a simple Python web application. I am new to all the above technologies.</p> <p>Below is the approach I was planning on:</p> <ol> <li>Install Kubernetes.</li> <li>Have a cluster up and running locally.</li> <li>Install Docker.</li> <li>Create a Python application.</li> </ol> <p>I successfully installed Kubectl on my local machine using Chocolatey, following the instructions from <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a>.</p> <p>I created a <code>.\kube</code> directory in the C:\Users directory. But I do not see any config files, neither in the location where Kubernetes has been installed, <code>C:\ProgramData\Chocolatey\lib\kubernetes-cli\tools\kubernetes\client\bin</code>, nor in the <code>C:\Users\User1\.kube</code> directory.</p> <p>When I run the command <strong>".\kubectl cluster-info"</strong> in powershell against C:\ProgramData\Chocolatey\lib\kubernetes-cli\tools\kubernetes\client\bin<br> I get the <strong>"Kubernetes master is running at <a href="http://localhost:8080" rel="nofollow noreferrer">http://localhost:8080</a>"</strong> response. But when I run the same command against C:\Users\User1\.kube I get</p> <blockquote> <p>.\kubectl : The term '.\kubectl' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.</p> </blockquote> <p>Am I doing it the wrong way or missing anything here?</p> <p>This blog <a href="https://blog.tekspace.io/install-kubernetes-cli-on-windows-10/" rel="nofollow noreferrer">https://blog.tekspace.io/install-kubernetes-cli-on-windows-10/</a> says <strong>"copy config file from Kubernetes master node to .kube folder"</strong>, but I don't see any config file!</p> <p>Appreciate your help.</p>
<p>The blog you refer to illustrates how to configure the CLI (Command Line Interface) on your Win10 computer so that you can connect to a Kubernetes cluster.</p> <p>The cluster is running on other machines. In the following picture you see a simplified schema.</p> <p><a href="https://i.stack.imgur.com/hOw0v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hOw0v.png" alt="enter image description here"></a> You connect to the master through a CLI (kubectl); the master receives your commands and acts on the nodes.</p> <p>I suggest copying kubectl.exe into the folder <code>C:\WINDOWS\system32</code> (which is in the <code>PATH</code> variable) so that you can type kubectl from whatever folder you are in.</p> <p>The config file the blog speaks about is on the Kubernetes master. It's not on your local machine. If you manage the machine on which the kube master runs, you need to connect (probably via <code>SSH</code>) and get the file (in <code>/etc/kubernetes/</code> - <code>admin.conf</code> or <code>kubernetes.conf</code>, it depends on the installation; I followed <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">this</a>).</p>
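<p>As a hedged example, assuming an scp/SSH client is available on Windows, the master is reachable at a placeholder address, and the file is <code>/etc/kubernetes/admin.conf</code>, you could copy it into the <code>.kube</code> folder you created and point <code>kubectl</code> at it from PowerShell:</p> <pre><code>scp user@master-ip:/etc/kubernetes/admin.conf C:\Users\User1\.kube\config
$env:KUBECONFIG = "C:\Users\User1\.kube\config"
kubectl cluster-info
</code></pre> <p>By default <code>kubectl</code> also looks for a file named <code>config</code> in <code>%USERPROFILE%\.kube</code>, so setting <code>KUBECONFIG</code> explicitly is optional once the file is in that location.</p>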
<p>I am trying to deploy a kubeless function using serverless. I created a kubernetes cluster using minikube and I am trying to follow this <a href="https://medium.com/bitnami-perspectives/deploying-a-kubeless-function-using-serverless-templates-2d03f49b70e2" rel="nofollow noreferrer">link</a> following which </p> <ol> <li>I installed serverless</li> <li>created a template kubeless-nodejs</li> <li>installed plugins with <code>npm install</code></li> <li>and tried to deploy using <code>serverless deploy -v</code></li> </ol> <p>but I am getting an error</p> <pre><code>/home/vin/serverless/kube/services/email/node_modules/serverless-kubeless/lib/config.js:56 return JSON.parse(this.configMag.data[key]); ^ TypeError: Cannot read property 'runtime-images' of undefined at Config.get (/home/vin/serverless/kube/services/email/node_modules/serverless-kubeless/lib/config.js:56:44) </code></pre> <p>Please point me in the right direction</p>
<p>I found the issue: I had to deploy kubeless to the Kubernetes cluster. To do that, I ran:</p> <pre><code>$ export RELEASE=$(curl -s https://api.github.com/repos/kubeless/kubeless/releases/latest | grep tag_name | cut -d '"' -f 4) $ kubectl create ns kubeless $ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml </code></pre> <p>as described in the kubeless <a href="https://kubeless.io/docs/quick-start/" rel="nofollow noreferrer">quick start guide</a>.</p>
<p>I have a strange issue where I am trying to apply a PodAntiAffinity to make sure that no 2 pods of the specific deploymentConfig ever end up on the same node:</p> <p>I attempt to edit the dc with:</p> <pre><code>spec: replicas: 1 selector: app: server-config deploymentconfig: server-config strategy: activeDeadlineSeconds: 21600 resources: {} rollingParams: intervalSeconds: 1 maxSurge: 25% maxUnavailable: 25% timeoutSeconds: 600 updatePeriodSeconds: 1 type: Rolling template: metadata: creationTimestamp: null labels: app: server-config deploymentconfig: server-config spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - server-config topologyKey: "kubernetes.io/hostname" </code></pre> <p>but on saving that, I get a :</p> <pre><code>"/tmp/oc-edit-34z56.yaml" 106L, 3001C written deploymentconfig "server-config" skipped </code></pre> <p>and the changes dont stick. My openshift/Kubernetes versions are:</p> <pre><code>[root@master1 ~]# oc version oc v1.5.1 kubernetes v1.5.2+43a9be4 features: Basic-Auth GSSAPI Kerberos SPNEGO </code></pre> <p>Thanks in advance.</p>
<p>This seems to work, the syntax is wildly different and the "scheduler.alpha.kubernetes.io/affinity" annotation needs to be added to work:</p> <pre><code>spec: replicas: 1 selector: app: server-config deploymentconfig: server-config strategy: activeDeadlineSeconds: 21600 resources: {} rollingParams: intervalSeconds: 1 maxSurge: 25% maxUnavailable: 25% timeoutSeconds: 600 updatePeriodSeconds: 1 type: Rolling template: metadata: annotations: scheduler.alpha.kubernetes.io/affinity: | { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [{ "labelSelector": { "matchExpressions": [{ "key": "app", "operator": "In", "values":["server-config"] }] }, "topologyKey": "kubernetes.io/hostname" }] } } </code></pre> <p>Working as intended and spreading out properly between nodes:</p> <pre><code>[root@master1 ~]# oc get pods -o wide |grep server-config server-config-22-4ktvf 1/1 Running 0 3h 10.1.1.73 10.0.4.101 server-config-22-fz31j 1/1 Running 0 3h 10.1.0.3 10.0.4.100 server-config-22-mrw09 1/1 Running 0 3h 10.1.2.64 10.0.4.102 </code></pre>
<p>In a container running with host networking option it is possible to use a host network interface and its IP from the container and contact external network <em>from</em> this interface and IP. So if a host has several IPs configured, the container can choose which one it uses.</p> <p>Can I have a similar setup with Kubernetes and let a container use an host IP ?</p> <p>NB: I need the process to contact an external service <em>from</em> specific IPs, I dont necessarily need those IPs to be assigned to a container from an external view.</p>
<p>As I wrote in <a href="https://stackoverflow.com/questions/51943950/egress-ip-adress-selection/51944171">Egress IP adress selection</a> :</p> <blockquote> <p>One of the things that could help you solve it is Istio Egress Gateway so I suggest you look into it.</p> <p>Otherwise, it is still dependent on particular platform and way to deploy your cluster. For example on AWS you can make sure your egress traffic always leaves from predefined, known set of IPs by using instances with Elastic IPs assigned to forward your traffic (be it regular EC2s or AWS NAT Gateways). Even with Egress above, you need some way to define a fixed IP for this, so AWS ElasticIP (or equivalent) is a must.</p> </blockquote>
<p>The following deployment file is working if I'm uploading it from my local machine.</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: api namespace: app spec: replicas: 2 selector: matchLabels: run: api template: metadata: labels: run: api spec: containers: - name: api image: gcr.io/myproject/api:1535462260754 ports: - containerPort: 8080 readinessProbe: httpGet: path: /_ah/health port: 8080 initialDelaySeconds: 10 periodSeconds: 5 </code></pre> <p>The same one is on remote Compute Engine machine which running Jenkins. On this machine, with ssh I'm also able to apply this config. Under the Jenkins shell execute it's always throws </p> <pre><code>error: unable to recognize "./dist/cluster/api.deployment.yaml": no matches for kind "Deployment" in version "apps/v1" </code></pre> <p>I tried to change <code>apiVersion</code> to <code>apps/v1beta1</code> and to <code>extensions/v1beta1</code> as well. Don't know what to try else.</p> <p><strong>Update 1</strong></p> <p>kubectl version on Compute Engine:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff0 88eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Pla tform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.5", GitCommit:"9b635efce81582e1da13b3 5a7aa539c0ccb32987", GitTreeState:"clean", BuildDate:"2018-08-02T23:42:40Z", GoVersion:"go1.9.3b4", Compiler:"gc ", Platform:"linux/amd64"} </code></pre> <p><strong>Update 2</strong></p> <p>Run inside Jenkins job shown this.</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} Error from server (Forbidden): &lt;html&gt;&lt;head&gt;&lt;meta http-equiv='refresh' content='1;url=/securityRealm/commenceLogin?from=%2Fversion%3Ftimeout%3D32s'/&gt;&lt;script&gt;window.location.replace('/securityRealm/commenceLogin?from=%2Fversion%3Ftimeout%3D32s');&lt;/script&gt;&lt;/head&gt;&lt;body style='background-color:white; color:white;'&gt; Authentication required &lt;!-- You are authenticated as: anonymous Groups that you are in: Permission you need to have (but didn't): hudson.model.Hudson.Read ... which is implied by: hudson.security.Permission.GenericRead ... which is implied by: hudson.model.Hudson.Administer --&gt; &lt;/body&gt;&lt;/html&gt; </code></pre>
<p>Thanks to <strong>@csanchez</strong> I figured out that I needed to get credentials as the jenkins user. For that I just ran this command:</p> <pre><code>gcloud container clusters get-credentials cluster-1 --zone=my-cluster-zone --project myproject </code></pre>
<p>We have hundreds of deployments, and in the config we have imagePullPolicy set to “ifnotpresent” for most of them and to “always” for a few. Now I want to modify all deployments which have <strong>ifnotpresent</strong> to <strong>always</strong>.</p> <p>How can we achieve this at a stroke?</p> <p>Ex:</p> <pre><code>kubectl get deployment -n test -o json | jq ‘.spec.template.spec.contianer[0].imagePullPolicy=“ifnotpresent”| kubectl -n test replace -f - </code></pre> <p>The above command helps to reset it for one particular deployment.</p>
<p>Kubernetes doesn't natively offer mass update capabilities. For that you'd have to use other CLI tools. That being said, for modifying existing resources, you can also use the <code>kubectl patch</code> function.</p> <p>The script below isn't pretty, but will update all deployments in the namespace.</p> <pre><code>kubectl get deployments -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch deployment {} --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]' </code></pre> <p>Note: I used <code>sed</code> to strip the resource type from the name as kubectl doesn't recognize operations performed on resources of type <code>deployment.extensions</code> (and probably others).</p>
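<p>To verify the result afterwards, a quick sketch that prints each deployment's first-container policy:</p> <pre><code>kubectl get deployments -o custom-columns=NAME:.metadata.name,POLICY:.spec.template.spec.containers[0].imagePullPolicy
</code></pre>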
<p>I have a kubernetes cluster with a few nodes set up. I want to make sure that pods are distributed efficiently on the nodes.</p> <p>I'll explain:</p> <p>Let's assume that I have two nodes: <code> Node 1 - 2gb ram Node 2 - 2gb ram </code></p> <p>And I have these pods: <code> Pod 1 - 1gb ram on Node 1 Pod 2 - 100mb ram on Node 1 Pod 3 - 1gb ram on Node 2 Pod 4 - 100mb ram on Node 2 </code></p> <p>Ok now the problem: let's say I want to add a pod with 1gb ram to the cluster. Currently there's no room in any node so kubernetes won't do it (unless I add another node). I wonder if there's a way that kubernetes will see that it can move Pod 3 to node 1 to make room for the new pod?</p> <p>Help</p>
<p>The Kubernetes <a href="https://github.com/kubernetes-incubator/descheduler" rel="noreferrer">descheduler</a> incubator project will eventually be integrated into Kubernetes to accommodate rebalancing. This could be prompted by under/overutilization of node resources as your case suggests or for other reasons, such as changes in node taints or affinities. </p> <p>For your case, you could run the descheduler with the <code>LowNodeUtilization</code> strategy and carefully configured thresholds to have some pods evicted and added back to the pod queue after the new 1gb pod.</p> <p>Another method could use <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="noreferrer">pod priority</a> classes to cause a lower priority pod to be evicted and make room for the new incoming 1gb job. Pod priorities are enabled by default starting in version 1.11. Priorities aren't intended to be a rebalancing mechanism, but I mention it because it is a viable solution for ensuring a higher priority incoming pod can be scheduled. Priorities deprecate the old <a href="https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#rescheduler-guaranteed-scheduling-of-critical-add-ons" rel="noreferrer">rescheduler</a> that will be removed in 1.12.</p> <p><strong>Edit - include sample policy</strong></p> <p>The policy I used to test this is below:</p> <pre><code>apiVersion: "descheduler/v1alpha1" kind: "DeschedulerPolicy" strategies: "LowNodeUtilization": enabled: true params: nodeResourceUtilizationThresholds: thresholds: "memory": 50 targetThresholds: "memory": 51 "pods": 0 </code></pre>
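<p>If you go the pod-priority route mentioned above, here is a minimal sketch of a PriorityClass; the name and value are arbitrary, and the API group may be <code>scheduling.k8s.io/v1</code> on newer clusters:</p> <pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority        # hypothetical name
value: 1000000               # larger value = higher priority
globalDefault: false
description: "For workloads that may preempt lower-priority pods"
</code></pre> <p>The incoming 1gb pod would then set <code>priorityClassName: high-priority</code> in its spec, allowing the scheduler to preempt lower-priority pods when the nodes are full.</p>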
<p>I would like to be able to access and manage a GKE (kubernetes) cluster from a Google Cloud function written in python. I managed to access and retrieve data from the created cluster (endpoint, username, and password at least), however I dont know how to use them with the kubernetes package api.</p> <p>Here are my imports :</p> <pre><code>import google.cloud.container_v1 as container from google.auth import compute_engine from google.cloud.container_v1 import ClusterManagerClient from kubernetes import client, config </code></pre> <p>Here is the code for cluster data :</p> <pre><code>project_id = 'my-gcp-project' zone = 'my-zone' cluster_id = 'my-existing-cluster' credentials = compute_engine.Credentials() gclient: ClusterManagerClient = container.ClusterManagerClient(credentials=credentials) cluster = gclient.get_cluster(project_id,zone,cluster_id) cluster_endpoint = cluster.endpoint print("*** CLUSTER ENDPOINT ***") print(cluster_endpoint) cluster_master_auth = cluster.master_auth print("*** CLUSTER MASTER USERNAME PWD ***") cluster_username = cluster_master_auth.username cluster_password = cluster_master_auth.password print("USERNAME : %s - PASSWORD : %s" % (cluster_username, cluster_password)) </code></pre> <p>I would like to do something like this after that :</p> <pre><code>config.load_kube_config() v1 = client.CoreV1Api() print("Listing pods with their IPs:") ret = v1.list_pod_for_all_namespaces(watch=False) for i in ret.items: print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) </code></pre> <p>However, I can't figure out how to set my endpoint and authentification informations. Can anyone help me please ?</p>
<p>You can use a bearer token rather than using basic authentication:</p> <pre><code>from google.auth import compute_engine from google.cloud.container_v1 import ClusterManagerClient from kubernetes import client def test_gke(request): project_id = &quot;my-gcp-project&quot; zone = &quot;my-zone&quot; cluster_id = &quot;my-existing-cluster&quot; credentials = compute_engine.Credentials() cluster_manager_client = ClusterManagerClient(credentials=credentials) cluster = cluster_manager_client.get_cluster(name=f'projects/{project_id}/locations/{zone}/clusters/{cluster_id}') configuration = client.Configuration() configuration.host = f&quot;https://{cluster.endpoint}:443&quot; configuration.verify_ssl = False configuration.api_key = {&quot;authorization&quot;: &quot;Bearer &quot; + credentials.token} client.Configuration.set_default(configuration) v1 = client.CoreV1Api() print(&quot;Listing pods with their IPs:&quot;) pods = v1.list_pod_for_all_namespaces(watch=False) for i in pods.items: print(&quot;%s\t%s\t%s&quot; % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) </code></pre>
<p>I'm trying to set up a local k8s cluster on <code>minikube</code> with <code>istio</code> installed, and I have an issue with enabling distributed tracing with Jaeger. I have 3 microservices <code>A -&gt; B -&gt; C</code>. I am propagating all the headers that are needed:</p> <pre><code>{"x-request-id", "x-b3-traceid", "x-b3-spanid", "x-b3-parentspanid", "x-b3-sampled", "x-b3-flags", "x-ot-span-context"} </code></pre> <p>But in the Jaeger interface, I can only see the request to service A and I cannot see the request going to service B.</p> <p>I have logged the headers that are sent in the request. Headers from service A:</p> <pre><code>Header - x-request-id: c2804368-2ff0-9d90-a2aa-972537968924 Header - x-b3-traceid: 3a2400b40bbe5ed8 Header - x-b3-spanid: 3a2400b40bbe5ed8 Header - x-b3-parentspanid: Header - x-b3-sampled: 1 Header - x-b3-flags: Header - x-ot-span-context: </code></pre> <p>Headers from service B:</p> <pre><code>Header - x-request-id: c2804368-2ff0-9d90-a2aa-972537968924 Header - x-b3-traceid: 3a2400b40bbe5ed8 Header - x-b3-spanid: 3a2400b40bbe5ed8 Header - x-b3-parentspanid: Header - x-b3-sampled: 1 Header - x-b3-flags: Header - x-ot-span-context: </code></pre> <p>So the <code>x-request-id</code>, <code>x-b3-traceid</code>, <code>x-b3-sampled</code>, and <code>x-b3-spanid</code> match. There are some headers that aren't set. Also, I'm accessing service A via a k8s Service IP of type LoadBalancer, not via ingress. Don't know if this could be the issue.</p> <p>UPD: I have set up an istio gateway so now I'm accessing service <code>A</code> via the istio gateway. However the result is the same: I can see the trace for <code>gateway-&gt;A</code> but no further tracing.</p>
<p>Some web frameworks return an empty string if a non-existent header is queried. I have seen this in Spring Boot and KoaJS.</p> <p>If any of the tracing headers is not sent by Istio, this header logic causes us to send an empty string for those non-existent headers, which breaks tracing.</p> <p>My suggestion is: after getting the header values, filter out the ones whose value is an empty string and propagate only the remaining ones, as in the sketch below.</p>
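<p>As an illustration, a small framework-agnostic Python sketch (the header-access API will differ per framework):</p> <pre><code>TRACE_HEADERS = [
    "x-request-id", "x-b3-traceid", "x-b3-spanid", "x-b3-parentspanid",
    "x-b3-sampled", "x-b3-flags", "x-ot-span-context",
]

def tracing_headers(incoming_headers):
    """Copy only the tracing headers that actually carry a value."""
    return {
        name: incoming_headers.get(name)
        for name in TRACE_HEADERS
        if incoming_headers.get(name)  # drops both missing headers and empty strings
    }
</code></pre>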
<h1>This is my network policy:</h1> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny namespace: openstack spec: podSelector: matchLabels: {} policyTypes: - Egress - Ingress </code></pre> <p>I apply this policy and log in to one pod, and it can still connect to google.com:</p> <pre><code>--2018-08-29 11:36:33-- http://google.com/ Resolving google.com (google.com)... 172.217.4.46, 2607:f8b0:4009:804::200e Connecting to google.com (google.com)|172.217.4.46|:80... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: http://www.google.com/ [following] --2018-08-29 11:36:33-- http://www.google.com/ Resolving www.google.com (www.google.com)... 172.217.4.36, 2607:f8b0:4009:804::2004 Connecting to www.google.com (www.google.com)|172.217.4.36|:80... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/html] Saving to: ‘index.html.85’ index.html.85 [ &lt;=&gt; ] 11.09K --.-KB/s in 0.001s 2018-08-29 11:36:33 (7.77 MB/s) - ‘index.html.85’ saved [11355] </code></pre> <p>Can anyone explain why the egress policy doesn't work? Thanks</p>
<p>Before using <code>NetworkPolicy</code>, you need to install a <code>CNI</code> plugin that supports network policies.</p> <p>I use <code>Weave Net</code>, but you can use some other:</p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/</a></p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/</a></p>
<p>I am a novice to Kubernetes. Recently my Docker registry URL was changed from <code>dockerhub.abc.com</code> to <code>dockerhub.def.com</code>. Is it possible to change this in the properties of a Kubernetes pod so that next time it pulls from the new registry?</p>
<p>If you're using secrets to hold the authorization token for your Docker registry, you can refer to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-in-the-cluster-that-holds-your-authorization-token" rel="nofollow noreferrer">using a private registry</a>.</p> <p>I recommend using secrets. All you need to do is create a new secret, or update the existing one with your new URL, and then reference this secret in your Pod's .yaml.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: &lt;your-private-image&gt; imagePullSecrets: - name: &lt;your-secret-name&gt; </code></pre>
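<p>For completeness, a sketch of creating such a secret for the new registry (the secret name and credentials are placeholders):</p> <pre><code>kubectl create secret docker-registry my-registry-secret \
  --docker-server=dockerhub.def.com \
  --docker-username=&lt;user&gt; \
  --docker-password=&lt;password&gt; \
  --docker-email=&lt;email&gt;
</code></pre> <p>Then reference <code>my-registry-secret</code> under <code>imagePullSecrets</code> and update the <code>image:</code> field of your pods/deployments to point at <code>dockerhub.def.com/...</code>.</p>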
<p><strong>Values.yaml</strong></p> <pre><code>cpulimit: 200m memlimit: 512M </code></pre> <p><strong>configmap.yaml</strong></p> <pre><code>mem_pool_size = {{ ((.Values.memlimit)) mul 0.8 }} --&gt; not working mem_pool_size = {{ .Values.memlimit mul 0.8 }} --&gt; not working mem_pool_size = {{ .Values.memlimit * 0.8 }} --&gt; not working mem_pool_size = {{ .Values.memlimit }} * 0.8 --&gt; not working mem_pool_size = {{ .Values.memlimit }} mul 0.8 --&gt; not working </code></pre> <p>I tried many ways but did not get an exact solution. If the user provides a memlimit value of 512M, I should assign only 80% of the RAM, so the value will be 410M. I am trying to find out whether arithmetic operations are supported in Helm templates. Is there any example for this?</p>
<p>In Helm templates this is done via <a href="https://docs.helm.sh/chart_template_guide/#template-functions-and-pipelines" rel="nofollow noreferrer">pipelines</a>. Some of them are defined by the Go template language and others are part of the <a href="http://masterminds.github.io/sprig/" rel="nofollow noreferrer">Sprig template library</a>.</p> <p>I did not find a complete list of which ones are valid and working in Helm, but I also did not find any Sprig function that did not work as explained in the Sprig documentation.</p> <p>So first, the syntax for pipelines has to be:</p> <pre><code>{{ .Values.memlimit | mul 2 }} </code></pre> <p>However, the math functions only work on int64, and 512M is not an int. So you can either let the user specify the limits in bytes, or chain more pipes to first remove the "M" and then do the calculation:</p> <pre><code>{{ .Values.memlimit | replace "M" "" |mul 2 }}M </code></pre> <p>Since memory can be specified with different units, you may need some regexp:</p> <pre><code>{{ .Values.memlimit |regexFind "[1-9]+" |mul 2 }}{{ .Values.memlimit | regexFind "[a-zA-Z]+" }} </code></pre> <p>But as stated, all Sprig math functions only work on int64, so mul 0.8 will multiply by zero, mul 1.6 will only multiply by 1, and so on.</p> <p>So you probably have to wait for the Sprig functions to also work with floats to achieve a percentage-based calculation, or find some clever trick with the math functions provided by Sprig and the int64 type.</p> <p>Maybe something like what is explained in this answer:</p> <p><a href="https://stackoverflow.com/questions/20788793/c-how-to-calculate-a-percentageperthousands-without-floating-point-precision">C How to calculate a percentage(perthousands) without floating point precision</a></p>
<p>In a container running with host networking option it is possible to use a host network interface and its IP from the container and contact external network <em>from</em> this interface and IP. So if a host has several IPs configured, the container can choose which one it uses.</p> <p>Can I have a similar setup with Kubernetes and let a container use an host IP ?</p> <p>NB: I need the process to contact an external service <em>from</em> specific IPs, I dont necessarily need those IPs to be assigned to a container from an external view.</p>
<p><code>hostNetwork=true</code> in pod specification exposes host network to the pod, and container can access network interfaces:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: lookup spec: hostNetwork: true containers: - name: lookup image: sbusso/lookup_ips:latest ports: - containerPort: 9000 </code></pre> <p>To test it: <code>kubectl port-forward lookup 9000</code> and then go to <a href="http://127.0.0.1:9000/" rel="nofollow noreferrer">http://127.0.0.1:9000/</a> and get network interfaces details:</p> <pre><code>lo - 127.0.0.1/8 - ::1/128 eth0 - 10.0.2.15/24 - fe80::a00:27ff:fea1:6e61/64 eth1 - 192.168.99.101/24 - fe80::a00:27ff:fe77:d179/64 </code></pre> <p>Note this option is not recommended in Kubernetes good practices: <a href="https://kubernetes.io/docs/concepts/configuration/overview/#services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/overview/#services</a></p>
<p>My application is deployed on a Kubernetes Cluster that runs on Google Cloud. I want to fetch logs written by my application using <a href="https://cloud.google.com/logging/docs/reference/v2/rest/v2/logs/list" rel="nofollow noreferrer">Stackdriver's REST APIs for logging</a>.</p> <p>From the above documentation page and <a href="https://googlecloudplatform.github.io/google-cloud-python/latest/logging/usage.html#retrieving-log-entries" rel="nofollow noreferrer">this example</a>, it seems that I can only list logs of a project, organization, billing account or folder.</p> <p>I want to know if there are any REST APIs using which I can fetch logs of:</p> <ul> <li>A pod in a Kubernetes Cluster running on Google Cloud</li> <li>A VM instance running on Google Cloud</li> </ul>
<p>You need to request per <a href="https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource" rel="nofollow noreferrer">MonitoredResource</a>, which permits filtering by instance names and the like... for GCE that would be <code>gce_instance</code>, while for GKE it would be <code>container</code>. Individual pods of a cluster can be filtered by their <code>cluster_name</code> &amp; <code>pod_id</code>; the documentation for <a href="https://cloud.google.com/logging/docs/api/v2/resource-list" rel="nofollow noreferrer">resource-list</a> describes it:</p> <blockquote> <p><strong>container</strong> (GKE Container) A Google Container Engine (GKE) container instance.</p> <p><strong>project_id</strong>: The identifier of the GCP project associated with this resource, such as &quot;my-project&quot;.</p> <p><strong>cluster_name</strong>: An immutable name for the cluster the container is running in.</p> <p><strong>namespace_id</strong>: Immutable ID of the cluster namespace the container is running in.</p> <p><strong>instance_id</strong>: Immutable ID of the GCE instance the container is running in.</p> <p><strong>pod_id</strong>: Immutable ID of the pod the container is running in.</p> <p><strong>container_name</strong>: Immutable name of the container.</p> <p><strong>zone</strong>: The GCE zone in which the instance is running.</p> </blockquote>
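<p>For example, a sketch of a <code>filter</code> for the <code>entries.list</code> call scoped to a single pod (the values are placeholders; the label names follow the table above):</p> <pre><code>resource.type="container" AND resource.labels.cluster_name="my-cluster" AND resource.labels.namespace_id="default" AND resource.labels.pod_id="my-pod-id"
</code></pre> <p>and similarly <code>resource.type="gce_instance"</code> with <code>resource.labels.instance_id="..."</code> for a single VM instance.</p>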
<p>I am trying to get dns pod name resolution working on my EKS Kubernetes cluster v1.10.3. My understanding is that creating a headless service will create the necessary pod name records I need but I'm finding this is not true. Am I missing something?</p> <p>Also open to other ideas on how to get this working. Could not find alternate solution.</p> <h3>Adding update</h3> <p>I wasn't really clear enough. Essentially what I need is to resolved as such: worker-767cd94c5c-c5bq7 -> 10.0.10.10 worker-98dcd94c5d-cabq6 -> 10.0.10.11 and so on.... </p> <p>I don't really need a round robin DNS just read somewhere that this could be a work around. Thanks!</p> <pre><code># my service apiVersion: v1 kind: Service metadata: ... name: worker namespace: airflow-dev resourceVersion: "374341" selfLink: /api/v1/namespaces/airflow-dev/services/worker uid: 814251ac-acbe-11e8-995f-024f412c6390 spec: clusterIP: None ports: - name: worker port: 8793 protocol: TCP targetPort: 8793 selector: app: airflow tier: worker sessionAffinity: None type: ClusterIP status: loadBalancer: {} # my pod apiVersion: v1 kind: Pod metadata: creationTimestamp: 2018-08-31T01:39:37Z generateName: worker-69887d5d59- labels: app: airflow pod-template-hash: "2544381815" tier: worker name: worker-69887d5d59-6b6fc namespace: airflow-dev ownerReferences: - apiVersion: extensions/v1beta1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: worker-69887d5d59 uid: 16019507-ac6b-11e8-995f-024f412c6390 resourceVersion: "372954" selfLink: /api/v1/namespaces/airflow-dev/pods/worker-69887d5d59-6b6fc uid: b8d82a6b-acbe-11e8-995f-024f412c6390 spec: containers: ... ... name: worker resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: ... ... dnsPolicy: ClusterFirst nodeName: ip-10-0-1-226.us-west-2.compute.internal restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: airflow serviceAccountName: airflow terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: ... ... status: conditions: - lastProbeTime: null lastTransitionTime: 2018-08-31T01:39:37Z status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: 2018-08-31T01:39:40Z status: "True" type: Ready - lastProbeTime: null lastTransitionTime: 2018-08-31T01:39:37Z status: "True" type: PodScheduled containerStatuses: ... ... lastState: {} name: worker ready: true restartCount: 0 state: running: startedAt: 2018-08-31T01:39:39Z hostIP: 10.0.1.226 phase: Running podIP: 10.0.1.234 qosClass: BestEffort startTime: 2018-08-31T01:39:37Z # querying the service dns record works! airflow@worker-69887d5d59-6b6fc:~$ nslookup worker.airflow-dev.svc.cluster.local Server: 172.20.0.10 Address: 172.20.0.10#53 Name: worker.airflow-dev.svc.cluster.local Address: 10.0.1.234 # querying the pod name does not work :( airflow@worker-69887d5d59-6b6fc:~$ nslookup worker-69887d5d59-6b6fc.airflow-dev.svc.cluster.local Server: 172.20.0.10 Address: 172.20.0.10#53 ** server can't find worker-69887d5d59-6b6fc.airflow-dev.svc.cluster.local: NXDOMAIN airflow@worker-69887d5d59-6b6fc:~$ nslookup worker-69887d5d59-6b6fc.airflow-dev.pod.cluster.local Server: 172.20.0.10 Address: 172.20.0.10#53 *** Can't find worker-69887d5d59-6b6fc.airflow-dev.pod.cluster.local: No answer </code></pre>
<p>Internally, I suggest using the service DNS records to point to the pod, which you already confirmed works. This of course does not require you to have a Headless service to use service DNS.</p> <p>The kube-dns automatic records work in the following way:</p> <p>pod -> service in the same namespace: curl <a href="http://servicename" rel="nofollow noreferrer">http://servicename</a></p> <p>pod -> service in a different namespace: curl <a href="http://servicename.namespace" rel="nofollow noreferrer">http://servicename.namespace</a></p> <p>Read more about service discovery here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables</a></p> <p>You can read more about DNS records for services here <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services</a></p> <p>If you need custom name resolution externally I recommend using nginx-ingress:</p> <p><a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a> <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p> <p>EDIT: Include details about actual pod DNS</p> <p>v1.2 introduces a beta feature where the user can specify a Pod annotation, pod.beta.kubernetes.io/subdomain, to specify the Pod's subdomain. The final domain will be "...svc.". For example, a Pod with the hostname annotation set to "foo", and the subdomain annotation set to "bar", in namespace "my-namespace", will have the FQDN "foo.bar.my-namespace.svc.cluster.local"</p> <blockquote> <p>A Records and hostname based on Pod's hostname and subdomain fields Currently when a pod is created, its hostname is the Pod's metadata.name value.</p> <p>With v1.2, users can specify a Pod annotation, pod.beta.kubernetes.io/hostname, to specify what the Pod's hostname should be. The Pod annotation, if specified, takes precedence over the Pod's name, to be the hostname of the pod. For example, given a Pod with annotation pod.beta.kubernetes.io/hostname: my-pod-name, the Pod will have its hostname set to "my-pod-name".</p> <p>With v1.3, the PodSpec has a hostname field, which can be used to specify the Pod's hostname. This field value takes precedence over the pod.beta.kubernetes.io/hostname annotation value.</p> <p>v1.2 introduces a beta feature where the user can specify a Pod annotation, pod.beta.kubernetes.io/subdomain, to specify the Pod's subdomain. The final domain will be "...svc.". For example, a Pod with the hostname annotation set to "foo", and the subdomain annotation set to "bar", in namespace "my-namespace", will have the FQDN "foo.bar.my-namespace.svc.cluster.local"</p> <p>With v1.3, the PodSpec has a subdomain field, which can be used to specify the Pod's subdomain. This field value takes precedence over the pod.beta.kubernetes.io/subdomain annotation value.</p> </blockquote> <p><a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/services-networking/dns-pod-service/</a></p>
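<p>A short sketch of what that looks like with the newer spec fields (the names here are placeholders); together with the headless Service named like the subdomain, it yields a stable per-pod DNS name:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: worker-0
  namespace: airflow-dev
spec:
  hostname: worker-0          # becomes the first DNS label
  subdomain: worker           # must match the headless Service name
  containers:
  - name: worker
    image: my-worker:latest   # placeholder image
</code></pre> <p>This pod would then resolve as <code>worker-0.worker.airflow-dev.svc.cluster.local</code>. Note that a Deployment's generated pod names are random and its template sets one hostname for all replicas, so this pattern is usually combined with a StatefulSet, which sets stable per-pod hostnames for you.</p>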
<p>I'm dynamically provisioning a EBS Volume (Kubernetes on AWS through EKS) through PersistentVolumeClaim with a StorageClass </p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: k8sebs parameters: encrypted: "false" type: gp2 zones: us-east-1a provisioner: kubernetes.io/aws-ebs reclaimPolicy: Delete volumeBindingMode: Immediate </code></pre> <p>PVC below</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: testk8sclaim spec: accessModes: - ReadWriteOnce storageClassName: k8sebs resources: requests: storage: 1Gi </code></pre> <p>And pod that uses the volume:</p> <pre><code>kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: alpine image: alpine:3.2 volumeMounts: - mountPath: "/var/k8svol" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: testk8sclaim </code></pre> <p>I need to tag the EBS volume with a custom tag.</p> <p>Documentation mentions nothing about tagging for provisioner aws-ebs, storageclass or PVC. I've spent hours to try to add a tag to the dynamically provided EBS volume but not luck.</p> <p>Is creating custom tags for EBS a possibility in this scenario and if it is how can it be achieved?</p> <p>Thank you,</p> <p>Greg</p>
<p>It seems that, at this point in time, this is not yet possible.</p> <p>Found these:</p> <p><a href="https://github.com/kubernetes/kubernetes/pull/49390" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/49390</a></p> <p><a href="https://github.com/kubernetes/kubernetes/issues/50898" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/50898</a></p> <p>Hopefully something will be done soon.</p>
<p>This setup is running on an Amazon EKS cluster.</p> <p>I am getting an error where the hostname on a pod does not resolve to the cluster ip.</p> <pre><code>$ curl -vvv myservice:10000 * Rebuilt URL to: myservice:10000/ * Hostname was NOT found in DNS cache </code></pre> <p>The env vars have the right service name, ip, and port.</p> <pre><code>$ env | grep MYSERVICE MYSERVICE_PORT_10000_TCP_PORT=10000 MYSERVICE_PORT=tcp://172.xx.xx.36:10000 MYSERVICE_PORT_10000_TCP=tcp://172.xx.xx.36:10000 MYSERVICE_PORT_10000_TCP_PROTO=tcp MYSERVICE_SERVICE_PORT=10000 MYSERVICE_PORT_10000_TCP_ADDR=172.xx.xx.36 MYSERVICE_SERVICE_HOST=172.xx.xx.36 MYSERVICE_SERVICE_PORT_MYSERVICE=10000 </code></pre> <p>I can curl the cluster ip/port and get the desired response.</p> <p>/etc/resolv.conf looks like</p> <pre><code>$ cat /etc/resolv.conf nameserver 172.20.0.10 search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal options ndots:5 </code></pre> <p>Is a step being skipped by the container to load the hostname + service info?</p>
<p>I created an ingress rule allowing all traffic within my worker-node security group and it started working. It looked like there was an issue with containers on a different host than the host running the kube-dns pods. There is probably a better solution, but as of now this has resolved my issue.</p> <p>EDIT: The previous fix did not actually resolve my issue. The problem ended up being that two out of three nodes had the wrong cluster IP in /etc/systemd/system/kubelet.service. After fixing that, all the pods were able to resolve DNS. It had appeared fixed before only because the pod coincidentally spun up on the single working node.</p>
<p>I'm kind of a newbie at using GCP/Kubernetes. I want to deploy both a GRPC service and a client to GCP. </p> <p>I have read a lot about it and have tried several things. There's something on cloud endpoints where you compile your proto file and do an api.config.yaml. (<a href="https://cloud.google.com/endpoints/docs/grpc/get-started-grpc-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/endpoints/docs/grpc/get-started-grpc-kubernetes-engine</a>)</p> <p>That's not what I'm trying to do. I want to upload a GRPC service with it's .proto and expose its HTTP/2 public IP address and port. Then, deploy a GRPC client that interacts with that address and exposes REST endpoints.</p> <p>How can I get this done?</p>
<p>To deploy a grpc application to GKE/Kubernetes:</p> <ol> <li>Learn about gRPC, follow one of the quickstarts at <a href="https://grpc.io/docs/quickstart/" rel="noreferrer">https://grpc.io/docs/quickstart/</a></li> <li>Learn about how to build Docker images for your application. <ul> <li>Follow this Docker tutorial: <a href="https://docs.docker.com/get-started/part2/#conclusion-of-part-two" rel="noreferrer">https://docs.docker.com/get-started/part2/#conclusion-of-part-two</a></li> </ul></li> <li>Once you have a Docker image, follow <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a> tutorial to learn how to: <ul> <li>push a container image to Google Container Registry</li> <li>create a GKE cluster</li> <li>deploy the container image</li> <li>expose it on public internet on an IP address.</li> </ul></li> </ol> <p>These should be good to start with.</p> <p>Note that gRPC apps aren't much different than just HTTP web server apps. As far as Kubernetes is concerned, they're just a container image with a port number. :)</p>
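<p>Once the image is pushed, the actual Kubernetes objects are not gRPC-specific. Below is a minimal sketch of a Deployment plus a public <code>LoadBalancer</code> Service for the gRPC server; the image path, port number and names are placeholders you would replace with your own:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      containers:
      - name: grpc-server
        image: gcr.io/YOUR_PROJECT/grpc-server:v1   # placeholder image
        ports:
        - containerPort: 50051                      # common gRPC port, adjust to yours
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-server
spec:
  type: LoadBalancer        # gives the server a public IP on GKE (L4/TCP passthrough works for HTTP/2)
  selector:
    app: grpc-server
  ports:
  - port: 50051
    targetPort: 50051
</code></pre> <p>The REST client you describe can then be deployed the same way, reaching the gRPC backend inside the cluster at <code>grpc-server:50051</code>, and exposed itself via its own Service or an Ingress.</p>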
<p>I want to change kubelet logs directory location. For achieving same I have modified <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> file contents as follows(as mentioned in <a href="https://stackoverflow.com/questions/46045943/how-to-change-kubelet-working-dir-to-somewhere-else">how to change kubelet working dir to somewhere else</a>)</p> <pre><code># Note: This dropin only works with kubeadm and kubelet v1.11+ [Service] Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf" Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml" Environment="KUBELET_EXTRA_ARGS=--root-dir=/D/kubelet-files/ --log-dir=/D/kubelet-logs/" # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file. EnvironmentFile=-/etc/sysconfig/kubelet ExecStart= ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_EXTRA_ARGS $KUBELET_KUBEADM_ARGS </code></pre> <p>After this I executed commands :</p> <pre><code>systemctl daemon-reload systemctl restart kubelet </code></pre> <p>Even I restarted <code>kubeadm</code>. But still logs directory location is not changed and it goes on writing to default <code>/var/lib/kubelet</code> directory . I am using <code>Kubernetes version: v1.11.2</code>. What might be the issue?</p>
<p>I tried this on some of my machines on GCloud with <code>v1.11.2</code> and I noticed the same problem you describe.</p> <p>The <code>--log-dir</code> parameter of kubelet seems to have no effect.</p> <p>It is worth opening an issue in the kubelet project.</p>
<p>I am looking for some good examples about structuring namespaces in Kubernetes clusters, and an appropriate way to label my pods for long term maintainability.</p> <p>Can you recommend some resources, or examples from real world applications that follow some solid practices?</p>
<p>Namespaces:</p> <ul> <li><p>I recommend grouping resources by namespaces for "resources you can just delete altogether".</p></li> <li><p>Most notably, Kubernetes Policy objects (like RBAC, PodSecurityPolicy, NetworkPolicy, ResourceQuota) are per-namespace. So "namespaces" are often for organizational/team boundary.</p></li> </ul> <p>Labels:</p> <ul> <li>These can be applied to any kind of object (incl. namespaces)</li> <li>Think of labels that indicate logical grouping of objects.</li> <li>For example you can have an <code>app=paymentservice</code> label for everything that empowers this app (Pods, Service, Secret)</li> <li>You can also use labels to indicate: <ul> <li>version/commit numbers: <code>gitcommit=235ab3f</code></li> <li>purpose (especially useful while testing namespaces etc) <code>purpose=testing</code></li> <li>organizational boundary (again, for namespaces) <code>team=payments</code></li> </ul></li> </ul>
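<p>As a small illustration of those conventions, here is a hedged sketch (the names are invented for the example) of a namespace per team plus a deployment carrying the kind of labels described above:</p> <pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    team: payments
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: paymentservice
  namespace: payments
  labels:
    app: paymentservice
    team: payments
    gitcommit: "235ab3f"     # example commit label
spec:
  replicas: 2
  selector:
    matchLabels:
      app: paymentservice
  template:
    metadata:
      labels:
        app: paymentservice
    spec:
      containers:
      - name: paymentservice
        image: example/paymentservice:1.0   # placeholder image
</code></pre> <p>Everything carrying <code>app=paymentservice</code> can then be listed or deleted together, e.g. <code>kubectl get all -l app=paymentservice -n payments</code>.</p>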
<p>I have a pod with only one container that has this resources configuration:</p> <pre><code>resources:
  limits:
    cpu: 1000m
    memory: 1000Mi
</code></pre> <p>From the node where the pod is scheduled I read this:</p> <pre><code>CPU Requests  CPU Limits  Memory Requests  Memory Limits
1 (50%)       1 (50%)     1000Mi (12%)     1000Mi (12%)
</code></pre> <p>Why are the "resources requests" set when I don't want that?</p>
<p>When you specify a container's limit but not its request, Kubernetes sets the request equal to the limit. This happens regardless of whether there is a default memory request for the namespace (<a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/#what-if-you-specify-a-container-s-limit-but-not-its-request" rel="nofollow noreferrer">Kubernetes Doc</a>).</p>
<p>I'm creating a multi-tenancy Kubernetes infrastructure. I created a Helm chart with my app, and now I need to automate the chart installation whenever a new namespace is created.</p> <p>For example, when the namespace <code>client1</code> is created I need to run <code>helm install myrepo/myapp --name client1</code>.</p> <p>How can I get the new namespace creation event? And the namespace name?</p>
<p>You have two options. You can keep running a script which executes <code>kubectl get namespace</code> every once in a while and compares the current result with the previous one; when a new namespace appears, the script runs <code>helm install myrepo/myapp --name client1</code>. Alternatively, you can run an application inside your cluster that lists all namespaces, compares the current list with a cached one, and calls the Helm client to install your app whenever a new namespace is found. If you are using Go, I recommend the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">kubernetes client-go</a> library to get the list of namespaces in the cluster, and you can refer to the open source project <a href="https://github.com/banzaicloud/pipeline/tree/master/helm" rel="nofollow noreferrer">pipeline</a> for the Helm client-go part that installs your app.</p>
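<p>For the scripted approach, here is a minimal polling sketch. The release naming, the <code>client</code> prefix filter and the chart name are assumptions taken from the question; it also assumes Helm v2 (<code>--name</code> flag) and an already configured <code>kubectl</code>/<code>helm</code> context:</p> <pre><code>#!/bin/bash
# Naive polling loop: install the chart into every "client*" namespace
# that does not already have a release with the same name.
while true; do
  for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
    case "$ns" in
      client*)   # assumed tenant naming convention
        if ! helm status "$ns" &gt;/dev/null 2&gt;&amp;1; then
          helm install myrepo/myapp --name "$ns" --namespace "$ns"
        fi
        ;;
    esac
  done
  sleep 30
done
</code></pre>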
<p>If there's something wrong with the way I phrased the question please tell, so I can be better next time or edit the question.</p> <p><strong>What I did.</strong></p> <p>Use rancher to create an cluster with Amazon EKS.</p> <p>Deployed a nodejs app in 'default' namespace.</p> <p>Installed MongoDB replicaset from the rancher app catalog with default settings.</p> <ul> <li>Service/Deployment name is mongodb-replicaset</li> <li>namespace is also mongodb-replicaset</li> </ul> <p>When I use <code>mongodb://mongodb-replicaset:27017/tradeit_system?replicaSet=rs</code> as connection string.</p> <p>I get the error.</p> <blockquote> <p>MongoNetworkError: failed to connect to server [mongodb-replicaset-:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongodb-replicaset mongodb-replicaset:27017]</p> </blockquote> <p>Then I read in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">kubernetes documentation</a> that to access a service in a different namespace you need to also specify the namespace along with the service name.</p> <p>So I did this <code>mongodb://mongodb-replicaset.mongodb-replicaset:27017/tradeit_system?replicaSet=rss</code> as the connection url I get the error. </p> <blockquote> <p>MongoError: no primary found in replicaset or invalid replica set name</p> </blockquote>
<p>You have to include the namespace in the host string if you want to access the service, and also reference the cluster domain, which you aren't doing.</p> <p>To quote from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">this document</a>:</p> <blockquote> <p>The domain managed by this Service takes the form: $(service name).$(namespace).svc.cluster.local, where “cluster.local” is the cluster domain. </p> </blockquote> <p>So in your case, the service DNS name would be written as:</p> <p><code>mongodb-replicaset.mongodb-replicaset.svc.cluster.local</code></p>
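<p>As a hedged follow-up on the second error: "no primary found in replicaset or invalid replica set name" often means the <code>replicaSet=</code> value in the URI does not match the actual replica set name, or that the driver cannot reach the individual members. Assuming the chart's headless service creates per-pod records and the replica set is actually named <code>rs0</code> (verify with <code>rs.status()</code> inside the mongo shell; these names are assumptions, not taken from the question), a connection string could look like:</p> <pre><code>mongodb://mongodb-replicaset-0.mongodb-replicaset.mongodb-replicaset.svc.cluster.local:27017,mongodb-replicaset-1.mongodb-replicaset.mongodb-replicaset.svc.cluster.local:27017/tradeit_system?replicaSet=rs0
</code></pre>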
<p>I want to run a flink job on kubernetes, using a (persistent) state backend it seems like crashing taskmanagers are no issue as they can ask the jobmanager which checkpoint they need to recover from, if I understand correctly.</p> <p>A crashing jobmanager seems to be a bit more difficult. On this <a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077" rel="noreferrer">flip-6 page</a> I read zookeeper is needed to be able to know what checkpoint the jobmanager needs to use to recover and for leader election.</p> <p>Seeing as kubernetes will restart the jobmanager whenever it crashes is there a way for the new jobmanager to resume the job without having to setup a zookeeper cluster?</p> <p>The current solution we are looking at is: when kubernetes wants to kill the jobmanager (because it want to move it to another vm for example) and then create a savepoint, but this would only work for graceful shutdowns.</p> <p>Edit: <a href="http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-HA-with-Kubernetes-without-Zookeeper-td15033.html" rel="noreferrer">http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-HA-with-Kubernetes-without-Zookeeper-td15033.html</a> seems to be interesting but has no follow-up</p>
<p>Out of the box, Flink requires a ZooKeeper cluster to recover from JobManager crashes. However, I think you can have a lightweight implementation of the <code>HighAvailabilityServices</code>, <code>CompletedCheckpointStore</code>, <code>CheckpointIDCounter</code> and <code>SubmittedJobGraphStore</code> which can bring you quite far.</p> <p>Given that you have only one JobManager running at all times (not entirely sure whether K8s can guarantee this) and that you have a persistent storage location, you could implement a <code>CompletedCheckpointStore</code> which retrieves the completed checkpoints from the persistent storage system (e.g. reading all stored checkpoint files). Additionally, you would have a file which contains the current checkpoint id counter for <code>CheckpointIDCounter</code> and all the submitted job graphs for the <code>SubmittedJobGraphStore</code>. So the basic idea is to store everything on a persistent volume which is accessible by the single JobManager.</p>
<p>I've configured my Kubernetes to use one wildcard SSL certificate to all my apps using cert-manager and letsencrypt, now the problem is that I can't configure subdomain redirects cause Ingress is kinda "stiff". Here's how I'm trying to achieve this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-wildcard-ingress namespace: mynamespace annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod certmanager.k8s.io/acme-challenge-type: dns01 certmanager.k8s.io/acme-dns01-provider: azuredns ingress.kubernetes.io/force-ssl-redirect: "true" ingress.kubernetes.io/ssl-redirect: "true" spec: rules: - host: "domain.com" http: paths: - path: / backend: serviceName: some-service servicePort: 3000 - host: somesub.domain.com http: paths: - path: / backend: serviceName: some-other-service servicePort: 80 - host: othersub.domain.com http: paths: - path: / backend: serviceName: one-more-service servicePort: 8080 - host: "*.domain.com" http: paths: - path: / backend: serviceName: default-service-to-all-other-non-mapped-subdomains servicePort: 8000 tls: - secretName: domain-com-tls hosts: - "*.domain.com.br" </code></pre> <p>The problem is that Ingress ignores the declared subdomain redirects just because they're not listed in the "tls:hosts" section. And if I do put them there, it tries to issue the SSL certificate using the wildcard and the other subdomains as well in the same cert, which causes the issuer to refuse the order, saying the obvious: "subdomain.domain.com and *.domain.com are redundant"</p> <p>Is there any other way that I can declare those redirects and force them to use my SSL wildcard certificate?</p>
<p>Well, for anyone who's having this kind of trouble, I've managed to solve it (not the best solution, but it's a start). For this, I'll be using cert-manager and letsencrypt.</p> <p>First, I've created a ClusterIssuer to issue for my certs with letsencrypt:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1 kind: ClusterIssuer metadata: name: letsencrypt-prod-dns spec: acme: dns01: providers: - azuredns: clientID: MY_AZURE_CLIENT_ID clientSecretSecretRef: key: client-secret name: azure-secret hostedZoneName: mydomain.com resourceGroupName: MY_AZURE_RESOURCE_GROUP_NAME subscriptionID: MY_AZURE_SUBSCRIPTION_ID tenantID: MY_AZURE_TENANT_ID name: azuredns email: [email protected] privateKeySecretRef: key: "" name: letsencrypt-prod-dns server: https://acme-v02.api.letsencrypt.org/directory </code></pre> <p>Then I've created a fallback ingress to all my subdomains (this one will be the cert generator):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: certmanager.k8s.io/acme-challenge-type: dns01 certmanager.k8s.io/acme-dns01-provider: azuredns certmanager.k8s.io/cluster-issuer: letsencrypt-prod-dns ingress.kubernetes.io/force-ssl-redirect: "true" ingress.kubernetes.io/ssl-redirect: "true" kubernetes.io/ingress.class: nginx name: wildcard-ingress namespace: some-namespace spec: rules: - host: '*.mydomain.com' http: paths: - backend: serviceName: some-default-service servicePort: 80 path: / tls: - hosts: - '*.mydomain.com' - mydomain.com secretName: wildcard-mydomain-com-tls </code></pre> <p>Notice that I've declared at the TLS section the wildcard AND the absolute paths, so the cert will be valid for the URLs without subdomains too.</p> <p>At this point, any requests to your domain, will be redirected to "some-default-service" with SSL(cert-manager will issue for a new cert as soon as you create the fallback ingress. This can take a while once cert-manager dns01 issuer is not mature yet), great!!! </p> <p>But, what if you need to redirect some specific subdomain to another service? No problem (since they're running on the same namespace), all you have to do is to create a new ingress to your subdomain, pointing it to your existing wildcard-mydomain-com-tls cert secret:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.kubernetes.io/force-ssl-redirect: "false" ingress.kubernetes.io/ssl-redirect: "true" kubernetes.io/ingress.class: nginx name: somesubdomain-ingress namespace: some-namespace spec: rules: - host: somesubdomain.mydomain.com http: paths: - backend: serviceName: some-other-service servicePort: 8080 path: / tls: - hosts: - somesubdomain.mydomain.com secretName: wildcard-mydomain-com-tls </code></pre> <p>Easy peasy lemon squeezy!!! Now your somesubdomain.mydomain.com overrides your fallback rule and sends the user to another app. The only thing you should notice here is that the secret is valid only for "some-namespace" namespace, if you need to use this cert in another namespace, you could:</p> <ol> <li>Copy the secret from namespace "some-namespace" to "other-namespace". If you do this, remember that cert-manager will NOT renew this cert automatically for "other-namespace", so, you'd have to copy the secret again, every time your cert expires.</li> <li>Recreate the fallback ingress to every namespace you have, so you'd have a new cert for each of them. This approach is more ingress verbose, but, it's fully automatic.</li> </ol> <p>I guess that's it. 
Hope someone out there can benefit from this info.</p> <p>Cheers</p>
<p>I am a novice with Kubernetes. Recently my Docker registry URL changed from <code>dockerhub.abc.com</code> to <code>dockerhub.def.com</code>. Is it possible to change this in the properties of a Kubernetes pod so that, next time, it pulls from the new registry?</p>
<p>In general you'll find it easiest if you explicitly qualify your image names to include the repository name, and not depend on a default value that isn't the official Docker Hub</p> <pre><code>image: dockerhub.abc.com/dist/image:1.2.3 </code></pre> <p>in which case you can just change the image name in your deployment</p> <pre><code>image: dockerhub.def.com/dist/image:1.2.3 </code></pre> <p>If you're using a system like <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> to manage your Kubernetes manifests, you might find it helpful to include the image base name and/or repository in its values file</p> <pre><code>image: dockerhub.abc.com/dist/image tag: 1.2.3 image: {{ printf "%s:%s" .Values.image .Values.tag }} </code></pre> <p>and then you can just change the image's repository location and redeploy.</p>
<p>I have created a managed Kubernetes cluster in Azure, but it's only for learning purposes and so I only want to pay for the compute whilst I'm actually using it.</p> <p>Is there a easy way to gracefully shut down and start up the VMs, availablity sets and load balancers?</p>
<p>You could use the <a href="https://learn.microsoft.com/en-us/cli/azure/?view=azure-cli-latest" rel="nofollow noreferrer">Azure CLI</a> to <a href="https://learn.microsoft.com/en-gb/azure/aks/start-stop-cluster" rel="nofollow noreferrer">stop the entire cluster</a>:</p> <pre><code>az aks stop --name myAksCluster --resource-group myResourceGroup
</code></pre> <p>And start it again with</p> <pre><code>az aks start --name myAksCluster --resource-group myResourceGroup
</code></pre> <hr /> <p>Before this feature existed, it was possible to deallocate the underlying virtual machines with the Azure CLI:</p> <pre><code>az vm deallocate --ids $(az vm list -g MC_my_resourcegroup_westeurope --query &quot;[].id&quot; -o tsv)
</code></pre> <p>Replace <code>MC_my_resourcegroup_westeurope</code> with the name of the resource group that contains the VM(s).</p> <p>When you want to start the VM(s) again, run:</p> <pre><code>az vm start --ids $(az vm list -g MC_my_resourcegroup_westeurope --query &quot;[].id&quot; -o tsv)
</code></pre>
<p>We are using Kubernetes <code>v1.9.5</code> on bare metal, deployed with <code>kubespray</code>, with <code>flannel</code> as the network driver. </p> <p>When doing an HTTP request from a pod to another service, if that service has no endpoint, the request hangs for <strong>exactly 130 seconds</strong> (I checked via the <code>NodeJS</code> net library and via <code>curl</code>). </p> <p>Where does this value come from? </p> <p>We noticed it while writing retries that try to establish a connection to the service.</p> <p>Any help appreciated.</p>
<p>It looks like iptables will DROP a packet if it is destined to a non-existent endpoint, and DROP never sends a message back to the requester (<a href="http://www.chiark.greenend.org.uk/~peterb/network/drop-vs-reject" rel="nofollow noreferrer">DROP/REJECT</a>). This results in SYN retries from the requester. The number of retries depends on the parameter <code>net.ipv4.tcp_syn_retries</code>, which defaults to 6 on CentOS 7. </p> <p>When I set <code>net.ipv4.tcp_syn_retries=1</code> on the minion where the requester pod runs, the timeout happens in 3 seconds instead of 2m 7.23s.</p> <p>I hope this clarifies why curl or any connect request hangs for a long time if the request is for a non-existent endpoint.</p>
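<p>If you want to experiment with this yourself, the sysctl can be inspected and changed on the node (a quick sketch; the value 2 is just an example, and lowering it affects all outgoing TCP connections on that node, so treat it as a diagnostic rather than a fix):</p> <pre><code># on the node running the requesting pod
sysctl net.ipv4.tcp_syn_retries        # show the current value (default 6)
sysctl -w net.ipv4.tcp_syn_retries=2   # fail faster, for testing only
</code></pre>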
<p>I have a Kubernetes service running and we have an external API's dependent on this service.</p> <p>We would like to be notified if there is any service restart. Is there any possibility to hit an API endpoint on every service restart?</p>
<p>Hi and welcome to the community!</p> <p>There are multiple ways of achieving this. A really simple one (as pointed out by Thomas) is an init container. Refer to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Kubernetes docs</a> for more on how to get those running! This init container would do nothing more than send an HTTP request to your external API once the pod is started, and terminate immediately afterwards.</p> <p>The other way is much more complex and will require you to write some code yourself: write your own controller that watches the relevant objects through the Kubernetes API and notifies your external service when a pod is rescheduled, killed, restarted etc. (You could, however, have your external service do exactly that itself by accessing the kube-api directly...)</p>
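<p>A minimal sketch of the init container approach is shown below. The notification URL, image and names are placeholders, not taken from the question; note that the init container runs on every pod (re)start, which is exactly the signal being asked for:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      initContainers:
      - name: notify-restart
        image: curlimages/curl           # any image with curl would do
        command: ["curl", "-sf", "-X", "POST", "https://external-api.example.com/notify"]  # placeholder URL
      containers:
      - name: main
        image: my-service:latest         # your actual application image
</code></pre>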
<p>I use Google Kubernetes Engine and I intentionally put an error in the code. I was hoping the rolling update will stop when it discovers the status is <code>CrashLoopBackOff</code>, but it wasn't.</p> <p>In this <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">page</a>, they say.. </p> <blockquote> <p>The Deployment controller will stop the bad rollout automatically, and will stop scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (maxUnavailable specifically) that you have specified.</p> </blockquote> <p>But it's not happening, is it only if the status <code>ImagePullBackOff</code>?</p> <p>Below is my configuration.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: volume-service labels: group: volume tier: service spec: replicas: 4 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 2 maxSurge: 2 template: metadata: labels: group: volume tier: service spec: containers: - name: volume-service image: gcr.io/example/volume-service:latest </code></pre> <p>P.S. I already read liveness/readiness probes, but I don't think it can stop a rolling update? or is it?</p>
<p>Turns out I just needed to set <code>minReadySeconds</code>; then the rolling update stops when the new replicaSet has status <code>CrashLoopBackOff</code> or something like <code>Exited with status code 1</code>. So now the old replicaSet is still available and not updated.</p> <p>Here is the new config.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: volume-service
  labels:
    group: volume
    tier: service
spec:
  replicas: 4
  minReadySeconds: 60
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
      maxSurge: 2
  template:
    metadata:
      labels:
        group: volume
        tier: service
    spec:
      containers:
      - name: volume-service
        image: gcr.io/example/volume-service:latest
</code></pre> <p>Thank you everyone for your help!</p>
<p>I am fresh to Kubernetes. </p> <p>My understanding of <code>secret</code> is that it encodes information by <code>base64</code>. And from the resources I have seen, it is claimed that <code>secret</code> could protect sensitive information. I do not get this. </p> <p>Besides encoding information with <code>base64</code>, I do not see any real difference between <code>secret</code> and <code>configMap</code>. And we could decode <code>base64</code>-encoded information so easily. That means there is no protection at all... </p> <p>Is my understanding wrong?</p>
<p>The thing which protects a <code>Secret</code> is the fact that it is a distinct resource type in kubernetes, and thus can be subject to a different RBAC policy than a <code>ConfigMap</code>.</p> <p>If you are currently able to read <code>Secret</code>s in your cluster, that's because your <code>ClusterRoleBinding</code> (or <code>RoleBinding</code>) has a rule that specifically grants access to those resources. It can be due to you accessing the cluster through its "unauthenticated" port from one of the master Nodes, or due to the [<code>Cluster</code>]<code>RoleBinding</code> attaching your <code>Subject</code> to <code>cluster-admin</code>, which is probably pretty common in hello-world situations, but I would guess less common in production cluster setups.</p> <p>That's the pedantic answer, however, <em>really</em> guarding the secrets contained in a <code>Secret</code> is trickier, given that they are usually exposed to the <code>Pod</code>s through environment injection or a volume mount. That means anyone who has <code>exec</code> access to the <code>Pod</code> can very easily exfiltrate the secret values, so if the secrets are super important, and must be kept even from the team, you'll need to revoke <code>exec</code> access to your <code>Pod</code>s, too. A middle ground may be to grant the team access to <code>Secret</code>s in their own <code>Namespace</code>, but forbid it from other <code>Namespace</code>s. It's security, so there's almost no end to the permutations and special cases.</p>
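<p>To illustrate the "middle ground" mentioned above, here is a sketch of a namespaced RBAC rule that lets a team read Secrets only in its own namespace (the namespace, role and group names are invented for the example):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: team-a            # access is limited to this namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-read-secrets
  namespace: team-a
subjects:
- kind: Group
  name: team-a-devs            # hypothetical group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>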
<p>I am trying to install <code>traefik</code> as an ingress controller on <code>GKE</code> (google cloud kubernetes engine) and when I try:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml </code></pre> <p>I have this error: </p> <blockquote> <p>Error from server (Forbidden): error when creating "<a href="https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml</a>": clusterroles.rbac.authorization.k8s.io "traefik-ingress-controller" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:[""], Resources:["services"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["services"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["services"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["endpoints"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["endpoints"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["endpoints"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["secrets"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["secrets"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["secrets"], Verbs:["watch"]} PolicyRule{APIGroups:["extensions"], Resources:["ingresses"], Verbs:["get"]} PolicyRule{APIGroups:["extensions"], Resources:["ingresses"], Verbs:["list"]} PolicyRule{APIGroups:["extensions"], Resources:["ingresses"], Verbs:["watch"]}] user=&amp;{[email protected] [system:authenticated] map[user-assertion.cloud.google.com:[ADKE0IBz9kwSuZRZkfbLil8iC/ijcmJJmuys2DvDGxoxQ5yP6Pdq1IQs3JRwDmd/lWm2vGdMXGB4h1QKiwx+3uV2ciTb/oQNtkthBvONnVp4fJGOSW1S+8O8dqvoUNRLNeB5gADNn1TKEYoB+JvRkjrkTOxtIh7rPugLaP5Hp7thWft9xwZqF9U4fgYHnPjCdRgvMrDvGIK8z7ONljYuStpWdJDu7LrPpT0L]]} ownerrules=[PolicyRule{APIGroups:["authorization.k8s.io"], Resources:["selfsubjectaccessreviews" "selfsubjectrulesreviews"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/<em>" "/apis" "/apis/</em>" "/healthz" "/openapi" "/openapi/<em>" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/</em>" "/version" "/version/"], Verbs:["get"]}] ruleResolutionErrors=[]</p> </blockquote> <p>The problem is this part only, the other one is created successfully:</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: traefik-ingress-controller rules: - apiGroups: - "" resources: - services - endpoints - secrets verbs: - get - list - watch - apiGroups: - extensions resources: - ingresses verbs: - get - list - watch </code></pre> <p>Based on docs ( <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control</a>) I tried executing this command but I still get the same error</p> <pre><code>kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=MY_EMAIL_THAT_I_LOGIN_INTO_GCP </code></pre> <p>Has anyone ever manage to fix this? or it just does not work ?</p> <p>I am trying to make a kubernetes cluster without loadBalancer in order to be cheap on my local machine (minikube), I have no such problems. </p>
<p>So for everyone who is trying to install traefik on GKE, and you get stuck with that error message, just do that first <a href="https://stackoverflow.com/a/46316672/1747159">https://stackoverflow.com/a/46316672/1747159</a></p> <pre class="lang-sh prettyprint-override"><code># Get password value $ gcloud container clusters describe CUSTER_NAME --zone ZONE_NAME | grep password # Pass username and password parameters $ kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml --username=admin --password=PASSWORD </code></pre> <p>Thanks Nicola Ben for helping me figure it out</p>
<p>I am a scientist who is exploring the use of Dask on Amazon Web Services. I have some experience with Dask, but none with AWS. I have a few large custom task graphs to execute, and a few colleagues who may want to do the same if I can show them how. I believe that I should be using <a href="https://dask.pydata.org/en/latest/setup/kubernetes-helm.html" rel="nofollow noreferrer">Kubernetes with Helm</a> because I fall into the <a href="https://dask.pydata.org/en/latest/setup/kubernetes.html" rel="nofollow noreferrer">"Try out Dask for the first time on a cloud-based system like Amazon, Google, or Microsoft Azure"</a> category.</p> <ol> <li>I also fall into the "Dynamically create a personal and ephemeral deployment for interactive use" category. Should I be trying native Dask-Kubernetes instead of Helm? It seems simpler, but it's hard to judge the trade-offs.</li> <li>In either case, how do you provide Dask workers a uniform environment that includes your own Python packages (not on any package index)? <a href="https://dask.pydata.org/en/latest/setup/docker.html" rel="nofollow noreferrer">The solution I've found</a> suggests that packages need to be on a <code>pip</code> or <code>conda</code> index.</li> </ol> <p>Thanks for any help!</p>
<h3>Use Helm or Dask-Kubernetes ?</h3> <p>You can use either. Generally starting with Helm is simpler.</p> <h3>How to include custom packages</h3> <p>You can install custom software using pip or conda. They don't need to be on PyPI or the anaconda default channel. You can point pip or conda to other channels. Here is an example installing software using pip from github</p> <pre><code>pip install git+https://github.com/username/repository@branch </code></pre> <p>For small custom files you can also use the <a href="http://dask.pydata.org/en/latest/futures.html#distributed.Client.upload_file" rel="nofollow noreferrer">Client.upload_file</a> method.</p>
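<p>Two small, hedged examples of how this looks in practice. The Dask Docker images used by the Helm chart support an <code>EXTRA_PIP_PACKAGES</code> environment variable for installing extra packages at container start (check the chart's values file for the exact key), and for small private modules you can push files from the client; the scheduler address and file name below are placeholders:</p> <pre><code># values.yaml fragment for the Dask Helm chart (sketch)
worker:
  env:
  - name: EXTRA_PIP_PACKAGES
    value: git+https://github.com/username/repository@branch
</code></pre> <pre><code>from dask.distributed import Client

client = Client("scheduler-address:8786")  # placeholder scheduler address
client.upload_file("my_module.py")         # ships the file to all current workers
</code></pre>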
<p>I need some help on nginx-php application deployment. I am totally new into kubernetes trying to run a php code. I am running this on minikube. </p> <p>This is my Dockerfile file</p> <pre><code>FROM php:7.2-fpm RUN mkdir /app COPY hello.php /app </code></pre> <p>This is my web.yaml file which includes Deployment and Service</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: web-deployment labels: app: web-server spec: replicas: 1 template: metadata: labels: app: web-server spec: volumes: # Create the shared files volume to be used in both pods - name: shared-files emptyDir: {} - name: nginx-config-volume configMap: name: nginx-config containers: - name: nginx image: nginx:latest ports: - containerPort: 80 volumeMounts: - name: shared-files mountPath: /var/www/html - name: nginx-config-volume mountPath: /etc/nginx/nginx.conf subPath: nginx.conf - name: php-fpm image: my-php-app:1.0.0 ports: - containerPort: 80 volumeMounts: - name: shared-files mountPath: /var/www/html lifecycle: postStart: exec: command: ["/bin/sh", "-c", "cp -r /app/. /var/www/html"] --- apiVersion: v1 kind: Service metadata: name: web-service labels: app: web-server spec: ports: - port: 80 type: NodePort selector: app: web-server </code></pre> <p>This is my config.yaml file for nginx ConfigMap</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: nginx-config data: nginx.conf: | events { } http { server { listen 80 default_server; listen [::]:80 default_server; # Set nginx to serve files from the shared volume! root /var/www/html; server_name _; location / { try_files $uri $uri/ =404; } location ~ \.php$ { include fastcgi_params; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass 127.0.0.1:9000; } } } </code></pre> <p>I have created the deployment, services and configmap using</p> <pre><code>kubectl create -f web.yaml kubectl create -f configmap.yaml </code></pre> <p>After I get the ip and port through</p> <pre><code>minikube service web-service --url </code></pre> <p>I get an ip like this <a href="http://192.168.99.100:31170" rel="nofollow noreferrer">http://192.168.99.100:31170</a> When I browse this Ip, I get a response like <strong>nginx 403 forbidden</strong></p> <p>What did I do wrong here?</p>
<p>It seems nginx is unable to find a default index page.</p> <p>Either add <code>index hello.php</code> to the nginx configuration: </p> <pre><code>location / {
    index hello.php;
    try_files $uri $uri/ =404;
}
</code></pre> <p>Or access your application via the absolute URL <code>http://192.168.99.100:31170/hello.php</code>. </p>
<p>I am having two linux machines where I am learning Kubernetes. Since resources are limited, I want to configure the same node as master and slave, so the configuration looks like</p> <p>192.168.48.48 (master and slave) 191.168.48.49 (slave)</p> <p>How to perform this setup. Any help will be appreciated. </p>
<p>Yes. For a single-node cluster you can use <code>minikube</code> (see the <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">Minikube install</a> guide). For your setup, use <code>kubeadm</code> to install Kubernetes with one node as master and the other as a worker node. Here is the <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">doc</a>; make sure you satisfy the prerequisites for the nodes and do the small amount of house-keeping described in the official document. Then you can create a two-machine cluster for testing purposes, since you have two Linux machines with two different IPs. </p> <p>Hope this helps.</p>
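<p>To let the master node also run regular workloads (the "master and slave on the same machine" part of the question), the usual approach on a kubeadm cluster of this era is to remove the master taint after <code>kubeadm init</code>. A quick sketch, assuming the default kubeadm taint key:</p> <pre><code># allow pods to be scheduled on the master as well
kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>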
<h2>Background</h2> <p>I'm running a Kubernetes cluster on Google Cloud Platform. I have 2 node pools in my cluster: <code>A</code> and <code>B</code>. <code>B</code> is cheaper (because of its hardware). I prefer that my deployment runs on <code>B</code>, unless there are no free resources in <code>B</code>. In that case, new pods should be deployed to <code>A</code>. </p> <p>So I added this section to the deployment YAML:</p> <pre><code>      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: B
                operator: Exists
            weight: 100
</code></pre> <p>So I am giving more weight to node pool B.</p> <p>At the start, it worked well. I came back after 24 hours and found that some pods were deployed to node pool A while I had free resources (un-allocated machines) in node pool B. This is a waste of money. </p> <h2>So, how did this happen?</h2> <p>I am sure that the <code>nodeAffinity</code> property is working correctly. I suspect that at some point, node pool B was running without any FREE resources. At this point, the cluster wanted to grow... and the new pod was deployed to node pool A. Up to here, everything is fine...</p> <h2>What I want to achieve</h2> <p>Let's say that an hour after node pool <code>B</code> ran out of resources, there are plenty of resources free to allocate. I want Kubernetes to move the existing pods from A to their new home in node pool B. </p> <p>I am looking for something like <code>preferredDuringSchedulingPreferedDuringExecution</code>.</p> <h2>Question</h2> <p>Is this possible?</p> <h2>Update</h2> <p>Based on @Hitobat's answer, I tried to use this code:</p> <pre><code>      spec:
        tolerations:
        - key: A
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 60
</code></pre> <p>Unfortunately, after waiting long enough, I still see pods on my <code>A</code> node pool. Did I do something wrong?</p>
<p>You can taint pool A. Then configure <em>all</em> your pods to tolerate the taint, but with a tolerationSeconds for the duration you want. This is in addition to the config you already did for pool B.</p> <p>The effect will be that the pod is scheduled to A if it won't fit on B, but then after a while will be evicted (and hopefully rescheduled onto B again).</p> <p>See: <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-based-evictions" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-based-evictions</a></p>
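<p>To make the taint-based eviction concrete: the toleration in the question's update only has an effect if the nodes in pool A actually carry a matching <code>NoExecute</code> taint; a toleration by itself changes nothing. A rough sketch (the key/value names are assumptions; on GKE you can also set taints per node pool at creation time with gcloud's <code>--node-taints</code> flag):</p> <pre><code># taint every node in pool A (repeat per node, or use --node-taints on the pool)
kubectl taint nodes &lt;node-in-pool-A&gt; pool=A:NoExecute
</code></pre> <pre><code># matching toleration in the pod spec: the pod may run on A,
# but gets evicted after tolerationSeconds once the taint applies
tolerations:
- key: "pool"
  operator: "Equal"
  value: "A"
  effect: "NoExecute"
  tolerationSeconds: 600
</code></pre>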
<p>I am trying to follow the kubernetes tutorial for single-Instance stateful application: <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a></p> <p>The problem is, after I apply all the yaml listed there, I end up with my pod unavailable, as shown below, </p> <pre><code>kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE mysql 1 1 1 0 1h kubectl get pods NAME READY STATUS RESTARTS AGE mysql-fb75876c6-tpdzc 0/1 CrashLoopBackOff 17 1h kubectl describe deployment mysql Name: mysql Namespace: default CreationTimestamp: Mon, 03 Sep 2018 10:50:22 +0000 Labels: &lt;none&gt; Annotations: deployment.kubernetes.io/revision=1 Selector: app=mysql Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: Recreate MinReadySeconds: 0 Pod Template: Labels: app=mysql Containers: mysql: Image: mysql:5.6 Port: 3306/TCP Host Port: 0/TCP Environment: MYSQL_ROOT_PASSWORD: password Mounts: /var/lib/mysql from mysql-persistent-storage (rw) Volumes: mysql-persistent-storage: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: mysql-pv-claim ReadOnly: false Conditions: Type Status Reason ---- ------ ------ Progressing True NewReplicaSetAvailable Available False MinimumReplicasUnavailable OldReplicaSets: &lt;none&gt; NewReplicaSet: mysql-fb75876c6 (1/1 replicas created) Events: &lt;none&gt; kubectl describe pods mysql-fb75876c6-tpdzc Name: mysql-fb75876c6-tpdzc Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: wombat-dev-kubeadm-worker-1/142.93.56.123 Start Time: Mon, 03 Sep 2018 10:50:22 +0000 Labels: app=mysql pod-template-hash=963143272 Annotations: &lt;none&gt; Status: Running IP: 192.168.1.14 Controlled By: ReplicaSet/mysql-fb75876c6 Containers: mysql: Container ID: docker://08d630190a83fb5097bf8a98f7bb5f474751e021aec68b1be958c675d3f26f27 Image: mysql:5.6 Image ID: docker-pullable://mysql@sha256:2e48836690b8416e4890c369aa174fc1f73c125363d94d99cfd08115f4513ec9 Port: 3306/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 137 Started: Mon, 03 Sep 2018 12:04:24 +0000 Finished: Mon, 03 Sep 2018 12:04:29 +0000 Ready: False Restart Count: 19 Environment: MYSQL_ROOT_PASSWORD: password Mounts: /var/lib/mysql from mysql-persistent-storage (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-6t8pg (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: mysql-persistent-storage: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: mysql-pv-claim ReadOnly: false default-token-6t8pg: Type: Secret (a volume populated by a Secret) SecretName: default-token-6t8pg Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 1m (x334 over 1h) kubelet, wombat-dev-kubeadm-worker-1 Back-off restarting failed container </code></pre> <p>Question is: what should I do? Running <code>kubectl logs mysql-fb75876c6-tpdzc</code> returns no output at all. 
</p> <p>Any help ?</p> <p>This is the version of kubeadm </p> <pre><code>kubeadm version: &amp;version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:14:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>The container is exiting with <strong>Exit Code 137</strong>, which means a <em>SIGKILL</em> (equivalent to a <code>kill -9 &lt;process&gt;</code>) was sent to the process executed in the container. Usually that means the <strong>OOM Killer</strong> came in to kill it because it was using more memory than was available. Do you have enough memory available on the machine?</p>
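<p>Two quick, hedged checks that usually confirm this: look for an <code>OOMKilled</code> reason on the container status, and give the container an explicit memory request/limit so the scheduler places it on a node with enough room (the sizes below are only illustrative; MySQL 5.6 with default settings typically needs a few hundred MiB):</p> <pre><code>kubectl get pod mysql-fb75876c6-tpdzc -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
</code></pre> <pre><code>    containers:
    - image: mysql:5.6
      name: mysql
      resources:
        requests:
          memory: "512Mi"
        limits:
          memory: "1Gi"
</code></pre>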
<p>As per the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#setup-configuration-memory" rel="nofollow noreferrer">official es docs</a>, disabling swapping is one of the best performance boosts available to Elasticsearch.</p> <p>However, it's proving to be difficult to configure. I've spent a number of hours researching and attempting different methods to disable swapping using the official ES docker image on Kubernetes.</p> <p>When setting <code>bootstrap.memory_lock: true</code> as an env variable, the image fails to boot up with the error: <code>Unable to lock JVM Memory: error=12, reason=Cannot allocate memory. This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536</code>. As the docs point out, this is kind of expected. I've even mounted a custom <code>/etc/security/limits.conf</code> with the settings, but that's failed.</p> <p>What is the recommended way to disable swapping when using the official es image on k8s?</p> <p>And, here are the relevant sections of my yaml</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: elastic-data spec: serviceName: elastic-data replicas: 1 template: spec: securityContext: runAsUser: 0 fsGroup: 0 containers: - name: elastic-data image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.0 env: - name: ES_JAVA_OPTS value: "-Xms2g -Xmx2g" - name: cluster.name value: "elastic-devs" - name: node.name valueFrom: fieldRef: fieldPath: metadata.name - name: discovery.zen.ping.unicast.hosts value: "elastic-master.default.svc.cluster.local" - name: node.master value: "false" - name: node.ingest value: "false" - name: node.data value: "true" - name: network.host value: "0.0.0.0" - name: path.data value: /usr/share/elasticsearch/data - name: indices.memory.index_buffer_size value: "512MB" - name: bootstrap.memory_lock value: "true" resources: requests: memory: "3Gi" limits: memory: "3Gi" ports: - containerPort: 9300 name: transport - containerPort: 9200 name: http volumeMounts: - name: data-volume mountPath: /usr/share/elasticsearch/data - name: swappiness-config mountPath: /etc/security/limits.conf subPath: limits.conf volumes: - name: data-volume persistentVolumeClaim: claimName: pvc-es - name: swappiness-config configMap: name: swappiness-config items: - key: limits.conf path: limits.conf </code></pre> <p>limits.conf</p> <pre><code>elasticsearch soft memlock unlimited elasticsearch hard memlock unlimited elasticsearch hard nofile 65536 elasticsearch soft nofile 65536 </code></pre>
<p>I think the ulimits in my YAML weren't being recognized, so I followed <a href="https://github.com/kubernetes/kubernetes/issues/3595" rel="nofollow noreferrer">this post</a> and created an image with a custom entrypoint that applies those settings at startup.</p>
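<p>For reference, a rough sketch of what such an image could look like. This is only an assumption about the approach (the answer does not show its actual files): a wrapper entrypoint that raises the memlock limit and then hands off to the stock entrypoint. It requires the container to run with sufficient privileges (the StatefulSet above already runs as root), and it assumes the stock image's entrypoint path; verify it with <code>docker inspect</code> before relying on it:</p> <pre><code># Dockerfile (sketch)
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.0
COPY custom-entrypoint.sh /usr/local/bin/custom-entrypoint.sh
USER root
RUN chmod +x /usr/local/bin/custom-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/custom-entrypoint.sh"]
</code></pre> <pre><code>#!/bin/bash
# custom-entrypoint.sh (sketch): raise memlock, then run the original entrypoint
ulimit -l unlimited
exec /usr/local/bin/docker-entrypoint.sh "$@"   # assumed path of the stock entrypoint
</code></pre>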
<p>I have a UI application written in Angular, which has a backend running in NodeJS. I also have two other services which will be invoked from the NodeJS backend. These applications are running in docker containers and are deployed to a Kubernetes cluster in AWS. </p> <p>The flow is like this:</p> <p>AngularUI -> NodeJS -> Service1/Service2</p> <p>AngularUI &amp; NodeJS are in the same docker container, while the other two services are in 2 separate containers.</p> <p>I have been able to get the services running in Kubernetes on AWS. Service to Service calls (Service 1-> Service2) work fine, as I'm invoking them using k8s labels.</p> <p>Now Im not able to figure out how to make calls from the Angular front end to the NodeJS backend, since the requests execute on the client side. I cannot give the IP of the ELB of the service, as the IP changes with every deployment.</p> <p>I tried creating an AWS API Gateway which points to the ELB IP of the Angular UI, but that does not serve up the page. </p> <p>What is the right way to do this? Any help is much appreciated.</p>
<p>The ELB has a static DNS hostname, like <code>foobar.eu-west-4.elb.amazonaws.com</code>. When you have a domain at hand, create an A record (alias) that points to this DNS hostname. E.g.</p> <pre><code>webservice.mydomain.com -&gt; mywebservicelb.eu-west-4.elb.amazonaws.com </code></pre> <hr> <p>You can also use static ip address, which seems to be a fairly new feature:</p> <blockquote> <p>Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.</p> </blockquote> <p><a href="https://aws.amazon.com/de/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/" rel="nofollow noreferrer">https://aws.amazon.com/de/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/</a></p>
<p>I have a simple deployment with 2 replicas.</p> <p>I would like that each of the replicas have same storage folder in them (shared application upload folder)</p> <p>I've been playing with claims and volumes, but haven't got the edge still, so asking for a quick help / example.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: 'test-tomcat' labels: app: test-tomcat spec: selector: matchLabels: app: test-tomcat replicas: 3 template: metadata: name: 'test-tomcat' labels: app: test-tomcat spec: volumes: - name: 'data' persistentVolumeClaim: claimName: claim containers: - image: 'tomcat:9-alpine' volumeMounts: - name: 'data' mountPath: '/app/data' imagePullPolicy: Always name: 'tomcat' command: ['bin/catalina.sh', 'jpda', 'run'] </code></pre> <p><br/></p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: volume labels: type: local spec: storageClassName: manual capacity: storage: 2Gi accessModes: - ReadWriteMany hostPath: path: "/mnt/data" </code></pre> <p><br/></p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: claim spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 1Gi </code></pre>
<p>First of all, you need to decide what type of a Persistent Volume to use. Here are several examples of an on-premise cluster:</p> <ul> <li><p><strong>HostPath</strong> - local Path on a Node. Therefore, if the first Pod is located on Node1 and the second is on Node2, storages will be different. To resolve this problem, you can use one of the following options. Example of a HostPath:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: example-pv labels: type: local spec: storageClassName: manual capacity: storage: 3Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" </code></pre></li> <li><p><strong>NFS</strong> - PersistentVolume of that type uses Network File System. NFS is a distributed file system protocol that allows you to mount remote directories on your servers. You need to install NFS server before using the NFS in Kubernetes; here is the example <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-16-04" rel="noreferrer">How To Set Up an NFS Mount on Ubuntu</a>. Example in Kubernetes:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: example-pv spec: capacity: storage: 3Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Recycle storageClassName: slow mountOptions: - hard - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 </code></pre></li> <li><p><strong>GlusterFS</strong> - GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. As for the NFS, you need to install GlusterFS before using it in Kubernetes; here is the <a href="https://github.com/gluster/gluster-kubernetes" rel="noreferrer">link</a> with instructions, and <a href="https://docs.okd.io/latest/install_config/storage_examples/gluster_example.html" rel="noreferrer">one</a> more with the sample. Example in Kubernetes:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: example-pv annotations: pv.beta.kubernetes.io/gid: "590" spec: capacity: storage: 3Gi accessModes: - ReadWriteMany glusterfs: endpoints: glusterfs-cluster path: myVol1 readOnly: false persistentVolumeReclaimPolicy: Retain --- apiVersion: v1 kind: Service metadata: name: glusterfs-cluster spec: ports: - port: 1 --- apiVersion: v1 kind: Endpoints metadata: name: glusterfs-cluster subsets: - addresses: - ip: 192.168.122.221 ports: - port: 1 - addresses: - ip: 192.168.122.222 ports: - port: 1 - addresses: - ip: 192.168.122.223 ports: - port: 1 </code></pre></li> </ul> <p>After creating a PersistentVolume, you need to create a PersistaentVolumeClaim. A PersistaentVolumeClaim is a resource used by Pods to request volumes from the storage. After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim’s requirements. Example:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre> <p>And the last step, you need to configure a Pod to use the PersistentVolumeClaim. 
Here is the example:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: 'test-tomcat' labels: app: test-tomcat spec: selector: matchLabels: app: test-tomcat replicas: 3 template: metadata: name: 'test-tomcat' labels: app: test-tomcat spec: volumes: - name: 'data' persistentVolumeClaim: claimName: example-pv-claim #name of the claim should be the same as defined before containers: - image: 'tomcat:9-alpine' volumeMounts: - name: 'data' mountPath: '/app/data' imagePullPolicy: Always name: 'tomcat' command: ['bin/catalina.sh', 'jpda', 'run'] </code></pre>
<p>I have two containers inside one pod. One is my application container and the second is a CloudSQL proxy container. Basically my application container is dependent on this CloudSQL container. </p> <p>The problem is that when a pod is terminated, the CloudSQL proxy container is terminated first and only after some seconds my application container is terminated.</p> <p>So, before my container is terminated, it keeps sending requests to the CloudSQL container, resulting in errors:</p> <pre><code>could not connect to server: Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432 </code></pre> <p>That's why, I thought it would be a good idea to specify the order of termination, so that my application container is terminated first and only then the cloudsql one.</p> <p>I was unable to find anything that could do this in the documentation. But maybe there is some way.</p>
<p>This is not directly possible with the Kubernetes pod API at present. Containers may be terminated in any order. The Cloud SQL pod may die more quickly than your application, for example if it has less cleanup to perform or fewer in-flight requests to drain.</p> <p>From <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">Termination of Pods</a>:</p> <blockquote> <p>When a user requests deletion of a pod, the system records the intended grace period before the pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.</p> </blockquote> <hr /> <p>You can get around this to an extent by wrapping the Cloud SQL and main containers in different entrypoints, which communicate their exit status between each other using a shared pod-level file system.</p> <p>This solution will not work with the 1.16 release of the Cloud SQL proxy (<a href="https://stackoverflow.com/questions/52148322/control-order-of-container-termination-in-a-single-pod-in-kubernetes/52156131?noredirect=1#comment110296216_52156131">see comments</a>) as this release ceased to bundle a shell with the container. The 1.17 release is <a href="https://console.cloud.google.com/gcr/images/cloudsql-docker/GLOBAL/gce-proxy?gcrImageListsize=30" rel="nofollow noreferrer">now available in Alpine or Debian Buster variants</a>, so this version is now a viable upgrade target which is once again compatible with this solution.</p> <p>A wrapper like the following may help with this:</p> <pre><code>containers: - command: [&quot;/bin/bash&quot;, &quot;-c&quot;] args: - | trap &quot;touch /lifecycle/main-terminated&quot; EXIT &lt;your entry point goes here&gt; volumeMounts: - name: lifecycle mountPath: /lifecycle - name: cloudsql_proxy image: gcr.io/cloudsql-docker/gce-proxy command: [&quot;/bin/bash&quot;, &quot;-c&quot;] args: - | /cloud_sql_proxy &lt;your flags&gt; &amp; PID=$! function stop { while true; do if [[ -f &quot;/lifecycle/main-terminated&quot; ]]; then kill $PID fi sleep 1 done } trap stop EXIT # We explicitly call stop to ensure the sidecar will terminate # if the main container exits outside a request from Kubernetes # to kill the Pod. stop &amp; wait $PID volumeMounts: - name: lifecycle mountPath: /lifecycle </code></pre> <p>You'll also need a local scratch space to use for communicating lifecycle events:</p> <pre><code>volumes: - name: lifecycle emptyDir: </code></pre> <p><strong>How does this solution work?</strong> It intercepts in the Cloud SQL proxy container the <code>SIGTERM</code> signal passed by the Kubernetes supervisor to each of your pod's containers on shutdown. The &quot;main process&quot; running in that container is a shell, which has spawned a child process running the Cloud SQL proxy. Thus, the Cloud SQL proxy is not immediately terminated. Rather, the shell code blocks waiting for a signal (by simple means of a file appearing in the file system) from the main container that it has successfully exited. 
Only at that point is the Cloud SQL proxy process terminated and the sidecar container returns.</p> <p>Of course, this has no effect on forced termination in the event your containers take too long to shutdown and exceed the configured grace period.</p> <p>The solution depends on the containers you are running having a shell available to them; this is true of the Cloud SQL proxy (except 1.16, and 1.17 onwards when using the <code>alpine</code> or <code>debian</code> variants), but you may need to make changes to your local container builds to ensure this is true of your own application containers.</p>
<p>I cannot talk to a pod from side car container... any help will be appreciated!</p> <p>Here's my deployment</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: sidecar-deployment spec: replicas: 1 template: metadata: labels: app: sidecar spec: containers: - name: sidecar1 image: sidecar args: - /sidecar - --port=32000 - --path=/sidecar1 ports: - containerPort: 32000 - name: sidecar2 image: sidecar args: - /sidecar - --port=32001 - --path=/sidecar2 ports: - containerPort: 32001 </code></pre> <p>And here's my service to the pod</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: sidecar-service spec: ports: - name: http port: 80 targetPort: 32001 protocol: TCP selector: app: sidecar type: NodePort </code></pre> <p>After deploying ingress, I can connect to the service and sidecar2, because sidecar2 is exposed via service:</p> <pre><code># this works curl -L http://ADDR/sidecar2 </code></pre> <p>But, I was expecting to be able to curl to the side container, but I can't.</p> <p>This is what I did. I ssh into the sidecar container. And curl the colocated pod with <code>localhost</code>:</p> <pre><code>kubectl exec -it sidecar2 -- /bin/bash # this doesn't work curl -L http://localhost:32000/sidecar1 </code></pre> <p>Can somebody help me on this?</p> <p>Thanks!</p>
<p>If your sidecar image exposes the port (recheck your Dockerfile), you should be able to connect with <code>curl localhost:32000/sidecar1</code>, since containers in the same pod share a network namespace.</p> <p>If you have problems connecting from inside the container <strong>using the service</strong>, it may be related to <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#a-pod-cannot-reach-itself-via-service-ip" rel="nofollow noreferrer">hairpin_mode</a>.</p>
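<p>As a quick check (a sketch; the pod name is a placeholder, and this assumes <code>curl</code> is available in the image, which the question suggests), you can verify from inside the pod that something is actually listening on the expected port and reachable over localhost:</p> <pre><code># find the generated pod name of the deployment
kubectl get pods -l app=sidecar

# from the sidecar2 container, call the colocated sidecar1 container
kubectl exec -it &lt;pod-name&gt; -c sidecar2 -- curl -v http://localhost:32000/sidecar1
</code></pre>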
<p>I am hosting an application on GKE and would like to be able to let users from my organization access this application from the web. I would like them to be able to log-in using their Google Account IAM credentials.</p> <p>Is there a way to configure a service exposing the clusters web endpoint such that to access this service the user simply needs to login with their google account?</p> <p>For example, when testing a service I can easily do a web-preview in the cloud-shell and then access the web application in my browser.</p> <p>Is there a way to configure this such that any users authorized in my organization can access the web interface of my application?</p> <p><em>(Note, I asked the <a href="https://devops.stackexchange.com/q/4886/8189">same question on DevOps</a> but I feel like that site is not yet as active as it should be so I ask here as well)</em></p>
<p>Okay, I managed to make it work perfectly. But it took a few steps. I am including the manifest here that is required to setup the <a href="https://cloud.google.com/iap/" rel="nofollow noreferrer">IAP</a> <a href="https://cloud.google.com/iap/docs/enabling-kubernetes-howto" rel="nofollow noreferrer">using an ingress</a>. It requires a few things which I listed in the manifest below. Hopefully this can help others since I could not find a single source that had all of this put together. Essentially all you need to do is run <code>kubectl apply -f secure-ingress.yaml</code> to make everything work (as long as you have all the depenedencies) and then you just need to configure your <a href="https://cloud.google.com/iap/" rel="nofollow noreferrer">IAP</a> as you like it.</p> <hr> <p><strong><code>secure-ingress.yaml</code></strong></p> <pre><code># Configure IAP security using ingress automatically # requirements: kubernetes version at least 1.10.5-gke.3 # requirements: service must respond with 200 at / endpoint (the healthcheck) # dependencies: need certificate secret my-secret-cert # dependencies: need oath-client secret my-secret-oath (with my.domain.com configured) # dependencies: need external IP address my-external-ip # dependencies: need domain my.domain.com to point to my-external-ip IP # dependencies: need an app (deployment/statefulset) my-app apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-secure-ingress namespace: default annotations: kubernetes.io/ingress.class: "gce" kubernetes.io/ingress.allow-http: "false" kubernetes.io/ingress.global-static-ip-name: my-external-ip spec: tls: - secretName: my-secret-cert backend: serviceName: my-service-be-web servicePort: 1234 --- kind: Service apiVersion: v1 metadata: name: my-service-be-web namespace: default annotations: beta.cloud.google.com/backend-config: '{"default": "my-service-be-conf"}' spec: type: NodePort selector: app: my-app ports: - protocol: TCP port: 1234 targetPort: 1234 name: my-port-web --- apiVersion: cloud.google.com/v1beta1 kind: BackendConfig metadata: name: my-service-be-conf namespace: default spec: iap: enabled: true oauthclientCredentials: secretName: my-secret-oath </code></pre>
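<p>For completeness, here is a rough sketch of how the two secret dependencies listed in the manifest comments might be created. The certificate files and OAuth client values are placeholders; the <code>client_id</code>/<code>client_secret</code> literal keys match what an IAP-enabled BackendConfig typically expects:</p> <pre><code># TLS certificate for my.domain.com
kubectl create secret tls my-secret-cert \
  --cert=fullchain.pem --key=privkey.pem

# OAuth client credentials used by IAP
kubectl create secret generic my-secret-oath \
  --from-literal=client_id=YOUR_CLIENT_ID.apps.googleusercontent.com \
  --from-literal=client_secret=YOUR_CLIENT_SECRET
</code></pre>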
<p>I successfully deployed Kafka to Kubernetes on local Docker (gcp &amp; minikube) using <a href="https://github.com/Yolean/kubernetes-kafka" rel="nofollow noreferrer">Yolean/kubernetes-kafka</a> &amp; <a href="https://github.com/helm/charts/tree/master/incubator/kafka" rel="nofollow noreferrer">Helm chart</a></p> <p>and tested topic production successfully from within the cluster using this python script:</p> <pre><code>#!/usr/bin/env python from kafka import KafkaConsumer, KafkaProducer KAFKA_TOPIC = 'demo' # KAFKA_BROKERS = 'localhost:32400' # see step 1 # from inside the cluster in a different namespace # KAFKA_BROKERS = 'bootstrap.kafka.svc.cluster.local:9092' KAFKA_BROKERS = 'kafka.kafka.svc.cluster.local:9092' print('KAFKA_BROKERS: ' + KAFKA_BROKERS) producer = KafkaProducer(bootstrap_servers=KAFKA_BROKERS) messages = [b'hello kafka', b'Falanga', b'3 test messages'] for m in messages: print(f"sending: {m}") producer.send(KAFKA_TOPIC, m) producer.flush() </code></pre> <p>On helm I used this option to enable external use:</p> <pre><code>helm install --name kafka --set external.enabled=true --namespace kafka incubator/kafka </code></pre> <p>and on the original repo I used:</p> <pre><code>kubectl apply -f ./outside-0.yml </code></pre> <p>The resulting services have endpoints and node ports but the script doesn't work from outside the cluster.</p> <p>here is the original service (branch master)</p> <pre><code>➜ ~ kubectl describe svc outside-0 --namespace kafka Name: outside-0 Namespace: kafka Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied- configuration={"apiVersion":"v1","kind":"Service","metadata": {"annotations":{},"name":"outside-0","namespace":"kafka"},"spec":{"ports": [{"nodePort":32400,"port":3240... Selector: app=kafka,kafka-broker-id=0 Type: NodePort IP: 10.99.171.133 LoadBalancer Ingress: localhost Port: &lt;unset&gt; 32400/TCP TargetPort: 9094/TCP NodePort: &lt;unset&gt; 32400/TCP Endpoints: 10.1.3.63:9094 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>here is the helm service description:</p> <pre><code>Name: kafka-0-external Namespace: kafka Labels: app=kafka chart=kafka-0.9.2 heritage=Tiller pod=kafka-0 release=kafka Annotations: dns.alpha.kubernetes.io/internal=kafka.cluster.local external- dns.alpha.kubernetes.io/hostname=kafka.cluster.local Selector: app=kafka,pod=kafka-0,release=kafka Type: NodePort IP: 10.103.70.223 LoadBalancer Ingress: localhost Port: external-broker 19092/TCP TargetPort: 31090/TCP NodePort: external-broker 31090/TCP Endpoints: 10.1.2.231:31090 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>The local docker node does not have an externalIP field:</p> <pre><code>kubectl describe node docker-for-desktop | grep IP InternalIP: 192.168.65.3 </code></pre> <p>I followed the instruction on the outside <a href="https://github.com/Yolean/kubernetes-kafka/blob/master/outside-services/README.md" rel="nofollow noreferrer">Readme</a> i.e.</p> <ol> <li>add hostPort to 50kafka statefullset 9094 port</li> <li>add node port discovery in 10broker-config</li> </ol> <p>&amp; discovered that the local docker node has no externalIP field </p> <p>How can I connect to kafka from outside the cluster on docker? Does this work on GKE or other deployments?</p>
<p>The service is exposing the pod to the internal Kubernetes network. In order to expose the service (which exposes the pod) to the internet, you need to set up an Ingress that points to the service.</p> <p>Ingresses are basically the equivalent of Apache/Nginx for Kubernetes. You can read up on how to do it at the following URL:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p> <p>Alternatively, you can expose a pod on the node network by defining the <code>service type</code> as a <code>NodePort</code> and assigning your specific port to it. It should be something like the following:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 31090
    name: http
</code></pre>
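<p>Once the NodePort service exists, the broker should be reachable at any node's address on that port. A quick way to check what was assigned, using the external service name from the question's helm output:</p> <pre><code>kubectl -n kafka get svc kafka-0-external -o wide
</code></pre>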
<p>I'm trying to setup kubernetes (from the tutorials for centos7) on three VMs, unfortunately the joining of the worker fails. I hope someone already had this problem (found it two times on the web with no answers), or might have a guess what's going wrong.</p> <p>Here is what I get by kubeadm join:</p> <pre><code>[preflight] running pre-flight checks [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}] you can solve this problem with following methods: 1. Run 'modprobe -- ' to load missing kernel modules; 2. Provide the missing builtin kernel ipvs support I0902 20:31:15.401693 2032 kernel_validator.go:81] Validating kernel version I0902 20:31:15.401768 2032 kernel_validator.go:96] Validating kernel config [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03 [discovery] Trying to connect to API Server "192.168.1.30:6443" [discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.30:6443" [discovery] Requesting info from "https://192.168.1.30:6443" again to validate TLS against the pinned public key [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.30:6443" [discovery] Successfully established connection with API Server "192.168.1.30:6443" [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. </code></pre> <p>Though kublet is running:</p> <pre><code>[root@k8s-worker1 nodesetup]# systemctl status kubelet -l ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since So 2018-09-02 20:31:15 CEST; 19min ago Docs: https://kubernetes.io/docs/ Main PID: 2093 (kubelet) Tasks: 7 Memory: 12.1M CGroup: /system.slice/kubelet.service └─2093 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni Sep 02 20:31:15 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent. Sep 02 20:31:15 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent... Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440010 2093 server.go:408] Version: v1.11.2 Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440314 2093 plugins.go:97] No cloud provider specified. [root@k8s-worker1 nodesetup]# </code></pre> <p>As far as I can see, the worker can connect to the master, but it tries to run a healthcheck on some local servlet which has not come up. Any ideas?</p> <p>Here is what I did to configure my worker:</p> <pre><code>exec bash setenforce 0 sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux echo "Setting Firewallrules" firewall-cmd --permanent --add-port=10250/tcp firewall-cmd --permanent --add-port=10255/tcp firewall-cmd --permanent --add-port=30000-32767/tcp firewall-cmd --permanent --add-port=6783/tcp firewall-cmd --reload echo "And enable br filtering" modprobe br_netfilter echo '1' &gt; /proc/sys/net/bridge/bridge-nf-call-iptables echo "disable swap" swapoff -a echo "### You need to edit /etc/fstab and comment the swapline!! ###" echo "Adding kubernetes repo for download" cat &lt;&lt;EOF &gt; /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF echo "install the Docker-ce dependencies" yum install -y yum-utils device-mapper-persistent-data lvm2 echo "add docker-ce repository" yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo echo "install docker ce" yum install -y docker-ce echo "Install kubeadm kubelet kubectl" yum install kubelet kubeadm kubectl -y echo "start and enable kubectl" systemctl restart docker &amp;&amp; systemctl enable docker systemctl restart kubelet &amp;&amp; systemctl enable kubelet echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)" echo "We assume that docker is using cgroupfs ... assuming kubelet does so too" docker info | grep -i cgroup grep -i cgroup /var/lib/kubelet/kubeadm-flags.env # old style # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf systemctl daemon-reload systemctl restart kubelet # There has been an issue reported that traffic in iptable is been routed incorrectly. # Below settings will make sure IPTable is configured correctly. # sudo bash -c 'cat &lt;&lt;EOF &gt; /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF' # Make changes effective sudo sysctl --system </code></pre> <p>Thanks for any help in advance.</p> <p><strong>Update I</strong></p> <p>Journalctl Output from the worker:</p> <pre><code>[root@k8s-worker1 ~]# journalctl -xeu kubelet Sep 02 21:19:56 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent. -- Subject: Unit kubelet.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kubelet.service has finished starting up. 
-- -- The start-up result is done. Sep 02 21:19:56 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent... -- Subject: Unit kubelet.service has begun start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kubelet.service has begun starting up. Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788059 3082 server.go:408] Version: v1.11.2 Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788214 3082 plugins.go:97] No cloud provider specified. Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469 3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a Sep 02 21:19:56 k8s-worker1 systemd[1]: Unit kubelet.service entered failed state. Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service failed. </code></pre> <p>And the get pods on the master side results in:</p> <pre><code>[root@k8s-master ~]# kubectl get pods --all-namespaces=true NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-78fcdf6894-79n2m 0/1 Pending 0 1d kube-system coredns-78fcdf6894-tlngr 0/1 Pending 0 1d kube-system etcd-k8s-master 1/1 Running 3 1d kube-system kube-apiserver-k8s-master 1/1 Running 0 1d kube-system kube-controller-manager-k8s-master 0/1 Evicted 0 1d kube-system kube-proxy-2x8cx 1/1 Running 3 1d kube-system kube-scheduler-k8s-master 1/1 Running 0 1d [root@k8s-master ~]# </code></pre> <p><strong>Update II</strong> As next step I generated a new token on the master side and used this one on the join command. Though the master token list displayed the the token as a valid one, the worker node insist that the master does not know about this token or it is expired....stop! Time to start all over, beginning with the master setup.</p> <p>So here's what I did:</p> <p>1) resetup the master VM, meaning a fresh centos7 (CentOS-7-x86_64-Minimal-1804.iso) installation on virtualbox. 
Configured networking von virtualbox: adapter1 as NAT to the host system (for being able to install the stuff) and adapter2 as internal network (same name to master and worker nodes for the kubernetes network).</p> <p>2) With the fresh image installed the basis interface enp0s3 was not configured to run at boot time (so ifup enp03s, and reconfigured in /etc/sysconfig/network-script to run at boot time).</p> <p>3) Configuring the second interface for the internal kubernetes network:</p> <p><strong>/etc/hosts:</strong></p> <pre><code>#!/bin/sh echo '192.168.1.30 k8s-master' &gt;&gt; /etc/hosts echo '192.168.1.40 k8s-worker1' &gt;&gt; /etc/hosts echo '192.168.1.50 k8s-worker2' &gt;&gt; /etc/hosts </code></pre> <p>Identified my second interface via "ip -color -human addr" which showed me the enp0S8 in my case, so:</p> <pre><code>#!/bin/sh echo "Setting up internal Interface" cat &lt;&lt;EOF &gt; /etc/sysconfig/network-scripts/ifcfg-enp0s8 DEVICE=enp0s8 IPADDR=192.168.1.30 NETMASK=255.255.255.0 NETWORK=192.168.1.0 BROADCAST=192.168.1.255 ONBOOT=yes NAME=enp0s8 EOF echo "Activate interface" ifup enp0s8 </code></pre> <p>4) Hostname, swap, disabling SELinux</p> <pre><code>#!/bin/sh echo "Setting hostname und deactivate SELinux" hostnamectl set-hostname 'k8s-master' exec bash setenforce 0 sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux echo "disable swap" swapoff -a echo "### You need to edit /etc/fstab and comment the swapline!! ###" </code></pre> <p>Some remarks here: I rebooted as I saw that the later preflight checks seems to parse /etc/fstab to see that the swap does not exists. Also it seems that centos reactivates SElinux (I need to check this later on) as workaround I disabled it again after each reboot.</p> <p>5) Establish the requires firewall settings</p> <pre><code>#!/bin/sh echo "Setting Firewallrules" firewall-cmd --permanent --add-port=6443/tcp firewall-cmd --permanent --add-port=2379-2380/tcp firewall-cmd --permanent --add-port=10250/tcp firewall-cmd --permanent --add-port=10251/tcp firewall-cmd --permanent --add-port=10252/tcp firewall-cmd --permanent --add-port=10255/tcp firewall-cmd --reload echo "And enable br filtering" modprobe br_netfilter echo '1' &gt; /proc/sys/net/bridge/bridge-nf-call-iptables </code></pre> <p>6) Adding the kubernetes repository</p> <pre><code>#!/bin/sh echo "Adding kubernetes repo for download" cat &lt;&lt;EOF &gt; /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF </code></pre> <p>7) Install the required packages and configure the services</p> <pre><code>#!/bin/sh echo "install the Docker-ce dependencies" yum install -y yum-utils device-mapper-persistent-data lvm2 echo "add docker-ce repository" yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo echo "install docker ce" yum install -y docker-ce echo "Install kubeadm kubelet kubectl" yum install kubelet kubeadm kubectl -y echo "start and enable kubectl" systemctl restart docker &amp;&amp; systemctl enable docker systemctl restart kubelet &amp;&amp; systemctl enable kubelet echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)" echo "We assume that docker is using cgroupfs ... 
assuming kubelet does so too" docker info | grep -i cgroup grep -i cgroup /var/lib/kubelet/kubeadm-flags.env # old style # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf systemctl daemon-reload systemctl restart kubelet # There has been an issue reported that traffic in iptable is been routed incorrectly. # Below settings will make sure IPTable is configured correctly. # sudo bash -c 'cat &lt;&lt;EOF &gt; /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF' # Make changes effective sudo sysctl --system </code></pre> <p>8) Init the cluster</p> <pre><code>#!/bin/sh echo "Init kubernetes. Check join cmd in initProtocol.txt" kubeadm init --apiserver-advertise-address=192.168.1.30 --pod-network-cidr=192.168.1.0/16 | tee initProtocol.txt </code></pre> <p>To verify here is the result of this command:</p> <pre><code>Init kubernetes. Check join cmd in initProtocol.txt [init] using Kubernetes version: v1.11.2 [preflight] running pre-flight checks [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly I0904 21:53:15.271999 1526 kernel_validator.go:81] Validating kernel version I0904 21:53:15.272165 1526 kernel_validator.go:96] Validating kernel config [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03 [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.30] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.30 127.0.0.1 ::1] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. 
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled [apiclient] All control plane components are healthy after 43.504792 seconds [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster [markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''" [markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation [bootstraptoken] using token: n4yt3r.3c8tuj11nwszts2d [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. 
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 192.168.1.30:6443 --token n4yt3r.3c8tuj11nwszts2d --discovery-token-ca-cert-hash sha256:466e7972a4b6997651ac1197fdde68d325a7bc41f2fccc2b1efc17515af61172 </code></pre> <p>Remark: looks fine for me so far, though I'm a bit worried that the latest docker-ce version might cause troubles here...</p> <p>9) Deploying the pod network</p> <pre><code>#!/bin/bash echo "Configure demo cluster usage as root" mkdir -p $HOME/.kube cp -i /etc/kubernetes/admin.conf $HOME/.kube/config chown $(id -u):$(id -g) $HOME/.kube/config # Deploy-Network using flanel # Taken from first matching two tutorials on the web # kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml # kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml # taken from https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml echo "Try to run kubectl get pods --all-namespaces" echo "After joining nodes: try to run kubectl get nodes to verify the status" </code></pre> <p>And here's the output of this command:</p> <pre><code>Configure demo cluster usage as root clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds created clusterrole.rbac.authorization.k8s.io/flannel configured clusterrolebinding.rbac.authorization.k8s.io/flannel configured serviceaccount/flannel unchanged configmap/kube-flannel-cfg unchanged daemonset.extensions/kube-flannel-ds-amd64 created daemonset.extensions/kube-flannel-ds-arm64 created daemonset.extensions/kube-flannel-ds-arm created daemonset.extensions/kube-flannel-ds-ppc64le created daemonset.extensions/kube-flannel-ds-s390x created Try to run kubectl get pods --all-namespaces After joining nodes: try to run kubectl get nodes to verify the status </code></pre> <p>So I tried kubectl get pods --all-namespaces and I get</p> <pre><code>[root@k8s-master nodesetup]# kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-78fcdf6894-pflhc 0/1 Pending 0 33m kube-system coredns-78fcdf6894-w7dxg 0/1 Pending 0 33m kube-system etcd-k8s-master 1/1 Running 0 27m kube-system kube-apiserver-k8s-master 1/1 Running 0 27m kube-system kube-controller-manager-k8s-master 0/1 Evicted 0 27m kube-system kube-proxy-stfxm 1/1 Running 0 28m kube-system kube-scheduler-k8s-master 1/1 Running 0 27m </code></pre> <p>and </p> <pre><code>[root@k8s-master nodesetup]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master NotReady master 35m v1.11.2 </code></pre> <p>Hm...what's wrong with my master?</p> <p>Some observations:</p> <p>Sometime I got connection refused on running the kubectl in the beginning, I found out that it takes some minutes before the service is established. 
But because of this I was looking in the /var/log/firewalld and found a lot of these:</p> <pre><code>2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?). 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?). 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -X DOCKER' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -n -L DOCKER' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name. 2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?). </code></pre> <p>Wrong docker version? The docker installation setup seems to be broken.</p> <p>Anything else I can check on the master side... 
It's gonna be late - tomorrow I'm trying to join my worker again (within the 24h range of the initial token period).</p> <p><strong>Update III (After solving the docker issue)</strong></p> <pre><code>[root@k8s-master ~]# kubectl get pods --all-namespaces=true NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-78fcdf6894-pflhc 0/1 Pending 0 10h kube-system coredns-78fcdf6894-w7dxg 0/1 Pending 0 10h kube-system etcd-k8s-master 1/1 Running 0 10h kube-system kube-apiserver-k8s-master 1/1 Running 0 10h kube-system kube-controller-manager-k8s-master 1/1 Running 0 10h kube-system kube-flannel-ds-amd64-crljm 0/1 Pending 0 1s kube-system kube-flannel-ds-v6gcx 0/1 Pending 0 0s kube-system kube-proxy-l2dck 0/1 Pending 0 0s kube-system kube-scheduler-k8s-master 1/1 Running 0 10h [root@k8s-master ~]# </code></pre> <p>And master looks happy now</p> <pre><code>[root@k8s-master ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master Ready master 10h v1.11.2 [root@k8s-master ~]# </code></pre> <p>Stay tuned...after work I'm fixing docker/firewall on the worker, too and will try to join the cluster again (knowing now how to issue a new token if required). So Update IV will follow in about 10hours</p>
<p>It seems that your <code>kubeadm token</code> has expired, as shown in the attached <code>kubelet</code> logs:</p> <blockquote> <p>Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469<br> 3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized</p> </blockquote> <p>The TTL of this token is 24 hours from the moment <code>kubeadm init</code> is run; check this <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/" rel="nofollow noreferrer">link</a> for more information.</p> <p>The master node’s system runtime components also look unhealthy, so it is not clear the cluster is running correctly. Since the <code>CoreDNS</code> pods are stuck in the Pending state, take a look at the <code>kubeadm</code> troubleshooting <a href="https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#coredns-or-kube-dns-is-stuck-in-the-pending-state" rel="nofollow noreferrer">document</a> to check whether one of the <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">Pod network</a> providers has been installed on your cluster.</p> <p>I recommend rebuilding the cluster in order to refresh the <code>kubeadm token</code> and bootstrap the cluster system modules from scratch.</p>
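<p>If you do end up issuing a fresh token on the rebuilt master instead of reusing the original join command, a minimal sketch of the usual commands (run on the master) looks like this:</p> <pre><code># list existing bootstrap tokens and their expiry
kubeadm token list

# create a new token and print a ready-to-use join command
kubeadm token create --print-join-command
</code></pre>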
<p>I am following the steps in the getting started guide for <a href="https://github.com/kubeflow/website/blob/master/content/docs/started/getting-started-minikube.md" rel="nofollow noreferrer">kubeflow</a> and i got stuck at verify the setup works.</p> <p>I managed to get this:-</p> <pre><code>$ kubectl get ns NAME STATUS AGE default Active 2m kube-public Active 2m kube-system Active 2m kubeflow-admin Active 14s </code></pre> <p>but when i do </p> <pre><code>$ kubectl -n kubeflow get svc No resources found. </code></pre> <p>I also got </p> <pre><code>$ kubectl -n kubeflow get pods No resources found. </code></pre> <p>I repeated these both on my mac and my ubuntu VM, and both returned the same problem. Am i missing something here? </p> <p>Thanks.</p>
<p>Yes, you're missing something here: the namespace. As your first command shows, the deployment created the <code>kubeflow-admin</code> namespace rather than <code>kubeflow</code>, so query that one instead:</p> <pre><code>$ kubectl -n kubeflow-admin get all </code></pre>
<p>I am setting up VerneMQ (a MQTT broker) in a cluster configuration. Therefore I am launching 4 replicas in a stateful set. Apparently VerneMQ wants to communicate with the other brokers in a cluster via DNS like this:</p> <pre><code>echo "Will join an existing Kubernetes cluster with discovery node at ${kube_pod_name}.${VERNEMQ_KUBERNETES_SUBDOMAIN}.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local" </code></pre> <p>Unfortunately the logs indicate that this doesn't work:</p> <blockquote> <p>14:05:56.741 [info] Application vmq_server started on node 'VerneMQ@broker-vernemq-0.broker-vernemq.messaging.svc.cluster.local'</p> </blockquote> <p><code>broker-vernemq-0</code> is the pod's name and <code>broker-vernemq</code> is the name of the statefulset. The service is configured as LoadBalancer.</p> <p><strong>The problem:</strong></p> <p>I connected to the pod <code>broker-vernemq-1</code> via terminal and executed <code>ping broker-vernemq-0</code> and I wondered that it is not able to resolve this hostname:</p> <blockquote> <p>ping: unknown host broker-vernemq-0</p> </blockquote> <p>I was under the impression that this is supposed to work?</p>
<p>The service must be headless (<code>clusterIP: None</code>) for kube-dns to serve per-pod domain names like <code>broker-vernemq-0.broker-vernemq.messaging.svc.cluster.local</code>; a LoadBalancer service alone will not create them. See <a href="https://stackoverflow.com/a/46638059">https://stackoverflow.com/a/46638059</a></p>
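<p>A rough sketch of what that could look like for the StatefulSet in the question (the label selector and the MQTT port are assumptions; adjust them to your actual pod labels and listener ports, and keep the existing LoadBalancer service for external clients):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: broker-vernemq
  namespace: messaging
spec:
  clusterIP: None           # headless: DNS returns the pod IPs directly
  selector:
    app: broker-vernemq     # assumption: must match the StatefulSet pod labels
  ports:
  - name: mqtt
    port: 1883              # assumption: standard MQTT listener
</code></pre> <p>The StatefulSet's <code>serviceName</code> field must reference this headless service for the <code>&lt;pod&gt;.&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local</code> names to be created.</p>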
<p>I'm new to Kubernetes and Rancher. I have builde node docker image with below commands:</p> <pre><code>FROM node:10 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY package.json /usr/src/app RUN npm cache clean RUN npm install COPY . /usr/src/app EXPOSE 3000 CMD ["npm","start"] </code></pre> <p>I have put docker image to my repo on docker hub. From Docker hub I'm pulling same image on Rancher/Kubernetes its showing as it as in Active state, as shown below:</p> <blockquote> <p>kubectl get svc -n nodejs</p> </blockquote> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE node-front-end ClusterIP 10.43.14.96 &lt;none&gt; 49160/TCP 21m node-front-end-nodeport NodePort 10.43.171.52 &lt;none&gt; 49160:31366/TCP 21m </code></pre> <p>But when I'm trying with above IP and Port it's giving message : "This site can’t be reached"</p> <p>So i'm not able to understand what I'm doing wrong here. </p> <p>Please guide.</p>
<blockquote> <p>But when I'm trying with above IP and Port it's giving message : "This site can’t be reached"</p> </blockquote> <p>Correct, those <code>ClusterIP</code>s are "virtual," in that they exist only inside the cluster. The address you will want to use is <em>any</em> of the <code>Node</code>'s IP addresses, and then the port <code>:31366</code> listed there in the <code>Service</code> of type <code>NodePort</code>.</p> <p>Just in case you don't already know them, one can usually find the IP address of the Nodes with <code>kubectl get -o wide nodes</code>.</p>
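<p>For example (the address below is a placeholder for whatever one of your nodes reports as its internal or external IP):</p> <pre><code>kubectl get -o wide nodes
curl http://&lt;node-ip&gt;:31366/
</code></pre>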
<p>I have got the following services:</p> <pre><code>ubuntu@master:~$ kubectl get services --all-namespaces NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes 100.64.0.1 &lt;none&gt; 443/TCP 48m kube-system kube-dns 100.64.0.10 &lt;none&gt; 53/UDP,53/TCP 47m kube-system kubernetes-dashboard 100.70.83.136 &lt;nodes&gt; 80/TCP 47m </code></pre> <p>I am attempting to access kubernetes dashboard. The following response seems reasonable, taking into account curl is not a browser.</p> <pre><code>ubuntu@master:~$ curl 100.70.83.136 &lt;!doctype html&gt; &lt;html ng-app="kubernetesDashboard"&gt; &lt;head&gt; &lt;meta charset="utf-8"&gt; &lt;title&gt;Kubernetes Dashboard&lt;/title&gt; &lt;link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"&gt; &lt;meta name="viewport" content="width=device-width"&gt; &lt;link rel="stylesheet" href="static/vendor.36bb79bb.css"&gt; &lt;link rel="stylesheet" href="static/app.d2318302.css"&gt; &lt;/head&gt; &lt;body&gt; &lt;!--[if lt IE 10]&gt; &lt;p class="browsehappy"&gt;You are using an &lt;strong&gt;outdated&lt;/strong&gt; browser. Please &lt;a href="http://browsehappy.com/"&gt;upgrade your browser&lt;/a&gt; to improve your experience.&lt;/p&gt; &lt;![endif]--&gt; &lt;kd-chrome layout="column" layout-fill&gt; &lt;/kd-chrome&gt; &lt;script src="static/vendor.633c6c7a.js"&gt;&lt;/script&gt; &lt;script src="api/appConfig.json"&gt;&lt;/script&gt; &lt;script src="static/app.9ed974b1.js"&gt;&lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>According to the documentation the right access point is <a href="https://localhost/ui" rel="noreferrer">https://localhost/ui</a>. So, I am trying it and receive a bit worrying result. <strong>Is it expected response?</strong></p> <pre><code>ubuntu@master:~$ curl https://localhost/ui curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. </code></pre> <p>Trying the same without certificate validation. For curl it might be OK. but I have got the same in a browser, which is connecting though port forwarding via vagrant forwarded_port option.</p> <pre><code>ubuntu@master:~$ curl -k https://localhost/ui Unauthorized </code></pre> <p><strong>What I am doing wrong? and how to make sure I can access the UI?</strong> Currently it responds with Unauthorized.</p> <p>The docs for the dashboard tell the password is in the configuration:</p> <pre><code>ubuntu@master:~$ kubectl config view apiVersion: v1 clusters: [] contexts: [] current-context: "" kind: Config preferences: {} users: [] </code></pre> <p>but it seems I have got nothing... <strong>Is it expected behavior? How can I authorize with the UI?</strong></p>
<p>The official wiki is a little confusing, so here is a reordered summary:</p> <p>If you use the <a href="https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended/kubernetes-dashboard.yaml" rel="noreferrer">recommended</a> yaml to deploy the dashboard, you can only access it over https and you need to generate your own certs; refer to the <a href="https://github.com/kubernetes/dashboard/wiki/Installation" rel="noreferrer">guide</a>. Then you can run <code>kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'</code> and visit the dashboard at &quot;http://localhost:8001/ui&quot;. This page requires a token to log in; to generate one, refer to <a href="https://github.com/kubernetes/dashboard/wiki/Creating-sample-user" rel="noreferrer">this page</a>. Alternatively, you can add a <code>NodePort</code> to your yaml and access the dashboard via <code>&lt;nodeip&gt;:&lt;port&gt;</code>.</p> <p>If you deploy using the <a href="https://github.com/kubernetes/dashboard/blob/master/aio/deploy/alternative.yaml" rel="noreferrer">http alternative</a> method, you can <strong>only access your dashboard via nodeip:port</strong>. Remember to add the <code>NodePort</code> to the yaml first! After deployment, you also need to generate a token and <strong>add the header <code>Authorization: Bearer &lt;token&gt;</code> to every request</strong>.</p> <p>I hope this helps you and others who want to use the kube-dashboard.</p>
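<p>For reference, the token login mentioned above is usually bootstrapped with a ServiceAccount plus ClusterRoleBinding along these lines (a sketch following the linked sample; the <code>admin-user</code> name and the very broad <code>cluster-admin</code> role are choices you may want to tighten):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
</code></pre> <p>The bearer token can then be read from the service account's secret, e.g. with <code>kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')</code>.</p>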
<p>I am trying to switch my local dev environment to run in minikube. I have all the container images built and I have all the YAML configs and I have all the services I need running and I can access them using the URL returned from <code>minikube service web --url</code> (web is the name of my front facing nginx server). But there is one thing that I have not been able to figure out. The project I am working on requires smart external devices communicating with the backend. I have a few devices sitting on my bench, connected to the local LAN, but I cannot figure out how to expose services running inside minikube to the outside, i.e. so a device can connect to a service using my laptop's external IP. Is there a standard way of doing this?</p> <p>Edit: I have attempted to configure an ingress for my service. Here is my ingress config.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: web spec: backend: serviceName: web servicePort: 80 </code></pre> <p>The web service is accessible via <code>minikube service web</code> command and is exposed as type NodePort. All I get is "default backend 404" when I try to access the ingress. On the other hand, even if it did work, I would still have a problem, since ingress is exposing the service on the VM internal subnet and is not accessible from outside of the host machine. I am starting to consider running a proxy or accelerator of some sort to forward things from the host to the minikube vm. Still need to have ingress running to have a persistent endpoint for the proxy.</p>
<p>There are multiple ways, but I solved it like this.</p> <pre><code>~ → 🐳 $ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
</code></pre> <p>Here we can reach a service using 192.168.99.100 plus its node port. For example, for the Dashboard with node port 30000 the URL would be: <a href="http://192.168.99.100:30000/" rel="noreferrer">http://192.168.99.100:30000/</a> </p> <p>You can find a service's node port with the following command:</p> <pre><code>~ → 🐳 $ kubectl get svc --all-namespaces
</code></pre>
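<p>The same two values can also be fetched directly; a small sketch for the <code>web</code> service from the question (assuming it is of type NodePort, as <code>minikube service web --url</code> suggests):</p> <pre><code>minikube ip
kubectl get svc web -o jsonpath='{.spec.ports[0].nodePort}'
</code></pre>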
<p>While I can create custom objects just fine, I am wondering how one is supposed to handle large payloads (Gigabytes) for an object.</p> <p>CRs are mostly used in order to interface with garbage collection/reference counting in Kubernetes.</p> <p>Adding the payload via YAML does not work, though (out of memory for large payloads):</p> <pre><code>apiVersion: "data.foo.bar/v1" kind: Dump metadata: name: my-data ownerReferences: - apiVersion: apps/v1 kind: Deploy name: my-deploy uid: d9607a69-f88f-11e7-a518-42010a800195 spec: payload: dfewfawfjr345434hdg4rh4ut34gfgr_and_so_on_... </code></pre> <p>One could perhaps add the payload to a PV and just reference that path in the CR. Then I have the problem, that it seems like I cannot clean up the payload file, should the CR get finalized (could not find any info about custom Finalizers).</p> <p>Have no clear idea how to integrate such a concept into Kubernetes lifetimes.</p>
<p>In general the size limit for any Kube API object is ~1 MB due to etcd restrictions, but putting more than 20-30 KB in an object is a bad idea and will be expensive to access (and garbage collection will be expensive as well).</p> <p>I would recommend storing the data in an object storage bucket and using an RBAC proxy like <a href="https://github.com/brancz/kube-rbac-proxy" rel="nofollow noreferrer">https://github.com/brancz/kube-rbac-proxy</a> to gate access to the bucket contents (use a URL to the proxy as a reference from your object). That gives you all the benefits of tracking the data in the API, but keeps the object size small. If you want a more complex integration you could implement an aggregated API and reuse the core Kubernetes libraries to handle your API, storing the data in the object store.</p>
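<p>As a sketch of the &quot;reference, don't embed&quot; idea applied to the question's example, the CR would only carry a pointer to the externally stored payload (the <code>payloadRef</code> field name and the URL are hypothetical, since the schema of a custom resource is yours to define):</p> <pre><code>apiVersion: "data.foo.bar/v1"
kind: Dump
metadata:
  name: my-data
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: my-deploy
    uid: d9607a69-f88f-11e7-a518-42010a800195
spec:
  # the payload itself lives in object storage behind the RBAC proxy;
  # the CR stays tiny and cheap to store and garbage-collect
  payloadRef:
    url: https://dump-proxy.example.svc/buckets/dumps/my-data
    sizeBytes: 2147483648
</code></pre>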
<p>After connecting my Gitlab repo to my self-putup Kubernetes cluster via Operations > Kubernetes, I want to install Helm Tiller via the GUI; but I get:</p> <blockquote> <p>Something went wrong while installing Helm Tiller</p> <p>Kubernetes error: configmaps "values-content-configuration-helm" already exists</p> </blockquote> <p>There are no pods running on the cluster and <code>kubectl version</code> returns:</p> <blockquote> <p>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}</p> <p>Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}</p> </blockquote> <p><strong>update</strong></p> <p>the output of <code>kubectl get cm --all-namespaces</code>:</p> <pre><code>NAMESPACE NAME DATA AGE gitlab-managed-apps values-content-configuration-helm 3 7d ... </code></pre> <p>deleting this namespace solves the issue!</p>
<p>Find the <code>gitlab-managed-apps</code> namespace with <code>kubectl get cm --all-namespaces</code>:</p> <pre><code>NAMESPACE NAME DATA AGE gitlab-managed-apps values-content-configuration-helm 3 7d ... </code></pre> <p>deleting this namespace solves the issue:</p> <pre><code>kubectl delete namespace gitlab-managed-apps </code></pre> <p>Thanks to <a href="https://stackoverflow.com/users/1933452/lev-kuznetsov">Lev Kuznetsov</a>.</p>
<p>I want to manually delete iptables rules for debugging. I have several rules created by kube-proxy based on service <code>nettools</code>:</p> <pre><code># kubectl get endpoints nettools NAME ENDPOINTS AGE nettools 172.16.27.138:7493 1h </code></pre> <p>And its iptables rules:</p> <pre><code># iptables-save|grep nettools -A KUBE-SEP-6DFMUWHMXOYMFWKG -s 172.16.27.138/32 -m comment --comment "default/nettools:web" -j KUBE-MARK-MASQ -A KUBE-SEP-6DFMUWHMXOYMFWKG -p tcp -m comment --comment "default/nettools:web" -m tcp -j DNAT --to-destination 172.16.27.138:7493 -A KUBE-SERVICES -d 10.0.1.2/32 -p tcp -m comment --comment "default/nettools:web cluster IP" -m tcp --dport 7493 -j KUBE-SVC-INDS3KD6I5PFKUWF -A KUBE-SVC-INDS3KD6I5PFKUWF -m comment --comment "default/nettools:web" -j KUBE-SEP-6DFMUWHMXOYMFWKG </code></pre> <p>However,I cannot delete those rules:</p> <pre><code># iptables -D KUBE-SVC-INDS3KD6I5PFKUWF -m comment --comment "default/nettools:web" -j KUBE-SEP-6DFMUWHMXOYMFWKG iptables v1.4.21: Couldn't load target `KUBE-SEP-6DFMUWHMXOYMFWKG':No such file or directory # iptables -D KUBE-SERVICES -d 10.0.1.2/32 -p tcp -m comment --comment "default/nettools:web cluster IP" -m tcp --dport 7493 -j KUBE-SVC-INDS3KD6I5PFKUWF iptables v1.4.21: Couldn't load target `KUBE-SVC-INDS3KD6I5PFKUWF':No such file or directory </code></pre>
<p>There are multiple tables in play when dealing with <code>iptables</code>. <code>filter</code> table is the default if nothing is specified. The rules that you are trying to delete are part of the <code>nat</code> table.</p> <p>Just add <code>-t nat</code> to your rules to delete those rules.</p> <p>Example:</p> <pre><code># iptables -t nat -D KUBE-SVC-INDS3KD6I5PFKUWF -m comment --comment "default/nettools:web" -j KUBE-SEP-6DFMUWHMXOYMFWKG </code></pre>
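<p>A convenient way to double-check before deleting is to list the <code>nat</code> table rules in the same command form that <code>iptables -D</code> expects:</p> <pre><code># print nat-table rules as iptables commands, filtered to the service
iptables -t nat -S | grep nettools
</code></pre>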
<p>trying to get into istio on kubernetes but it seems i am missing either some fundamentals, or i am doing things back to front. I am quite experienced in kubernetes, but istio and its virtualservice confuses me a bit.</p> <p>I created 2 deployments (helloworld-v1/helloworld-v2). Both have the same image, the only thing thats different is the environment variables - which output either version: "v1" or version: "v2". I am using a little testcontainer i wrote which basically returns the headers i got into the application. A kubernetes service named "helloworld" can reach both.</p> <p>I created a Virtualservice and a Destinationrule</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: helloworld spec: hosts: - helloworld http: - route: - destination: host: helloworld subset: v1 weight: 90 - destination: host: helloworld subset: v2 weight: 10 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: helloworld spec: host: helloworld subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 </code></pre> <p>According to the docs not mentioning any gateway should use the internal "mesh" one. Sidecar containers are successfully attached:</p> <pre><code>kubectl -n demo get all NAME READY STATUS RESTARTS AGE pod/curl-6657486bc6-w9x7d 2/2 Running 0 3h pod/helloworld-v1-d4dbb89bd-mjw64 2/2 Running 0 6h pod/helloworld-v2-6c86dfd5b6-ggkfk 2/2 Running 0 6h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/helloworld ClusterIP 10.43.184.153 &lt;none&gt; 80/TCP 6h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/curl 1 1 1 1 3h deployment.apps/helloworld-v1 1 1 1 1 6h deployment.apps/helloworld-v2 1 1 1 1 6h NAME DESIRED CURRENT READY AGE replicaset.apps/curl-6657486bc6 1 1 1 3h replicaset.apps/helloworld-v1-d4dbb89bd 1 1 1 6h replicaset.apps/helloworld-v2-6c86dfd5b6 1 1 1 6h </code></pre> <p>Everything works quite fine when i access the application from "outside" (istio-ingressgateway), v2 is called one times, v1 9 nine times:</p> <pre><code>curl --silent -H 'host: helloworld' http://localhost {"host":"helloworld","user-agent":"curl/7.47.0","accept":"*/*","x-forwarded-for":"10.42.0.0","x-forwarded-proto":"http","x-envoy-internal":"true","x-request-id":"a6a2d903-360f-91a0-b96e-6458d9b00c28","x-envoy-decorator-operation":"helloworld:80/*","x-b3-traceid":"e36ef1ba2229177e","x-b3-spanid":"e36ef1ba2229177e","x-b3-sampled":"1","x-istio-attributes":"Cj0KF2Rlc3RpbmF0aW9uLnNlcnZpY2UudWlkEiISIGlzdGlvOi8vZGVtby9zZXJ2aWNlcy9oZWxsb3dvcmxkCj8KGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIjEiFoZWxsb3dvcmxkLmRlbW8uc3ZjLmNsdXN0ZXIubG9jYWwKJwodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USBhIEZGVtbwooChhkZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWUSDBIKaGVsbG93b3JsZAo6ChNkZXN0aW5hdGlvbi5zZXJ2aWNlEiMSIWhlbGxvd29ybGQuZGVtby5zdmMuY2x1c3Rlci5sb2NhbApPCgpzb3VyY2UudWlkEkESP2t1YmVybmV0ZXM6Ly9pc3Rpby1pbmdyZXNzZ2F0ZXdheS01Y2NiODc3NmRjLXRyeDhsLmlzdGlvLXN5c3RlbQ==","content-length":"0","version":"v1"} "version": "v1", "version": "v1", "version": "v2", "version": "v1", "version": "v1", "version": "v1", "version": "v1", "version": "v1", "version": "v1", </code></pre> <p>But as soon as i do the curl from within a pod (in this case just byrnedo/alpine-curl) against the service things start to get confusing:</p> <pre><code>curl --silent -H 'host: helloworld' http://helloworld.demo.svc.cluster.local {"host":"helloworld","user-agent":"curl/7.61.0","accept":"*/*","version":"v1"} "version":"v2" "version":"v2" "version":"v1" "version":"v1" "version":"v2" "version":"v2" 
"version":"v1" "version":"v2“ "version":"v1" </code></pre> <p>Not only that i miss all the istio attributes (which i understand in a service to service communication because as i understand it they are set when the request first enters the mesh via gateway), but the balance for me looks like the default 50:50 balance of a kubernetes service.</p> <p>What do i have to do to achieve the same 1:9 balance on an inter-service communication? Do i have to create a second, "internal" gateway to use instead the service fqdn? Did i miss a definition? Should calling a service fqdn from within a pod respect a virtualservice routing?</p> <p>used istio version is 1.0.1, used kubernetes version v1.11.1. </p> <p><strong>UPDATE</strong> deployed the sleep-pod as suggested, (this time not relying on the auto-injection of the demo namespace) but manually as described in the sleep sample</p> <pre><code>kubectl -n demo get deployment sleep -o wide NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR sleep 1 1 1 1 2m sleep,istio-proxy tutum/curl,docker.io/istio/proxyv2:1.0.1 app=sleep </code></pre> <p>Also changed the Virtualservice to 0/100 to see if it works at first glance . Unfortunately this did not change much:</p> <pre><code>export SLEEP_POD=$(kubectl get -n demo pod -l app=sleep -o jsonpath={.items..metadata.name}) kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld {"user- agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"} kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld {"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"} kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld {"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2"} kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld {"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v1"} kubectl -n demo exec -it $SLEEP_POD -c sleep curl http://helloworld {"user-agent":"curl/7.35.0","host":"helloworld","accept":"*/*","version":"v2" </code></pre>
<p>Found the solution: one of the prerequisites (which I had forgotten) is that proper routing requires named service ports, see <a href="https://istio.io/docs/setup/kubernetes/spec-requirements/" rel="noreferrer">https://istio.io/docs/setup/kubernetes/spec-requirements/</a>. The port name has to start with the protocol (here <code>http</code>) so Istio knows how to route it.</p> <p>Wrong:</p> <pre><code>spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 3000
</code></pre> <p>Right:</p> <pre><code>spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
</code></pre> <p>After naming the port <code>http</code>, everything works like a charm.</p>
<p>I am trying to run my kafka and zookeeper in kubernetes pods. </p> <p>Here is my <code>zookeeper-service.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.1.0 (36652f6) creationTimestamp: null labels: io.kompose.service: zookeeper-svc name: zookeeper-svc spec: ports: - name: "2181" port: 2181 targetPort: 2181 selector: io.kompose.service: zookeeper status: loadBalancer: {} </code></pre> <p>Below is <code>zookeeper-deployment.yaml</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.1.0 (36652f6) creationTimestamp: null labels: io.kompose.service: zookeeper name: zookeeper spec: replicas: 1 strategy: {} template: metadata: creationTimestamp: null labels: io.kompose.service: zookeeper spec: containers: - image: wurstmeister/zookeeper name: zookeeper ports: - containerPort: 2181 resources: {} restartPolicy: Always status: {} </code></pre> <p><code>kafka-deployment.yaml</code> is as below:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: kompose.cmd: kompose convert -f docker-compose.yml kompose.version: 1.1.0 (36652f6) creationTimestamp: null labels: io.kompose.service: kafka name: kafka spec: replicas: 1 strategy: {} template: metadata: creationTimestamp: null labels: io.kompose.service: kafka spec: containers: - env: - name: KAFKA_ADVERTISED_HOST_NAME value: kafka - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper:2181 - name: KAFKA_PORT value: "9092" - name: KAFKA_ZOOKEEPER_CONNECT_TIMEOUT_MS value: "60000" image: wurstmeister/kafka name: kafka ports: - containerPort: 9092 resources: {} restartPolicy: Always status: {} </code></pre> <p>I first start the zookeeper service and deployment. Once the zookeeper is started and <code>kubectl get pods</code> shows it in running state, I start kafka deployment. Kafka deployment starts failing and restarting again and again, due to restartPolicy as always. When I checked the logs from kafka docker, I found that it is not able to connect to zookeeper service and the connection timesout. Here are the logs from kafka container.</p> <pre><code>[2018-09-03 07:06:06,670] ERROR Fatal error during KafkaServer startup. 
Prepare to shutdown (kafka.server.KafkaServer) kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING atkafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$ waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:230) at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226) at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226) at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251) at kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:226) at kafka.zookeeper.ZooKeeperClient.&lt;init&gt;(ZooKeeperClient.scala:95) at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1580) at kafka.server.KafkaServer.kafka$server$KafkaServer$$createZkClient$1(KafkaServer.scala:348) at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:372) at kafka.server.KafkaServer.startup(KafkaServer.scala:202) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38) at kafka.Kafka$.main(Kafka.scala:75) at kafka.Kafka.main(Kafka.scala) [2018-09-03 07:06:06,671] INFO shutting down (kafka.server.KafkaServer) [2018-09-03 07:06:06,673] WARN (kafka.utils.CoreUtils$) java.lang.NullPointerException atkafka.server.KafkaServer$$anonfun$shutdown$5.apply$mcV$sp(KafkaServer.scala:579) at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:86) at kafka.server.KafkaServer.shutdown(KafkaServer.scala:579) at kafka.server.KafkaServer.startup(KafkaServer.scala:329) at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38) at kafka.Kafka$.main(Kafka.scala:75) at kafka.Kafka.main(Kafka.scala) [2018-09-03 07:06:06,676] INFO shut down completed (kafka.server.KafkaServer) [2018-09-03 07:06:06,677] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable) [2018-09-03 07:06:06,678] INFO shutting down (kafka.server.KafkaServer) </code></pre> <p>What could be the reason for this ? and solutions ? </p> <p>Edit: logs from zookeeper pod:</p> <pre><code>2018-09-03 10:32:39,562 [myid:] - INFO [main:ZooKeeperServerMain@96] - Starting server 2018-09-03 10:32:39,567 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT 2018-09-03 10:32:39,567 [myid:] - INFO [main:Environment@100] - Server environment:host.name=zookeeper-7594d99b-sgm6p 2018-09-03 10:32:39,567 [myid:] - INFO [main:Environment@100] - Server environment:java.version=1.7.0_65 2018-09-03 10:32:39,567 [myid:] - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation 2018-09-03 10:32:39,567 [myid:] - INFO [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre 2018-09-03 10:32:39,567 [myid:] - INFO [main:Environment@100] - Server environment:java.class.path=/opt/zookeeper- 3.4.9/bin/../build/classes:/opt/zookeeper- 3.4.9/bin/../build/lib/*.jar:/opt/zookeeper-3.4.9/bin/../lib/slf4j- log4j12-1.6.1.jar:/opt/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6. 
1.ja r:/opt/zookeeper-3.4.9/bin/../lib/netty- 3.10.5.Final.jar:/opt/zookeeper-3.4.9/bin/../lib/log4j- 1.2.16.jar:/opt/zookeeper-3.4.9/bin/../lib/jline- 0.9.94.jar:/opt/zookeeper-3.4.9/bin/../zookeeper- 3.4.9.jar:/opt/zookeeper- 3.4.9/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.9/bin/../conf: 2018-09-03 10:32:39,567 [myid:] - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp 2018-09-03 10:32:39,569 [myid:] - INFO [main:Environment@100] - Server environment:java.compiler=&lt;NA&gt; 2018-09-03 10:32:39,569 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux 2018-09-03 10:32:39,569 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64 2018-09-03 10:32:39,569 [myid:] - INFO [main:Environment@100] - Server environment:os.version=4.15.0-20-generic 2018-09-03 10:32:39,569 [myid:] - INFO [main:Environment@100] - Server environment:user.name=root 2018-09-03 10:32:39,569 [myid:] - INFO [main:Environment@100] - Server environment:user.home=/root 2018-09-03 10:32:39,569 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.9 2018-09-03 10:32:39,570 [myid:] - INFO [main:ZooKeeperServer@815] - tickTime set to 2000 2018-09-03 10:32:39,571 [myid:] - INFO [main:ZooKeeperServer@824] - minSessionTimeout set to -1 2018-09-03 10:32:39,571 [myid:] - INFO [main:ZooKeeperServer@833] - maxSessionTimeout set to -1 2018-09-03 10:32:39,578 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181 </code></pre> <p>Edit: starting logs from kafka container:</p> <pre><code>Excluding KAFKA_HOME from broker config [Configuring] 'advertised.host.name' in '/opt/kafka/config/server.properties' [Configuring] 'port' in '/opt/kafka/config/server.properties' [Configuring] 'broker.id' in '/opt/kafka/config/server.properties' Excluding KAFKA_VERSION from broker config [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties' [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties' [Configuring] 'zookeeper.connect.timeout.ms' in '/opt/kafka/config/server.properties' [2018-09-05 10:47:22,036] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [2018-09-05 10:47:23,145] INFO starting (kafka.server.KafkaServer) [2018-09-05 10:47:23,148] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) [2018-09-05 10:47:23,288] INFO [ZooKeeperClient] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) [2018-09-05 10:47:23,300] INFO Client environment:zookeeper.version=3.4.13- 2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper) [2018-09-05 10:47:23,300] INFO Client environment:host.name=kafka -757dc6c47b-zpzfz (org.apache.zookeeper.ZooKeeper) [2018-09-05 10:47:23,300] INFO Client environment:java.version=1.8.0_171 (org.apache.zookeeper.ZooKeeper) [2018-09-05 10:47:23,301] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper) [2018-09-05 10:47:23,301] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper) [2018-09-05 10:47:23,301] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation- 1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0- b42.jar:/opt/kafka/bin/../libs/argparse4j- 0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations- 0.5.0.jar:/opt/kafka/bin/../libs/commons-lang3- 3.5.jar:/opt/kafka/bin/../libs/connect-api- 2.0.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension- 2.0.0.jar:/opt/kafka/bin/../libs/connect-file- 2.0.0.jar:/opt/kafka/bin/../libs/connect-json- 2.0.0.jar:/opt/kafka/bin/../libs/connect-runtime- 2.0.0.jar:/opt/kafka/bin/../libs/connect-transforms- 2.0.0.jar:/opt/kafka/bin/../libs/guava- 20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0- b42.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0- b42.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0- b42.jar:/opt/kafka/bin/../libs/jackson-annotations- 2.9.6.jar:/opt/kafka/bin/../libs/jackson-core- 2.9.6.jar:/opt/kafka/bin/../libs/jackson-databind- 2.9.6.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider- 2.9.6.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations- CR2.jar:/opt/kafka/bin/../libs/javax.annotation-api- 1.2.jar:/opt/kafka/bin/../libs/javax.inject- 1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0- b42.jar:/opt/kafka/bin/../libs/javax.servlet-api- 3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api- 2.1.jar:/opt/kafka/bin/../libs/jaxb-api- 2.3.0.jar:/opt/kafka/bin/../libs/jersey-client- 2.27.jar:/opt/kafka/bin/../libs/jersey-common- 2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet -2.27.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core- 2.27.jar:/opt/kafka/bin/../libs/jersey-hk2- 2.27.jar:/opt/kafka/bin/../libs/jersey-media-jaxb- 2.27.jar:/opt/kafka/bin/../libs/jersey-server -2.27.jar:/opt/kafka/bin/../libs/jetty-client -9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-continuation- 9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-http- 9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-io- 9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-security- 9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-server- 9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-servlet- 9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-servlets- 9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jetty-util- 9.4.11.v20180605.jar:/opt/kafka/bin/../libs/jopt-simple- 5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients- 2.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender- 2.0.0.jar:/opt/kafka/bin/../libs/kafka-streams- 2.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples- 2.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.11- 2.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils- 2.0.0.jar:/opt/kafka/bin/../libs/kafka-tools- 2.0.0.jar:/opt/kafka/bin/../libs/kafka_2.11-2.0.0- sources.jar:/opt/kafka/bin/../libs/kafka_2.11-2 .0.0.jar:/opt/kafka/bin/../libs/log4j 1.2.17.jar:/opt/kafka/bin/../libs/lz4-java- 
1.4.1.jar:/opt/kafka/bin/../libs/maven-artifact- 3.5.3.jar:/opt/kafka/bin/../libs/metrics-core- 2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator- 1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils- 3.1.0.jar:/opt/kafka/bin/../libs/reflections- 0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni- 5.7.3.jar:/opt/kafka/bin/../libs/scala-library- 2.11.12.jar:/opt/kafka/bin/../libs/scala-logging_2.11- 3.9.0.jar:/opt/kafka/bin/../libs/scala-reflect- 2.11.12.jar:/opt/kafka/bin/../libs/slf4j-api- 1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12- 1.7.25.jar:/opt/kafka/bin/../libs/snappy-java- 1.1.7.1.jar:/opt/kafka/bin/../libs/validation-api- 1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient- 0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.13.jar (org.apache.zookeeper.ZooKeeper) </code></pre> <p>output for <code>kubectl get svc -o wide</code> is as follows:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 50m &lt;none&gt; zookeeper ClusterIP 10.98.180.138 &lt;none&gt; 2181/TCP 48m io.kompose.service=zookeeper </code></pre> <p>output from <code>kubectl get pods -o wide</code>:</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE kafka-757dc6c47b-zpzfz 0/1 CrashLoopBackOff 15 1h 10.32.0.17 administrator-thinkpad-l480 zookeeper-7594d99b-784n9 1/1 Running 0 1h 10.32.0.19 administrator-thinkpad-l480 </code></pre> <p>Edit: output from <code>kubectl describe pod kafka-757dc6c47b-zpzfz</code>:</p> <pre><code>Name: kafka-757dc6c47b-zpzfz Namespace: default Node: administrator-thinkpad-l480/10.11.17.86 Start Time: Wed, 05 Sep 2018 16:17:06 +0530 Labels: io.kompose.service=kafka pod-template-hash=3138727036 Annotations: &lt;none&gt; Status: Running IP: 10.32.0.17 Controlled By: ReplicaSet/kafka-757dc6c47b Containers: kafka: Container ID:docker://2bdc06d876ae23437c61f4e95539a67903cdb61e88fd9c68377b47c7705293a3 Image: wurstmeister/kafka Image ID: docker-pullable://wurstmeister/kafka@sha256:2e3ff64e70ea983530f590282f36991c0a1b105350510f53cc3d1a0279b83c28 Port: 9092/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 05 Sep 2018 17:29:06 +0530 Finished: Wed, 05 Sep 2018 17:29:14 +0530 Ready: False Restart Count: 18 Environment: KAFKA_ADVERTISED_HOST_NAME: kafka KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_PORT: 9092 KAFKA_ZOOKEEPER_CONNECT_TIMEOUT_MS: 160000 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-nhb9z (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-nhb9z: Type: Secret (a volume populated by a Secret) SecretName: default-token-nhb9z Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 3m (x293 over 1h) kubelet, administrator-thinkpad-l480 Back-off restarting failed container </code></pre>
<blockquote> <p>What could be the reason for this ? and solutions ?</p> </blockquote> <p>The reason is hidden behind the following log line:</p> <pre><code>INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) </code></pre> <p>Kafka is looking for <code>zookeeper</code>, while it should be looking for your service name, which is <code>zookeeper-svc</code>.</p> <p>The solution is then straightforward: either rename your ZooKeeper service from <code>zookeeper-svc</code> to <code>zookeeper</code> in your <code>zookeeper-service.yaml</code>, like so:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.1.0 (36652f6)
  creationTimestamp: null
  labels:
    io.kompose.service: zookeeper
  name: zookeeper
spec:
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181
  selector:
    io.kompose.service: zookeeper
status:
  loadBalancer: {}
</code></pre> <p>or, more cleanly, keep the service name as it is and reconfigure Kafka to look for <code>zookeeper-svc</code> instead of <code>zookeeper</code>.</p> <p><strong>Note</strong>: your minimal example was enough to reproduce the issue (thank you, it is nice to see a working minimal example!). Although the pod then comes up and runs (it is no longer in an error state), it still logs <code>java.io.IOException: Can't resolve address: kafka:9092</code>, which is tied to another issue (no service covering the Kafka deployment) and is out of the scope of this question; just to let you know.</p>
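<p>For the second option, a minimal sketch of the change in the question's <code>kafka-deployment.yaml</code> would be to point the connect string at the existing service name (assuming the service stays named <code>zookeeper-svc</code>):</p> <pre><code>env:
- name: KAFKA_ADVERTISED_HOST_NAME
  value: kafka
- name: KAFKA_ZOOKEEPER_CONNECT
  value: zookeeper-svc:2181   # must match the Service name (and namespace, if different)
- name: KAFKA_PORT
  value: "9092"
</code></pre> <p>Either way, the hostname Kafka uses has to resolve to the ZooKeeper Service via cluster DNS.</p>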
<p>I have a node JS app which i am deploying to kubernetes. </p> <p>I have made changes to node JS app and am redeploying the app to K8s. </p> <p>However, I notice that the deployment is not making through. </p> <p>I checked my docker hub and yes the latest image is being deployed. This is my service.yaml file below</p> <pre><code>apiVersion: v1 kind: Service metadata: name: fourthapp spec: type: LoadBalancer #Exposes the service as a node port ports: - port: 3000 protocol: TCP targetPort: 3000 selector: app: webapp </code></pre> <p>and this is my deploy.yaml file</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: fourthapp spec: replicas: 2 template: metadata: labels: app: webapp spec: containers: - name: webapp image: index.docker.io/leexha/nodejsapp:latest ports: - containerPort: 3000 resources: requests: memory: 500Mi cpu: 0.5 limits: memory: 500Mi cpu: 0.5 imagePullPolicy: Always </code></pre> <p>when i run the service.yaml it reads</p> <pre><code>C:\Users\adrlee\Desktop\oracle\Web_projects&gt;kubectl apply -f service.yml service "fourthapp" unchanged </code></pre> <p>Anything im doing wrong?</p>
<p>If I understood the question correctly, you should update the <em>Deployment</em> instead. The <em>Service</em> is just a kind of load balancer that dispatches traffic between your pods.</p> <p>First, make sure <code>imagePullPolicy: Always</code> is set on the deployment to force Kubernetes to download the newest image.</p> <p>Then, to update the <em>deployment</em>, you can run</p> <p><code>kubectl apply -f deploy.yml</code></p> <p>or perform a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">Rolling Update</a>.</p>
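<p>Note that <code>kubectl apply</code> only triggers a rollout when something in the pod template actually changes; re-applying an identical manifest (compare the <code>service "fourthapp" unchanged</code> message above) does nothing. A common way to force new pods onto a freshly pushed image, sketched here with the names from the question, is to change the image reference or to patch an annotation in the pod template:</p> <pre><code># point the container at an explicit new tag (":v2" is a hypothetical tag)
kubectl set image deployment/fourthapp webapp=index.docker.io/leexha/nodejsapp:v2

# or force a rollout by touching the pod template (the date is just an arbitrary changing value)
kubectl patch deployment fourthapp \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy\":\"$(date +%s)\"}}}}}"
</code></pre> <p>With <code>imagePullPolicy: Always</code>, every newly created pod pulls the image again, so either command picks up the newly pushed version.</p>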
<p>Hi,</p> <p>I'm looking for the documentation for Kubernetes's configuration files, i.e. the ones used by kubectl (e.g. <code>kubectl create -f whatever.yaml</code>).</p> <p>Basically, the Kubernetes equivalent of this <a href="https://docs.docker.com/compose/compose-file/" rel="nofollow noreferrer">Docker Compose</a> document.</p> <p>I searched a lot but didn't find much, apart from 404 links in old Stack Overflow questions.</p>
<p>You could use the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/" rel="nofollow noreferrer">official API docs</a>, but a much more user-friendly way on the command line is the <code>explain</code> command. For example, I never remember exactly what goes into the spec of a pod, so I do:</p> <pre><code>$ kubectl explain Deployment.spec.template.spec </code></pre>
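<p>You can drill down field by field, and (in reasonably recent kubectl versions, assuming <code>--recursive</code> is available in yours) print a whole subtree at once:</p> <pre><code>$ kubectl explain deployment.spec.template.spec.containers
$ kubectl explain deployment.spec.template.spec.containers.resources
$ kubectl explain deployment.spec --recursive
</code></pre> <p>The output lists each field with its type and a short description pulled from the same API schema the reference docs are generated from.</p>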
<p>I'm trying to create a deployment with 3 replicas, which will pull an image from a private registry. I have stored the credentials in a secret and am using imagePullSecrets in the deployment file. I'm getting the error below when I deploy it.</p> <pre><code>error: error validating "private-reg-pod.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "containers" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "imagePullSecrets" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "template" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false</code></pre> <p>Any help on this?</p> <p>Below is my deployment file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: test-pod-deployment labels: app: test-pod spec: replicas: 3 selector: matchLabels: app: test-pod template: metadata: labels: app: test-pod spec: containers: - name: test-pod image: &lt;private-registry&gt; imagePullSecrets: - name: regcred </code></pre> <p>Thanks, Sundar</p>
<p>The <code>image</code> field belongs in the container specification, and <code>imagePullSecrets</code> belongs at the pod <code>spec</code> level (a sibling of <code>containers</code>, not of the container fields), so the proper YAML file looks like this (please note the indentation):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: &lt;private-registry&gt;
      imagePullSecrets:
      - name: regcred
</code></pre>
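<p>For completeness, the <code>regcred</code> secret referenced above is typically created from the registry credentials with a command along these lines (the server, user, password and e-mail values are placeholders, not taken from the question):</p> <pre><code>kubectl create secret docker-registry regcred \
  --docker-server=&lt;your-registry-server&gt; \
  --docker-username=&lt;your-username&gt; \
  --docker-password=&lt;your-password&gt; \
  --docker-email=&lt;your-email&gt;
</code></pre> <p>The secret must live in the same namespace as the deployment that references it.</p>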
<p>I want to profile my play application on my Kubernetes cluster.</p> <p>I am using VisualVM, and the steps that I have taken are as follows:</p> <ol> <li>Image is built on ubuntu latest</li> <li><p>Running my play application with the following args:</p> <pre><code>"-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.ssl=false", "-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.port=1098" </code></pre></li> <li><p>My Image has <code>apt-get install -y visualvm</code> </p></li> <li>I do <code>kubectl port-forward &lt;Container&gt; 1098</code></li> <li>Open VisualVM, And I don't see the process. </li> </ol> <p>I am not sure what I am doing wrong here. When running the application on localhost (not via IDE, straight from the startup script) everything works fine.</p> <p><strong>Update 1, deployment and service</strong></p> <blockquote> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: myApp labels: name: myApp spec: replicas: 1 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 0 type: RollingUpdate template: metadata: name: myApp labels: name: myApp spec: containers: - name: myApp image: ... args: ["-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.ssl=false", "-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.port=1098"] ports: - containerPort: 9000 env: ... </code></pre> </blockquote> <pre><code>apiVersion: v1 kind: Service metadata: name: myApp labels: name: myApp spec: selector: name: myApp ports: - port: 80 targetPort: 9000 </code></pre> <p><strong>Update 2 @marcospereira</strong></p> <p>File->Add JMX connection-> localhost:1098</p> <p>Cannot connect to localhost:1098 using service jmx:rmi...</p>
<p>This can be handled the same way as in the Q&amp;A below.</p> <p><a href="https://stackoverflow.com/questions/35184558/multiple-app-nodes-how-to-expose-jmx-in-kubernetes">multiple app nodes how to expose jmx in kubernetes?</a></p> <p>Please set the <code>java.rmi.server.hostname</code> system property:</p> <pre><code>"-Dcom.sun.management.jmxremote",
"-Dcom.sun.management.jmxremote.ssl=false",
"-Dcom.sun.management.jmxremote.authenticate=false",
"-Dcom.sun.management.jmxremote.port=1098",
"-Djava.rmi.server.hostname=127.0.0.1"  # added
</code></pre> <p>Then connect JMX to <code>localhost:1098</code>.</p> <p>I confirmed that I could connect.</p>
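<p>Putting it together with the deployment from the question, the container args would look roughly like this (only the last entry is new), and the existing port-forward can then be used from VisualVM:</p> <pre><code>args: ["-Dcom.sun.management.jmxremote",
       "-Dcom.sun.management.jmxremote.ssl=false",
       "-Dcom.sun.management.jmxremote.authenticate=false",
       "-Dcom.sun.management.jmxremote.port=1098",
       "-Djava.rmi.server.hostname=127.0.0.1"]
</code></pre> <pre><code>kubectl port-forward &lt;pod-name&gt; 1098:1098
</code></pre> <p>In VisualVM, add a JMX connection to <code>localhost:1098</code>; the RMI stubs now advertise 127.0.0.1, which is where the forwarded port terminates. If the connection still fails, pinning the RMI port with <code>-Dcom.sun.management.jmxremote.rmi.port=1098</code> is a common extra step so that only a single port needs forwarding.</p>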
<p>we do have deployed a Kubernetes Cluster behind a proxy and successfully configured docker daemon to use our proxy for puling images as described at the following page: <a href="https://docs.docker.com/config/daemon/systemd/#httphttps-proxy" rel="noreferrer">https://docs.docker.com/config/daemon/systemd/#httphttps-proxy</a></p> <p>We do have configured the Docker client to set the environemnt paramaters "https_proxy", "http_proxy" and "no_proxy" as defined at the following page: <a href="https://docs.docker.com/network/proxy/#configure-the-docker-client" rel="noreferrer">https://docs.docker.com/network/proxy/#configure-the-docker-client</a></p> <p>The Kubernetes cluster setup is as follows:</p> <pre><code>aadigital1:~ # kubectl get node NAME STATUS ROLES AGE VERSION aadigital1 Ready master,node 9d v1.10.4 aadigital2 Ready node 9d v1.10.4 aadigital3 Ready node 9d v1.10.4 aadigital4 Ready node 9d v1.10.4 aadigital5 Ready node 9d v1.10.4 </code></pre> <p><strong>Docker container run manually - ENV Parameters set correctly</strong></p> <p>The environment parameters for docker containers which are manually deployed are set as defined:</p> <pre><code>aadigital1:~ # docker run -i -t odise/busybox-curl ash / # printenv HTTPS_PROXY=http://ssnproxy.ssn.xxx.com:80/ no_proxy=localhost,127.0.0.0,127.0.1.1,127.0.1.1,local.home,80.250.142.64,80.250.142.65,80.250.142.66,80.250.142.69,80.250.142.70,80.250.142.71,aadigital1.aan.xxx.com,aadigita2.ssn.xxx.com,aadigital3.ssn.xxx.com,aadigital4.ssn.xxx.com,aadigita5.ssn.xxx.com,aadigital6.ssn.xxx.com HOSTNAME=0360a9dcd20b SHLVL=1 HOME=/root NO_PROXY=localhost,127.0.0.0,127.0.1.1,127.0.1.1,local.home,80.250.142.64,80.250.142.65,80.250.142.66,80.250.142.69,80.250.142.70,80.250.142.71,aadigital1.aan.xxx.com,aadigita2.ssn.xxx.com,aadigital3.ssn.xxx.com,aadigital4.ssn.xxx.com,aadigita5.ssn.xxx.com,aadigital6.ssn.xxx.com https_proxy=http://ssnproxy.ssn.xxx.com:80/ http_proxy=http://ssnproxy.ssn.xxx.com:80/ TERM=xterm PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PWD=/ HTTP_PROXY=http://ssnproxy.ssn.xxx.com:80/ </code></pre> <p><strong>Kubernetes PODs - ENV Parameters not set</strong></p> <p>The same docker image used above as a Kubernetes POD does not have the proxy environment paramaters (same machine aadigital1):</p> <pre><code>aadigital1:~ # kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE busybox-6d4df8f8b7-m62m2 1/1 Running 3 2d 10.0.0.16 aadigital3 busybox-curl 1/1 Running 0 16m 10.0.1.59 aadigital1 busybox-dns 1/1 Running 9 6h 10.0.1.53 aadigital1 aadigital1:~ # kubectl exec -it busybox-curl -- ash / # printenv KUBERNETES_PORT=tcp://10.0.128.1:443 NGINX_NODEPORT_PORT=tcp://10.0.204.167:80 KUBERNETES_SERVICE_PORT=443 NGINX_NODEPORT_SERVICE_PORT=80 HOSTNAME=busybox-curl SHLVL=1 HOME=/root NGINX_NODEPORT_PORT_80_TCP_ADDR=10.0.204.167 NGINX_NODEPORT_PORT_80_TCP_PORT=80 NGINX_NODEPORT_PORT_80_TCP_PROTO=tcp TERM=xterm NGINX_NODEPORT_PORT_80_TCP=tcp://10.0.204.167:80 KUBERNETES_PORT_443_TCP_ADDR=10.0.128.1 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_PROTO=tcp KUBERNETES_PORT_443_TCP=tcp://10.0.128.1:443 KUBERNETES_SERVICE_PORT_HTTPS=443 PWD=/ KUBERNETES_SERVICE_HOST=10.0.128.1 NGINX_NODEPORT_SERVICE_HOST=10.0.204.167 </code></pre> <p>How could we configure Kubernetes / Docker that the proxy environment parameters are set correctly for the PODs?</p> <p>Thank you very much!</p>
<p>The reason for this behaviour is that the proxy environment variables are a feature of the Docker client. Docker is divided into two parts: the API exposed on a socket by the Docker daemon, and the Docker client CLI with which you run <code>docker run ...</code>; that command hits the daemon API and injects the client-side proxy environment for you. Kubernetes, however, is just another API client: it does not go through the Docker client to schedule containers (it talks to the daemon API directly), which is why you don't see the expected environment variables in your pods.</p> <p>To work around that, I would suggest creating a ConfigMap with the proxy values, e.g.</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: your-config-map-name
  labels:
    app: your-best-app
data:
  HTTPS_PROXY: http://ssnproxy.ssn.xxx.com:80/
  HTTP_PROXY: http://ssnproxy.ssn.xxx.com:80/
</code></pre> <p>and mounting it into the deployment as environment variables using</p> <pre><code>envFrom:
- configMapRef:
    name: your-config-map-name
</code></pre>
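<p>For context, <code>envFrom</code> sits at the container level of the pod template, so a trimmed-down sketch of a deployment using it (the names are the placeholder ones from the ConfigMap above, and the image is illustrative) looks like:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-best-app
  template:
    metadata:
      labels:
        app: your-best-app
    spec:
      containers:
      - name: your-app
        image: your-image:tag
        envFrom:
        - configMapRef:
            name: your-config-map-name
</code></pre> <p>You may also want a <code>NO_PROXY</code> entry in the ConfigMap covering the cluster service CIDR and internal hostnames, so in-cluster traffic does not get sent to the proxy.</p>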
<p>I'm trying to setup a Kubernetes system in our lab at work. I have gone through all the steps, but fail when trying to do the kubeadm. </p> <p>It appears to be an issue with pulling the images:</p> <p>[root@kubemaster ~]# kubeadm config images pull --kubernetes-version=v1.11.2 failed to pull image "k8s.gcr.io/kube-apiserver-amd64:v1.11.2": exit status 1</p> <p>I am able to pull Docker images such as hello-world, Ubuntu, and CentOS without issue. </p> <p>I believe this may be a proxy issue or something like that as I had to add the --kubernetes-version tag since I was getting X.509 errors when trying to install otherwise. </p> <p>If I try to pull the Kubernetes images with Docker I get the following:</p> <p>[root@kubemaster ~]# docker pull k8s.gcr.io/kube-apiserver-amd64:v1.11.2 v1.11.2: Pulling from kube-apiserver-amd64 8c5a7da1afbc: Pulling fs layer 5d75b555908b: Pulling fs layer error pulling image configuration: Get <a href="https://storage.googleapis.com/us.artifacts.google-containers.appspot.com/containers/images/sha256:821507941e9c72afd5df91ddb3dceea58ea31a8e3895a06df794c0fd785edae2" rel="nofollow noreferrer">https://storage.googleapis.com/us.artifacts.google-containers.appspot.com/containers/images/sha256:821507941e9c72afd5df91ddb3dceea58ea31a8e3895a06df794c0fd785edae2</a>: x509: certificate signed by unknown authority</p> <p>Any help would be appreciated. </p> <p>Thanks, Doug </p>
<p>There are two possible reasons why trust fails for an official Google site:</p> <ol> <li>Your company performs man-in-the-middle inspection: it decrypts your traffic and dynamically issues its own certificates for the Google domains you access from within the company network.</li> <li>The Google/registry CA certificates are missing from the CA store of the OS on which you are pulling images, e.g. because someone removed them.</li> </ol> <p>In both cases you should obtain the relevant CA certificate (your corporate proxy's CA in the first case, Google's in the second) and add it to the trusted certificates of the system where you run Kubernetes. More info for Ubuntu: <a href="https://askubuntu.com/questions/645818/how-to-install-certificates-for-command-line">https://askubuntu.com/questions/645818/how-to-install-certificates-for-command-line</a></p>
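<p>A sketch of adding a CA certificate to the system trust store (assuming you have the CA in PEM format as <code>corp-ca.crt</code>; pick the block matching your distribution):</p> <pre><code># Debian/Ubuntu
sudo cp corp-ca.crt /usr/local/share/ca-certificates/corp-ca.crt
sudo update-ca-certificates

# RHEL/CentOS
sudo cp corp-ca.crt /etc/pki/ca-trust/source/anchors/corp-ca.crt
sudo update-ca-trust extract

# restart Docker so it re-reads the trust store
sudo systemctl restart docker
</code></pre> <p>After that, <code>docker pull k8s.gcr.io/kube-apiserver-amd64:v1.11.2</code> and <code>kubeadm config images pull</code> should stop failing with the x509 error, provided the proxy itself allows the traffic.</p>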
<p>I am unable to make the auto-scaling work with targetcpuutilization setting. My configuration is as follows:</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: pod namespace: pod spec: minReplicas: 1 maxReplicas: 5 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: pod metrics: - type: Resource resource: name: cpu targetAverageUtilization: 10 </code></pre> <blockquote> <pre><code> message: 'the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)' reason: FailedGetResourceMetric </code></pre> <p>I have verified that the metrics server is running. When I check hpa I am getting the following:</p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE pod-name Deployment/pod-name &lt;unknown&gt;/10% 1 5 1 15h </code></pre> </blockquote> <p>The event log for the namespace shows this:</p> <pre><code>LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 46m 15h 1721 pod-name.155162c884d417be HorizontalPodAutoscaler Warning FailedComputeMetricsReplicas horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API 1m 15h 1811 pod-name.155162c884a5caa2 HorizontalPodAutoscaler Warning FailedGetResourceMetric horizontal-pod-autoscaler unable to get metrics for resource cpu: no metrics returned from resource metrics API </code></pre> <p>Looks like the pods are not able to get to the metrics server.</p> <p>Logs on the metrics server:</p> <pre><code>unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:host.com: unable to fetch metrics from Kubelet host.com (host.com): Get https://host.com:10250/stats/summary/: x509: certificate signed by unknown authority, unable to fully scrape metrics from source kubelet_summary:host.ibaset.com: unable to fetch metrics from Kubelet host.ibaset.com (host.ibaset.com): Get https://host.com:10250/stats/summary/: x509: certificate signed by unknown authority, unable to fully scrape metrics from source kubelet_summary:host.com: unable to fetch metrics from Kubelet host.com (host.com): Get https://host.com:10250/stats/summary/: x509: certificate signed by unknown authority] </code></pre>
<p>It looks like the metrics server can't scrape metrics from the kubelet endpoint because the kubelet's serving certificate is not trusted (<code>x509: certificate signed by unknown authority</code>).</p> <p>Try adding the following argument to the metrics-server: <code>--kubelet-insecure-tls</code></p>
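<p>The flag goes into the metrics-server container args; a sketch of the relevant part of its deployment (the image tag is illustrative, keep whatever version you deployed):</p> <pre><code>containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  args:
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
</code></pre> <p>Once the metrics-server pod restarts and starts returning data, <code>kubectl top pods</code> should work and the HPA target should change from <code>&lt;unknown&gt;</code> to a percentage. Skipping kubelet TLS verification is a shortcut; the cleaner fix is to give the kubelets serving certificates signed by a CA the metrics-server trusts.</p>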
<p>I'm trying to add centralized logging to my Kubernetes cluster using EFK. I found out that the logs my pods write to standard out are stored in journald on my master node, but I want to store those logs in Elasticsearch.</p> <p>Can I change the Docker config to redirect logs to <code>/var/log</code>? I want to avoid logging from my pods to a custom location.</p> <p>Or can I intercept the logs from journald using fluentd or anything else?</p> <p>Thanks</p>
<p>Please try this <a href="https://www.elastic.co/blog/shipping-kubernetes-logs-to-elasticsearch-with-filebeat" rel="nofollow noreferrer">link</a></p> <p>The basic idea is to run Filebeat as a daemonset, which will collect and ship logs from containers and push to Elasticsearch.</p>
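<p>One thing to watch out for: that guide assumes the default <code>json-file</code> Docker logging driver, which writes container logs under <code>/var/lib/docker/containers</code> (symlinked from <code>/var/log/containers</code>) where Filebeat or fluentd can tail them. Since your logs currently end up in journald, you could switch the driver on each node with an <code>/etc/docker/daemon.json</code> like the sketch below (and restart Docker), or alternatively keep journald and use a collector that reads journald directly (e.g. Journalbeat, or fluentd with <code>fluent-plugin-systemd</code>).</p> <pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
</code></pre>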
<p><s>I am facing a problem with my current k8s setup. In production, I spin up three replicas of each of our services and put them in a pod. When the pods speak to each other, we would like the pods to speak to each container in the pod in a round-robin fashion. Unfortunately, the connection between pods is never terminated thanks to TLS keep alive - and we don't want to change that part specifically - but we do want to have each container in a pod communicate properly. This is sort of what we have now:</p> <p><a href="https://i.stack.imgur.com/DJ37b.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/DJ37b.jpg" alt="How Services Talk"></a></p> <p>If the API is trying to talk to, say, pod OSS, it will talk to the first container only. I want API to be able to talk to all three in a round-robin fashion. </p> <p>How do I do this? I understand that I will need an Ingress Controller, like nginx. But is there some real tutorial that breaks down how I can achieve this? I am unsure and somewhat new to k8s. Any help would be appeciated!</p> <p>By the way, I am working locally on minikube.</p> <p></s> Edit:</p> <p>In production, we spin up three replicas of each service. When service <code>A</code> needs to speak to service <code>B</code>, a pod <code>B1</code> from service <code>B</code> is selected and manages whatever request it receives. However, that pod <code>B1</code> becomes the only pod from service <code>B</code> that handles any communication; in other words, pods <code>B2</code> and <code>B3</code> are never spoken to. I am trying to solve this problem with nginx because it seems like we need a load balancer to help with this, but I'm not sure how to do it. Can anyone provide some detailed explanation on what needs to be done? Specifically, how can I set up nginx with my services so that all pods are used in a service (in some round-robin fashion), unlike what is happening now where only one pod is used? This is a problem because in production, the one pod gets overloaded with requests and dies when we have two other pods sitting there doing nothing. I'm developing locally on minikube. </p>
<p>I'm assuming that you have a microservice architecture underneath your pods, right? Have you considered the use of <a href="https://istio.io" rel="nofollow noreferrer">Istio</a> with Kubernetes? It's open sourced and developed by Google, IBM and Lyft -- intention is to give developers a vendor-neutral way (which seems to be what you are looking for) to connect, secure, manage, and monitor networks of different microservices on cloud platforms (AWS, Azure, Google, etc).</p> <blockquote> <p>At a high level, Istio helps reduce the complexity of these deployments, and eases the strain on your development teams. It is a completely open source service mesh that layers transparently onto existing distributed applications. It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.</p> </blockquote> <p>This is the <a href="https://istio.io/docs/setup/kubernetes/multicluster-install/" rel="nofollow noreferrer">link to Istio's documentation</a>, explaining how to set up a <strong>multi cluster environment</strong> in details, which is what you are looking for.</p> <p>There's a note in the documentation that I would like to highlight -- it may be related to your issue:</p> <blockquote> <p>Since Kubernetes pods don’t have stable IPs, restart of any Istio service pod in the control plane cluster will cause its endpoint to be changed. Therefore, any connection made from remote clusters to that endpoint will be broken. This is documented in Istio <a href="https://github.com/istio/istio/issues/4822" rel="nofollow noreferrer">issue #4822</a>.</p> <p>There are a number of ways to either avoid or resolve this scenario. This section provides a high level overview of these options.</p> <ul> <li>Update the DNS entries</li> <li>Use a load balancer service type</li> <li>Expose the Istio services via a gateway</li> </ul> </blockquote> <p>I'm quoting the load balancer solution, since it seems to be what you want:</p> <blockquote> <p>In Kubernetes, you can declare a service with a service type to be <code>LoadBalancer</code>. A simple solution to the pod restart issue is to use load balancers for the Istio services. You can then use the load balancer IPs as the Istio services’s endpoint IPs to configure the remote clusters.</p> </blockquote> <p>I hope it helps, and if you have any question, shoot!</p>
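<p>If you do go the Istio route, the piece that most directly addresses the "only pod B1 is ever used" symptom is a <code>DestinationRule</code> with an explicit load-balancing policy; the sidecar proxies then balance per request rather than per connection, so long-lived keep-alive connections no longer pin all traffic to one pod. A sketch (the service name <code>oss</code> is taken from the diagram and may differ in your setup):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: oss
spec:
  host: oss
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
</code></pre> <p>This assumes the namespace has sidecar injection enabled so that both the caller and the callee run an Envoy proxy.</p>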
<p>I have created an AKS kubernetes cluster with <code>az</code> CLI :</p> <pre><code>az aks create \ --name abcdefAKSCluster \ --resource-group abcdef \ --node-count 5 \ --generate-ssh-keys \ --service-principal &lt;...&gt; \ --client-secret &lt;...&gt; \ --location westeurope </code></pre> <p>(I followed the steps on <a href="https://learn.microsoft.com/fr-fr/azure/aks/tutorial-kubernetes-deploy-cluster" rel="nofollow noreferrer">this documentation</a>)</p> <p>I deployed a bunch of docker, based on unix images. Everything works fine (nestjs and angular apps, but this is not relevant).</p> <p>Now I have the requirement to deploy a docker image, but based on <strong>windows</strong>. This image is built and uploaded to our azure container registry. I want to run this image in the kubernetes azure cluster. But for that, I need, somehow, to tell kubernetes to run this docker inside a windows-based node.</p> <p>So I've found in <a href="https://anthonychu.ca/post/hybrid-kubernetes-linux-windows-cluster-easy-steps/" rel="nofollow noreferrer">this blog post</a> that I need to have a <code>osType:windows</code> entry in the <code>agentPoolProfiles</code> array of json describing the cluster. When the cluster will have a windows agent pool profile, I guess I'll be able to tell kubernetes to target a windows-based machine to run this windows-based docker image. Not sure about how to implement that last bit though...</p> <p>Anyway my question is how to update an existing AKS cluster on azure to add a windows machine ? It seems this is not doable either with the <code>az</code> CLI nor with the azure portal UI.</p> <p>Thanks.</p>
<p>Unfortunately, Windows containers are not yet supported on AKS.</p>
<p>I have generated a Python script that opens a deployment config_file.yaml, modifies some parameters and saves it again, using pyyaml. This Python script will be executed on the master node of a Kubernetes cluster.</p> <p>Once the new file is generated, my intention is to execute</p> <pre><code>kubectl apply -f config_file.yaml </code></pre> <p>from the Python script to apply the modifications to the deployment.</p> <p>I have been reading how to do it using the Kubernetes Python client, but it does not seem to be prepared to execute <code>kubectl apply</code>.</p> <p>So the other option is to create a bash script and execute it from the Python script.</p> <p>Bash script:</p> <pre><code>#!/bin/bash
sudo kubectl apply -f config_file.yaml
</code></pre> <p>I gave it permissions with <code>chmod +x shell_script.sh</code>.</p> <p>Python script:</p> <pre><code>import subprocess
subprocess.call(['./shell_script.sh'])
</code></pre> <p>But an error appears:</p> <pre><code>File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child raise child_exception OSError: [Errno 13] Permission denied</code></pre> <p>I don't know how to resolve this error. I have tried giving permissions to the bash script, but nothing worked.</p>
<p>I do not know anything about Kubernetes, but I think I might be able to help.</p> <p>I am basically suggesting that you run the command directly from the Python script, rather than having Python run a bash script which in turn runs the command.</p> <pre><code>import os

command = 'kubectl apply -f config_file.yaml'
password = 'yourpassword'
# feed the password to sudo on stdin (-S) and run kubectl
p = os.system('echo %s|sudo -S %s' % (password, command))
</code></pre>
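<p>A variant of the same idea without putting the password in the source, assuming the script runs as a user whose kubeconfig already authorizes <code>kubectl</code> (so no <code>sudo</code> is needed), could use <code>subprocess</code> directly:</p> <pre><code>import subprocess

# raises CalledProcessError if kubectl exits non-zero
subprocess.check_call(['kubectl', 'apply', '-f', 'config_file.yaml'])
</code></pre> <p>This also sidesteps the original <code>Permission denied</code>, which typically just means the shell script was not executable or was not found at the path given to <code>subprocess.call</code>.</p>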
<p>We have set up a Kubernetes cluster for our Laravel application on Google Cloud Platform. Containers:</p> <ul> <li>application code + php-fpm</li> <li>apache2</li> <li>others not related to the issue</li> </ul> <p>(We run behind nginx-ingress-controller, but this seems unrelated to the issue.)</p> <p>We ran a JMeter stress test on a simple Laravel route that returns "ok" and noticed terrible response times.</p> <p><a href="https://i.stack.imgur.com/l8fNm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l8fNm.png" alt="enter image description here"></a></p> <p>Afterwards we ran the same test on an index2.php (inside the public dir, to sidestep the framework) which just returns 'ok'.</p> <p>And we got this result(!): <a href="https://i.stack.imgur.com/r30Gr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r30Gr.png" alt="enter image description here"></a></p> <p>After digging we found out that Composer's autoloading is what causes this slowness.</p> <p>Any advice on how this could be resolved would be highly appreciated.</p> <p>Thanks</p>
<p>Ok. We found out that we had no opcache enabled. As documented about composer optimize-autoloader:</p> <blockquote> <p>On PHP 5.6+, the class map is also cached in opcache which improves the initialization time greatly. If you make sure <em>opcache is enabled</em>, then the class map should load almost instantly and then class loading is fast.</p> </blockquote>
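<p>For reference, a minimal sketch of what enabling opcache in the PHP-FPM image could look like (the ini values are illustrative defaults, not taken from the question, and this assumes the opcache extension is installed, e.g. via <code>docker-php-ext-install opcache</code> in the official images), plus the Composer flag the quoted docs refer to:</p> <pre><code>; opcache.ini (e.g. copied into /usr/local/etc/php/conf.d/ in the official php-fpm image)
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.validate_timestamps=0   ; safe in immutable containers; requires a redeploy on code change
</code></pre> <pre><code>composer install --no-dev --optimize-autoloader
</code></pre> <p>With <code>validate_timestamps=0</code> the compiled scripts and the class map stay cached for the life of the container, which is usually exactly what you want in Kubernetes.</p>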
<p>I have a problem that my pods in minikube cluster are not able to see the service through the domain name.</p> <p>to run my minikube i use the following commands (running on windows 10):<br> <code>minikube start --vm-driver hyperv;</code><br> <code>minikube addons enable kube-dns;</code><br> <code>minikube addons enable ingress;</code> </p> <p>This is my <code>deployment.yaml</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: run: hello-world name: hello-world namespace: default spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: run: hello-world strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: run: hello-world spec: containers: - image: karthequian/helloworld:latest imagePullPolicy: Always name: hello-world ports: - containerPort: 80 protocol: TCP resources: {} dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 </code></pre> <p>this is the <code>service.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: run: hello-world name: hello-world namespace: default selfLink: /api/v1/namespaces/default/services/hello-world spec: ports: - nodePort: 31595 port: 80 protocol: TCP targetPort: 80 selector: run: hello-world sessionAffinity: None type: ExternalName externalName: minikube.local.com status: loadBalancer: {} </code></pre> <p>this is my <code>ingress.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: minikube-local-ingress spec: rules: - host: minikube.local.com http: paths: - path: / backend: serviceName: hello-world servicePort: 80 </code></pre> <p>So, if i go inside the <code>hello-world</code> pod and from <code>/bin/bash</code> will run <code>curl minikube.local.com</code> or <code>nslookup minikube.local.com</code>.</p> <p>So how can i make sure that the pods can resolve the DNS name of the service? I know i can specify <code>hostAlias</code> in the deployment definition, but is there an automatic way tht will allow to update the DNS of kubernetes?</p>
<p>So, you want to expose your app on Minikube? I've just tried it using the default <code>ClusterIP</code> service type (essentially, removing the <code>ExternalName</code> stuff you had) and with <a href="https://gist.github.com/mhausenblas/37e43f1755f2895a2f87719bb4144daa" rel="nofollow noreferrer">this YAML file</a> I can see your service on <code>https://192.168.99.100</code> where the Ingress controller lives:</p> <p><a href="https://i.stack.imgur.com/9FhbL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9FhbL.png" alt="screen shot of hello-world app in browser"></a></p> <p>The service now looks like so:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: run: hello-world name: hello-world spec: ports: - port: 80 targetPort: 80 selector: run: hello-world </code></pre> <p>And the ingress is:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: minikube-local-ingress annotations: ingress.kubernetes.io/rewrite-target: / spec: rules: - host: http: paths: - path: / backend: serviceName: hello-world servicePort: 80 </code></pre> <p>Note: Within the cluster your service is now available via <code>hello-world.default</code> (that's the DNS name assigned by Kubernetes within the cluster) and from the outside you'd need to map, say <code>hello-world.local</code> to 192.168.99.100 in your <code>/etc/hosts</code> file on your host machine. </p> <p>Alternatively, if you change the <code>Ingress</code> resource to <code>- host: hello-world.local</code> then you can (from the host) reach your service using this FQDN like so: <code>curl -H "Host: hello-world.local" 192.168.99.100</code>.</p>
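<p>To wire up the hosts-file approach mentioned above, the IP to use is whatever <code>minikube ip</code> prints (192.168.99.100 is just a common default):</p> <pre><code>$ minikube ip                                          # prints the cluster IP
$ echo "$(minikube ip) hello-world.local" | sudo tee -a /etc/hosts
$ curl http://hello-world.local/
</code></pre> <p>On Windows the equivalent of <code>/etc/hosts</code> is <code>C:\Windows\System32\drivers\etc\hosts</code>. Inside the cluster no hosts entry is needed; pods can simply use the service DNS name <code>hello-world.default.svc.cluster.local</code> (or just <code>hello-world</code> from the same namespace).</p>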
<p>I am trying to determine the default CPU and memory allocation for Minikube (version &gt; 1.0).</p> <p>When running the following:</p> <pre><code>$ minikube config get memory &amp;&amp; minikube config get cpu Error: specified key could not be found in config </code></pre> <p>values are not returned unless explicitly set with the <code>--cpus</code> and <code>--memory</code> options.</p>
<p>The default memory constant is <code>2048</code> (megabytes) as seen <a href="https://github.com/kubernetes/minikube/blob/232080ae0cbcf9cb9a388eb76cc11cf6884e19c0/pkg/minikube/constants/constants.go#L102" rel="noreferrer">here</a>. </p> <p>This doesn't automatically change with the vm-driver.</p>
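<p>To see or change what your own Minikube VM will get, the config subcommands can be used explicitly; until a value has been set, <code>config get</code> returns the "could not be found" error from the question and the built-in defaults (2048 MB memory, 2 CPUs in this era of Minikube) are applied at <code>minikube start</code>:</p> <pre><code>minikube config set memory 4096
minikube config set cpus 4
minikube config view

# or per-start, without persisting:
minikube start --memory 4096 --cpus 4
</code></pre> <p>Changing memory/CPU of an already-created VM generally requires <code>minikube delete</code> followed by a fresh <code>minikube start</code>.</p>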
<p>I am configuring Jenkins on a Kubernetes system. Building works fine, but in order to deploy we need to call kubectl or helm. Currently, I am using</p> <ul> <li>lachlanevenson/k8s-kubectl:v1.8.8</li> <li>lachlanevenson/k8s-helm:latest</li> </ul> <p>It fails and throws the exception: "Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:jenkins:default" cannot list pods in the namespace "jenkins""</p> <p>The Jenkins script is simple:</p> <pre><code>def label = "worker-${UUID.randomUUID().toString()}" podTemplate(label: label,containers: [ containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true) ], volumes: [ hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock') ]){ node(label) { stage('Run kubectl') { container('kubectl') { sh "kubectl get pods" } } } } </code></pre> <p>Could you please let me know what is wrong?</p> <p>Thanks,</p>
<p>The Kubernetes (k8s) master, as of Kubernetes v1.8, by default implements <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">role-based access control (RBAC)</a> security controls on accesses to its API. The RBAC controls limit access to the k8s API by your workloads to only those resources and methods which you have explicitly permitted.</p> <p>You should create a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#role-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer">role</a> which permits access to the <code>pod</code> resource's <code>list</code> verb (and any other resources you require<sup>1</sup>), create a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#serviceaccount-v1-core" rel="nofollow noreferrer">service account</a> object, and finally create a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#rolebinding-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer">role binding</a> which assigns the role to the service account.</p> <p>Finally, provide the service account to your Jenkins deployment by supplying its name in the <code>serviceAccountName</code> property of the Pod template. Ensure <code>automountServiceAccountToken</code> is <code>true</code> to have k8s install an API key into your Pod. Attempts to access the k8s API using the native k8s API wrappers and libraries should find this key and automatically authenticate your requests.</p> <p><sup>1</sup><sub>If you are planning to make deployments from Jenkins, you will certainly require more than the ability to list Pods, as you will be required to mutate objects in the system. However, if you use Helm, it is Helm's Tiller pod which influences the downstream k8s objects for your deployments, so the set of permissions you require for the Helm Tiller and for Jenkins to communicate with the Tiller will vary.</sub></p>
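<p>A sketch of the objects involved, scoped to the <code>jenkins</code> namespace named in the error message (the <code>jenkins-deployer</code> name is arbitrary, and the verbs listed are just a starting point; a real deployment pipeline will need more):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-deployer
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-deployer
  namespace: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deployer
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-deployer
subjects:
- kind: ServiceAccount
  name: jenkins-deployer
  namespace: jenkins
</code></pre> <p>The pod template in the pipeline can then reference this account (the Kubernetes plugin exposes it as the <code>serviceAccount</code> parameter of <code>podTemplate</code>), so the injected token carries these permissions instead of those of <code>system:serviceaccount:jenkins:default</code>. Alternatively, bind the role to the existing <code>default</code> service account in the <code>jenkins</code> namespace.</p>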
<p>My namespace has some custom metadata labels. Some have the labels some don't. Is there any way to get the namespaces which has a particular label using kubectl?</p>
<p>Yes. Like so:</p> <pre><code>$ kubectl create ns nswithlabels $ kubectl label namespace nswithlabels this=thing $ kubectl describe ns/nswithlabels Name: nswithlabels Labels: this=thing Annotations: &lt;none&gt; Status: Active No resource quota. No resource limits. $ kubectl get ns -l=this NAME STATUS AGE nswithlabels Active 6m </code></pre> <p>Note: I could have also used <code>-l=this=thing</code> in the last command to specify both key and value required to match.</p>
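<p>A couple of related flags that are handy when auditing which namespaces carry which labels:</p> <pre><code># show all labels as a column
$ kubectl get ns --show-labels

# add a dedicated column for one label key
$ kubectl get ns -L this

# only namespaces where the label has a specific value
$ kubectl get ns -l this=thing
</code></pre>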
<p>I am trying to use local persistent volume mentioned in <a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="noreferrer">https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/</a> for creating my statefulset pod. But when my pod tries to claim volume. I am getting following error :</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 4s (x243 over 20m) default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate. </code></pre> <p>Following are storage classes and persistent volumes I have created:</p> <p><strong>storageclass-kafka-broker.yml</strong></p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: kafka-broker provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer </code></pre> <p><strong>storageclass-kafka-zookeeper.yml</strong></p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: kafka-zookeeper provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer </code></pre> <p><strong>pv-zookeeper.yml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: example-local-pv-zookeeper spec: capacity: storage: 2Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: kafka-zookeeper local: path: /D/kubernetes-mount-path nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - my-node </code></pre> <p><strong>pv-kafka.yml</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: example-local-pv spec: capacity: storage: 200Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: kafka-broker local: path: /D/kubernetes-mount-path nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - my-node </code></pre> <p>Following is the pod <strong>50pzoo.yml</strong> using this volume :</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: pzoo namespace: kafka spec: selector: matchLabels: app: zookeeper storage: persistent serviceName: "pzoo" replicas: 1 updateStrategy: type: OnDelete template: metadata: labels: app: zookeeper storage: persistent annotations: spec: terminationGracePeriodSeconds: 10 initContainers: - name: init-config image: solsson/kafka-initutils@sha256:18bf01c2c756b550103a99b3c14f741acccea106072cd37155c6d24be4edd6e2 command: ['/bin/bash', '/etc/kafka-configmap/init.sh'] volumeMounts: - name: configmap mountPath: /etc/kafka-configmap - name: config mountPath: /etc/kafka - name: data mountPath: /var/lib/zookeeper/data containers: - name: zookeeper image: solsson/kafka:2.0.0@sha256:8bc5ccb5a63fdfb977c1e207292b72b34370d2c9fe023bdc0f8ce0d8e0da1670 env: - name: KAFKA_LOG4J_OPTS value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties command: - ./bin/zookeeper-server-start.sh - /etc/kafka/zookeeper.properties ports: - containerPort: 2181 name: client - containerPort: 2888 name: peer - containerPort: 3888 name: leader-election resources: requests: cpu: 10m memory: 100Mi readinessProbe: exec: command: - /bin/sh - -c - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]' volumeMounts: - name: config mountPath: /etc/kafka - name: data mountPath: /var/lib/zookeeper/data volumes: - name: configmap configMap: name: zookeeper-config - name: config emptyDir: {} 
volumeClaimTemplates: - metadata: name: data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: kafka-zookeeper resources: requests: storage: 1Gi </code></pre> <p>Following is the <code>kubectl get events</code> command output </p> <pre><code>[root@quagga kafka-kubernetes-testing-single-node]# kubectl get events --namespace kafka LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 1m 1m 1 pzoo.15517ca82c7a4675 StatefulSet Normal SuccessfulCreate statefulset-controller create Claim data-pzoo-0 Pod pzoo-0 in StatefulSet pzoo success 1m 1m 1 pzoo.15517ca82caed9bc StatefulSet Normal SuccessfulCreate statefulset-controller create Pod pzoo-0 in StatefulSet pzoo successful 13s 1m 9 data-pzoo-0.15517ca82c726833 PersistentVolumeClaim Normal WaitForFirstConsumer persistentvolume-controller waiting for first consumer to be created before binding 9s 1m 22 pzoo-0.15517ca82cb90238 Pod Warning FailedScheduling default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate. </code></pre> <p>Output of <code>kubectl get pv</code> is :</p> <pre><code>[root@quagga kafka-kubernetes-testing-single-node]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-local-pv 200Gi RWO Retain Available kafka-broker 4m example-local-pv-zookeeper 2Gi RWO Retain Available kafka-zookeeper 4m </code></pre>
<p>It was a silly mistake. I had left the placeholder <code>my-node</code> as the node name value in the <code>pv</code> files. Changing it to the correct node name solved my issue.</p>
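<p>In other words, the value under the <code>kubernetes.io/hostname</code> key in the PV's <code>nodeAffinity</code> must match a real node name as reported by the cluster, e.g.:</p> <pre><code>$ kubectl get nodes                                   # use one of the NAME values
$ kubectl get nodes --show-labels | grep kubernetes.io/hostname
</code></pre> <pre><code>nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - &lt;actual-node-name&gt;   # the node that actually has /D/kubernetes-mount-path
</code></pre> <p>Otherwise the scheduler cannot bind the claim and reports the "didn't find available persistent volumes to bind" error seen in the events, because no node satisfies the PV's required affinity.</p>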
<p>I have created a Kubernetes read-only many persistent volume from a gcePersistentDisk like so:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: ferret-pv-1 spec: capacity: storage: 500Gi accessModes: - ReadOnlyMany persistentVolumeReclaimPolicy: Retain gcePersistentDisk: pdName: data-1 partition: 1 fsType: ext4 </code></pre> <p>It creates the persistent volume from the existing gcePersistentDisk partition which already has an ext4 filesystem on it:</p> <pre><code>$ kubectl get pv NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE ferret-pv-1 500Gi ROX Retain Bound default/ferret-pvc 5h </code></pre> <p>I then create a Kubernetes read-only many persistent volume claim like so:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ferret-pvc spec: accessModes: - ReadOnlyMany resources: requests: storage: 500Gi </code></pre> <p>It binds to the read-only PV I created above:</p> <pre><code>$ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES AGE ferret-pvc Bound ferret-pv-1 500Gi ROX 5h </code></pre> <p>I then create a Kubernetes deployment with 2 replicas using the PVC I just created like so:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ferret2-deployment spec: replicas: 2 template: metadata: labels: name: ferret2 spec: containers: - image: us.gcr.io/centered-router-102618/ferret2 name: ferret2 ports: - name: fjds containerPort: 1004 hostPort: 1004 volumeMounts: - name: ferret-pd mountPath: /var/ferret readOnly: true volumes: - name: ferret-pd persistentVolumeClaim: claimName: ferret-pvc </code></pre> <p>The deployment is created:</p> <pre><code>$ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE ferret2-deployment 2 2 2 1 4h </code></pre> <p>However, when I look at the corresponding two pods from the deployment, only the first one came up:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE ferret2-deployment-1336109949-2rfqd 1/1 Running 0 4h ferret2-deployment-1336109949-yimty 0/1 ContainerCreating 0 4h </code></pre> <p>Looking at the second pod which didn't come up:</p> <pre><code>$ kubectl describe pod ferret2-deployment-1336109949-yimty Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 4h 1m 128 {kubelet gke-sim-cluster-default-pool-e38a7605-kgdu} Warning FailedMount Unable to mount volumes for pod "ferret2-deployment-1336109949-yimty_default(d1393a2d-9fc9-11e6-a873-42010a8a009e)": timeout expired waiting for volumes to attach/mount for pod "ferret2-deployment-1336109949-yimty"/"default". list of unattached/unmounted volumes=[ferret-pd] 4h 1m 128 {kubelet gke-sim-cluster-default-pool-e38a7605-kgdu} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "ferret2-deployment-1336109949-yimty"/"default". list of unattached/unmounted volumes=[ferret-pd] 4h 55s 145 {controller-manager } Warning FailedMount Failed to attach volume "ferret-pv-1" on node "gke-sim-cluster-default-pool-e38a7605-kgdu" with: googleapi: Error 400: The disk resource 'data-1' is already being used by 'gke-sim-cluster-default-pool-e38a7605-fyx4' </code></pre> <p>It's refusing to start up the second pod because it thinks the first one has exclusive use of the PV. 
However, when I log in to the first pod, which claimed the PV, I see it has mounted the volume as read-only:</p> <pre><code>$ kubectl exec -ti ferret2-deployment-1336109949-2rfqd -- bash
root@ferret2-deployment-1336109949-2rfqd:/opt/ferret# mount | grep ferret
/dev/sdb1 on /var/ferret type ext4 (ro,relatime,data=ordered)
</code></pre> <p>Am I missing something regarding mounting a PV read-only across multiple pods in a deployment using the same PVC? The disk is not mounted by any other containers. Since it mounted read-only on the first pod, I would have expected the second and any other replicas in the deployment to have no problem claiming/mounting it. Also, how would I get ReadWriteOnce to work properly, and how do I specify which pod mounts the volume rw?</p>
<p>The PV/PVC access mode is only used for binding PV/PVCs.</p> <p>In your pod template, make sure that you set <code>spec.volumes.persistentVolumeClaim.readOnly</code> to <code>true</code>. This ensures the volume is attached in readonly mode.</p> <p>Also in your pod template, make sure that you set <code>spec.containers.volumeMounts[x].readOnly</code> to true. This ensure the volume is mounted in readonly mode.</p> <p>Also, since you are pre-provisioning your PVs. Make sure to set on <code>claimRef</code> field on your PV, to make sure no other PVC accidentally gets bound to it. See <a href="https://stackoverflow.com/a/34323691">https://stackoverflow.com/a/34323691</a></p>
<p>I have 3-node kubernetes, host names are host_1, host_2, host_3.</p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION host_1 Ready master 134d v1.10.1 host_2 Ready &lt;none&gt; 134d v1.10.1 host_3 Ready &lt;none&gt; 134d v1.10.1 </code></pre> <p>I have defined 3 local persistent volumes of size 100M, mapped to a local directory on each node. I used the following descriptor 3 times where <code>&lt;hostname&gt;</code> is one of: host_1, host_2, host_3:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: test-volume-&lt;hostname&gt; spec: capacity: storage: 100M volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage local: path: /opt/jnetx/volumes/test-volume nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - &lt;hostname&gt; </code></pre> <p>After applying three such yamls, I have the following:</p> <pre><code>$ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE test-volume-host_1 100M RWO Delete Available local-storage 58m test-volume-host_2 100M RWO Delete Available local-storage 58m test-volume-host_3 100M RWO Delete Available local-storage 58m </code></pre> <p>Now, I have a very simple container that writes to a file. The file should be located on the local persistent volume. I deploy it as a statefulset with 1 replica and map volumes via statefulset's volumeClaimTemplates:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: filewriter spec: serviceName: filewriter ... replicas: 1 template: spec: containers: - name: filewriter ... volumeMounts: - mountPath: /test/data name: fw-pv-claim volumeClaimTemplates: - metadata: name: fw-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: local-storage resources: requests: storage: 100M </code></pre> <p>The volume claim seems to have been created ok and bound to pv on the first host:</p> <pre><code>$ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE test-volume-host_1 100M RWO Delete Bound default/fw-pv-claim-filewriter-0 local-storage 1m test-volume-host_2 100M RWO Delete Available local-storage 1h test-volume-host_3 100M RWO Delete Available local-storage 1h </code></pre> <p>But, the pod hangs in Pending state:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE filewriter-0 0/1 Pending 0 4s </code></pre> <p>If we describe, we can see the following errors:</p> <pre><code>$ kubectl describe pod filewriter-0 Name: filewriter-0 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 2s (x8 over 1m) default-scheduler 0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) had volume node affinity conflict. </code></pre> <p>Can you help me figure out what is wrong? Why can't it just create the pod?</p>
<p>It seems that the one node where the PV is available has a taint that your StatefulSet does not have toleration for.</p>
<p>Download the file <a href="https://github.com/openshift/origin/blob/master/examples/hello-openshift/hello-pod.json" rel="nofollow noreferrer">https://github.com/openshift/origin/blob/master/examples/hello-openshift/hello-pod.json</a> and execute the following commands:</p> <pre><code>oc cluster up oc create -f hello-pod.json oc get pod hello-openshift -o yaml |grep podIP </code></pre> <p>it will return the IP address, let say:</p> <pre><code>podIP: 172.17.0.6 </code></pre> <p>Execute the command: </p> <pre><code>curl 172.17.0.6:8080 </code></pre> <p>It will return <strong>curl: (7) Failed to connect to 172.17.0.6 port 8080: Operation timed out</strong></p> <p>Info:</p> <pre><code>oc v3.10.0+dd10d17 kubernetes v1.10.0+b81c8f8 features: Basic-Auth Server https://127.0.0.1:8443 openshift v3.10.0+e3465d0-44 kubernetes v1.10.0+b81c8f8 </code></pre>
<p>Your command <code>curl 172.17.0.6:8080</code> would work from inside a pod.</p> <p>If you want to connect from your terminal (localhost), you have these ways:</p> <ol> <li><p><code>oc port-forward &lt;pod_name&gt; 9999:8080</code> and in another terminal <code>curl localhost:9999</code>, <a href="https://docs.openshift.com/enterprise/3.0/dev_guide/port_forwarding.html" rel="nofollow noreferrer">here</a> the command reference</p></li> <li><p>setup an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress</a></p></li> </ol> <p>The quickest way to debug is option 1.</p>
<p>Am working on Azure Resource Manager Templates(ARM Templates) and VSTS CI&amp;CD. With the help of ARM Templates, I want to deploy AKS (Azure kubernete Service). So before going to deploy, I need to validate my ARM Template in the CI-Build by applying a PowerShell task. But here, at the time of validating my ARM Template “It’s not stopping CI-Build even when the validation fails”. Its giving output as “Validation Completed” as shown in the below picture . Is there any solution to resolve this issue, i.e. I wanted to stop my CI-Build running if any validation fails.</p> <p><a href="https://i.stack.imgur.com/ExZ9V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ExZ9V.png" alt="enter image description here"></a></p>
<p>Not sure how does your powershell script look like. But according to the screenshot, the powershell script is executed successfully without any error code return. You can update your powershell script to check the validate result and set the exit code to "1" if the result is "InvalidTemplate". This will make the powershell task fail when the template is valid.</p>
<p>I'm trying to specify Local SSD in a Google Cloud as a <code>PersistedVolume</code>. I followed the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#example_local_pvs" rel="nofollow noreferrer">docs</a> to set up the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd#running_the_local_volume_static_provisioner" rel="nofollow noreferrer">automated SSD provisioning</a>, and running <code>kubectl get pv</code> returns a valid volume:</p> <pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-9721c951 368Gi RWO Delete Available local-scsi 1h </code></pre> <p>The problem is that I cannot get my pod to bind to it. The <code>kubectl get pvc</code> keeps showing this:</p> <pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mapdata Pending local-scsi 7m </code></pre> <p>and <code>kubectl get events</code> gives me these:</p> <pre><code>LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE 7m 7m 1 v3tiles.1551c0bbcb23d983 Service Normal EnsuredLoadBalancer service-controller Ensured load balancer 2m 8m 24 maptilesbackend-8645566545-x44nl.1551c0ae27d06fca Pod Warning FailedScheduling default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. 2m 8m 26 mapdata.1551c0adf908e362 PersistentVolumeClaim Normal WaitForFirstConsumer persistentvolume-controller waiting for first consumer to be created before binding </code></pre> <p>What would i need to do to bind that SSD to my pod? Here's the code I have been experimenting with:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: maptilesbackend namespace: default spec: selector: matchLabels: app: maptilesbackend strategy: type: RollingUpdate template: metadata: labels: app: maptilesbackend spec: containers: - image: klokantech/openmaptiles-server imagePullPolicy: Always name: maptilesbackend volumeMounts: - mountPath: /data name: mapdata readOnly: true volumes: - name: mapdata persistentVolumeClaim: claimName: mapdata readOnly: true --- apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: "local-scsi" provisioner: "kubernetes.io/no-provisioner" volumeBindingMode: "WaitForFirstConsumer" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mapdata spec: storageClassName: local-scsi accessModes: - ReadOnlyMany resources: requests: storage: 300Gi </code></pre>
<p><code>ReadOnlyMany</code> doesn't make sense for local SSDs</p> <p>As per the docs:</p> <blockquote> <p>ReadOnlyMany – the volume can be mounted read-only by many nodes</p> </blockquote> <p>You can't mount a local SSD on many nodes because it's local to one node only.</p>
<p>I'd like to solve the following problem using command line:</p> <p>I'm trying to run the following PoC script from a GCE VM in project-a.</p> <pre><code>gcloud config set project project-b gcloud compute instances create gce-vm-b --zone=us-west1-a gcloud compute ssh --zone=us-west1-a gce-vm-b -- hostname </code></pre> <p>The VM is created successfully:</p> <pre><code>NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS gce-vm-b us-west1-a n1-standard-16 10.12.34.56 12.34.56.78 RUNNING </code></pre> <p>But get the following error when trying to SSH:</p> <pre><code>WARNING: The public SSH key file for gcloud does not exist. WARNING: The private SSH key file for gcloud does not exist. WARNING: You do not have an SSH key for gcloud. WARNING: SSH keygen will be executed to generate a key. Generating public/private rsa key pair. Your identification has been saved in /root/.ssh/google_compute_engine. Your public key has been saved in /root/.ssh/google_compute_engine.pub. The key fingerprint is: ... Updating project ssh metadata... .....................Updated [https://www.googleapis.com/compute/v1/projects/project-b]. &gt;.done. &gt;Waiting for SSH key to propagate. &gt;ssh: connect to host 12.34.56.78 port 22: Connection timed out &gt;ERROR: (gcloud.compute.ssh) Could not SSH into the instance. It is possible that your SSH key has not propagated to the instance yet. Try running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic. </code></pre> <p>Running <code>gcloud compute config-ssh</code> hasn't changed anything in the error message. It's still <code>ssh: connect to host 12.34.56.78 port 22: Connection timed out</code></p> <p>I've tried adding a firewall rule to the project:</p> <pre><code>gcloud compute firewall-rules create default-allow-ssh --allow tcp:22 </code></pre> <p>.</p> <pre><code>Creating firewall... ...........Created [https://www.googleapis.com/compute/v1/projects/project-b/global/firewalls/default-allow-ssh]. done. NAME NETWORK DIRECTION PRIORITY ALLOW DENY default-allow-ssh default INGRESS 1000 tcp:22 </code></pre> <p>The error is now <code>Permission denied (publickey)</code>.</p> <pre><code>gcloud compute ssh --zone=us-west1-a gce-vm-b -- hostname </code></pre> <p>.</p> <pre><code>Pseudo-terminal will not be allocated because stdin is not a terminal. Warning: Permanently added 'compute.4123124124324242' (ECDSA) to the list of known hosts. Permission denied (publickey). ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255]. </code></pre> <p>P.S. The project-a "VM" is a container run by Prow cluster (which is run by G<strong>K</strong>E).</p>
<p>"Permission denied (publickey)" means it is unable to validate the public key for the username. </p> <p>You haven't specified the user in your command, so the user from the environment is selected and it may not be allowed into the instance gce-vm-b. Specify a valid user for the instance in your command according to the <a href="https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#edit-ssh-metadata" rel="nofollow noreferrer">public SSH key metadata</a>.</p>
<p>My PersistentVolumeClaim will not use the PersistentVolume I have prepared for it.</p> <p>I have this <code>PersistentVolume</code> in <code>monitoring-pv.yaml</code></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: monitoring-volume labels: usage: monitoring spec: capacity: storage: 50Gi accessModes: - ReadWriteOnce hostPath: path: /data/k8data/monitoring </code></pre> <p>After I have done </p> <pre><code>kubectl apply -f monitoring-pv.yaml </code></pre> <p>I can check that it exists with <code>kubectl get pv</code></p> <pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE monitoring-volume 50Gi RWO Retain Available 5m </code></pre> <p>My <code>PersistentVolumeClaim</code> in <code>monitoring-pvc.yaml</code> looks like this:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: monitoring-claim namespace: monitoring spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 50Gi selector: matchLabels: usage: monitoring </code></pre> <p>When I do <code>kubectl apply -f monitoring-pvc.yaml</code> it gets created.</p> <p>I can look at my new <code>PersistentVolumeClaim</code> with <code>get pvc -n monitoring</code>and I see</p> <pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE monitoring-claim Pending manual 31s </code></pre> <p>When I look at my <code>PersistentVolume</code> with <code>kubectl get pv</code> I can see that it's still available:</p> <pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE monitoring-volume 50Gi RWO Retain Available 16m </code></pre> <p>I had expected the <code>PersistentVolume</code> to be <code>Bound</code>but it isn't. When I use a ´PersistentVolumeClaim´ with the same name as this, a new <code>PersistentVolumeClaim</code> is created that is written in <code>/tmp</code> and therefore not very persistent.</p> <p>When I do the same operations without a namespace for my <code>PersistentVolumeClaim</code> everything seems to work.</p> <p>I'm on minikube on a Ubuntu 18.04.</p> <p>What do I need to change to be able to connect the volume with the claim?</p>
<p>When I reviewed my question and compared it to a working solution, I noticed that I had missed <code>storageClassName</code> that was set to <code>manual</code> in an example without a namespace that I was able to use.</p> <p>My updated <code>PersistentVolume</code>now looks like this:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: monitoring-volume labels: usage: monitoring spec: storageClassName: manual capacity: storage: 50Gi accessModes: - ReadWriteOnce hostPath: path: /data/k8data/monitoring </code></pre> <p>The only difference is</p> <pre><code> storageClassName: manual </code></pre> <p>My preliminary findings is that this was the silly mistake I had done.</p>
<p>I have a container with a backend processing application that only connects to other services, but does not expose any ports it listens to. For example in my case it connects to a JMS broker and uses the Rest API of another service. </p> <p>I want to deploy that container along with the JMS broker and the server with the Rest API to kubernetes. Therefore I'm currently having these kubernetes API objects for the backend processing application:</p> <pre><code>--- kind: "Deployment" apiVersion: "extensions/v1beta1" metadata: name: "foo-processing-module" namespace: "foo-4" labels: foo.version: "0.0.1-SNAPSHOT" k8s-app: "foo-processing-module" annotations: deployment.kubernetes.io/revision: "1" description: "Processing Modules App for foo" spec: replicas: 1 selector: matchLabels: foo.version: "0.0.1-SNAPSHOT" k8s-app: "foo-processing-module" template: metadata: name: "foo-processing-module" labels: foo.version: "0.0.1-SNAPSHOT" k8s-app: "foo-processing-module" annotations: description: "Processing Modules App for foo" spec: containers: - name: "foo-processing-module" image: "foo/foo-processing-module-docker:0.0.1-SNAPSHOT" resources: {} terminationMessagePath: "/dev/termination-log" terminationMessagePolicy: "File" imagePullPolicy: "IfNotPresent" securityContext: privileged: false restartPolicy: "Always" terminationGracePeriodSeconds: 30 dnsPolicy: "ClusterFirst" securityContext: {} schedulerName: "default-scheduler" strategy: type: "RollingUpdate" rollingUpdate: maxUnavailable: "25%" maxSurge: "25%" revisionHistoryLimit: 10 progressDeadlineSeconds: 600 --- kind: "Service" apiVersion: "v1" metadata: name: "foo-processing-module" namespace: "foo-4" labels: foo.version: "0.0.1-SNAPSHOT" k8s-app: "foo-processing-module" annotations: description: "Processing Modules App for foo" spec: selector: foo.version: "0.0.1-SNAPSHOT" k8s-app: "foo-processing-module" type: "LoadBalancer" sessionAffinity: "None" externalTrafficPolicy: "Cluster" </code></pre> <p>However when I use <code>kubectl create</code> I get the following error message when the above API objects should be created:</p> <pre><code>Error from server (Invalid): error when creating "foo.yml": Service "foo-processing-module" is invalid: spec.ports: Required value error converting YAML to JSON: yaml: line 22: did not find expected &lt;document start&gt; </code></pre> <p>What do I have to do to resolve the error? Is a kubernetes <code>Service</code> even the correct API object to use in this case?</p>
<p>Simply remove the entire <code>Service</code> object. Since you have an app that doesn't need to communicate via the network, you don't need a service. Think of the service as a kind of specialized load-balancer in front of an (HTTP?) API your pods expose. Since you don't have that API, you don't need it. The <code>Deployment</code> does the actual supervision of the worker pods, that is, whatever goes on in <code>foo/foo-processing-module-docker:0.0.1-SNAPSHOT</code>.</p> <p>Also, always use <code>kubectl apply</code> and not <code>create</code> and if you want to keep track of the revisions deployed, add the <code>--record</code> option so you can access the history.</p>
<p>What is the best way to mount an external directory in a pod such that:</p> <ol> <li>the pod access to the directory is readonly</li> <li>whole directory tree is accessible in the pod</li> <li>any change to the directory contents or creation of new files and directories in this directory is reflected to the pod instantly</li> <li>all replicas of the pod see the same directory tree</li> </ol> <p>Apparently any solution with ConfigMaps does not satisfy conditions 2 and 3.</p>
<p>Using configMap-s one can achieve all requirements (1) thru (4) except for part of (3) - changes to existing files content (by changing their configMap-s) will be reflected in the pods (almost) instantly but new files or directories won't.</p> <p>The following script - see <a href="https://github.com/apimastery/APISimulator/blob/master/kubernetes-deployment/apisim-kubernetes-deploy.sh" rel="nofollow noreferrer">here</a> - implements the approach.</p> <p>Notice that:</p> <ul> <li>(1) is now the default starting with 1.9.6 - see <a href="https://github.com/kubernetes/kubernetes/issues/62099" rel="nofollow noreferrer">this</a> for a discussion. </li> <li>Key for (2) is the use of <a href="https://kubernetes.io/docs/concepts/storage/volumes/#projected" rel="nofollow noreferrer">projected volumes</a>.</li> <li>(4) is out-of-the-box feature as documented <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">here</a>, except for when using subPath.</li> <li>Using a single configMap for all files may cause "...ERROR: The ConfigMap "" is invalid: []: Too long: must have at most 1048576 characters".</li> <li>Using a configMap per file also has a size limitation of ~1MB for the file content (it is an etcd limitation).</li> </ul>
<p>On doing K8s updates on GCP we lose the link between the nodes and their external IPs. That causes some issues afterwards on K8s apps communicating with other clouds secured by firewalls. </p> <p>I have to assign them manually afterwards again. Why is this? Can I prevent this somehow? </p>
<p>First of all, ensure you have set your IP to static in the cloud console -> Networking -> External IP addresses.</p> <p>Once it's set to static you can assign your Service to the static IP using the <code>loadBalancerIP</code> property. Note that your Service should be a LoadBalancer type. See <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer</a> for more information.</p> <p>If you don't require a Loadbalancer you could also try out <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#external-ips</a></p>
<p>my pvc.yaml</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: database-disk labels: stage: production name: database app: mysql spec: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi </code></pre> <p>running <code>kubectl apply -f pvc.yaml</code> in <a href="https://github.com/ubuntu/microk8s" rel="nofollow noreferrer">microk8s</a> got following error: </p> <blockquote> <p>error validating data:ValidationData(PersistentVolumeClaim): unknown field "storage" in io.k8s.api.core.v1.PersistenVolumeClaim if choose to ignore these errors turn validation off with --validate=false</p> </blockquote> <p><strong>Edit: storage indentation wrong when I copied text on my VM :( ,its working fine now</strong> </p>
<p>You forgot to specify the <code>volumeMode</code>. Add the <code>volumeMode</code> option and it should work.</p> <p>Like this:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: database-disk labels: stage: production name: database app: mysql spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 20Gi </code></pre>
<p>I have asked myself this question and invested time researching it. Running out of time. Can someone point me in the right direction? I have created a kubernetes cluster on minikube, with its Ingress, Services and Deployments. There is a whole configuration of services in there. Can, now, I point this kubectl command to another provider like VMWareFusion, AWS , Azure, not to forget Google Cloud. I know about kops. My understanding is that although this is the design goal of kops but presently it only supports AWS. </p>
<p>Yes, you can use different clusters via the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">context</a>. List them using <code>kubectl config get-contexts</code> and switch between them using <code>kubectl config use-context</code>.</p>
<p>I'm running Openshift Container Platform 3.9 where I'm deploying three containers; a postgres database container, a qpid message broker container, and a server that needs to connect to both.</p> <p>I need to set environment variables at pod creation in order to allow all three containers to connect. For example, I need to set DB_HOST and BROKER_HOST variables with the corresponding pod addresses. I was going to use pod presets to accomplish this, but per the documentation, <code>As of OpenShift Container Platform 3.7, pod presets are no longer supported</code>.</p> <p>What is the best method to set these type of addresses during pod creation?</p>
<p>the quick answer is: you don't</p> <p>If you want to consume some service, define a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> object for it so you get a fixed dns name you can use to refer to that service. And thenm you know the values of DB_HOST or BROKER_HOST in advance and set them in Pod as any other</p>
<p>What is the best way to preload large files into a local PersistentVolume SSD before it gets used by Kubernetes pods?</p> <p>The goal is to have multiple pods (could be multiple instances of the same pod, or different), share the same local SSD drive in a read-only mode. The drive would need to be initialized somehow with a large dataset.</p> <p>Google <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd" rel="nofollow noreferrer">Local SSD docs</a> describes the <code>Running the local volume static provisioner</code>, but that approach only creates a PersistedVolume, but does not initialize it.</p>
<p>Basically, you can add an <code>init</code> container to your pod that initializes the SSD: add data, etc.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: "test-ssd" spec: initContainers: - name: "init" image: "ubuntu:14.04" command: ["/bin/init_my_ssd.ssh"] volumeMounts: - mountPath: "/test-ssd/" name: "test-ssd: containers: - name: "shell" image: "ubuntu:14.04" command: ["/bin/sh", "-c"] args: ["echo 'hello world' &gt; /test-ssd/test.txt &amp;&amp; sleep 1 &amp;&amp; cat /test-ssd/test.txt"] volumeMounts: - mountPath: "/test-ssd/" name: "test-ssd" volumes: - name: "test-ssd" hostPath: path: "/mnt/disks/ssd0" nodeSelector: cloud.google.com/gke-local-ssd: "true" </code></pre>
<p>In my cluster, I have one node vm1, with label "kubernetes.io/hostname: vm-1". Can I configure to assign all Pod slaves to vm-1 node? I tries to set "Node Selector" in Jenkin > Configuration > cloud but it does not work.</p> <p>Thanks,</p>
<p>All you need to do is specify this in the <code>Deployment</code> of your jenkins slave with <code>nodeAffinity</code>, like so:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: jenkins-slave namespace: ci labels: app: jenkins role: slave spec: selector: matchLabels: app: jenkins role: slave template: metadata: labels: app: jenkins role: slave spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - vm-1 </code></pre> <p>You can see some examples <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">here</a></p> <p>However, I am not sure if <code>kubernetes.io/hostname</code> is a valid label to be used when selecting node affinity, maybe you will need to create one, such as <code>role</code>, <code>dedicated</code> or <code>type</code>.</p>
<p>Is there a way to add node labels when deploying worker nodes in EKS. I do not see an option in the CF template available for worker nodes.</p> <p><a href="https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml/" rel="noreferrer">EKS-CF-Workers</a></p> <p>The only option I see right now is to use kubectl label command to add labels which is post cluster setup. However, the need to have complete automation which means applications are deployed automatically post cluster deployments and labels help in achieving the segregation.</p>
<p>With the new EKS-optimized AMIs(amazon-eks-node-vXX) and Cloudformation template refactors provided by AWS it is now possible to add node labels as simple as providing arguments to the <code>BootstrapArguments</code> parameter of the <code>[amazon-eks-nodegroup.yaml][1]</code> Cloudfomation template. For example <code>--kubelet-extra-args --node-labels=my-key=my-value</code>. For more details check the AWS announcement: <a href="https://aws.amazon.com/blogs/opensource/improvements-eks-worker-node-provisioning/" rel="noreferrer">Improvements for Amazon EKS Worker Node Provisioning</a></p>