<p>When I try to use <code>kubectl top nodes</code> I get this error: </p> <pre><code>Error from server (NotFound): the server could not find the requested resource (get services http:heapster:) </code></pre> <p>But heapster is deprecated and I'm using kubernetes 1.11. I installed metrics-server and I still get the same error, when I try to check metrics-server's logs I see this error: </p> <pre><code>E1019 12:33:55.621691 1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:elegant-ardinghelli-ei3: unable to fetch metrics from Kubelet elegant-ardinghelli-ei3 (elegant-ardinghelli-ei3): Get https://elegant-ardinghelli-ei3:10250/stats/summary/: dial tcp: lookup elegant-ardinghelli-ei3 on 10.245.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:elegant-ardinghelli-aab: unable to fetch metrics from Kubelet elegant-ardinghelli-aab (elegant-ardinghelli-aab): Get https://elegant-ardinghelli-aab:10250/stats/summary/: dial tcp: lookup elegant-ardinghelli-aab on 10.245.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:elegant-ardinghelli-e4z: unable to fetch metrics from Kubelet elegant-ardinghelli-e4z (elegant-ardinghelli-e4z): Get https://elegant-ardinghelli-e4z:10250/stats/summary/: dial tcp: lookup elegant-ardinghelli-e4z on 10.245.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:elegant-ardinghelli-e41: unable to fetch metrics from Kubelet elegant-ardinghelli-e41 (elegant-ardinghelli-e41): Get https://elegant-ardinghelli-e41:10250/stats/summary/: dial tcp: lookup elegant-ardinghelli-e41 on 10.245.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:elegant-ardinghelli-ein: unable to fetch metrics from Kubelet elegant-ardinghelli-ein (elegant-ardinghelli-ein): Get https://elegant-ardinghelli-ein:10250/stats/summary/: dial tcp: lookup elegant-ardinghelli-ein on 10.245.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:elegant-ardinghelli-aar: unable to fetch metrics from Kubelet elegant-ardinghelli-aar (elegant-ardinghelli-aar): Get https://elegant-ardinghelli-aar:10250/stats/summary/: dial tcp: lookup elegant-ardinghelli-aar on 10.245.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:elegant-ardinghelli-aaj: unable to fetch metrics from Kubelet elegant-ardinghelli-aaj (elegant-ardinghelli-aaj): Get https://elegant-ardinghelli-aaj:10250/stats/summary/: dial tcp: lookup elegant-ardinghelli-aaj on 10.245.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:elegant-ardinghelli-e49: unable to fetch metrics from Kubelet elegant-ardinghelli-e49 (elegant-ardinghelli-e49): Get https://elegant-ardinghelli-e49:10250/stats/summary/: dial tcp: lookup elegant-ardinghelli-e49 on 10.245.0.10:53: no such host] </code></pre>
<p>This is a known issue, tracked on GitHub:</p> <blockquote> <p>This PR implements support for the kubectl top commands to use the metrics-server as an aggregated API, instead of requesting the metrics from heapster directly. If the metrics.k8s.io API is not served by the apiserver, then this still falls back to the previous behavior.</p> </blockquote> <p>It was merged in <a href="https://github.com/kubernetes/kubernetes/pull/56206" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/56206</a>, so the fix should land in 1.12 or a later release.</p>
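<p>In the meantime, a quick way to check whether the aggregated metrics API is actually being served, plus a common workaround for the "no such host" errors in your metrics-server logs (node hostnames that don't resolve in cluster DNS), is sketched below. The flag name is taken from metrics-server 0.3.x; verify it against the version you installed:</p> <pre><code># is the metrics.k8s.io API registered and responding?
kubectl get apiservices | grep metrics
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# workaround for unresolvable node names: make metrics-server prefer node IPs.
# Add this to the metrics-server Deployment args (flag as of metrics-server 0.3.x):
#   --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
kubectl -n kube-system edit deployment metrics-server
</code></pre>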
<p>I have a reactjs app running on my pod, and I have mounted the source code from the host machine into the pod. It works fine, but when I change the code on the host machine the pod's source code also changes, yet the running site is not affected by the change. Here is my manifest; what am I doing wrong?</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: webapp spec: replicas: 1 minReadySeconds: 15 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 template: metadata: labels: app: webapp tier: frontend phase: development spec: containers: - name: webapp image: xxxxxx command: - npm args: - run - dev env: - name: environment value: dev - name: AUTHOR value: webapp ports: - containerPort: 3000 volumeMounts: - mountPath: /code name: code imagePullSecrets: - name: regcred volumes: - name: code hostPath: path: /hosthome/xxxx/development/react-app/src </code></pre> <p>And I know for a fact npm is not watching my changes; how can I resolve that in the pods?</p>
<p>Basically, you need to reload your application every time you change your code, and your pods don't reload or restart when you change the code under the <code>/code</code> directory. You will have to re-create your pod. Since you are using a Deployment, you can either:</p> <pre><code>kubectl delete pod &lt;pod-where-your-app-is-running&gt; </code></pre> <p>or</p> <pre><code>export PATCH='{"spec":{"template":{"metadata":{"annotations":{"timestamp":"'$(date)'"}}}}}' kubectl patch deployment webapp -p "$PATCH" </code></pre> <p>Your pods should restart after that.</p>
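<p>On newer kubectl versions (1.15 and later) there is also a one-liner that does the same annotation-patching trick for you:</p> <pre><code>kubectl rollout restart deployment/webapp
</code></pre>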
<p><code>pod.yml</code></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: hello-pod labels: zone: prod version: v1 spec: containers: - name: hello-ctr image: hello-world:latest ports: - containerPort: 8080 </code></pre> <p><code>kubectl create -f pod.yml</code></p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE hello-pod 0/1 CrashLoopBackOff 5 5m </code></pre> <p>Why <code>CrashLoopBackOff</code>?</p>
<p>In this case the behavior is expected. The hello-world container is meant to print some messages and then exit, so this is why you are getting <code>CrashLoopBackOff</code>:</p> <p>Kubernetes runs the pod, the container inside runs the expected commands and then exits.</p> <p>Suddenly there is nothing running underneath, so the pod is run again -> the same thing happens and the number of <code>restarts</code> grows.</p> <p>You can see this in <code>kubectl describe pod</code>, where the <code>Terminated</code> state is visible and the <code>Reason</code> for it is <code>Completed</code>. If you choose a container image that does not exit after completion, the pod will stay in the Running state.</p>
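<p>For contrast, a minimal sketch of the same pod using an image whose main process stays in the foreground (nginx here) and therefore stays <code>Running</code> instead of hitting <code>CrashLoopBackOff</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    zone: prod
    version: v1
spec:
  containers:
  - name: hello-ctr
    image: nginx:latest   # serves forever instead of exiting after completion
    ports:
    - containerPort: 80
</code></pre>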
<p>I'm trying to make a simple example of ingress-nginx on google cloud, but it's not matching the subpaths:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /one backend: serviceName: test-one-backend servicePort: 80 - path: /two backend: serviceName: test-two-backend servicePort: 80 </code></pre> <p>When I call <a href="http://server/one" rel="nofollow noreferrer">http://server/one</a> it works, but when I call <a href="http://server/one/path" rel="nofollow noreferrer">http://server/one/path</a> I get a 404. I've tried several things like using regex, but it's simply not working.</p> <p>The backends are just echo servers that always reply on any path.</p>
<p>You need to use a <code>/*</code> at the end of the path:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /one/* backend: serviceName: test-one-backend servicePort: 80 - path: /two backend: serviceName: test-two-backend servicePort: 80 </code></pre> <p>It's not really <a href="https://github.com/kubernetes/ingress-nginx/issues/1120" rel="nofollow noreferrer">documented widely as of today</a>, but in essence the <code>path</code> translates to a <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#location" rel="nofollow noreferrer"><code>location {}</code></a> block in the nginx.conf</p>
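<p>If you want to see how each <code>path</code> entry ends up as a <code>location {}</code> block, you can dump the generated config from the controller pod. A rough sketch (the pod name is a placeholder, and the namespace may differ depending on how you installed the controller):</p> <pre><code>kubectl get pods -n ingress-nginx
kubectl exec -n ingress-nginx &lt;nginx-ingress-controller-pod&gt; -- cat /etc/nginx/nginx.conf | grep -A5 "location"
</code></pre>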
<p>I'm following the Istio doc (<a href="https://istio.io/docs/examples/advanced-egress/egress-gateway/" rel="nofollow noreferrer">https://istio.io/docs/examples/advanced-egress/egress-gateway/</a>) to set up an egress gateway. The results I got is different from what the doc describes and I wonder how can I fix it.</p> <p>I have a simply docker container with a sidecar injected. After I applied a gateway config for <code>google.com</code> similar to the one provided by the doc:</p> <pre><code>cat &lt;&lt;EOF | kubectl apply -f - apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: google spec: hosts: - google.com ports: - number: 80 name: http-port protocol: HTTP - number: 443 name: https protocol: HTTPS resolution: DNS EOF </code></pre> <p>I still can't reach it from within the container:</p> <pre><code>$ kubectl exec -it $SOURCE_POD -c $CONTAINER_NAME -- curl -sL -o /dev/null -D - http://google.com HTTP/1.1 301 Moved Permanently location: http://www.google.com/ content-type: text/html; charset=UTF-8 ... HTTP/1.1 404 Not Found date: Thu, 18 Oct 2018 22:55:57 GMT server: envoy content-length: 0 </code></pre> <p>however, <code>curl</code> from <code>istio-proxy</code> works:</p> <pre><code>$ kubectl exec -it $SOURCE_POD -c istio-proxy -- curl -sL -o /dev/null -D - http://google.com HTTP/1.1 301 Moved Permanently Location: http://www.google.com/ Content-Type: text/html; charset=UTF-8 ... HTTP/1.1 200 OK Date: Thu, 18 Oct 2018 22:55:43 GMT Expires: -1 ... </code></pre> <p>Checked that the gateway exists:</p> <pre><code>$ kubectl describe serviceentry/google Name: google Namespace: default Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.istio.io/v1alpha3","kind":"ServiceEntry","metadata":{"annotations":{},"name":"google","namespace":"default"},"sp... API Version: networking.istio.io/v1alpha3 Kind: ServiceEntry Metadata: Cluster Name: Creation Timestamp: 2018-10-18T22:36:34Z Generation: 1 Resource Version: 2569394 Self Link: /apis/networking.istio.io/v1alpha3/namespaces/default/serviceentries/google UID: 4482d584-... Spec: Hosts: google.com Ports: Name: http-port Number: 80 Protocol: HTTP Name: https Number: 443 Protocol: HTTPS Resolution: DNS Events: &lt;none&gt; </code></pre> <p>Any ideas? </p>
<p>Your problem is that the curl request is getting a 301 redirect to <code>www.google.com</code>, but your ServiceEntry has only exposed <code>google.com</code>. You can fix it by adding <code>www.google.com</code> as another host in your ServiceEntry like this:</p> <pre><code>cat &lt;&lt;EOF | kubectl apply -f - apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: google spec: hosts: - google.com - www.google.com ports: - number: 80 name: http-port protocol: HTTP - number: 443 name: https protocol: HTTPS resolution: DNS EOF </code></pre>
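<p>After applying the updated ServiceEntry you can re-run the same check from the application container; the redirect to <code>www.google.com</code> should now be followed successfully:</p> <pre><code>kubectl exec -it $SOURCE_POD -c $CONTAINER_NAME -- curl -sL -o /dev/null -D - http://google.com
</code></pre>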
<p>I was trying, unsuccessfully, to access the Kubernetes API via HTTPS ingress and now started to wonder if that is possible at all?</p> <p>Any working detailed guide for direct remote access (without using ssh -> kubectl proxy, to avoid user management on the Kubernetes node) would be appreciated. :)</p> <p>UPDATE:</p> <p>Just to make it more clear: this is a bare-metal on-premise deployment (no GCE, AWS, Azure or any other cloud) and the intention is that some environments will be totally offline (which will add additional issues with getting the install packages).</p> <p>The intention is to be able to use kubectl on a client host with authentication via Keycloak (which also fails when following the step-by-step instructions). Administrative access using SSH and then kubectl is not suitable for client access. So it looks like I will have to update the firewall to expose the API port and create a NodePort service.</p> <p>Setup:</p> <blockquote> <p>[kubernetes - env] - [FW/SNAT] - [me]</p> </blockquote> <p>FW/NAT allows access only on ports 22, 80 and 443.</p> <p>Since I set up an ingress on Kubernetes, I cannot create a firewall rule to redirect 443 to 6443. It seems the only option is creating an HTTPS ingress to point access for "api-kubernetes.node.lan" to the kubernetes service port 6443. The ingress itself is working fine; I have created a working ingress for the Keycloak auth application.</p> <p>I have copied .kube/config from the master node to my machine and placed it into .kube/config (Cygwin environment).</p> <p>What was attempted:</p> <ul> <li>SSL passthrough. Could not enable it, as the kubernetes-ingress controller was not able to start due to not being able to create an intermediary cert. Even if started, it most likely would have crippled other HTTPS ingresses.</li> <li>Created a self-signed SSL cert. As a result, via browser I could get an API output when pointing to <a href="https://api-kubernetes.node.lan/api" rel="nofollow noreferrer">https://api-kubernetes.node.lan/api</a>. However, kubectl throws an error due to the unsigned cert, which is obvious.</li> <li>Put apiserver.crt into the ingress tls: definition. Got an error because the cert is not suitable for api-kubernetes.node.lan. Also obvious.</li> <li><p>Followed guide [1] to create a kube-ca signed certificate. Now the browser does not show anything at all. Using curl to access <a href="https://api-kubernetes.node.lan/api" rel="nofollow noreferrer">https://api-kubernetes.node.lan/api</a> results in an empty output (I can see an HTTP OK when using -v).
Kubectl now gets the following error:</p> <pre><code>$ kubectl.exe version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"windows/amd64"} Error from server: the server responded with the status code 0 but did not return more information </code></pre></li> </ul> <p>When trying to compare apiserver.pem and my generated cert I see the only difference:</p> <pre><code>apiserver.pem X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment generated.crt X509v3 Extended Key Usage: TLS Web Server Authentication </code></pre> <p>Ingress configuration:</p> <pre><code>--- kind: Ingress apiVersion: extensions/v1beta1 metadata: name: kubernetes-api namespace: default labels: app: kubernetes annotations: kubernetes.io/ingress.class: nginx spec: tls: - secretName: kubernetes-api-cert hosts: - api-kubernetes.node.lan rules: - host: api-kubernetes.node.lan http: paths: - path: "/" backend: serviceName: kubernetes servicePort: 6443 </code></pre> <p>Links: [1] <a href="https://db-blog.web.cern.ch/blog/lukas-gedvilas/2018-02-creating-tls-certificates-using-kubernetes-api" rel="nofollow noreferrer">https://db-blog.web.cern.ch/blog/lukas-gedvilas/2018-02-creating-tls-certificates-using-kubernetes-api</a></p>
<p>You should be able to do it as long as you expose the <code>kube-apiserver</code> pod in the <code>kube-system</code> namespace. I tried it like this:</p> <pre><code>$ kubectl -n kube-system expose pod kube-apiserver-xxxx --name=apiserver --port 6443 service/apiserver exposed $ kubectl -n kube-system get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE apiserver ClusterIP 10.x.x.x &lt;none&gt; 6443/TCP 1m ... </code></pre> <p>Then go to a cluster machine and point my <code>~/.kube/config</code> context IP <code>10.x.x.x:6443</code></p> <pre><code>clusters: - cluster: certificate-authority-data: [REDACTED] server: https://10.x.x.x:6443 name: kubernetes ... </code></pre> <p>Then:</p> <pre><code>$ kubectl version --insecure-skip-tls-verify Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I used <code>--insecure-skip-tls-verify</code> because <code>10.x.x.x</code> needs to be valid on the server certificate. You can actually fix it like this: <a href="https://stackoverflow.com/questions/52859876/configure-aws-publicip-for-a-master-in-kubernetes/52862243#52862243">Configure AWS publicIP for a Master in Kubernetes</a></p> <p>So maybe a couple of things in your case:</p> <ol> <li>Since you are initially serving SSL on the Ingress you need to use the same kubeapi-server certificates under <code>/etc/kubernetes/pki/</code> on your master</li> <li>You need to add the external IP or name to the certificate where the Ingress is exposed. Follow something like this: <a href="https://stackoverflow.com/questions/52859876/configure-aws-publicip-for-a-master-in-kubernetes/52862243#52862243">Configure AWS publicIP for a Master in Kubernetes</a></li> </ol>
<p>I am trying to implement ClusterIP-Service on each of my Deployment. <strong>FYI, I am setting up kubernetes on my own server at the office (not using cloud for some reason)</strong>. Previously here are the network/infrastructure that I could think of :</p> <pre><code>Ingress -&gt; Service -&gt; Deployment </code></pre> <p>I am not sure why my Ingress does not work as intended. I am using <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a> as my Ingress Controller. I also applied Bare-metal Service config from <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></p> <p>And below is my simple Ingress and ClusterIP like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: / backend: serviceName: simpleweb-service servicePort: 80 --- apiVersion : v1 kind : Service metadata: name: simpleweb-service spec: type: ClusterIP ports: - port: 80 targetPort: 80 selector: component: web </code></pre> <p>I tried accessing <code>http://&lt;server-internal-ip&gt;:80</code> but I got connection refused instead of getting routed to my apps inside the particular service. Is anything I did above could possibly gone wrong ?</p> <p>Do I need to have <code>LoadBalancer</code> before Ingress like below ? (which one is ideal)</p> <pre><code>LoadBalancer -&gt; Ingress -&gt; Service -&gt; Deployment </code></pre> <p>or maybe</p> <pre><code>LoadBalancer -&gt; Service -&gt; Deployment </code></pre> <p>Thanks in advance.</p>
<p>You have a number of options to expose your service. I suggest <a href="https://metallb.universe.tf/%20%22metallb" rel="nofollow noreferrer">metallb</a>, it allows you to expose services with LoadBalancer. With ClusterIP, the service is not exposed to the outside world, see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Publishing services - service types</a> for details. Ingress is not mandatory, but without it, you can only have one service+port / IP address, while ingress allows you to have name or / and path based routing.</p>
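<p>For reference, a minimal MetalLB layer-2 configuration is just a ConfigMap with an address pool (format as of MetalLB's ConfigMap-based configuration, e.g. 0.7.x); the IP range below is an assumption you would replace with free addresses from your office network:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # replace with a free range in your LAN
</code></pre>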
<p>I would like to be able to test my docker application locally before sending it to the cluster. I want to use Minikube for this. Meanwhile, instead of having multiple kube config files which would define env variables for the cloud environment and for my local machine, I would like to override some of the env variables when running locally. I can see that you can do something like that with docker compose:</p> <pre><code>docker-compose -f docker-compose.yml -f docker-compose.e2e.yml up </code></pre> <p>The second file would only have the overriding values. Yes, there are two files, but I find it clean.</p> <p>Is there a way to do something similar with Kube/minikube? Or even something better?</p>
<p>I think you are asking how to pass different environment values into your Pods depending upon which environment they are deployed to. One pattern to achieve this is to deploy with <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a>. Then you use templated versions of your kubernetes descriptors for deployment. You also have a values.yaml file that contains values to be injected into the descriptors. You can <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/values_files.md#values-files" rel="nofollow noreferrer">switch and overlay values.yaml files</a> at the time of install to control which values are injected for a given installation.</p> <p>If you are asking how to switch whether a <code>kubectl</code> command runs against local or cloud without having to keep switching your kubeconfig file, then you can <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">add both contexts to your kubeconfig</a> and use <code>kubectl config use-context</code> to switch between them, as @<a href="https://stackoverflow.com/users/4550110/ijaz-khan">Ijaz Khan</a> suggests.</p>
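<p>A rough sketch of both approaches (the context names, chart path and overlay file name are placeholders):</p> <pre><code># switch kubectl between local and cloud clusters
kubectl config get-contexts
kubectl config use-context minikube
kubectl config use-context my-cloud-cluster

# helm: overlay environment-specific values on top of the defaults
helm install ./mychart -f values.yaml -f values.local.yaml
</code></pre>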
<p>Is there a way to scale the pods based on pod disk I/O pressure/utilization/IO wait, rather than simple plain CPU and RAM usage? OR maybe a combination of disk IO along with RAM and CPU usage.</p> <p>something like:</p> <pre><code>metrics: - type: Resource resource: name: disk target: type: Utilization averageUtilization: 60 </code></pre>
<p>You should be able to do it with <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis" rel="nofollow noreferrer">custom metrics</a>. So if you have something like <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> in your cluster, which is one of the most popular Kubernetes monitoring solutions, you can use the <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">k8s-prometheus-adapter</a>.</p> <p>There is also a walkthrough <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/walkthrough.md" rel="nofollow noreferrer">here</a>.</p> <p>For that walkthrough, you will have to use the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer"><code>Pods</code> type of metric</a> in your <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">HPA</a>.</p> <p>For disk I/O, it's not clear whether you mean system disk I/O, Pod disk I/O, or PV disk I/O, and I don't think there's a single solution for all of those together. On the node side, you could scrape the node I/O metrics from the <a href="https://github.com/prometheus/node_exporter" rel="nofollow noreferrer">node_exporter</a>. I don't think there's a <a href="https://github.com/brancz/kube-pod-exporter" rel="nofollow noreferrer">pod exporter</a> that exports these metrics at the pod level.</p> <p>Also, metrics that are not of the Pods type will have to be defined as the Object type in your HPA.</p>
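<p>Once the adapter exposes a per-pod metric, the HPA itself looks roughly like the sketch below. The metric name <code>disk_io_utilization</code> and the target value are placeholders for whatever you end up exporting and registering through the adapter, not something that exists out of the box:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                 # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # placeholder
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: disk_io_utilization   # hypothetical custom metric served by the adapter
      targetAverageValue: "60"
</code></pre>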
<p>I am trying to create an automated pipeline that would run multiple pods one after another on a namespace. The current issue is, between runs I want to wait for a pod to be fully deleted before running the next. Are there any ways to check if a given pod is fully deleted?</p> <p>Current script:</p> <pre><code>kubectl delete -f pod.yaml sleep 10 kubectl create -f pod.yaml error when creating "pod.yaml": object is being deleted: pods "test-pod" already exists </code></pre>
<p>You can do something like this:</p> <pre><code>kubectl delete -f pod.yaml while kubectl get pod &lt;pod-name&gt; &gt;/dev/null 2&gt;&amp;1; do sleep 10; done kubectl create -f pod.yaml </code></pre> <p>Basically, keep polling while <code>kubectl get pod &lt;pod-name&gt;</code> still succeeds, and move on once it returns an error because the pod no longer exists.</p>
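<p>On newer kubectl versions you can also let <code>kubectl wait</code> do the polling for you:</p> <pre><code>kubectl delete -f pod.yaml
kubectl wait --for=delete pod/test-pod --timeout=60s
kubectl create -f pod.yaml
</code></pre>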
<p>I would like to have a Prometheus plot in Grafana that shows (as a column chart) the number of restarts of the pods.</p> <p>How could I achieve that?</p> <p>Thank you</p>
<p>You can deploy the kube-state-metrics container that publishes the restart metric for pods: <a href="https://github.com/kubernetes/kube-state-metrics" rel="noreferrer">https://github.com/kubernetes/kube-state-metrics</a></p> <blockquote> <p>The metrics are exported through the Prometheus golang client on the HTTP endpoint /metrics on the listening port (default 80).</p> </blockquote> <p>The metric name is: <code>kube_pod_container_status_restarts_total</code> </p> <p>See all the pod metrics <a href="https://github.com/kubernetes/kube-state-metrics/tree/master/docs" rel="noreferrer">here</a></p>
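<p>Once kube-state-metrics is being scraped by Prometheus, a Grafana panel query for a per-pod restart chart can be as simple as the following (adjust the label filters to your setup):</p> <pre><code>sum(kube_pod_container_status_restarts_total{namespace="default"}) by (pod)

# or, restarts over the last hour
sum(increase(kube_pod_container_status_restarts_total[1h])) by (pod)
</code></pre>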
<p>How can I set Size (ie: cols and rows) using Rest API WebSockets Exec? API doc does not mention it. </p> <p><code>kubectl exec -v=99</code> doesn't give me a clue how it's setting size. </p> <p>I've read some people setting environment variables COLUMNS and ROWS when running <code>kubectl exec -it $container env COLUMNS=$COLUMNS LINES=$LINES TERM=$TERM bash</code>, but there is nothing documented for exec method api in order to set variables either. </p>
<p>Answering my own question: you have to send those env variables mentioned in my question as multiple <code>command</code> parameters in your query.</p> <p>i.e. to execute bash with 80 columns and 24 rows: <code>&amp;command=env&amp;command=COLUMNS%3D80&amp;command=ROWS%3D24&amp;command=/bin/bash</code></p>
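<p>Put together, a full exec request against the API looks roughly like the sketch below; the host, pod and container names are placeholders, and the <code>stdin/stdout/stderr/tty</code> query parameters are the usual ones for the exec subresource:</p> <pre><code>wss://&lt;apiserver&gt;:6443/api/v1/namespaces/default/pods/&lt;pod-name&gt;/exec?container=&lt;container-name&gt;&amp;stdin=true&amp;stdout=true&amp;stderr=true&amp;tty=true&amp;command=env&amp;command=COLUMNS%3D80&amp;command=ROWS%3D24&amp;command=/bin/bash
</code></pre>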
<p>On 1.10.9, kops, AWS, I am looking for a way to stop a user from creating a service that uses type:loadbalancer unless it has </p> <pre><code>annotations: service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0. </code></pre> <p>Is that possible?</p>
<p>Give users a custom role with rules like this (note: don't include create, update, or patch in the verbs):</p> <pre><code>rules: - apiGroups: [""] resources: ["services"] verbs: ["get", "watch", "list"] </code></pre> <p>This will prevent users from creating Services themselves.</p> <p>Then use <a href="https://github.com/kubermatic/nodeport-exposer" rel="nofollow noreferrer">NodePort-Exposer</a> to do the second part automatically without involving the users.</p> <p><em>The NodePort-Exposer watches Services with the annotation <strong><code>nodeport-exposer.k8s.io/expose="true"</code></strong> and exposes them via a Service of type LoadBalancer.</em></p>
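<p>A fuller sketch of such a read-only role plus a binding (the role, binding and user names are placeholders):</p> <pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-reader
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-reader-binding
  namespace: default
subjects:
- kind: User
  name: jane                     # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: service-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>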
<p>Created a new ASP.NET Core 2.0 project and it runs fine locally. Then after running it in a Docker container locally it also works fine. But when I try to use the Docker image in a Kubernetes pod, it will run for a couple minutes and then give me this:</p> <pre><code> Unhandled Exception: System.InvalidOperationException: A path base can only be configured using IApplicationBuilder.UsePathBase(). at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder. &lt;BindAddressAsync&gt;d__7.MoveNext() </code></pre> <p>Here is my <code>Program.cs</code>:</p> <pre><code>public class Program { public static void Main(string[] args) { BuildWebHost(args).Run(); } public static IWebHost BuildWebHost(string[] args) =&gt; WebHost.CreateDefaultBuilder(args) .UseStartup&lt;Startup&gt;() .Build(); } </code></pre> <p><code>Startup.cs</code>:</p> <pre><code>public class Startup { public Startup(IConfiguration configuration) { Configuration = configuration; } public IConfiguration Configuration { get; } // This method gets called by the runtime. Use this method to add services to the container. public void ConfigureServices(IServiceCollection services) { services.AddMvc(); } // This method gets called by the runtime. Use this method to configure the HTTP request pipeline. public void Configure(IApplicationBuilder app, IHostingEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions { HotModuleReplacement = true, ReactHotModuleReplacement = true }); } else { app.UseExceptionHandler("/Home/Error"); } app.UseStaticFiles(); app.UseMvc(routes =&gt; { routes.MapRoute( name: "default", template: "{controller=Home}/{action=Index}/{id?}"); routes.MapSpaFallbackRoute( name: "spa-fallback", defaults: new { controller = "Home", action = "Index" }); }); } } </code></pre>
<p>When this error was happening, we were using the <code>FROM microsoft/aspnetcore-build:1.1</code> base image as our build and runtime. At the time we were encountering the error, we had simply tried to upgrade to <code>FROM microsoft/aspnetcore-build:2.0</code>. I'm not certain what specifically the issue with this image was, but Kubernetes didn't like it.</p> <p>At a later date, we switched the dockerfile to multistage; building with <code>FROM microsoft/aspnetcore-build:1.1</code> and running with <code>FROM microsoft/dotnet:1.1-runtime</code>, and when we upgraded that to the corresponding 2.0 versions, we didn't encounter this error again.</p>
<p>I work in a multi-tenant node app, I know to create a new namespace in Kubernetes is possible to run a kubectl command as follow: <code>kubectl create namespace &lt;namespace name&gt;</code></p> <p>How can I create a new namespace from node Microservices when a new customer make a sign up for a new account?</p> <p>Is there some kubectl API to make a request from an external app?</p> <p>Is necessary for the user to log out from app, destroy the pods created in kubernetes?</p>
<p>It could be as simple as calling from a shell in your app:</p> <pre><code>kubectl create namespace &lt;your-namespace-name&gt; </code></pre> <p>Essentially, kubectl talks to the kube-apiserver.</p> <p>You can also directly call the kube-apiserver. This is an example to list the pods:</p> <pre><code>$ curl -k -H 'Authorization: Bearer &lt;token&gt;' \ https://$KUBERNETES_SERVICE_HOST:6443/api/&lt;api-version&gt;/namespaces/default/pods </code></pre> <p>More specifically to create a namespace:</p> <pre><code>$ curl -k -H -X POST -H 'Content-Type: application/json' \ -H 'Authorization: Bearer &lt;token&gt;' \ https://$KUBERNETES_SERVICE_HOST:6443/api/v1/namespaces/ -d ' { "apiVersion": "v1", "kind": "Namespace", "metadata": { "name": "mynewnamespace" } }' </code></pre> <p>In case you are wondering about the <code>&lt;token&gt;</code>, it's a Kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Secret</a> typically belonging to a ServiceAccount and bound to a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="noreferrer"><code>ClusterRole</code></a> that allows you to create namespaces.</p> <p>You can create a Service Account like this:</p> <pre><code>$ kubectl create serviceaccount namespace-creator </code></pre> <p>Then you'll see the token like this (a token is automatically generated):</p> <pre><code>$ kubectl describe sa namespace-creator Name: namespace-creator Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Image pull secrets: &lt;none&gt; Mountable secrets: namespace-creator-token-xxxxx Tokens: namespace-creator-token-xxxxx Events: &lt;none&gt; </code></pre> <p>Then you would get the secret:</p> <pre><code>$ kubectl describe secret namespace-creator-token-xxxxx Name: namespace-creator-token-xxxx Namespace: default Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name: namespace-creator kubernetes.io/service-account.uid: &lt;redacted&gt; Type: kubernetes.io/service-account-token Data ==== ca.crt: 1025 bytes namespace: 7 bytes token: &lt;REDACTED&gt; &lt;== This is the token you need for Authorization: Bearer </code></pre> <p>Your <code>ClusterRole</code> should look something like this:</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: namespace-creator rules: - apiGroups: ["*"] resources: ["namespaces"] verbs: ["create"] </code></pre> <p>Then you would bind it like this:</p> <pre><code>$ kubectl create clusterrolebinding namespace-creator-binding --clusterrole=namespace-creator --serviceaccount=namespace-creator </code></pre> <p>When it comes to writing code you can use any HTTP client library in any language to call the same endpoints.</p> <p>There are also libraries like the <a href="https://github.com/kubernetes/client-go" rel="noreferrer">client-go</a> library that takes care of the plumbing of connecting to a kube-apiserver.</p>
<p>I am trying to implement ClusterIP-Service on each of my Deployment. <strong>FYI, I am setting up kubernetes on my own server at the office (not using cloud for some reason)</strong>. Previously here are the network/infrastructure that I could think of :</p> <pre><code>Ingress -&gt; Service -&gt; Deployment </code></pre> <p>I am not sure why my Ingress does not work as intended. I am using <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a> as my Ingress Controller. I also applied Bare-metal Service config from <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></p> <p>And below is my simple Ingress and ClusterIP like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: / backend: serviceName: simpleweb-service servicePort: 80 --- apiVersion : v1 kind : Service metadata: name: simpleweb-service spec: type: ClusterIP ports: - port: 80 targetPort: 80 selector: component: web </code></pre> <p>I tried accessing <code>http://&lt;server-internal-ip&gt;:80</code> but I got connection refused instead of getting routed to my apps inside the particular service. Is anything I did above could possibly gone wrong ?</p> <p>Do I need to have <code>LoadBalancer</code> before Ingress like below ? (which one is ideal)</p> <pre><code>LoadBalancer -&gt; Ingress -&gt; Service -&gt; Deployment </code></pre> <p>or maybe</p> <pre><code>LoadBalancer -&gt; Service -&gt; Deployment </code></pre> <p>Thanks in advance.</p>
<p><strong>Access options:</strong></p> <ul> <li>using a k8s Service (ClusterIP, NodePort, or LoadBalancer (in AWS, GCP environments))</li> </ul> <blockquote> <p>External load balancer (optional) -> Service of type NodePort -> Deployment</p> </blockquote> <ul> <li>using an Ingress</li> </ul> <blockquote> <p>External load balancer (optional) -> Ingress -> Service (ClusterIP) -> Deployment</p> </blockquote> <p>In your case, you can test it by first using a NodePort and accessing it directly. If that works, switch to a ClusterIP and curl it from inside the cluster to make sure it is serving on port 80, then expose it via the Ingress if you want to use one. Also debug and describe the Ingress and the Service, as in the sketch below.</p> <p>If you are getting connection refused, there may be a problem with the port.</p> <p>Also make sure you are using the correct labels in the selector.</p> <p>docs: <a href="https://console.bluemix.net/docs/containers/cs_ingress.html#ingress" rel="nofollow noreferrer">https://console.bluemix.net/docs/containers/cs_ingress.html#ingress</a></p>
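<p>A rough debugging sequence for that setup (the controller pod name and namespace are placeholders for your installation):</p> <pre><code># 1. check the service has endpoints, i.e. the selector matches your pods
kubectl get endpoints simpleweb-service

# 2. curl the service from inside the cluster
kubectl run -it --rm debug --image=busybox --restart=Never -- wget -qO- http://simpleweb-service

# 3. check the ingress and the controller logs
kubectl describe ingress ingress-service
kubectl logs -n ingress-nginx &lt;nginx-ingress-controller-pod&gt;
</code></pre>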
<p>I'm trying to deploy Jenkins that is fronted by an nginx-ingress via Helm. The goal is to secure Jenkins behind HTTPs with SSL termination at nginx. I'm currently using a self-signed cert but will eventually use cert-manager and LetsEncrypt. Jenkins and Nginx-Ingress are deployed in the default namespace.</p> <p>Below is my deployment script:</p> <pre><code>gcloud config set compute/zone us-central1-f gcloud container clusters create jenkins-cd \ --machine-type n1-standard-2 --num-nodes 2 \ --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw,cloud-platform" wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz tar zxfv helm-v2.9.1-linux-amd64.tar.gz cp linux-amd64/helm . kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [email protected] kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=xx.xx.xxxx.com" kubectl create secret tls jenkins-ingress-ssl --key /tmp/tls.key --cert /tmp/tls.crt kubectl describe secret jenkins-ingress-ssl ./helm init --service-account=tiller --wait ./helm update ./helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true ./helm install --name jenkins stable/jenkins --values values.yaml --version 0.19.0 --wait ADMIN_PWD=$(kubectl get secret --namespace default cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode) </code></pre> <p>Below is my values.yaml file:</p> <pre><code>Master: InstallPlugins: - kubernetes:1.12.6 - workflow-job:2.24 - workflow-aggregator:2.5 - credentials-binding:1.16 - git:3.9.1 - google-oauth-plugin:0.6 - google-source-plugin:0.3 Cpu: "1" Memory: "3500Mi" JavaOpts: "-Xms3500m -Xmx3500m" ServiceType: ClusterIP HostName: "xx.xx.xxxx.com" Ingress: Annotations: kubernetes.io/ingress.class: "nginx" kubernetes.io/ingress.allow-http: "false" TLS: - secretName: jenkins-ingress-ssl hosts: - xx.xx.xxxx.com Agent: Enabled: true Persistence: Size: 100Gi NetworkPolicy: ApiVersion: networking.k8s.io/v1 rbac: install: true serviceAccountName: cd-jenkins </code></pre> <p>Deployments (default namespace)</p> <pre><code>xxx@cloudshell:~/stub-jenkins2.0 (automation-stub)$ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE jenkins 1 1 1 1 6m nginx-ingress-controller 1 1 1 1 6m nginx-ingress-default-backend 1 1 1 1 6m </code></pre> <p>Services (default namespace)</p> <pre><code>xxx@cloudshell:~/stub-jenkins2.0 (automation-stub)$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE jenkins ClusterIP 10.11.240.123 &lt;none&gt; 8080/TCP 7m jenkins-agent ClusterIP 10.11.250.174 &lt;none&gt; 50000/TCP 7m kubernetes ClusterIP 10.11.240.1 &lt;none&gt; 443/TCP 8m nginx-ingress-controller LoadBalancer 10.11.253.104 104.198.179.176 80:31453/TCP,443:32194/TCP 7m nginx-ingress-default-backend ClusterIP 10.11.245.149 &lt;none&gt; 80/TCP 7m </code></pre> <p>Ingress (default namespace)</p> <pre><code>xxx@cloudshell:~/stub-jenkins2.0 (automation-stub)$ kubectl get ingress NAME HOSTS ADDRESS PORTS AGE jenkins xx.xx.xxxx.com 35.193.17.244 80, 443 7m </code></pre> <p>Ingress .yaml (generated by helm)</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.allow-http: "false" kubernetes.io/ingress.class: nginx creationTimestamp: 2018-10-19T17:35:16Z generation: 1 name: jenkins namespace: default resourceVersion: "845" 
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/jenkins uid: 57b76091-d3c5-11e8-b9e9-42010a8001de spec: rules: - host: xx.xx.xxxx.com http: paths: - backend: serviceName: jenkins servicePort: 8080 tls: - hosts: - xx.xx.xxxx.com secretName: jenkins-ingress-ssl status: loadBalancer: ingress: - ip: 35.193.17.244 </code></pre> <p>When hitting the ingress IP (<a href="https://104.198.179.176" rel="nofollow noreferrer">https://104.198.179.176</a>), I hit the default nginx backend service with a "default backend - 404" error. I suspect there might be something wrong with the ingress configuration. According to the ingress setup, there is the incorrect ingress IP (35.193.17.244) whereas the nginx-ingress-controller IP is 104.198.179.176. </p>
<p>If you hit the nginx ingress with <code>https://104.198.179.176</code> you will always hit the default backend. You either need to hit it with <code>https://xx.xx.xxxx.com</code> or with something like this:</p> <pre><code>$ curl -H 'Host: xx.xx.xxxx.com' https://104.198.179.176 </code></pre> <p>With respect to the ingress IP address being incorrect, I would check that your backend service has endpoints and that each is listening on port <code>8080</code>:</p> <pre><code>$ kubectl describe svc jenkins </code></pre> <p>or/and</p> <pre><code>$ kubectl describe ep </code></pre> <p>I would also check the events in the Ingress:</p> <pre><code>$ kubectl describe ingress jenkins </code></pre> <p>Finally, I would check the logs of the ingress controller pod:</p> <pre><code>$ kubectl logs &lt;nginx-ingress-controller-pod&gt; </code></pre>
<p>I already had NGINX handling my reverse-proxy and load balancing for bare-metals and VMs, wonder if I can use the same instance for my Kubernetes cluster exposing services in load-balancer mode. If so, could I use it for both L4 and L7?</p>
<p>You can't use it as type <a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a> because there's no <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="nofollow noreferrer"><code>cloud-provider</code></a> API to handle an external Nginx instance. You can do a couple of things I can think of:</p> <ol> <li><p>Create Kubernetes Service exposed on a <a href="https://kubernetes.io/docs/concepts/services-networking/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>. So your architecture will look like this:</p> <pre><code>External NGINX -&gt; Kubernetes NodePort Service -&gt; Pods </code></pre></li> <li><p>Create a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress managed</a> by an ingress controller. The most popular happens to be <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">Nginx</a>. So your architecture will look something like this:</p> <pre><code>External NGINX -&gt; Kubernetes Service (has to be NodePort) -&gt; Ingress (NGINX) -&gt; Backend Service -&gt; Pods </code></pre></li> </ol>
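<p>For option 1, the Kubernetes side is just a NodePort Service; your existing NGINX then lists the node IPs with that node port as upstreams. A sketch (names and ports are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080   # point the external NGINX upstream at &lt;node-ip&gt;:30080
</code></pre>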
<p>TLDR: Something keeps recreating containers with an image from my Kubernetes master machine and I cant figure out what!!?!?</p> <p>I created a deployment(Web Project) and a service(HTTPS Service). The deployment created 3 replica sets of my app 'webProject'.</p> <p>After I ran kubectl create -f webproject.yml, it spun everything up but then my Docker images got stuck somewhere during 'Rollout'. </p> <p>So I kubectl delete deployments/webproject which then removed my deployments. I also removed the https service as well.</p> <pre><code>kubectl get pods No resources found. kubectl get deployments No resources found. kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3h38m kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster100 Ready master 3h37m v1.12.1 </code></pre> <p>As you can see it says there are no pods or worker nodes. So when I connect to the worker node to troubleshoot the images, I noticed that it still had containers running with my deployment name. </p> <p>After I </p> <pre><code>docker stop 'container' docker rm 'container' docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 259a77058d24 39825e5b6bcd "/usr/sbin/apache2ct…" 22 seconds ago Up 20 seconds k8s_webServer_webServer-deployment-7696fdd44c-dcjjd_default_fcf8fde0-d0c6-11e8-9f67-bc305be7abdb_2 </code></pre> <p>They are instantly getting recreated again. Why?</p>
<p>If you delete a node in Kubernetes, it just deletes it from etcd, where Kubernetes keeps its state. However, the kubelet is still running on your node and may hold a cache (not 100% sure about that). I would try:</p> <pre><code>systemctl stop kubelet </code></pre> <p>or</p> <pre><code>pkill kubelet </code></pre> <p>then verify that it is not running:</p> <pre><code>ps -Af | grep kubelet # should not return anything. </code></pre> <p>Then stop and remove your containers like you did initially.</p>
<p>To my understanding <a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a> is a container orchestration service comparable to <a href="https://aws.amazon.com/ecs/" rel="nofollow noreferrer">AWS ECS</a> or <a href="https://docs.docker.com/engine/swarm/" rel="nofollow noreferrer">Docker Swarm</a>. Yet there are several <a href="https://stackoverflow.com/questions/32047563/kubernetes-vs-cloudfoundry/32238148">high rated questions</a> on stackoverflow that compare it to <a href="https://www.cloudfoundry.org" rel="nofollow noreferrer">CloudFoundry</a> which is a plattform orchestration service. </p> <p>This means that CloudFoundry can take care of the VM layer, updating and provisioning VMs while moving containers avoiding downtime. Therefore the comparison to Kubernetes makes limited sense to my understanding. </p> <p>Am I misunderstanding something, does Kubernetes support provisioning and managing the VM layer too?</p>
<p>Yes, you can manage VMs with <a href="https://github.com/kubevirt" rel="nofollow noreferrer">KubeVirt</a> as @AbdennourTOUMI pointed out. However, Kubernetes focuses on container orchestration and it also interacts with <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="nofollow noreferrer">cloud providers</a> to provision things like Load Balancers that can direct traffic to a cluster.</p> <p><a href="https://docs.cloudfoundry.org/concepts/overview.html" rel="nofollow noreferrer">Cloud Foundry</a> is a PaaS that provides much more than Kubernetes at the lower level. Kubernetes can run on top of an IaaS like AWS together with something like <a href="https://www.openshift.com/" rel="nofollow noreferrer">OpenShift</a>.</p> <p>This is a diagram showing some of the differences:</p> <p><a href="https://i.stack.imgur.com/pXSXP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pXSXP.png" alt="Diagram"></a></p>
<p>I am using the <code>flannel</code> network plugin in my k8s cluster. And there is one special node which has one internal IP address and one public ip address which make it possible to ssh into it. </p> <p>After I add the node using <code>kubeadm</code> I found out that the <code>k get node xx -o yaml</code> returns the <code>flannel.alpha.coreos.com/public-ip</code> annotation with the public IP address and <strong>which makes the internal Kubernetes pod unaccessible from other nodes</strong>.</p> <pre><code>apiVersion: v1 kind: Node metadata: annotations: flannel.alpha.coreos.com/backend-data: '{"VtepMAC":"xxxxxx"}' flannel.alpha.coreos.com/backend-type: vxlan flannel.alpha.coreos.com/kube-subnet-manager: "true" flannel.alpha.coreos.com/public-ip: &lt;the-public-ip, not the internal one&gt; kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: "0" volumes.kubernetes.io/controller-managed-attach-detach: "true" </code></pre> <p>I try to use <code>k edit node xxx</code> to change the <code>public-ip</code> in the annotation it works in just one minute and then it will change back to the original one.</p> <p>So...my question is just like the title: How can I change the Kubernetes node annotation <code>flannel.alpha.coreos.com/public-ip</code> without modifying back?</p>
<p>Make the modification using <code>kubectl</code>; you have two ways:</p> <ul> <li><p><strong>kubectl annotate</strong>:</p> <pre><code>kubectl annotate node xx --overwrite flannel.alpha.coreos.com/public-ip=new-value </code></pre></li> <li><p>or <strong>kubectl patch</strong>:</p> <pre><code>kubectl patch node xx -p '{"metadata":{"annotations":{"flannel.alpha.coreos.com/public-ip":"new-value"}}}' </code></pre></li> </ul>
<p>Trying to teach myself on how to use Kubernetes, and having some issues. </p> <p>I was able to set up a cluster, deploy the nginx image and then access nginx using a service of type NodePort (once I added the port to the security group inbound rules of the node).</p> <p>My next step was to try to use a service of type LoadBalancer to try to access nginx.</p> <p>I set up a new cluster and deployed the nginx image.</p> <pre><code>kubectl \ create deployment my-nginx-deployment \ --image=nginx </code></pre> <p>I then set up the service for the LoadBalancer</p> <pre><code>kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=8080 --name=nginxpubic </code></pre> <p>Once it was done setting up, I tried to access nginx using the LoadBalancer Ingress (Which I found from describing the LoadBalancer service). I received a This page isn’t working error.</p> <p>Not really sure where I went wrong.</p> <p>results of kubectl get svc</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 100.64.0.1 &lt;none&gt; 443/TCP 7h nginxpubic LoadBalancer 100.71.37.139 a5396ba70d45d11e88f290658e70719d-1485253166.us-west-2.elb.amazonaws.com 80:31402/TCP 7h </code></pre>
<p>From the nginx Docker Hub page, I see that the container is using port 80, not 8080:</p> <p><a href="https://hub.docker.com/_/nginx/" rel="nofollow noreferrer">https://hub.docker.com/_/nginx/</a></p> <p>So the expose command should be:</p> <pre><code>kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=80 --name=nginxpubic </code></pre> <p>Also, make sure the service type LoadBalancer is available in your environment.</p> <p><strong>Known issues for Minikube installations:</strong></p> <pre><code>Features that require a Cloud Provider will not work in Minikube. These include: LoadBalancers Features that require multiple nodes. These include: Advanced scheduling policies </code></pre>
<p>A <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="nofollow noreferrer">Kubernetes Pod</a> and an <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html" rel="nofollow noreferrer">AWS ECS Task Definition</a> both support multiple different container images: each instance of the pod / task will run all images as containers together.</p> <p>Does CloudFoundry support a similar concept to allow apps that consist of multiple, separate processes? </p>
<p>Actually, CloudFoundry has a community project for container orchestration based on Kubernetes, so that will accept pods the same way Kubernetes does.</p> <p>You can read more about it <a href="https://www.cloudfoundry.org/container-runtime/" rel="nofollow noreferrer">here</a>.</p> <p>CloudFoundry also has the <code>CF Application Runtime</code>, which is pretty much their PaaS that allows you to deploy applications <a href="https://www.heroku.com/" rel="nofollow noreferrer">Heroku</a> style; under the hood these run as 'containers'. It's not clear from the docs what type of containers, and I presume you could find out more by reading the code, but that detail is not exposed to users, nor is it exposed as Pods.</p>
<p>Can someone explain why some of these resources are both in apps and extensions api-group.</p> <pre><code>C02W84XMHTD5:~ iahmad$ kubectl api-resources --api-group=apps NAME SHORTNAMES APIGROUP NAMESPACED KIND controllerrevisions apps true ControllerRevision daemonsets ds apps true DaemonSet deployments deploy apps true Deployment replicasets rs apps true ReplicaSet statefulsets sts apps true StatefulSet C02W84XMHTD5:~ iahmad$ C02W84XMHTD5:~ iahmad$ C02W84XMHTD5:~ iahmad$ kubectl api-resources --api-group=extensions NAME SHORTNAMES APIGROUP NAMESPACED KIND daemonsets ds extensions true DaemonSet deployments deploy extensions true Deployment ingresses ing extensions true Ingress networkpolicies netpol extensions true NetworkPolicy podsecuritypolicies psp extensions false PodSecurityPolicy replicasets rs extensions true ReplicaSet </code></pre>
<p>It's there for backward compatibility. Generally, features/resources are introduced under <code>extensions</code>, and when they graduate in later Kubernetes releases they become part of the <code>core</code>, <code>apps</code>, or another API group. Refer to the <a href="https://kubernetes.io/docs/reference/using-api/deprecation-policy/" rel="nofollow noreferrer">deprecation policy</a> to see how it works with respect to Kubernetes releases.</p> <p>In case you are wondering, the general rule is something like this, from older to newer:</p> <ul> <li><code>extensions</code> is generally older than <code>core</code>, <code>apps</code>, etc.</li> <li>v1alpha1 -> v1alpha2 -> v1alphaN -> v1beta1 -> v1beta2 -> v1betaN -> v1 in core/apps/etc -> v2alpha/v2beta/v2 -> vNalpha/vNbeta/vN</li> </ul>
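<p>You can also ask for a resource through a specific group/version explicitly, which makes the duplication visible (depending on your cluster version, some of these group/versions may already have been removed):</p> <pre><code>kubectl get deployments.v1.apps
kubectl get deployments.v1beta1.extensions
kubectl explain deployment --api-version=apps/v1
</code></pre>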
<p>To my understanding <a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a> is a container orchestration service comparable to <a href="https://aws.amazon.com/ecs/" rel="nofollow noreferrer">AWS ECS</a> or <a href="https://docs.docker.com/engine/swarm/" rel="nofollow noreferrer">Docker Swarm</a>. Yet there are several <a href="https://stackoverflow.com/questions/32047563/kubernetes-vs-cloudfoundry/32238148">high rated questions</a> on stackoverflow that compare it to <a href="https://www.cloudfoundry.org" rel="nofollow noreferrer">CloudFoundry</a> which is a plattform orchestration service. </p> <p>This means that CloudFoundry can take care of the VM layer, updating and provisioning VMs while moving containers avoiding downtime. Therefore the comparison to Kubernetes makes limited sense to my understanding. </p> <p>Am I misunderstanding something, does Kubernetes support provisioning and managing the VM layer too?</p>
<p>As for <strong>VMs</strong>, my answer is <strong>yes</strong>; you can run VMs as workloads in a k8s cluster.</p> <p>Indeed, the Red Hat team figured out how to run VMs in a Kubernetes cluster with the <a href="https://kubevirt.io/docs/workloads/controllers/virtualmachine.html" rel="nofollow noreferrer">KubeVirt</a> add-on.</p> <p>Example from the link above:</p> <pre><code>apiVersion: kubevirt.io/v1alpha2 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-cirros name: vm-cirros spec: running: false template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-cirros spec: domain: devices: disks: - disk: bus: virtio name: registrydisk volumeName: registryvolume - disk: bus: virtio name: cloudinitdisk volumeName: cloudinitvolume machine: type: "" resources: requests: memory: 64M terminationGracePeriodSeconds: 0 volumes: - name: registryvolume registryDisk: image: kubevirt/cirros-registry-disk-demo:latest - cloudInitNoCloud: userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK name: cloudinitvolume </code></pre> <p>Then:</p> <pre><code>kubectl create -f vm.yaml virtualmachine "vm-cirros" created </code></pre>
<p>OK, so this might be a basic question, but I'm new to Kubernetes and tried to install WordPress onto it using helm with the stable/wordpress chart, but I keep getting the error "pod has unbound immediate PersistentVolumeClaims (repeated 2 times)". Is this because of the requirement listed at <a href="https://github.com/helm/charts/tree/master/stable/wordpress" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/wordpress</a>, "PV provisioner support in the underlying infrastructure"? How do I enable this in my infrastructure? I have set up my cluster across three nodes on DigitalOcean and I've tried searching for tutorials on this with no luck so far. Please let me know what I'm missing, thanks.</p>
<p><strong>PersistentVolume</strong> types are implemented as plugins. Kubernetes currently supports the following plugins:</p> <pre><code>GCEPersistentDisk AWSElasticBlockStore AzureFile AzureDisk FC (Fibre Channel) Flexvolume Flocker NFS iSCSI RBD (Ceph Block Device) CephFS Cinder (OpenStack block storage) Glusterfs VsphereVolume Quobyte Volumes HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster) Portworx Volumes ScaleIO Volumes StorageOS </code></pre> <p>You can enable support for PVs or dynamic PV provisioning using those plugins.</p> <p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">Detailed reference</a></p> <p>On DigitalOcean you can use block storage for volumes.</p> <p><a href="https://www.digitalocean.com/docs/kubernetes/how-to/add-persistent-storage/" rel="nofollow noreferrer">Details</a></p>
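<p>On DigitalOcean the usual route is their CSI driver, which installs a StorageClass (commonly named <code>do-block-storage</code>; verify with <code>kubectl get storageclass</code>). With a default StorageClass in place, the chart's PVCs get bound automatically. A standalone PVC would look like this sketch (the claim name and size are placeholders):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi
</code></pre>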
<p>By default, the data stored in etcd is not encrypted. For production deployments, some of the data stored in etcd, such as Secrets, needs to be encrypted. Is there a way to have Secrets stored in etcd in encrypted form by default?</p>
<p>To have encryption you need to start the <code>apiserver</code> with this parameter:</p> <p><code>--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml</code></p> <p>where the yaml file contains this:</p> <pre><code>kind: EncryptionConfig apiVersion: v1 resources: - resources: - secrets providers: - aescbc: keys: - name: key1 secret: ${ENCRYPTION_KEY} - identity: {} </code></pre> <p>Here the provider is aescbc (one of the strongest providers available) and the key is generated beforehand with:</p> <pre><code>ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64) </code></pre> <p>Take a look at these documents:</p> <ul> <li><p><a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/</a></p></li> <li><p><a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/06-data-encryption-keys.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/06-data-encryption-keys.md</a> (and the following md files)</p></li> </ul>
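<p>After restarting the apiserver with that flag, newly written Secrets are stored encrypted; you can verify this by reading one straight out of etcd (adjust the etcdctl endpoint and cert flags to your etcd setup):</p> <pre><code>kubectl create secret generic secret1 -n default --from-literal=mykey=mydata

ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 | hexdump -C
# the stored value should start with k8s:enc:aescbc:v1: instead of plain text
</code></pre>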
<p>I am learning Kubernetes and planning to do continuous deployment of my apps with Kubernetes manifests. </p> <p>I'd like to have my app defined as a <code>Deployment</code> and a <code>Service</code> in a manifest, and have my CD system run <code>kubectl apply -f</code> on the manifest file. </p> <p>However, our current setup is to tag our Docker images with the SHA of the git commit for that version of the app. </p> <p>Is there a Kubernetes-native way to express the image tag as a variable, and have the CD system set that variable to the correct git SHA?</p>
<p>You should consider <a href="/questions/tagged/helm" class="post-tag" title="show questions tagged &#39;helm&#39;" rel="tag">helm</a> charts in this case, where you separate the skeleton of the templates (or what you called the manifest) from its values, which change from one release to another.</p> <p>In <strong>templates/deployment.yaml</strong> :</p> <pre><code>spec: containers: - name: {{ template "nginx.name" . }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" </code></pre> <p>And in <strong>values.yaml</strong> : </p> <pre><code>image: repository: nginx tag: 1.11.0 </code></pre> <p>See the full example <a href="https://github.com/helm/helm/blob/master/docs/examples/nginx" rel="nofollow noreferrer">here</a></p>
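<p>Your CD system can then inject the git SHA at deploy time. A minimal sketch (chart path and release name are placeholders):</p> <pre><code># in the CD pipeline, after the image has been pushed
GIT_SHA=$(git rev-parse --short HEAD)

helm upgrade --install my-app ./chart \
  --set image.tag="${GIT_SHA}"
</code></pre> <p>If you prefer to stay with plain manifests instead of Helm, <code>kubectl set image deployment/my-app my-app=registry/my-app:${GIT_SHA}</code> or templating the manifest with <code>envsubst</code> before <code>kubectl apply -f</code> achieves the same effect.</p>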
<p>I have a Service of type LoadBalancer. </p> <pre><code>spec: type: LoadBalancer </code></pre> <p>GKE creates a load balancer and forwarding rules. The load balancer created by GKE/GCloud is TCP. I want Google-managed SSL certs. I created the certs using gcloud: </p> <pre><code>gcloud beta compute ssl-certificates create... </code></pre> <p>How do I attach this cert to the load balancer defined by GKE? There is no section in the console to edit the load balancer/frontend to add SSL certs. Can I do it using the gcloud CLI?</p> <p>Thanks</p>
<p>If you want to terminate SSL on your GCE load balancer, it can't be a TCP load balancer, because a TCP load balancer is a Layer 4 load balancer and SSL sits at layer 7 in the network stack. For this case, you can set up a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> with an ingress controller like <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx</a> or <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">Traefik</a> and terminate SSL on the Ingress.</p> <p>GCE also supports layer 7 (HTTPS) load balancers; on GKE this is what the built-in GCE ingress controller provisions when you create an Ingress. Alternatively, you could terminate SSL on an <a href="https://cloud.google.com/load-balancing/docs/https/setting-up-https" rel="nofollow noreferrer">HTTPS or L7</a> load balancer that you provision yourself, but then you will have to manually point it to a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">Service of the NodePort</a> type.</p> <p>Update:</p> <p>Another <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">doc</a> about Ingresses with Google-managed SSL certs.</p>
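<p>A minimal sketch of attaching the certificate you already created with <code>gcloud beta compute ssl-certificates create</code> to a GKE Ingress (the Service name <code>my-service</code> and cert name <code>my-cert</code> are placeholders; the Service should be of type NodePort for the GCE ingress controller):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # attach the pre-created compute SSL certificate by name
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-cert"
spec:
  backend:
    serviceName: my-service
    servicePort: 80
</code></pre> <p>This way GKE provisions an HTTP(S) load balancer with the certificate on its frontend, instead of the plain TCP load balancer you get from a <code>type: LoadBalancer</code> Service.</p>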
<p>My Main Question: Why is the schema-registry crashing?</p> <p>Peripheral Question: Why are two pods launching for each of zookeeper/kafka/schema-registry if I've configured one server for each? Does everything thing else look basically right?</p> <pre><code>➜ helm repo update &lt;snip&gt; ➜ helm install --values values.yaml --name my-confluent-oss confluentinc/cp-helm-charts &lt;snip&gt; ➜ helm list NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE my-confluent-oss 1 Sat Oct 20 19:09:08 2018 DEPLOYED cp-helm-charts-0.1.0 1.0 default ➜ kubectl get pods NAME READY STATUS RESTARTS AGE my-confluent-oss-cp-kafka-0 2/2 Running 0 20m my-confluent-oss-cp-schema-registry-59d8877584-c2jc7 1/2 CrashLoopBackOff 7 20m my-confluent-oss-cp-zookeeper-0 2/2 Running 0 20m </code></pre> <p>My <code>values.yaml</code> is as follows. I've tested this out with <code>helm install --debug --dry-run</code>. I'm just disabling persistence, setting a single server (this is a development setup for running in a VM), and disabling the extra services for the moment until I get the basics working:</p> <pre><code>cp-kafka: brokers: 1 persistence: enabled: false cp-zookeeper: persistence: enabled: false servers: 1 cp-zookeeper: persistence: enabled: false servers: 1 cp-kafka-connect: enabled: false cp-kafka-rest: enabled: false cp-ksql-server: enabled: false </code></pre> <p>Here are the logs for the failing schema-registry:</p> <pre><code>➜ kubectl logs my-confluent-oss-cp-schema-registry-59d8877584-c2jc7 cp-schema-registry-server &lt;snip&gt; [2018-10-21 00:28:14,738] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser) [2018-10-21 00:28:14,738] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser) [2018-10-21 00:28:14,751] INFO Cluster ID: ofJRwpXzRn-ltDn8b_6h3A (org.apache.kafka.clients.Metadata) [2018-10-21 00:28:14,753] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread) [2018-10-21 00:28:14,756] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread) [2018-10-21 00:28:14,800] INFO [Consumer clientId=KafkaStore-reader-_schemas, groupId=my-confluent-oss] Resetting offset for partition _schemas-0 to offset 0. 
(org.apache.kafka.clients.consumer.internals.Fetcher) [2018-10-21 00:28:14,821] INFO Cluster ID: ofJRwpXzRn-ltDn8b_6h3A (org.apache.kafka.clients.Metadata) [2018-10-21 00:28:14,857] INFO Wait to catch up until the offset of the last message at 7 (io.confluent.kafka.schemaregistry.storage.KafkaStore) [2018-10-21 00:28:14,930] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry) [2018-10-21 00:28:14,939] INFO Kafka version : 2.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser) [2018-10-21 00:28:14,940] INFO Kafka commitId : 4b1dd33f255ddd2f (org.apache.kafka.common.utils.AppInfoParser) [2018-10-21 00:28:14,953] INFO Cluster ID: ofJRwpXzRn-ltDn8b_6h3A (org.apache.kafka.clients.Metadata) [2018-10-21 00:29:14,945] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication) io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException: Timed out waiting for join group to complete at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:220) at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:63) at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:41) at io.confluent.rest.Application.createServer(Application.java:169) at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43) Caused by: io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryTimeoutException: Timed out waiting for join group to complete at io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector.init(KafkaGroupMasterElector.java:202) at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:215) ... 4 more [2018-10-21 00:29:14,948] INFO Shutting down schema registry (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry) [2018-10-21 00:29:14,949] INFO [kafka-store-reader-thread-_schemas]: Shutting down (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread) [2018-10-21 00:29:14,950] INFO [kafka-store-reader-thread-_schemas]: Stopped (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread) [2018-10-21 00:29:14,951] INFO [kafka-store-reader-thread-_schemas]: Shutdown completed (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread) [2018-10-21 00:29:14,953] INFO KafkaStoreReaderThread shutdown complete. (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread) [2018-10-21 00:29:14,953] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. 
(org.apache.kafka.clients.producer.KafkaProducer) [2018-10-21 00:29:14,959] ERROR Unexpected exception in schema registry group processing thread (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector) org.apache.kafka.common.errors.WakeupException at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:498) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:284) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:161) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:243) at io.confluent.kafka.schemaregistry.masterelector.kafka.SchemaRegistryCoordinator.ensureCoordinatorReady(SchemaRegistryCoordinator.java:207) at io.confluent.kafka.schemaregistry.masterelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:97) at io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector$1.run(KafkaGroupMasterElector.java:192) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) </code></pre> <p>I'm using minikube 0.30.0 and a fresh, clean minikube vm:</p> <pre><code>➜ kubectl version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-22T05:40:33Z", GoVersion:"go1.9.7", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>Your schema registry can't join your Kafka group. You'll have to check the configs: your schema registry needs to perform a leader election initially, and that leader election could be either through <a href="https://docs.confluent.io/current/schema-registry/docs/index.html" rel="nofollow noreferrer">Zookeeper or Kafka</a>.</p> <p>Looks like the Helm chart installs the schema registry using <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-schema-registry/templates/deployment.yaml#L67" rel="nofollow noreferrer">Kafka leader election</a>, and you can also see that you can <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-schema-registry/templates/_helpers.tpl#L51" rel="nofollow noreferrer">manually pass the Kafka broker parameter</a> or it picks it from <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-schema-registry/templates/_helpers.tpl#L49" rel="nofollow noreferrer">.Values.kafka.bootstrapServers</a>, but also the value for <a href="https://github.com/confluentinc/cp-helm-charts/blob/8d51b6dede4b08bb68fd8eb6fc0855e1b9d7e858/charts/cp-schema-registry/values.yaml#L36" rel="nofollow noreferrer">.bootstrapServers</a> appears empty. You can see what config value is in your deployment by simply running something like:</p> <pre><code>$ kubectl get deployment my-confluent-oss-cp-schema-registry -o=yaml </code></pre> <p>Then you can change it to point to the internal Kubernetes my-confluent-oss-cp-kafka Service endpoint with:</p> <pre><code>$ kubectl edit deployment my-confluent-oss-cp-schema-registry </code></pre> <p>Also, note that as of this writing the <a href="https://github.com/confluentinc/cp-helm-charts" rel="nofollow noreferrer">cp-helm-charts</a> are in developer preview, so use them at your own risk.</p> <p>The other parameter you can configure is <a href="https://github.com/confluentinc/schema-registry/blob/master/core/src/main/java/io/confluent/kafka/schemaregistry/masterelector/kafka/KafkaGroupMasterElector.java#L174" rel="nofollow noreferrer">SCHEMA_REGISTRY_KAFKASTORE_INIT_TIMEOUT_CONFIG</a> since <a href="https://github.com/confluentinc/schema-registry/blob/master/core/src/main/java/io/confluent/kafka/schemaregistry/masterelector/kafka/KafkaGroupMasterElector.java#L205" rel="nofollow noreferrer">this is</a> exactly where you are seeing the error. So the Kafka Schema Registry may be timing out while trying to connect to the Kafka store (maybe related to minikube). What's kind of odd is that it should retry.</p>
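<p>In the cp-schema-registry container image the kafkastore settings come from environment variables, so a sketch of the fix is to make sure the bootstrap servers point at the in-cluster Kafka Service. The exact env var name and Service DNS name below are assumptions based on the chart and your release name, so double-check them against the deployment you dumped above:</p> <pre><code># inside the schema-registry container spec of the deployment
env:
  - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
    value: "PLAINTEXT://my-confluent-oss-cp-kafka-headless:9092"
</code></pre>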
<p>So putting everything in detail here for better clarification. My service consist of following attributes in dedicated namespace (Not using ServiceEntry)</p> <ol> <li>Deployment (1 deployment)</li> <li>Configmaps (1 configmap)</li> <li>Service</li> <li>VirtualService</li> <li>GW</li> </ol> <p>Istio is enabled in namespace and when I create / run deployment it create 2 pods as it should. Now as stated in issues subject I want to allow all outgoing traffic for deployment because my serives needs to connect with 2 service discovery server: </p> <ol> <li>vault running on port 8200</li> <li>spring config server running on http</li> <li>download dependencies and communicate with other services (which are not part of vpc/ k8)</li> </ol> <p>Using following deployment file will not open outgoing connections. Only thing works is simple <code>https request on port 443</code> like when i run <code>curl https://google.com</code> its success but no response on <code>curl http://google.com</code> Also logs showing connection with vault is not establishing as well.</p> <p>I have used almost all combinations in deployment but non of them seems to work. Anything I am missing or doing this in wrong way? would really appreciate contributions in this :) </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: my-application-service name: my-application-service-deployment namespace: temp-nampesapce annotations: traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0 spec: replicas: 1 template: metadata: labels: app: my-application-service-deployment spec: containers: - envFrom: - configMapRef: name: my-application-service-env-variables image: image.from.dockerhub:latest name: my-application-service-pod ports: - containerPort: 8080 name: myappsvc resources: limits: cpu: 700m memory: 1.8Gi requests: cpu: 500m memory: 1.7Gi apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: my-application-service-ingress namespace: temp-namespace spec: hosts: - my-application.mydomain.com gateways: - http-gateway http: - route: - destination: host: my-application-service port: number: 80 kind: Service apiVersion: v1 metadata: name: my-application-service namespace: temp-namespace spec: selector: app: api-my-application-service-deployment ports: - port: 80 targetPort: myappsvc protocol: TCP apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: http-gateway namespace: temp-namespace spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 80 name: http protocol: HTTP hosts: - "*.mydomain.com" </code></pre> <p>Namespace with istio enabled:</p> <pre><code>Name: temp-namespace Labels: istio-injection=enabled Annotations: &lt;none&gt; Status: Active No resource quota. No resource limits. </code></pre> <p>Describe pods showing that istio and sidecare is working. </p> <pre><code>Name: my-application-service-deployment-fb897c6d6-9ztnx Namespace: temp-namepsace Node: ip-172-31-231-93.eu-west-1.compute.internal/172.31.231.93 Start Time: Sun, 21 Oct 2018 14:40:26 +0500 Labels: app=my-application-service-deployment pod-template-hash=964537282 Annotations: sidecar.istio.io/status={"version":"2e0c897425ef3bd2729ec5f9aead7c0566c10ab326454e8e9e2b451404aee9a5","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs... 
Status: Running IP: 100.115.0.4 Controlled By: ReplicaSet/my-application-service-deployment-fb897c6d6 Init Containers: istio-init: Container ID: docker://a47003a092ec7d3dc3b1d155bca0ec53f00e545ad1b70e1809ad812e6f9aad47 Image: docker.io/istio/proxy_init:1.0.2 Image ID: docker-pullable://istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: -p 15001 -u 1337 -m REDIRECT -i * -x -b 8080, -d State: Terminated Reason: Completed Exit Code: 0 Started: Sun, 21 Oct 2018 14:40:26 +0500 Finished: Sun, 21 Oct 2018 14:40:26 +0500 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: &lt;none&gt; Containers: my-application-service-pod: Container ID: docker://1a30a837f359d8790fb72e6b8fda040e121fe5f7b1f5ca47a5f3732810fd4f39 Image: image.from.dockerhub:latest Image ID: docker-pullable://848569320300.dkr.ecr.eu-west-1.amazonaws.com/k8_api_env@sha256:98abee8d955cb981636fe7a81843312e6d364a6eabd0c3dd6b3ff66373a61359 Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 21 Oct 2018 14:40:28 +0500 Ready: True Restart Count: 0 Limits: cpu: 700m memory: 1932735283200m Requests: cpu: 500m memory: 1825361100800m Environment Variables from: my-application-service-env-variables ConfigMap Optional: false Environment: vault.token: &lt;set to the key 'vault_token' in secret 'vault.token'&gt; Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-rc8kc (ro) istio-proxy: Container ID: docker://3ae851e8ded8496893e5b70fc4f2671155af41c43e64814779935ea6354a8225 Image: docker.io/istio/proxyv2:1.0.2 Image ID: docker-pullable://istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: proxy sidecar --configPath /etc/istio/proxy --binaryPath /usr/local/bin/envoy --serviceCluster my-application-service-deployment --drainDuration 45s --parentShutdownDuration 1m0s --discoveryAddress istio-pilot.istio-system:15007 --discoveryRefreshDelay 1s --zipkinAddress zipkin.istio-system:9411 --connectTimeout 10s --statsdUdpAddress istio-statsd-prom-bridge.istio-system:9125 --proxyAdminPort 15000 --controlPlaneAuthPolicy NONE State: Running Started: Sun, 21 Oct 2018 14:40:28 +0500 Ready: True Restart Count: 0 Requests: cpu: 10m Environment: POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name) POD_NAMESPACE: temp-namepsace (v1:metadata.namespace) INSTANCE_IP: (v1:status.podIP) ISTIO_META_POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name) ISTIO_META_INTERCEPTION_MODE: REDIRECT Mounts: /etc/certs/ from istio-certs (ro) /etc/istio/proxy from istio-envoy (rw) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: default-token-rc8kc: Type: Secret (a volume populated by a Secret) SecretName: default-token-rc8kc Optional: false istio-envoy: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: Memory istio-certs: Type: Secret (a volume populated by a Secret) SecretName: istio.default Optional: true QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-certs" 
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-rc8kc" Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-envoy" Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxy_init:1.0.2" already present on machine Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container Normal Scheduled 3m default-scheduler Successfully assigned my-application-service-deployment-fb897c6d6-9ztnx to ip-172-42-231-93.eu-west-1.compute.internal Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "image.from.dockerhub:latest" already present on machine Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxyv2:1.0.2" already present on machine Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container </code></pre>
<p>The issue was that I had put the <code>sidecar</code> annotation on the Deployment metadata instead of on the pod template; moving it into the pod template resolved the issue. Got help from here:</p> <p><a href="https://github.com/istio/istio/issues/9304" rel="nofollow noreferrer">https://github.com/istio/istio/issues/9304</a></p>
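<p>For reference, a sketch of what that looks like for the deployment in the question: the <code>traffic.sidecar.istio.io/excludeOutboundIPRanges</code> annotation has to sit under <code>spec.template.metadata.annotations</code>, where the sidecar injector reads it, not under the Deployment's own <code>metadata.annotations</code>:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-application-service-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
      annotations:
        # annotation goes on the pod template, not on the Deployment itself
        traffic.sidecar.istio.io/excludeOutboundIPRanges: "0.0.0.0/0"
    spec:
      containers:
        - name: my-application-service-pod
          image: image.from.dockerhub:latest
          ports:
            - containerPort: 8080
</code></pre>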
<p>Using <code>kubeadm init</code> initializes the control plane with default configuration options.</p> <p>Is there a way to see what default values/configuration it will use for the control plane, how can I view that configuration file, and where is it stored?</p>
<p>Found the command: ( just in case someone needs it)</p> <pre><code>C02W84XMHTD5:~ iahmad$ kubectl get configMap kubeadm-config -o yaml --namespace=kube-system apiVersion: v1 data: MasterConfiguration: | api: advertiseAddress: 192.168.64.4 bindPort: 8443 controlPlaneEndpoint: localhost apiServerExtraArgs: admission-control: Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota auditPolicy: logDir: /var/log/kubernetes/audit logMaxAge: 2 path: "" authorizationModes: - Node - RBAC certificatesDir: /var/lib/minikube/certs/ cloudProvider: "" criSocket: /var/run/dockershim.sock etcd: caFile: "" certFile: "" dataDir: /data/minikube endpoints: null image: "" keyFile: "" imageRepository: k8s.gcr.io kubeProxy: config: bindAddress: 0.0.0.0 clientConnection: acceptContentTypes: "" burst: 10 contentType: application/vnd.kubernetes.protobuf kubeconfig: /var/lib/kube-proxy/kubeconfig.conf qps: 5 clusterCIDR: "" configSyncPeriod: 15m0s conntrack: max: null maxPerCore: 32768 min: 131072 tcpCloseWaitTimeout: 1h0m0s tcpEstablishedTimeout: 24h0m0s enableProfiling: false healthzBindAddress: 0.0.0.0:10256 hostnameOverride: "" iptables: masqueradeAll: false masqueradeBit: 14 minSyncPeriod: 0s syncPeriod: 30s ipvs: minSyncPeriod: 0s scheduler: "" syncPeriod: 30s metricsBindAddress: 127.0.0.1:10249 mode: "" nodePortAddresses: null oomScoreAdj: -999 portRange: "" resourceContainer: /kube-proxy udpIdleTimeout: 250ms kubeletConfiguration: {} kubernetesVersion: v1.10.0 networking: dnsDomain: cluster.local podSubnet: "" serviceSubnet: 10.96.0.0/12 noTaintMaster: true nodeName: minikube privilegedPods: false token: "" tokenGroups: - system:bootstrappers:kubeadm:default-node-token tokenTTL: 24h0m0s tokenUsages: - signing - authentication unifiedControlPlaneImage: "" </code></pre>
<p>My kube-dns pod is crashing:</p> <pre><code>kube-dns-6d4fc847dc-6bh59 1/3 CrashLoopBackOff 5844 7d13h </code></pre> <p>These are the logs from the kubedns container</p> <pre><code>kubectl logs kube-dns-6d4fc847dc-6bh59 --namespace kube-system -c kubedns I1021 00:56:37.547936 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:38.047923 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... E1021 00:56:38.048390 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:192: Failed to list *v1.Service: Get https://100.64.0.1:443/api/v1/services?resourceVersion=0: dial tcp 100.64.0.1:443: i/o timeout E1021 00:56:38.048405 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:189: Failed to list *v1.Endpoints: Get https://100.64.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 100.64.0.1:443: i/o timeout I1021 00:56:38.547922 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:39.047917 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:39.547923 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:40.047935 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:40.547928 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:41.047931 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:41.547929 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:42.047924 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:42.547931 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:43.047909 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:43.547921 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:44.047923 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:44.547932 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:45.047935 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:45.547927 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:46.047928 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:46.547929 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:47.047925 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:47.547933 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:48.047914 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:48.547930 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:49.047927 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:49.547929 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:50.047925 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:50.547928 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... 
I1021 00:56:51.047933 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:51.547928 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:52.047920 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:52.547930 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:53.047913 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:53.547931 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:54.047925 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:54.547930 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:55.047928 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:55.547938 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:56.047935 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:56.547937 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:57.047940 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:57.547941 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:58.047915 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:58.547934 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:59.047932 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:56:59.547937 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:00.047942 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:00.547934 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:01.047925 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:01.547929 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:02.047915 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:02.547924 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:03.047906 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:03.547927 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:04.047934 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:04.547927 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:05.047933 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:05.547925 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:06.047928 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:06.547926 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... I1021 00:57:07.047926 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... 
I1021 00:57:07.547930 1 dns.go:219] Waiting for [endpoints services] to be initialized from apiserver... F1021 00:57:08.047911 1 dns.go:209] Timeout waiting for initialization </code></pre> <p>Kindly help. </p>
<p>Looks like your <code>kube-dns</code> pods can't talk to your kube-apiserver. Port <code>443</code> on the kube-apiserver has been deprecated for a while. Your ConfigMap or config file is making it point to <code>&lt;kube-master:443&gt;</code>. </p> <p>You can try overriding it on your kube-dns deployment with the <code>--kube-master-url</code> option. Something like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: strategy: rollingUpdate: maxSurge: 10% maxUnavailable: 0 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: tolerations: - key: "CriticalAddonsOnly" operator: "Exists" volumes: - name: kube-dns-config configMap: name: kube-dns optional: true containers: - name: kubedns image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 ... args: - --kube-master-url=https://&lt;master-ip&gt;:6443 - --domain=cluster.local. - --dns-port=10053 - --config-dir=/kube-dns-config - --v=2 </code></pre> <p>You can get the <code>&lt;master-ip&gt;</code> from the output of:</p> <pre><code>$ kubectl cluster-info </code></pre>
<p>Today is my first day playing with GCR and GKE. So apologies if my question sounds childish.</p> <p>So I have created a new registry in GCR. It is private. Using <a href="https://cloud.google.com/container-registry/docs/advanced-authentication#access_token" rel="noreferrer">this</a> documentation, I got hold of my Access Token using the command</p> <pre><code>gcloud auth print-access-token #&lt;MY-ACCESS_TOKEN&gt; </code></pre> <p>I know that my username is <code>oauth2accesstoken</code></p> <p>On my local laptop when I try</p> <pre><code>docker login https://eu.gcr.io/v2 Username: oauth2accesstoken Password: &lt;MY-ACCESS_TOKEN&gt; </code></pre> <p>I get:</p> <pre><code>Login Successful </code></pre> <p>So now its time to create a <code>docker-registry</code> secret in Kubernetes.</p> <p>I ran the below command:</p> <pre><code>kubectl create secret docker-registry eu-gcr-io-registry --docker-server='https://eu.gcr.io/v2' --docker-username='oauth2accesstoken' --docker-password='&lt;MY-ACCESS_TOKEN&gt;' --docker-email='&lt;MY_EMAIL&gt;' </code></pre> <p>And then my Pod definition looks like:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-app spec: containers: - name: my-app image: eu.gcr.io/&lt;my-gcp-project&gt;/&lt;repo&gt;/&lt;my-app&gt;:latest ports: - containerPort: 8090 imagePullSecrets: - name: eu-gcr-io-registry </code></pre> <p>But when I spin up the pod, I get the ERROR:</p> <pre><code>Warning Failed 4m (x4 over 6m) kubelet, node-3 Failed to pull image "eu.gcr.io/&lt;my-gcp-project&gt;/&lt;repo&gt;/&lt;my-app&gt;:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication </code></pre> <p>I verified my secrets checking the YAML file and doing a <code>base64 --decode</code> on the <code>.dockerconfigjson</code> and it is correct.</p> <p>So what have I missed here ? </p>
<p><strong>If your GKE cluster &amp; GCR registry are in the same project:</strong> You don't need to configure authentication. GKE clusters are authorized to pull from private GCR registries in the same project with no config. (Very likely this is your case!)</p> <hr> <p><strong>If your GKE cluster &amp; GCR registry are in different GCP projects:</strong> Follow these instructions to give the service account of your GKE cluster access to read private images in your GCR registry: <a href="https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry" rel="noreferrer">https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry</a></p> <p>In a nutshell, this can be done by:</p> <pre><code>gsutil iam ch serviceAccount:[PROJECT_NUMBER]-compute@developer.gserviceaccount.com:objectViewer gs://[BUCKET_NAME] </code></pre> <p>where <code>[BUCKET_NAME]</code> is the GCS bucket storing your GCR images (like <code>artifacts.[PROJECT-ID].appspot.com</code>) and <code>[PROJECT_NUMBER]</code> is the <strong>numeric</strong> GCP project number of the project hosting your GKE cluster.</p>
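<p>To fill in the placeholders, the project number of the GKE project can be looked up with gcloud; the project IDs and number below are placeholders:</p> <pre><code># numeric project number of the project hosting the GKE cluster
gcloud projects describe my-gke-project --format="value(projectNumber)"

# grant the cluster's default compute service account read access to the registry bucket
gsutil iam ch \
  serviceAccount:123456789012-compute@developer.gserviceaccount.com:objectViewer \
  gs://eu.artifacts.my-gcr-project.appspot.com
</code></pre> <p>Note that for <code>eu.gcr.io</code> the backing bucket is named <code>eu.artifacts.[PROJECT-ID].appspot.com</code> rather than <code>artifacts.[PROJECT-ID].appspot.com</code>.</p>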
<p>I have configured EKS on AWS with 4 nodes. When deploying my application, I've noticed that some pods cannot be scheduled because of insufficient resources (I get the error <code>0/4 nodes are available: 4 Insufficient pods.</code>)</p> <p>When looking into the k8s dashboard, I've noticed that only 10% of memory is used (see picture) <a href="https://i.stack.imgur.com/rzAkS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rzAkS.png" alt="node memory"></a></p> <p>I've used <a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html" rel="nofollow noreferrer">this</a> guide in order to set things up.</p> <p>How can I increase this limit and make my nodes run at full capacity?</p> <p>Thanks</p>
<p>You can find the full answer in this <a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt" rel="nofollow noreferrer">source</a>. </p> <p>The short answer: the kubelet's pod capacity is capped by default because on EKS each pod gets its own VPC IP address, and every EC2 instance type supports only a limited number of IPs per network interface (ENI).</p>
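<p>The limit in that file follows the VPC ENI math; a sketch of the calculation (the instance-type numbers come from the AWS ENI limits table, so treat them as an assumption to re-check for your instance type):</p> <pre><code># max pods per node = ENIs * (IPv4 addresses per ENI - 1) + 2
# e.g. t3.medium: 3 ENIs, 6 IPv4 addresses per ENI
#      3 * (6 - 1) + 2 = 17 pods
</code></pre> <p>So the practical options are to use larger instance types (more ENIs and IPs per ENI) or to add more nodes; memory utilisation is not the limiting factor here, the per-node pod count is.</p>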
<p>I have recently installed minikube and VirtualBox on a new Mac using homebrew. I am following instructions from the <a href="https://kubernetes.io/docs/tutorials/hello-minikube" rel="noreferrer">official minikube tutorial</a>. </p> <p>This is how I am starting the cluster - </p> <pre><code>minikube start --vm-driver=hyperkit </code></pre> <p>On running <code>kubectl cluster-info</code> I get this</p> <pre><code>Kubernetes master is running at https://192.168.99.100:8443 CoreDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. </code></pre> <p>Then I set the context of minikube</p> <pre><code>kubectl config use-context minikube </code></pre> <p>But when I run <code>minikube dashboard</code> it takes a lot of time to get any output and ultimately I get this output - </p> <pre><code>http://127.0.0.1:50769/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ is not responding properly: Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary 
Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 </code></pre> <p>I am expecting to see a web UI for minikube clusters, but getting error output. Is there something I am doing wrong?</p> <p><strong>More-info -</strong><br> OS: macOS Mojave (10.14)<br> kubectl command was installed using gcloud sdk.</p> <p><strong>Update</strong><br> Output of <code>kubectl cluster-info dump</code></p> <pre><code>Unable to connect to the server: net/http: TLS handshake timeout </code></pre> <p>Output of <code>kubectl get pods</code> and <code>kubectl get pods --all-namespaces</code> both </p> <pre><code>The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port? </code></pre>
<p>Stop minikube:</p> <pre><code>minikube stop </code></pre> <p>Clean up the current minikube config and data (which is not working or has gone bad):</p> <pre><code>rm -rf ~/.minikube </code></pre> <p>Start minikube again (a fresh instance):</p> <pre><code>minikube start </code></pre>
<p>I use Google Cloud to deploy a company app.</p> <p>The goal: every branch is deployed on a subdomain of example.com: task-123.example.com, etc.</p> <p>I copied the Cloud DNS nameservers to the domain registrar. I pass the static IP address (via <code>kubernetes.io/ingress.global-static-ip-name: "test-static-ip"</code>) to the Ingress and point the registrar's A record at it. But I can't understand how to make the subdomains work.</p> <p>Every branch creates an Ingress with the static IP, but with a different URL for the host.</p> <p>I made a CNAME *.example.com which refers to example.com, but it's not working.</p> <p>Help me, please. Sorry for my English.</p>
<p>You want *.example.com to point to the ingress controller so branch1.example.com and branch2.example.com will both hit the ingress controller. This is achieved with wildcard DNS.</p> <p>Each branch in your scenario should have its own routing rule (ingress resource) with a host section defined for its specific branch. The ingress controller is updated when a new ingress resource is created and its routing rules then reflect the additional rule. So creating a new branch with a new ingress resource for that host will tell the ingress controller to route traffic for that specific host to a Service specific to that branch. (Or you can define all the branch rules in one go with a fanout ingress - see <a href="https://stackoverflow.com/questions/52382658/ingress-nginx-create-one-ingress-per-host-or-combine-many-hosts-into-one-ingr/52382878#52382878">ingress-nginx - create one ingress per host? Or combine many hosts into one ingress and reload?</a> )</p> <p>That's 'how it works'. I'm not sure if that is your question though? It's hard to diagnose the problem you're having. Presumably you have an Ingress, a Service and a Deployment? To help with that I think you'd need to post those and explain (either as an update or a separate question) what behaviour you see (a 404 maybe)?</p>
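<p>A minimal sketch of one such per-branch routing rule, assuming a Service named <code>task-123-svc</code> exists for that branch (the names are placeholders):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: task-123
spec:
  rules:
    - host: task-123.example.com     # only traffic for this branch's subdomain
      http:
        paths:
          - backend:
              serviceName: task-123-svc
              servicePort: 80
</code></pre> <p>With the wildcard DNS record pointing all of <code>*.example.com</code> at the single static IP of the ingress controller, each new branch only needs its own Ingress (plus Service/Deployment); no DNS change is required per branch.</p>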
<p>There are some natively supported methods such as basic auth, X509 certificates and webhook tokens.</p> <p>Is there some workaround/project to use LDAP for user authentication with Kubernetes? I need users to be grouped in LDAP, and then use a RoleBinding to bind the group to a specific namespace with a role.</p>
<p>Yes you can integrate with LDAP, for example:</p> <ul> <li>Using <a href="https://github.com/dexidp/dex#connectors" rel="noreferrer">dex</a></li> <li>With Torchbox's <a href="https://github.com/torchbox/kube-ldap-authn" rel="noreferrer">kube-ldap-authn</a> (hint: read this <a href="https://icicimov.github.io/blog/virtualization/Kubernetes-LDAP-Authentication/" rel="noreferrer">post</a>)</li> <li>Via <a href="https://github.com/keycloak/keycloak-gatekeeper" rel="noreferrer">keycloak</a></li> </ul> <p>Also, there's a nice intro-level <a href="https://medium.com/@pmvk/step-by-step-guide-to-integrate-ldap-with-kubernetes-1f3fe1ec644e" rel="noreferrer">blog post</a> to get you started.</p>
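<p>Whichever authenticator you pick, the namespace-scoping part is plain RBAC once the LDAP group name arrives as a Kubernetes group. A sketch, assuming the authenticator presents the group as <code>ldap-team-a</code> (the group and namespace names are assumptions; they depend on how your connector maps LDAP groups):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
  - kind: Group
    name: ldap-team-a          # group claim coming from the LDAP/OIDC authenticator
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # built-in aggregated role; could also be a namespaced Role
  apiGroup: rbac.authorization.k8s.io
</code></pre>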
<p>I've just installed OpenShift-Okd 3.11, and am trying out a persistent Postgres database.</p> <p>After attempting to create the database, I get the following error:</p> <pre><code>MountVolume.SetUp failed for volume "postgresql" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/b76a314a-d59a-11e8-a502-6c626d58b24d/volumes/kubernetes.io~nfs/postgresql --scope -- mount -t nfs apps.mydomain.com:/pg-data /var/lib/origin/openshift.local.volumes/pods/b76a314a-d59a-11e8-a502-6c626d58b24d/volumes/kubernetes.io~nfs/postgresql Output: Running scope as unit run-7329.scope. mount.nfs: Protocol not supported </code></pre> <p>I have also create the following persistent volume:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: postgresql spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce nfs: path: /pg-data server: apps.mydomain.com persistentVolumeReclaimPolicy: Retain </code></pre> <p>Even after creating the PV (using <code>oc create -f pv.yml</code>) I still get the above error.</p>
<p>Looks like you don't have an NFS server running on <code>apps.mydomain.com</code>. You need an NFS server exporting directories that can be mounted remotely by an NFS client, in this case your Postgres pod.</p> <p>If you're not sure how to set up an NFS server, you can follow <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">this guide</a> to install it in Kubernetes.</p> <p>You can also run an <a href="http://nfs.sourceforge.net/nfs-howto/ar01s03.html" rel="nofollow noreferrer">NFS server</a> outside Kubernetes if you'd like to. Here's another guide to setting up an <a href="https://www.howtoforge.com/tutorial/setting-up-an-nfs-server-and-client-on-centos-7/" rel="nofollow noreferrer">NFS server</a> on RHEL 7.</p>
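<p>A quick sanity check from any machine that can reach the host (the export path matches the PV in the question; the export options and package names are assumptions for a RHEL/CentOS-style server):</p> <pre><code># on the NFS server: export the directory used by the PV
echo "/pg-data *(rw,sync,no_root_squash)" &gt;&gt; /etc/exports
exportfs -ra

# from a client / OKD node: verify the export is visible and mountable
showmount -e apps.mydomain.com
mount -t nfs apps.mydomain.com:/pg-data /mnt
</code></pre> <p>Also note that "mount.nfs: Protocol not supported" often shows up when the node is missing the NFS client utilities or when client and server disagree on the NFS version, so make sure <code>nfs-utils</code> is installed on every OKD node.</p>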
<p>I got a kubernetes cluster running on AWS using <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a>. I also got prometheus and grafana set up using <a href="https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a>.</p> <p>What I'm trying to do is to store metrics gathered by prometheus on EBS. My persistent volume claim yaml is:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: prometheus-data namespace: monitoring spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <p>And prometheus.yaml is:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: Prometheus metadata: labels: prometheus: k8s name: k8s namespace: monitoring spec: alerting: alertmanagers: - name: alertmanager-main namespace: monitoring port: web baseImage: quay.io/prometheus/prometheus nodeSelector: beta.kubernetes.io/os: linux replicas: 2 resources: requests: memory: 400Mi volumeMounts: - name: prometheus-data mountPath: "/data" ruleSelector: matchLabels: prometheus: k8s role: alert-rules volumes: - name: prometheus-data persistentVolumeClaim: claimName: prometheus-data serviceAccountName: prometheus-k8s serviceMonitorNamespaceSelector: {} serviceMonitorSelector: {} version: v2.4.3 </code></pre> <p>The 10Gi EBS volume is getting created but it's state remains available. I also tried deleting prometheus pods hoping the data will be retained. Unfortunately that was not the case.</p>
<p>I am able to setup the kube-prometheus with the persistent storage. Please check the following json files:</p> <p>Promethues-deploy.json</p> <pre><code>{ "apiVersion": "monitoring.coreos.com/v1", "kind": "Prometheus", "metadata": { "labels": { "prometheus": "k8s" }, "name": "k8s-prom", "namespace": "monitoring" }, "spec": { "alerting": { "alertmanagers": [ { "name": "alertmanager-main", "namespace": "monitoring", "port": "web" } ] }, "baseImage": "quay.io/prometheus/prometheus", "replicas": 1, "resources": { "requests": { "memory": "400Mi" } }, "ruleSelector": { "matchLabels": { "prometheus": "k8s", "role": "alert-rules" } }, "securityContext": { "fsGroup": 0, "runAsNonRoot": false, "runAsUser": 0 }, "serviceAccountName": "prometheus", "serviceMonitorSelector": { "matchExpressions": [ { "key": "k8s-app", "operator": "Exists" } ] }, "storage": { "class": "", "resources": {}, "selector": {}, "volumeClaimTemplate": { "spec": { "resources": { "requests": { "storage": "10Gi" } }, "selector": { "matchLabels": { "app": "k8s-vol" } }, "storageClassName": "no-provision" } } }, "version": "v2.2.1" } } </code></pre> <p>Prometheus-pv.json</p> <pre><code>{ "apiVersion": "v1", "kind": "PersistentVolume", "metadata": { "labels": { "app": "k8s-vol" }, "name": "prometheus-vol", "namespace": "monitoring" }, "spec": { "storageClassName": "no-provision", "accessModes": [ "ReadWriteOnce" ], "capacity": { "storage": "10Gi" }, "hostPath": { "path": "/data" }, "persistentVolumeReclaimPolicy": "Retain" }, "status": { "phase": "Bound" } } </code></pre> <p>Hope it helps.</p>
<p>I want to deploy Spring Boot applications using Kinesis streams on Kubernetes cluster on AWS. </p> <p>I used <em>kops</em> in an AWS EC2 (Amazon Linux) instance to create my cluster and deploy it using <em>terraform</em>. </p> <p>I installed Spring Cloud Data Flow for Kubernetes using <em>Helm</em> chart. All my pods are up and running and I can access to the Spring Cloud Data Flow interface in order to register my dockerized apps. I am using ECR repositories to upload my Docker images. </p> <p>When I want to deploy the stream (composed of a time-source and a log-sink), a big nice red error message pops up. I checked the log of the <em>Skipper</em> pod and I have the following error message starting with :</p> <pre><code>org.springframework.cloud.skipper.SkipperException: Could not install AppDeployRequest </code></pre> <p>and finishing with :</p> <pre><code>Caused by: java.io.IOException: Cannot run program "docker" (in directory "/tmp/spring-cloud-deployer-5769885450333766520/time-log-kinesis-stream-1539963209716/time-log-kinesis-stream.log-sink-kinesis-app-v1"): error=2, No such file or directory at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048) ~[na:1.8.0_111-internal] at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.start(LocalAppDeployer.java:386) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE] at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.start(LocalAppDeployer.java:414) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE] at org.springframework.cloud.deployer.spi.local.LocalAppDeployer$AppInstance.access$200(LocalAppDeployer.java:296) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE] at org.springframework.cloud.deployer.spi.local.LocalAppDeployer.deploy(LocalAppDeployer.java:199) ~[spring-cloud-deployer-local-1.3.7.RELEASE.jar!/:1.3.7.RELEASE] ... 54 common frames omitted Caused by: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.forkAndExec(Native Method) ~[na:1.8.0_111-internal] at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:247) ~[na:1.8.0_111-internal] at java.lang.ProcessImpl.start(ProcessImpl.java:134) ~[na:1.8.0_111-internal] at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029) ~[na:1.8.0_111-internal] ... 58 common frames omitted </code></pre> <p>I already had this error when I tried to deploy on a local k8s cluster on Windows 10 and I thought it was linked to Win10 platform.</p> <p>I am using <code>spring-cloud-dataflow-server-kubernetes</code> at version <strong>1.6.2.RELEASE</strong>.</p> <p>I really do not have any clues why this error is appearing. Thanks !</p>
<p>It looks like the <code>docker</code> command is not found by the SCDF <code>local</code> deployer's ProcessBuilder when it tries to run the docker <code>exec</code> from this path: </p> <p>/tmp/spring-cloud-deployer-5769885450333766520/time-log-kinesis-stream-1539963209716/time-log-kinesis-stream.log-sink-kinesis-app-v1</p> <p>The SCDF sets the above path as its working directory before running the <code>docker</code> command and hence <code>docker</code> is expected to run from this location.</p>
<p>I'm getting an error when I try to deploy an AKS cluster using an ARM template, if the vnetSubnetId in the agentPoolProfiles property is a reference. I've used this exact template before without problems (on October 4th) but now I'm seeing an error with multiple different clusters, and when I do it either through a VSTS pipeline, or manually using PowerShell.</p> <p>The property is set up like this:</p> <pre><code>"agentPoolProfiles": [ { "name": "agentpool", "count": "[parameters('agentCount')]", "vmSize": "[parameters('agentVMSize')]", "osType": "Linux", "dnsPrefix": "[variables('agentsEndpointDNSNamePrefix')]", "osDiskSizeGB": "[parameters('agentOsDiskSizeGB')]", "vnetSubnetID": "[reference(concat('Microsoft.Network/virtualNetworks/', variables('vnetName'))).subnets[0].id]" } ] </code></pre> <p>The variable 'vnetName' is based on an input parameter I use for the cluster name, and the vnet itself 100% exists, and is deployed as part of the same template. </p> <p>If I try to deploy a new cluster I get the following error:</p> <pre><code>Message: { "code": "InvalidParameter", "message": "The value of parameter agentPoolProfile.vnetSubnetID is invalid.", "target": "agentPoolProfile.vnetSubnetID" } </code></pre> <p>If I try to re-deploy a cluster, with no changes to the template or input parameters since it last worked, I get the following error:</p> <pre><code>Message: { "code": "PropertyChangeNotAllowed", "message": "Changing property 'agentPoolProfile.vnetSubnetID' is not allowed.", "target": "agentPoolProfile.vnetSubnetID" } </code></pre> <p>Has something changed that means I can no longer get the vnet ID at runtime? Does it need to be passed in as a parameter now? If something has changed, is there anywhere I can find out the details?</p> <p>Edit: Just to clarify, for re-deploying a cluster, I have checked and there are no new subnets, and I'm seeing the same behavior on 3 different clusters with different VNets.</p> <p>Switching from reference() to resourceId() did fix the problem so has been marked as the answer, but I'm still no clearer on why reference() stopped working, will update that here as well if I figure it out.</p>
<p>I think what happened is that <code>subnets[0].id</code> returns the wrong (<strong>DIFFERENT</strong>) subnetId, and this is what the error points out. You cannot change the subnetId after deploying the cluster.</p> <p>Probably somebody created a new subnet in the vnet. But I'd say that overall the approach is flawed: you should build the ID with the <code>resourceId()</code> function or just pass it in as a parameter.</p>
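<p>A sketch of the <code>resourceId()</code> form, assuming the subnet's name is available as a variable (the <code>subnetName</code> variable is a placeholder; since the vnet is deployed in the same template, add a <code>dependsOn</code> on it if needed):</p> <pre><code>"agentPoolProfiles": [
  {
    "name": "agentpool",
    "count": "[parameters('agentCount')]",
    "vmSize": "[parameters('agentVMSize')]",
    "osType": "Linux",
    "dnsPrefix": "[variables('agentsEndpointDNSNamePrefix')]",
    "osDiskSizeGB": "[parameters('agentOsDiskSizeGB')]",
    "vnetSubnetID": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('vnetName'), variables('subnetName'))]"
  }
]
</code></pre> <p>Unlike <code>reference()</code>, <code>resourceId()</code> is evaluated without reading the deployed resource's runtime state, so it always yields the same, predictable subnet ID.</p>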
<p>I created a virtual switch with the name "Minikube2". Previously I had created a virtual switch with the name "minikube", but deleted it later as there was a config issue. </p> <p>I did all the required configuration ("sharing on ethernet", etc.). </p> <p>Now when I try to run </p> <p>minikube start --kubernetes-version="v1.10.3" --vm-driver="hyperv" --hyperv-virtual-switch="minikube2"</p> <p>it downloads the ISO, but fails to configure the switch:</p> <p>it says vswitch "minikube2" not found</p>
<p>Short answer is to delete <code>C:\Users\%USERNAME%\.minikube</code> and try again. Below is my investigation:</p> <p>First I have created Virtual Switch "minikube", started the cluster and it worked as expected. Then I stopped minikube, created new "Minikube2" switch and started minikube</p> <pre><code>minikube start --kubernetes-version="v1.10.3" --vm-driver="hyperv" --hyperv-virtual-switch="minikube2" --v=9 </code></pre> <p>Appeared issue:</p> <blockquote> <p>Starting local Kubernetes v1.10.3 cluster... Starting VM... [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state [stdout =====>] : Off</p> <p>[stderr =====>] : [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM minikube [stdout =====>] : [stderr =====>] : Hyper-V\Start-VM : 'minikube' failed to start. Synthetic Ethernet Port (Instance ID AF9D08DC-2625-4F24-93E5-E09BAD904899): Error 'Insufficient system resources exist to complete the requested service.'. Failed to allocate resources while connecting to a virtual network. The Ethernet switch may not exist. 'minikube' failed to start. (Virtual machine ID 863D6558-78EC-4648-B712-C1FDFC907588) 'minikube' Synthetic Ethernet Port: Failed to finish reserving resources with Error 'Insufficient system resources exist to complete the requested service.' (0x800705AA). (Virtual machine ID 863D6558-78EC-4648-B712-C1FDFC907588) 'minikube' failed to allocate resources while connecting to a virtual network: Insufficient system resources exist to complete the requested service. (0x800705AA) (Virtual Machine ID 863D6558-78EC-4648-B712-C1FDFC907588). The Ethernet switch may not exist. Could not find Ethernet switch 'minikube'. At line:1 char:1 + Hyper-V\Start-VM minikube + ~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Start-VM], VirtualizationException + FullyQualifiedErrorId : Unspecified,Microsoft.HyperV.PowerShell.Commands.StartVM</p> <p>E1022 12:50:43.384867 6216 start.go:168] Error starting host: Error starting stopped host: exit status 1.</p> <p>Retrying. E1022 12:50:43.398832 6216 start.go:174] Error starting host: Error starting stopped host: exit status 1 PS C:\Windows\system32></p> </blockquote> <p>Then I deleted <code>C:\Users\%USERNAME%\.minikube</code> , minikube vm inside Hyper-V and started again:</p> <pre><code>C:\Windows\system32&gt; minikube start --kubernetes-version="v1.10.3" --vm-driver="hyperv" --hyperv-virtual-switch="minikube2" --v=9 </code></pre> <p>Result:</p> <blockquote> <p>Starting local Kubernetes v1.10.3 cluster... Starting VM...</p> <p>Downloading Minikube ISO</p> <p>170.78 MB / 170.78 MB [============================================] 100.00% 0s Creating CA: C:\Users\Vitalii.minikube\certs\ca.pem </p> <p>Creating client certificate: C:\Users\Vitalii.minikube\certs\cert.pem</p> <p>----- [stderr =====>] : Using switch "Minikube2"</p> <p>----- Moving files into cluster... </p> <p>Downloading kubeadm v1.10.3 </p> <p>Downloading kubelet v1.10.3 Finished </p> <p>Downloading kubeadm v1.10.3 Finished</p> <p>Finished Downloading kubelet v1.10.3 </p> <p>Setting up certs... Connecting to</p> <p>cluster... Setting up kubeconfig... 
</p> <p>Starting cluster components...</p> <p>Kubectl is now configured to use the cluster.</p> </blockquote> <pre><code>PS C:\Windows\system32&gt; kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-c4cffd6dc-cjzsm 1/1 Running 0 1m kube-system etcd-minikube 1/1 Running 0 56s kube-system kube-addon-manager-minikube 1/1 Running 0 13s kube-system kube-apiserver-minikube 1/1 Running 0 41s kube-system kube-controller-manager-minikube 1/1 Running 0 1m kube-system kube-dns-86f4d74b45-w62rv 2/3 Running 0 1m kube-system kube-proxy-psgss 1/1 Running 0 1m kube-system kube-scheduler-minikube 1/1 Running 0 21s kube-system kubernetes-dashboard-6f4cfc5d87-jz266 1/1 Running 0 1m kube-system storage-provisioner 1/1 Running 0 1m </code></pre>
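<p>For reference, a sketch of the cleanup in PowerShell before retrying (assumes the default minikube home; also remove the leftover <code>minikube</code> VM in Hyper-V Manager if <code>minikube delete</code> leaves it behind):</p> <pre><code>minikube delete
Remove-Item -Recurse -Force $env:USERPROFILE\.minikube
minikube start --kubernetes-version="v1.10.3" --vm-driver="hyperv" --hyperv-virtual-switch="Minikube2"
</code></pre>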
<p>Using <code>kubectl get</code> with <code>-o yaml</code> on a resource, I see that every resource is versioned:</p> <pre><code>kind: ConfigMap metadata: creationTimestamp: 2018-10-16T21:44:10Z name: my-config namespace: default resourceVersion: "163" </code></pre> <p>I wonder what the significance of this versioning is and for what purposes it is used (use cases)?</p>
<p>A more detailed explanation that helped me understand exactly how this works:</p> <blockquote> <p>All the objects you’ve created throughout this book—Pods, <code>ReplicationControllers, Services, Secrets</code> and so on—need to be stored somewhere in a persistent manner so their manifests survive API server restarts and failures. For this, Kubernetes uses <code>etcd</code>, which is a fast, distributed, and consistent key-value store. The only component that talks to <code>etcd</code> directly is the Kubernetes API server. All other components read and write data to etcd indirectly through the API server.</p> <p>This brings a few benefits, among them a more robust <em>optimistic locking</em> system as well as validation; and, by abstracting away the actual storage mechanism from all the other components, it’s much simpler to replace it in the future. It’s worth emphasizing that etcd is the only place Kubernetes stores cluster state and metadata.</p> <p><em>Optimistic concurrency control</em> (sometimes referred to as <em>optimistic locking</em>) is a method where instead of locking a piece of data and preventing it from being read or updated while the lock is in place, the piece of data includes a version number. Every time the data is updated, the version number increases. When updating the data, the version number is checked to see if it has increased between the time the client read the data and the time it submits the update. If this happens, the update is rejected and the client must re-read the new data and try to update it again. The result is that when two clients try to update the same data entry, only the first one succeeds.</p> </blockquote> <p>Marko Luksa, "Kubernetes in Action"</p> <p>So, all the Kubernetes resources include a <code>metadata.resourceVersion</code> field, which clients need to pass back to the API server when updating an object. If the version doesn’t match the one stored in <code>etcd</code>, the API server rejects the update.</p>
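<p>A quick way to see this optimistic locking from the command line (a sketch with a hypothetical ConfigMap; the exact error wording may differ between versions):</p> <pre><code># save the object, including its resourceVersion
kubectl get configmap my-config -o yaml &gt; cm.yaml

# someone else (or you) updates the object in the meantime, which bumps resourceVersion
kubectl create configmap my-config --from-literal=foo=baz -o yaml --dry-run | kubectl replace -f -

# replaying the stale manifest is now rejected
kubectl replace -f cm.yaml
# Error from server (Conflict): ... the object has been modified; please apply
# your changes to the latest version and try again
</code></pre>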
<p>As the GCE Disk does not support <code>ReadWriteMany</code> , I have no way to apply change to Deployment but being stucked at <strong>ContainerCreating</strong> with <code>FailedAttachVolume</code> .</p> <p>So here's my setting:</p> <p><strong>1. PVC</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim labels: app: mysql spec: storageClassName: "standard" accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <p><strong>2. Service</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: mysql labels: app: mysql spec: type: ClusterIP ports: - protocol: TCP port: 3306 targetPort: 3306 selector: app: mysql </code></pre> <p><strong>3. Deployment</strong></p> <pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: mysql labels: app: mysql spec: selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - image: mysql/mysql-server name: mysql ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /mysql-data volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim </code></pre> <p>Which these are all fine for creating the PVC, svc and deployment. Pod and container started successfully and worked as expected.</p> <hr> <p><strong>However when I tried to apply change by:</strong> <code>kubectl apply -f mysql_deployment.yaml</code> </p> <p><strong>Firstly</strong>, the pods were sutcked with the existing one did not terminate and the new one would be creating forever.</p> <pre><code>NAME READY STATUS RESTARTS AGE mysql-nowhash 1/1 Running 0 2d mysql-newhash 0/2 ContainerCreating 0 15m </code></pre> <p>Secondly from the gCloud console, inside the pod that was trying to create, I got two crucial error logs:</p> <p><strong>1 of 2 FailedAttachVolume</strong></p> <pre><code>Multi-Attach error for volume "pvc-&lt;hash&gt;" Volume is already exclusively attached to one node and can't be attached to another FailedAttachVolume </code></pre> <p><strong>2 of 2 FailedMount</strong></p> <pre><code>Unable to mount volumes for pod "&lt;pod name and hash&gt;": timeout expired waiting for volumes to attach/mount for pod "default"/"&lt;pod name and hash&gt;". list of unattached/unmounted volumes=[mysql-persistent-storage] </code></pre> <p>What I could immediately think of is the <code>ReadWriteOnce</code> capability of gCloud PV. Coz the kubernetes engine would create a new pod before terminating the existing one. So under ReadWriteOnce it can never create a new pod and claim the existing pvc...</p> <p>Any idea or should I use some other way to perform deployment updates? appreciate for any contribution and suggestion =)</p> <p>remark: my current work-around is to create an interim NFS pod to make it like a ReadWriteMany pvc, this worked but sounds stupid... requiring an additional storage i/o overhead to facilitate deployment update ?.. =P</p>
<p>The reason is that with the default Deployment strategy, <code>type: RollingUpdate</code>, k8s waits for the new container to become ready before shutting down the old one. You can change this behaviour by setting the strategy to <code>type: Recreate</code> under <code>.spec.strategy</code>, as sketched below.</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy</a></p>
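<p>A minimal sketch of the relevant part of the Deployment manifest (only the strategy block is shown):</p> <pre><code>spec:
  strategy:
    type: Recreate
</code></pre>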
<pre><code>minikube start </code></pre> <p>gives the error message below on Manjaro Deepin (Arch Linux). </p> <blockquote> <p>E1002 00:02:24.832976 26272 start.go:143] Error starting host: Temporary Error: Error configuring auth on host: OS type not recognized.</p> </blockquote> <p>I also tried it on VirtualBox with Ubuntu 16.04 and it gives a VT-X/AMD-v not enabled error, even though the VirtualBox preference is set to enable VT-X/AMD-v. </p> <p>Note: I need to fix the issue on my local machine, not the VirtualBox VM.</p> <p>uname -a output: </p> <blockquote> <p>Linux xxx-pc 4.9.51-1-MANJARO #1 SMP PREEMPT Wed Sep 20 10:37:40 UTC 2017 x86_64 GNU/Linux</p> </blockquote>
<p>If working on <strong>Linux</strong>, follow the steps:</p> <p>Uninstall/delete all minikube related files</p> <pre><code>1. minikube delete 2. rm /usr/local/minikube 3. rm -rf ~/.minikube </code></pre> <p>Then do <code>minikube start</code> again. If it doesn't work uninstall &amp; reinstall minikube. </p> <p>For <strong>Windows</strong> user follow these steps: </p> <ol> <li>Do <code>minikube delete</code></li> <li>Delete <code>C:\Users\username\.minikube</code> folder.</li> <li>Do <code>minikube start</code> again. </li> </ol> <p><strong>Also</strong>, don't forget to stop all the process related to <strong><code>VirtualBox</code></strong> including <code>VBoxHeadless</code> before deleting minikube.</p>
<p>While doing <code>kubectl cluster-info dump</code> , I see alot of:</p> <pre><code>2018/10/18 14:47:47 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. 2018/10/18 14:48:17 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. 2018/10/18 14:48:47 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. 2018/10/18 14:49:17 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. 2018/10/18 14:49:47 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. 2018/10/18 14:50:17 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. 2018/10/18 14:50:47 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. </code></pre> <p>Maybe this is a bug that will be fixed in new version ( heapster is deprecated anyway in new versions) , but is there anyway to disable these checks to avoid these noisy messges.</p>
<p>You can find the Heapster deprecation timeline <a href="https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md" rel="nofollow noreferrer">here</a>.</p> <p>I found that in a Kubernetes cluster running version 1.10 the <code>kubernetes-dashboard</code> Pod produces this kind of error message:</p> <p><code>kubectl --namespace=kube-system log &lt;kubernetes-dashboard-Pod&gt;</code></p> <blockquote> <p>2018/10/22 13:04:36 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.</p> </blockquote> <p>It seems that <code>kubernetes-dashboard</code> still requires the Heapster service for metrics and graph purposes.</p>
<p>I am using a Kubernetes cluster version <code>1.10.4</code>. I want to update it to 1.12, but first I need to update it to 1.11. How is that possible? </p> <p>I read this FAQ: <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/</a></p> <p>But it did not work. The steps try to update immediately to 1.12 and that ends with an error. :(</p> <p>Help!</p>
<p>Reproduced your issue by installing v.1.10.4 version and trying to upgrade it to v.1.11.0 using <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/</a> FAQ.</p> <p>The same error and attempt to upgrade to 1.12.1 instead of 1.11.0</p> <blockquote> <p>[upgrade/config] FATAL: invalid configuration: kind and apiVersion is mandatory information that needs to be specified in all YAML documents</p> </blockquote> <p>This is happening because you pass v1.12.1 to $VERSION while using below command:</p> <pre><code>export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) root@kube-update-11:~# echo $VERSION </code></pre> <blockquote> <p>v1.12.1</p> </blockquote> <p>What you should do is manually set proper version:</p> <pre><code>export VERSION=v1.11.0 export ARCH=amd64 curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm &gt; /usr/bin/kubeadm chmod a+rx /usr/bin/kubeadm </code></pre> <p>And try again</p> <pre><code>root@kube-update-11:~# kubeadm upgrade plan </code></pre> <blockquote> <p>[preflight] Running pre-flight checks.</p> <p>[upgrade] Making sure the cluster is healthy:</p> <p>[upgrade/config] Making sure the configuration is correct:</p> <p>[upgrade/config] Reading configuration from the cluster...</p> <p>[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'</p> <p>I1022 12:07:11.188895 20089 feature_gate.go:230] feature gates: &amp;{map[]}</p> <p>[upgrade] Fetching available versions to upgrade to</p> <p>[upgrade/versions] Cluster version: v1.10.4</p> <p>[upgrade/versions] kubeadm version: v1.11.0</p> <p>[upgrade/versions] Latest stable version: v1.12.1</p> <p>[upgrade/versions] Latest version in the v1.10 series: v1.10.9</p> </blockquote> <pre><code>root@kube-update-11:~# kubeadm upgrade apply v1.11.0 </code></pre> <blockquote> <p>[preflight] Running pre-flight checks.</p> <p>[upgrade] Making sure the cluster is healthy:</p> <p>[upgrade/config] Making sure the configuration is correct:</p> <p>...</p> <p>[upgrade/version] You have chosen to change the cluster version to "v1.11.0"</p> <p>[upgrade/versions] Cluster version: v1.10.4</p> <p>[upgrade/versions] kubeadm version: v1.11.0</p> <p>...</p> <p>[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.0". Enjoy!</p> <p>[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.</p> </blockquote>
<p>As stated in the title, is it possible to find out a K8s cluster name from the API? I looked around the API and could not find it.</p>
<p><code>kubectl config current-context</code> does the trick (it outputs a little bit more, like the project name, region, etc., but it should give you the answer you need).</p>
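<p>If you specifically want the cluster name rather than the context name, something like this should also work (it reads the names from your kubeconfig, not from the cluster API):</p> <pre><code># all cluster names known to your kubeconfig
kubectl config view -o jsonpath='{.clusters[*].name}'

# only the cluster of the current context
kubectl config view --minify -o jsonpath='{.clusters[0].name}'
</code></pre>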
<p>I am deploying separate Kubernetes cluster for each lifecycle i.e. one cluster for dev and a separate cluster for production and will be using namespaces only for logical segregation of application based on there task e.g. all DB in data namespace, UI app in web namespace etc.</p> <p>Because of this design, there is no need to have namespaces in service or pod's FQDN as each service and deployment will be unique. Can anyone please suggest how I can remove namespaces from hostname e.g. if there is a service named my-service in default namespace I would like to resolve the service from other namespace using "my-service.svc.cluster.local" instead of "my-service.default.svc.cluster.local"</p> <p>I am running kubernetes 1.10 and using coreDNS.</p>
<p>You can't; that is not how Kubernetes DNS is designed. You can call it with just <code>my-service</code>, though, as long as you're in the same namespace.</p>
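<p>This works because of the DNS search path the kubelet writes into every pod, which lists the pod's own namespace first. A quick way to see it (the pod name is a placeholder, and the exact values depend on your cluster):</p> <pre><code>kubectl exec -it &lt;any-pod&gt; -- cat /etc/resolv.conf
# nameserver 10.96.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local
</code></pre>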
<p>I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a K8s Job. This happens with even the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.</p> <p>The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log <code>ls: /testing/hello.txt: No such file or directory</code></p> <p>Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: kio namespace: kmlflow spec: # ttlSecondsAfterFinished: 5 template: spec: containers: - name: kio-ingester image: busybox volumeMounts: - name: test-volume mountPath: /testing imagePullPolicy: IfNotPresent command: ["ls"] args: ["-l", "/testing/hello.txt"] volumes: - name: test-volume hostPath: # directory location on host path: /tmp # this field is optional # type: Directory restartPolicy: Never backoffLimit: 4 </code></pre> <p>Thanks in advance for any assistance.</p>
<p>Looks like when the volume is mounted, the existing data can't be accessed.</p> <p>You will need to make use of an init container to pre-populate the data in the volume. For example (note that the init container needs to run the <code>echo</code> through a shell for the redirection to work, and <code>hostPath</code> requires a <code>path</code>):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the redirection actually creates /data/config
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" &gt; /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    hostPath:
      # hostPath needs a path; point it at the host directory you want to share
      path: /tmp
</code></pre> <p>Reference:</p> <p><a href="https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519" rel="nofollow noreferrer">https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519</a></p>
<p>I have a cluster on Azure (AKS). I have an <code>orientdb</code> service </p> <pre><code>apiVersion: v1 kind: Service metadata: name: orientdb labels: app: orientdb role: backend spec: selector: app: orientdb ports: - protocol: TCP port: 2424 name: binary - protocol: TCP port: 2480 name: http </code></pre> <p>which I want to expose to the outside, such that an app from the internet can send TCP traffic directly to this service. </p> <p>(In order to connect to orientdb you need to connect over TCP to port 2424.)</p> <p>I am not good at networking, so this is my understanding, which might well be wrong. I tried the following: </p> <ol> <li>Setting up an Ingress <ul> <li>did not work, because ingress handles http, but is not well suited for tcp. </li> </ul></li> <li>I tried to set the ExternalIP field in the service config in the NodePort definition <ul> <li>did not work.</li> </ul></li> </ol> <p>So my problem is the following:<br> <strong>I cannot send tcp traffic to the service.</strong> Http traffic works fine.</p> <p>I would really appreciate it if someone would show me how to expose my service such that I can send TCP traffic directly to my orientdb service.</p> <p>Thanks in advance.</p>
<p>You can use either a Service of type LoadBalancer (I assume AKS supports that) or you can just use a NodePort.</p> <pre><code>kubectl expose deployment hello-world --type=LoadBalancer --name=my-service kubectl get services my-service </code></pre> <p>The output is similar to this:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service ClusterIP 10.3.245.137 104.198.205.71 8080/TCP 54s </code></pre> <p>Reference <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">here</a></p> <p>kubectl expose usage:</p> <pre><code>Usage $ expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] </code></pre> <p>You can make use of the <code>--port=2424 --target-port=2424</code> options to set the correct ports in the <code>kubectl expose</code> command above.</p>
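<p>A sketch of what a LoadBalancer Service for the OrientDB service in the question could look like (untested; the Service name is arbitrary):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: orientdb-lb
spec:
  type: LoadBalancer
  selector:
    app: orientdb
  ports:
  - name: binary
    protocol: TCP
    port: 2424
    targetPort: 2424
  - name: http
    protocol: TCP
    port: 2480
    targetPort: 2480
</code></pre>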
<p>I have <code>~/.kube/config</code> with following content:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com name: kubernetes-jenkins - cluster: certificate-authority-data: REDACTED server: https://REDACTED.sk1.us-east-1.eks.amazonaws.com name: kuberntes-dev contexts: - context: cluster: kubernetes-dev user: aws-dev name: aws-dev - context: cluster: kubernetes-jenkins user: aws-jenkins name: aws-jenkins current-context: aws-dev kind: Config preferences: {} users: - name: aws-dev user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 args: - token - -i - EKS_DEV_CLUSTER command: heptio-authenticator-aws env: null - name: aws-jenkins user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 args: - token - -i - EKS_JENKINS_CLUSTER command: heptio-authenticator-aws env: null </code></pre> <p>But when I'm trying to <code>kubectl cluster-info</code> I get:</p> <pre><code>Kubernetes master is running at http://localhost:8080 To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>As far as I understand something wrong in my kubeconfig, but I don't see what exactly. Also I tried to find any related issues, but with no luck.</p> <p>Could you suggest me something?</p> <p>Thanks.</p>
<p>You need to choose the context that you'd like to use. More information on how to use multiple clusters with multiple users is <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">here</a>.</p> <p>Essentially, you can view your current context (for the currently configured cluster):</p> <pre><code>$ kubectl config current-context </code></pre> <p>To view all the clusters configured:</p> <pre><code>$ kubectl config get-clusters </code></pre> <p>And to choose your context:</p> <pre><code>$ kubectl config use-context &lt;context-name&gt; </code></pre> <p>There are options to set different users per cluster in case you have them defined in your <code>~/.kube/config</code> file.</p>
<p>I am using AKS (Azure Kubernetes Service) and need a k8s Node.js client for these operations:</p> <p>Kill a pod by name<br> Change a deployment's pod count<br> Restart all of a deployment's pods</p> <p>I need only these functions; which lib is best for this?</p> <p>Please also provide examples of using the lib for some of these functions.</p> <p>Thank you. </p> <p>UPDATE</p> <p>I liked this one: <code>Node.js (TypeScript) github.com/Goyoo/node-k8s-client</code>. Can you provide more information about the service account and access?</p>
<p>Here is the full list of all the client libraries.</p> <p><a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/client-libraries/</a></p> <p>You will need to create a service account , and role binding to configure the proper permissions to do these operations from the client library.</p> <p><strong><em>node.js specific libraries:</em></strong></p> <blockquote> <p>Node.js (TypeScript) github.com/Goyoo/node-k8s-client</p> <p>Node.js github.com/tenxcloud/node-kubernetes-client</p> <p>Node.js github.com/godaddy/kubernetes-client</p> </blockquote> <p><strong><em>Basic Example ( Using godaddy client)</em></strong></p> <pre><code>/* eslint no-console:0 */ // // Demonstrate some of the basics. // const Client = require('kubernetes-client').Client; const config = require('kubernetes-client').config; const deploymentManifest = require('./nginx-deployment.json'); async function main() { try { const client = new Client({ config: config.fromKubeconfig(), version: '1.9' }); // // Get all the Namespaces. // const namespaces = await client.api.v1.namespaces.get(); console.log('Namespaces: ', namespaces); // // Create a new Deployment. // const create = await client.apis.apps.v1.namespaces('default').deployments.post({ body: deploymentManifest }); console.log('Create: ', create); // // Fetch the Deployment we just created. // const deployment = await client.apis.apps.v1.namespaces('default').deployments(deploymentManifest.metadata.name).get(); console.log('Deployment: ', deployment); // // Change the Deployment Replica count to 10 // const replica = { spec: { replicas: 10 } }; const replicaModify = await client.apis.apps.v1.namespaces('default').deployments(deploymentManifest.metadata.name).patch({ body: replica }); console.log('Replica Modification: ', replicaModify); // // Modify the image tag // const newImage = { spec: { template: { spec: { containers: [{ name: 'nginx', image: 'nginx:1.8.1' }] } } } }; const imageSet = await client.apis.apps.v1.namespaces('default').deployments(deploymentManifest.metadata.name).patch({ body: newImage }); console.log('New Image: ', imageSet); // // Remove the Deployment we created. // const removed = await client.apis.apps.v1.namespaces('default').deployments(deploymentManifest.metadata.name).delete(); console.log('Removed: ', removed); } catch (err) { console.error('Error: ', err); } } main(); </code></pre>
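<p>For the service account and access part, a minimal sketch of the RBAC objects that would cover deleting pods and patching/scaling deployments in the <code>default</code> namespace (all names are placeholders; tighten the verbs to what you actually need):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-client
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: node-client-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "deployments/scale"]
  verbs: ["get", "list", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: node-client-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: node-client
  namespace: default
roleRef:
  kind: Role
  name: node-client-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>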
<p>I've been trying to get Elasticsearch running on K8s using the <a href="https://github.com/helm/charts/tree/master/stable/elasticsearch" rel="nofollow noreferrer">newly-promoted-to-stable helm chart</a>, which works fine, BTW, for elasticsearch v 6.4.2. However, we're tied to a grails app that requires elasticsearch v 5.5.3, for which we don't have the ability to upgrade. I've downgraded the elasticsearch image version in the chart to 5.5.3 (and also tried v 5.6.12) but it fails to start.</p> <p>I looked into the <a href="https://github.com/upmc-enterprises/elasticsearch-operator" rel="nofollow noreferrer">elasticsearch operator</a>, but it's currently set up to work with AWS S3 storage types, out of the box, and GCP with a little work (although no snapshot ability). Before I dive into this, I'd like to know if it will work with 5.5.3, to begin with.</p> <p>Does anyone know if I can get elasticsearch v 5.5.3 running in a k8s cluster? I would say using a k8s StatefulSet at a minimum.</p> <p>Thanks!</p> <p><strong>Update</strong></p> <p>I suppose I should have given the errors that the existing helm chart is having when downgrading elasticsearch image to 5.5.3.</p> <p>master-0 pod fails to start with:</p> <p><code>Error injecting constructor, ElasticsearchException[java.io.IOException: failed to read [id:15, legacy:false, file:/usr/share/elasticsearch/data/nodes/0/_state/global-15.st]]; nested: IOException[failed to read [id:15, legacy:false, file:/usr/share/elasticsearch/data/nodes/0/_state/global-15.st]]; nested: ElasticsearchException[Unknown license version found, please upgrade all nodes to the latest elasticsearch-license plugin]; at org.elasticsearch.gateway.GatewayMetaState.&lt;init&gt;(Unknown Source) while locating org.elasticsearch.gateway.GatewayMetaState for parameter 4 at org.elasticsearch.gateway.GatewayService.&lt;init&gt;(Unknown Source) while locating org.elasticsearch.gateway.GatewayService Caused by: ElasticsearchException[java.io.IOException: failed to read [id:15, legacy:false, file:/usr/share/elasticsearch/data/nodes/0/_state/global-15.st]]; nested: IOException[failed to read [id:15, legacy:false, file:/usr/share/elasticsearch/data/nodes/0/_state/global-15.st]]; nested: ElasticsearchException[Unknown license version found, please upgrade all nodes to the latest elasticsearch-license plugin]; </code></p> <p>The client pods fail with:</p> <p><code>[2018-10-22T17:52:51,835][WARN ][o.e.d.z.UnicastZenPing ] [elasticsearch-client-6bf954c595-7zlpc] failed to resolve host [elasticsearch-discovery] java.net.UnknownHostException: elasticsearch-discovery </code></p> <p>Clearly, it's expecting a later elasticsearch version.</p>
<p>The short answer here is that ElasticSearch 5.5.3 should work with Kubernetes. Note the configs for 5.5.3 are slightly different; I believe they changed after 5.6 where, for example, they made <a href="https://www.elastic.co/guide/en/x-pack/current/xpack-introduction.html" rel="nofollow noreferrer">x-pack</a> enabled by default. And yes, use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> or start from the existing stable Helm chart.</p>
<p>I have a HELM values file which looks like so:</p> <pre><code>service: environment: dev spring_application_json: &gt;- { "spring" : { "boot" : { "admin" : { "client" : { "enabled" : "false", "url" : "http://website1", "instance" : { "service-base-url" : "http://website2", "management-base-url" : "http://website3" } } } } } } </code></pre> <p>And a corresponding template file which grabs this value and inserts it as an environment variable to a container.</p> <pre><code>spec: replicas: {{ .Values.replicaCount }} template: spec: imagePullSecrets: - name: {{ .Values.image.pullSecret }} containers: - name: {{ .Chart.Name }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: ENVIRONMENT value: "{{ .Values.service.environment }}" - name: SPRING_APPLICATION_JSON value: "{{ .Values.service.spring_application_json }}" </code></pre> <p>However when I run the helm install I get the following error:</p> <pre><code>Error: YAML parse error on deployment.yaml: error converting YAML to JSON: yaml: line 40: did not find expected key </code></pre> <p>Which points to the line:</p> <pre><code>value: "{{ .Values.service.spring_application_json }}" </code></pre> <p>I believe its a problem with the way I'm trying to parse in a json string as a multiline environment variable? The ENVIRONMENT 'dev' variable works perfectly and this same YAML also works perfectly with docker-compose.</p>
<p>There's an example a bit like this in the docs for <a href="https://docs.spring.io/autorepo/docs/spring-cloud-dataflow-server-kubernetes-docs/1.2.x/reference/html/_spring_cloud_deployer_for_kubernetes_properties.html#_using_spring_application_json" rel="noreferrer">spring cloud dataflow</a> but the format in their documentation has the quotes escaped.</p> <p>I was able to recreate the error and get past it by changing the values file entry to:</p> <pre><code>service: spring_application_json: { "spring" : { "boot" : { "admin" : { "client" : { "enabled" : "false", "url" : "http://website1", "instance" : { "service-base-url" : "http://website2", "management-base-url" : "http://website3" } } } } } } </code></pre> <p>And the deployment entry to:</p> <pre><code> - name: SPRING_APPLICATION_JSON value: {{ .Values.service.spring_application_json | toJson | quote }} </code></pre> <p>Notice no quotes around this part as that is handled anyway.</p>
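<p>If you want to sanity-check the rendered manifest before installing, <code>helm template</code> renders it locally (Helm 2 syntax; this assumes you run it from the chart directory and that the file is <code>templates/deployment.yaml</code>):</p> <pre><code>helm template . -x templates/deployment.yaml
</code></pre>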
<p>I need to restart the Elasticsearch node after installing the injest-attachment plugin on Kubernetes Engine on Google Cloud Platform. I have deployed Elasticsearch on a pod. What is the best way to restart the Elasticsearch nodes?</p>
<p>If Elasticsearch is running directly on the VM:</p> <pre><code>systemctl restart elasticsearch </code></pre> <p>If Elasticsearch is running as a container on docker:</p> <pre><code>docker restart &lt;container-id&gt; </code></pre> <p>If Elasticsearch is running as a Kubernetes pod (deployed through a Kubernetes manifest):</p> <ul> <li>update the image tag in the manifest if needed, and do <code>kubectl apply</code> </li> <li>Or use <code>kubectl replace</code> or <code>kubectl edit</code> commands</li> </ul> <p>On Kubernetes, ideally, you should use the declarative way of updating the manifests and then do a <code>kubectl apply -f</code></p>
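<p>If the plugin was installed into a running pod, a simple way to bounce it is to delete the pod and let its controller recreate it. A sketch, assuming the pod is managed by a Deployment or StatefulSet and labelled <code>app=elasticsearch</code> (adjust the label to your manifest):</p> <pre><code>kubectl delete pod -l app=elasticsearch
# the controller recreates the pod, which starts Elasticsearch again
</code></pre> <p>Note that anything installed only inside the running container is lost when the pod is recreated, so the plugin should be baked into the image or installed from an init container/persistent volume for this to stick.</p>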
<p>I'm trying to start a kubernetes cluster but with a different url for kubernetes to pull it's images. AFAIK, it's only possible through config file.</p> <p>I'm not familiar with the <strong>config file</strong>, so I started with a simple one:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha2 imageRepository: my.internal.repo:8082 kind: MasterConfiguration kubernetesVersion: v1.11.3 </code></pre> <p>And ran the command <strong>kubeadm init --config file.yaml</strong> After some time, it fails with the following error:</p> <pre><code>[init] using Kubernetes version: v1.11.3 [preflight] running pre-flight checks I1015 12:05:54.066140 27275 kernel_validator.go:81] Validating kernel version I1015 12:05:54.066324 27275 kernel_validator.go:96] Validating kernel config [WARNING Hostname]: hostname "kube-master-0" could not be reached [WARNING Hostname]: hostname "kube-master-0" lookup kube-master-0 on 10.11.12.246:53: no such host [preflight/images] Pulling images required for setting up a Kubernetes cluster [preflight/images] This might take a minute or two, depending on the speed of your internet connection [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [preflight] Activating the kubelet service [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [kube-master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.5.189] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. [certificates] Generated etcd/ca certificate and key. [certificates] Generated etcd/server certificate and key. [certificates] etcd/server serving cert is signed for DNS names [kube-master-0 localhost] and IPs [127.0.0.1 ::1] [certificates] Generated etcd/peer certificate and key. [certificates] etcd/peer serving cert is signed for DNS names [kube-master-0 localhost] and IPs [10.10.5.189 127.0.0.1 ::1] [certificates] Generated etcd/healthcheck-client certificate and key. [certificates] Generated apiserver-etcd-client certificate and key. 
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) - No internet connection is available so the kubelet cannot pull or find the following control plane images: - my.internal.repo:8082/kube-apiserver-amd64:v1.11.3 - my.internal.repo:8082/kube-controller-manager-amd64:v1.11.3 - my.internal.repo:8082/kube-scheduler-amd64:v1.11.3 - my.internal.repo:8082/etcd-amd64:3.2.18 - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images are downloaded locally and cached. If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' couldn't initialize a Kubernetes cluster </code></pre> <p>I checked kubelet status with <strong>systemctl status kubelet</strong>, and it's running.</p> <p>I <strong>sucessfully</strong> tried to manully pull the images with: </p> <pre><code>docker pull my.internal.repo:8082/kubee-apiserver-amd64:v1.11.3 </code></pre> <p>However, '<strong>docker ps -a returns</strong>' no containers.</p> <p>The <a href="https://pastebin.com/8mAejkaU" rel="nofollow noreferrer">journalctl -xeu kubelet</a> show lots of connection refused and get requests to k8s.io that I'm struggling to understand the root error.</p> <p>Any ideas?</p> <p>Thanks in advance!</p> <p><strong>Edit 1:</strong> I tried to manually open the ports, but nothing changed. [centos@kube-master-0 ~]$ sudo firewall-cmd --zone=public --list-ports 6443/tcp 5000/tcp 2379-2380/tcp 10250-10252/tcp</p> <p>I also changed the kube version from 1.11.3 to 1.12.1, but nothing changed.</p> <p><strong>Edit 2:</strong> I realized that kubelet is trying to pull from k8s.io repo, which means I changed kubeadm internal repo only. I need to do the same with kubelet. 
</p> <pre><code>Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.108764 24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to...on refused Oct 22 11:10:06 kube-master-1-120 kubelet[24795]: E1022 11:10:06.110539 24795 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v...on refused </code></pre> <p>Any ideas?</p>
<p>You solved half of the problem; the final step is to edit the <code>kubelet</code> init file (<code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code>). You need to set the <code>--pod-infra-container-image</code> parameter so that it references the pause container image pulled through your internal repository. The image name will look like this: <code>my.internal.repo:8082/pause:[version]</code>.</p> <p>The reason is that the kubelet does not pick up the internal repository for the pause image on its own, so it keeps referring to the default registry unless told otherwise.</p>
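<p>A sketch of one common way to pass the flag, via <code>KUBELET_EXTRA_ARGS</code> (with kubeadm 1.11+ this is usually read from <code>/etc/default/kubelet</code> or <code>/etc/sysconfig/kubelet</code>; the pause tag below is just an example, use whatever tag you mirrored into your repository):</p> <pre><code>KUBELET_EXTRA_ARGS=--pod-infra-container-image=my.internal.repo:8082/pause:3.1
</code></pre> <p>Then reload and restart the kubelet:</p> <pre><code>systemctl daemon-reload
systemctl restart kubelet
</code></pre>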
<p>I have a Homebrew installed kubernetes-cli 1.12.0 and minikube 0.30.0:</p> <pre><code>~ ls -l $(which kubectl) /usr/local/bin/kubectl -&gt; ../Cellar/kubernetes-cli/1.12.0/bin/kubectl ~ ls -l $(which minikube) /usr/local/bin/minikube -&gt; /usr/local/Caskroom/minikube/0.30.0/minikube-darwin-amd64 ~ minikube delete Deleting local Kubernetes cluster... Machine deleted. ~ rm -rf ~/.kube ~/.minikube ~ minikube start --memory 8000 --kubernetes-version 1.12.0 Starting local Kubernetes 1.12.0 cluster... Starting VM... Downloading Minikube ISO 170.78 MB / 170.78 MB [============================================] 100.00% 0s Getting VM IP address... Moving files into cluster... E1022 10:08:41.271328 44100 start.go:254] Error updating cluster: generating kubeadm cfg: parsing kubernetes version: parsing kubernetes version: strconv.ParseUint: parsing "": invalid syntax ================================================================================ An error has occurred. Would you like to opt in to sending anonymized crash information to minikube to help prevent future errors? To opt out of these messages, run the command: minikube config set WantReportErrorPrompt false ================================================================================ </code></pre>
<blockquote> <p><code>parsing kubernetes version: strconv.ParseUint: parsing "": invalid syntax</code></p> </blockquote> <p>Try a different version notation: minikube expects the version with a leading <code>v</code> (i.e. <code>v1.12.0</code> instead of <code>1.12.0</code>). Here is an example from the <a href="https://kubernetes.io/docs/setup/minikube/#specifying-the-kubernetes-version" rel="noreferrer">Kubernetes documentation</a>:</p> <pre><code>minikube start --kubernetes-version v1.7.3 </code></pre>
<p>Hi folks, </p> <p>I made the YAML files to deploy my application and now I am working with Helm to deploy it automatically. However, although all of my Kubernetes config files worked, I have a problem with Helm and the <code>PVC</code>. I've checked on the internet and I can't find where my mistake is :( </p> <p><strong>pvc-helm.yaml</strong></p> <pre><code>{{- if .Values.persistence.enabled }} kind: PersistentVolumeClaim apiVersion: v1 metadata: name: {{ .Values.persistence.name }} namespace: {{ .Values.persistence.namespace }} spec: accessModes: - {{ .Values.persistence.accessModes | quote }} resources: requests: storage: {{ .Values.persistence.size | quote }} {{- end }} </code></pre> <p><strong>values.yaml</strong></p> <pre><code>persistence: enabled: true name: ds-pvc namespace: ds-svc storageClassName: standard storageClass: standard accessModes: - ReadWriteOnce size: 20Mi </code></pre> <p>When I run the command <code>helm install cas/ --tls</code> I get the error below: </p> <blockquote> <p>Error: release brawny-olm failed: PersistentVolumeClaim "ds-pvc" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]</p> </blockquote> <p>Do I have to set a <code>PersistentVolume</code> as well? </p>
<p>If you want to have optional values, you should check whether they have been defined:</p> <pre><code>spec: {{- if .Values.persistence.accessModes }} accessModes: - {{ .Values.persistence.accessModes | quote }} {{- end }} </code></pre> <p>Another option is to define a default value in the <code>values.yaml</code> file.</p>
<blockquote> <p><strong>UPDATE:</strong> This problem solved itself. I can't tell why. I just tried again the next day and it worked with the config below.</p> </blockquote> <p>I'm using the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">"ingress-nginx"</a> ingress controller (v. 0.12.0). It works fine except for <a href="https://github.com/kubernetes/ingress-nginx/blob/5c016bee873de5b83e6dad962f66e6a96cd73fa1/docs/user-guide/annotations.md#permanent-redirect" rel="nofollow noreferrer">permanent redirects</a>.</p> <p>In order to redirect <code>foo.example.com</code> to <code>https://google.com</code> I applied the following ingress config:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: # nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/permanent-redirect: "https://google.com" name: redirect-test namespace: default spec: rules: - host: foo.example.com http: paths: - backend: serviceName: default-backend servicePort: 80 tls: - hosts: - foo.example.com secretName: domains-tls </code></pre> <p>But if I enter <code>foo.example.com</code> in the browser I get this:</p> <blockquote> <p>We're sorry, but we were unable to process the redirection request for the site you are attempting to access.</p> <p>If you feel that you are receiving this message in error, please check the URL and try your request again.</p> <p>UT</p> <p>(RF)</p> </blockquote> <p>Does anyone know what goes wrong here?</p>
<p>I had this same issue.</p> <p>I fixed it by upgrading the nginx-ingress chart from <code>nginx-ingress-0.8.13</code> to <code>nginx-ingress-0.22.0</code>.</p> <p>I assume the permanent redirect annotation <code>nginx.ingress.kubernetes.io/permanent-redirect</code> did not exist in <code>0.8.13</code>.</p> <p>So either <code>helm upgrade</code> your chart or upgrade your nginx-ingress directly.</p>
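<p>A sketch of the upgrade with Helm 2, assuming the controller was installed as a release of the <code>stable/nginx-ingress</code> chart (look up your actual release name first):</p> <pre><code>helm list
helm upgrade &lt;release-name&gt; stable/nginx-ingress --version 0.22.0
</code></pre>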
<p>I am learning StackExchange.Redis, and Kubernetes, so I made a simple .net core app that reads a key/value from a Redis master+2slaves deployed on kubernetes. (so, everything, Redis and my app, run inside containers)</p> <p>To connect to redis I use the syntax suggested in the doc: </p> <pre><code>ConnectionMultiplexer.Connect("server1:6379,server2:6379,server3:6379"); </code></pre> <p>However, if I monitor the 3 containers with redis-cli MONITOR, the requests are processed always from the master, the 2 slaves do nothing, so there is no load balancing. </p> <p>I have tried also to connect to a Kubernetes load balancer service which exposes the 3 Redis containers endpoints, the result is that when I start the .net app the request is processed randomly by one of the 3 Redis nodes, but then always on the same node. I have to restart the .net app and it will query another node but subsequent queries go always on that node.</p> <p>What is the proper way to read key/values in a load balanced way using StackExchange.Redis with a master/slave Redis setup?</p> <p>Thank you</p>
<p>SE.Redis has a <code>CommandFlags</code> parameter that is optional on every command. There are some useful and relevant options here:</p> <ul> <li><code>DemandPrimary</code></li> <li><code>PreferPrimary</code></li> <li><code>DemandReplica</code></li> <li><code>PreferReplica</code></li> </ul> <p>The default behaviour is <code>PreferPrimary</code>; write operations bump that to <code>DemandPrimary</code>, and there are a <em>very few</em> commands that actively prefer replicas (keyspace iteration, etc).</p> <p>So: if you aren't specifying <code>CommandFlags</code>, then right now you're probably using the default: <code>PreferPrimary</code>. Assuming a primary exists and is reachable, then: it will use the primary. And there can only be one primary, so: it'll use one server.</p> <p>A cheap option for today would be to add <code>PreferReplica</code> as a <code>CommandFlags</code> option on your high-volume read operations. This will push the work to the replicas if they can be resolved - or if no replicas can be found: the primary. Since there can be multiple replicas, it applies a basic rotation-based load-balancing scheme, and you should start to see load on multiple replicas.</p> <p>If you want to spread load over all nodes including primaries and replicas... then I'll need to add new code for that. So if you want that, please log it as an issue on the github repo.</p>
<p>Migrating a Postgres database from Heroku to Google Cloud in a Kubernetes and Docker setup.</p> <p>Trying to decide what is a better approach.</p> <p>1st approach - Use a persistent disc on the VM that is used by a deployed Postgres instance in the Kubernetes cluster.</p> <p>2nd approach - Use a managed Postgres SQL database that the cluster deployments connect to.</p> <p>I assume the main differences would be for the maintenance and updating of the database? Are there any big trade-offs of one setup vs the other?</p>
<p>This is an opinion question so I'll answer with an option.</p> <ol> <li><p>Kubernetes Postgres</p> <ul> <li><strong>Pros:</strong> <ul> <li>You can manage your own Postgres cluster.</li> <li>No vendor lock-in.</li> <li>Postgres is local to your cluster. (It may not be too much of a difference)</li> <li>Do your own maintenance.</li> <li>Raw cost is less.</li> </ul></li> <li><strong>Cons:</strong> <ul> <li>If you run into any Postgres cluster problems you are responsible to fix them.</li> <li>You have to manage your own storage</li> <li>No vendor lock-in but you still need to move the data if you decide to switch providers.</li> <li>You have to do your own backups.</li> </ul></li> </ul></li> <li><p>Managed postgres SQL database</p> <ul> <li><p><strong>Pros:</strong></p> <ul> <li>GCP does it all for you</li> <li>Any problems will be handled by GCP</li> <li>Maintenance also handled by GCP.</li> <li>Storage handled by GCP.</li> <li>Backups performed by GCP</li> </ul></li> <li><p><strong>Cons:</strong></p> <ul> <li>Vendor lock-in</li> <li>Postgres not local to your cluster.</li> <li>Will probably cost more.</li> </ul></li> </ul></li> </ol>
<p>I have a single node K8s cluster in Azure using AKS. I created a deployment and a service using a simple command:</p> <p><code>kubectl run php-apache --image=pilchard/hpa-example:latest --requests=cpu=200m,memory=300M --expose --port=80</code></p> <p>And enabled HPA via command: <code>kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=15</code></p> <p>Upon running <code>kubectl describe hpa php-apache</code>, I see an error saying:</p> <pre><code>horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io) </code></pre> <p>And CPU metric is unknown upon running <code>kubectl get hpa</code>. Any help would be really appreciated.</p> <p>My AKS kube version is <code>v1.9.11</code>.</p>
<p>You need to install either <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer">heapster</a> (deprecated) or the <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">metrics-server</a>, at a minimum, to be able to use an <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">HPA</a>.</p> <p>This provides the minimum set of CPU and memory metrics needed to autoscale. A good way to see whether you have either installed is if you get this kind of output from <code>kubectl top pod</code>:</p> <pre><code>$ kubectl top pod NAME CPU(cores) MEMORY(bytes) http-svc-xxxxxxxxxx-xxxxx 1m 7Mi myapp-pod 0m 53Mi sleep-xxxxxxxxxx-xxxxx 4m 27Mi </code></pre>
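<p>One way to install metrics-server at the time of writing is to apply the manifests shipped in its repository (a sketch; the manifest path can differ between releases, so check the repo's README):</p> <pre><code>git clone https://github.com/kubernetes-incubator/metrics-server.git
kubectl apply -f metrics-server/deploy/1.8+/
</code></pre>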
<p>I am deploying separate Kubernetes cluster for each lifecycle i.e. one cluster for dev and a separate cluster for production and will be using namespaces only for logical segregation of application based on there task e.g. all DB in data namespace, UI app in web namespace etc.</p> <p>Because of this design, there is no need to have namespaces in service or pod's FQDN as each service and deployment will be unique. Can anyone please suggest how I can remove namespaces from hostname e.g. if there is a service named my-service in default namespace I would like to resolve the service from other namespace using "my-service.svc.cluster.local" instead of "my-service.default.svc.cluster.local"</p> <p>I am running kubernetes 1.10 and using coreDNS.</p>
<p>If you are using the standard DNS config with CoreDNS or KubeDNS you can't change it as described in the other answers.</p> <p>However, you can change the DNS using a <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns" rel="nofollow noreferrer"><code>StubDomain</code></a> and a service discovery tool. One of the more popular ones is <a href="https://www.consul.io" rel="nofollow noreferrer">Consul</a> and <a href="https://www.consul.io/docs/platform/k8s/dns.html" rel="nofollow noreferrer">here's how to configure a stub domain with it</a>.</p> <p>Note that you will likely have to run your <a href="https://www.consul.io/docs/platform/k8s/index.html" rel="nofollow noreferrer">Consul cluster in Kubernetes</a> if not the server, certainly you will need a consul agent sidecar for your pods.</p>
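<p>A sketch of what the extra server block in the CoreDNS ConfigMap (Corefile) looks like for such a stub domain; the <code>consul.local</code> domain and the upstream IP are placeholders for your Consul DNS endpoint:</p> <pre><code>consul.local:53 {
    errors
    cache 30
    proxy . 10.150.0.1
}
</code></pre>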
<p>Let's say I have <em>baremetal servers</em> at <strong>New York, London, Delhi, Beijing.</strong> My requirement is to join all the 4 baremetal servers in a distributed environment and run services on top of kubernetes. <strong><em>How do I achieve this ?</em></strong></p>
<p>The question is too broad, but here's my take, with insights on a global datastore:</p> <p>Option 1: Set up a K8s cluster for each region and have them talk to each other through services. Have each talk to its own datastore, keep data separate per region, and use something like <a href="https://docs.citrix.com/en-us/netscaler/11-1/gslb/how-gslb-works.html" rel="nofollow noreferrer">GSLB</a> to route your traffic.</p> <p>Option 2: Set up a K8s cluster for each region and use a global database like <a href="https://aws.amazon.com/dynamodb/global-tables/" rel="nofollow noreferrer">DynamoDB Global Tables</a>, <a href="https://cloud.google.com/spanner/" rel="nofollow noreferrer">Cloud Spanner</a> or <a href="https://azure.microsoft.com/en-us/free/cosmos-db/" rel="nofollow noreferrer">CosmosDB</a>.</p> <p>Option 3: Use <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">Kubernetes Federation</a>. Federation doesn't necessarily solve the multi-region data store challenge, and it's also in Beta as of this writing. However, it will help manage your Kubernetes services across multiple regions.</p> <p>Option 4: Something else that you create on your own. Fun! Fun!</p>
<p>My dockerized service (webrtc server) uses both TCP and UDP transport protocols. I'm working with Azure Kubernetes service. As you know we cannot create LoadBalancer service in Kubernetes with both TCP and UDP proto (more info <a href="https://github.com/kubernetes/kubernetes/pull/64471" rel="nofollow noreferrer">here</a>)</p> <p>Also, I've tried to create two services: </p> <ul> <li>one for TCP ports</li> <li>one for UDP</li> </ul> <p>bind them with one public IP, but gets: "Ensuring load balancer" message.</p> <p>The only solution is to use NodePort, but in Azure its not working for me (connection timeout). </p> <p>Here my service yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mcu spec: selector: app: mcu ports: - name: mcu nodePort: 30000 port: 8080 protocol: TCP - name: webrtc nodePort: 30003 port: 10000 protocol: UDP type: NodePort externalIPs: - &lt;ext IP&gt; </code></pre>
<p>The support for mixed TCP/UDP protocols depends on the cloud provider. For example, <a href="https://github.com/kubernetes/kubernetes/pull/67986" rel="nofollow noreferrer">Azure</a> supports it but AKS may not have a version that supports it as of this writing.</p> <p>Not clear what is giving you a <code>connection timeout</code> but it should work fine as long as you point the Azure UDP load balancer to the<code>30003</code> NodePort. You can also test locally in a cluster node sending UDP traffic to the Service <code>ClusterIP:10000</code></p> <p>You can also check if your service has endpoints:</p> <pre><code>$ kubectl describe svc &lt;service-name&gt; </code></pre> <p>Or/and:</p> <pre><code>$ kubectl get ep </code></pre>
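<p>For a quick in-cluster UDP check you can run a throwaway busybox pod and poke the Service ClusterIP directly (a sketch; substitute your Service's ClusterIP):</p> <pre><code>kubectl run -it --rm udp-test --image=busybox --restart=Never -- sh
# inside the pod:
echo "ping" | nc -u -w1 &lt;service-cluster-ip&gt; 10000
</code></pre>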
<p>I need to know to which master node my current worker node is connected. I can see the worker nodes by typing "kubectl get nodes" command in the master node, but I need to find the master node from the worker node itself.</p> <p>In simple words, How to find the master node from the worker node in the kubernetes cluster?</p>
<p>You can usually find it on your <code>kubelet</code> config file: <code>/etc/kubernetes/kubelet.conf</code></p> <pre><code>$ cat /etc/kubernetes/kubelet.conf apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://1.1.1.1:6443 &lt;== here name: default-cluster contexts: - context: cluster: default-cluster namespace: default user: default-auth name: default-context current-context: default-context kind: Config preferences: {} users: - name: default-auth user: client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem client-key: /var/lib/kubelet/pki/kubelet-client-current.pem </code></pre> <p>If you have something like <a href="https://kislyuk.github.io/yq/" rel="nofollow noreferrer"><code>yq</code></a> you can get it like this:</p> <pre><code>yq .clusters[0].cluster.server /etc/kubernetes/kubelet.conf | tr -d "\n\"" </code></pre>
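<p>If you don't have <code>yq</code> available, a plain grep on the same file gets you there too:</p> <pre><code>grep 'server:' /etc/kubernetes/kubelet.conf
</code></pre>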
<p>I have an application namespace with 30 services. Most are stateless Deployments, mixed with some StatefulSets etc. Fairly standard stuff that is.</p> <p>I need to grant a special user a Role that can only exec into certain Pod. Currently RBAC grants the exec right to all pods in the namespace, but I need to tighten it down.</p> <p>The problem is Pod(s) are created by a Deployment <code>configurator</code>, and the Pod name(s) are thus "generated", <code>configurator-xxxxx-yyyyyy</code>. Since you cannot use glob (ie. <code>configurator-*</code>), and Role cannot grant exec for Deployments directly.</p> <p>So far I've thought about:</p> <ul> <li>Converting Deployment into StatefulSet or a plain Pod, so Pod would have a known non-generated name, and glob wouldn't be needed</li> <li>Moving the Deployment into separate namespace, so the global exec right is not a problem</li> </ul> <p>Both of these work, but neither is optimal. Is there a way to write a proper Role for this?</p>
<p>RBAC, as it stands today, doesn't allow filtering resources by attributes other than namespace and resource name. The discussion is open <a href="https://github.com/kubernetes/kubernetes/issues/44703#issuecomment-324826356" rel="nofollow noreferrer">here</a>.</p> <p>Thus, namespaces are the smallest unit for authorizing access to pods. Services should be separated into namespaces with an eye on which users need access to them.</p> <p>The optimal solution right now is to move this deployment to another namespace, since it needs different access rules than the other deployments in the original namespace. A Role scoped to that namespace can then grant exec only there, as sketched below.</p>
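<p>A minimal sketch of such a Role and RoleBinding, assuming the configurator Deployment lives in a dedicated <code>configurator</code> namespace and the user is called <code>special-user</code> (both names are placeholders):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configurator-exec
  namespace: configurator
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: configurator-exec
  namespace: configurator
subjects:
- kind: User
  name: special-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: configurator-exec
  apiGroup: rbac.authorization.k8s.io
</code></pre>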
<p>I use GCE and try to expose an application via ingress. But path rules don't work.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: front-ingress namespace: {{ .Release.Namespace }} annotations: {{ if eq .Values.env "dev" }} kubernetes.io/ingress.global-static-ip-name: "test-ip" {{ else }} cloud.google.com/load-balancer-type: "Internal" {{ end }} spec: rules: - host: {{ .Values.domain }} http: paths: - path: / backend: serviceName: front-service servicePort: 80 - path: /api/ backend: serviceName: backend-service servicePort: 80 </code></pre> <p>When site opened in browser - all files return 404. When I open file by url, I receive: default backend - 404. If I set default backend via annotations - all files loaded, but /api requests failed - 404 error.</p> <p>What it can be?</p> <p>The main idea: test branch on site subdomain. k8s namespace = branch name. Ingress deployed to every namespace with a different host in rules. Global static IP set by annotation and set in GCE Cloud DNS.</p> <p>Thanks.</p> <p>UPDATE:</p> <p>If I use annotation <code>kubernetes.io/ingress.class: "gce"</code> and path: /* and /api/* - site works perfectly. But because I use global static IP, I can't create more than one ingress per IP. If I use <code>kubernetes.io/ingress.class: "nginx"</code> - site returns error: <code>default backend - 404</code></p>
<p>You can actually create multiple ingresses using the same external IP address. You just have to make sure that they are under different host (or hostname rules), so the paths don't interfere with each other. Every host represents a <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#server" rel="nofollow noreferrer">server {}</a> block in the nginx configs with a unique <code>server_name</code>.</p> <p>Ingress1:</p> <pre><code>spec: rules: - host: host1.domain1 http: paths: - path: / backend: serviceName: front-service1 servicePort: 80 - path: /api/ backend: serviceName: backend-service1 servicePort: 80 </code></pre> <p>Ingress2:</p> <pre><code>- host: host2.domain2 http: paths: - path: / backend: serviceName: front-service2 servicePort: 80 - path: /api/ backend: serviceName: backend-service2 servicePort: 80 </code></pre> <p>If you want to use an externalIP it's still doable, but you just have to use a separate ingress controller with a different ingress class name. For example, with the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx ingress controller</a> you can use the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/" rel="nofollow noreferrer"><code>--ingress-class</code></a> option:</p> <p><a href="https://i.stack.imgur.com/5Pidk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Pidk.png" alt="ingress class"></a></p> <p>Also if you don't specify an <code>--ingress-class</code> in your first ingress controller you will have to configure it too, otherwise like the option says, the first ingress will satisfy all the classes.</p>
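<p>As a hedged sketch (the class name <code>nginx-internal</code> is made up for illustration): the second controller would be started with <code>--ingress-class=nginx-internal</code> in its container args, and every Ingress meant for it would carry the matching annotation:</p> <pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
</code></pre> <p>Ingresses without that annotation (or with a different class) are then ignored by this controller and picked up by the other one.</p>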
<p>I'm using Kubernetes Python client to manage my local Kubernetes cluster:</p> <pre><code>from kubernetes import client, config config = client.Configuration() config.host = "http://local_master_node:8080" client.Configuration.set_default(config) print(client.CoreV1Api().v1.list_node()) </code></pre> <p>Everything works fine until I need to connect to a project on Google Cloud Kubernetes Engine using the key file provided by customer owning the project from Google like:</p> <pre><code>{ "type": "...", "project_id": "...", "private_key_id": "...", "private_key": "...", "client_email": "...", "client_id": "...", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://accounts.google.com/o/oauth2/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/..." } </code></pre> <p>I'm trying to load it (probably doing it in wrong way):</p> <pre><code>os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.path.abspath('credentials.json') config.load_incluster_config() </code></pre> <p>But this code raises an exception <code>kubernetes.config.config_exception.ConfigException: Service host/port is not set.</code></p> <p>The questions are:</p> <ol> <li><strong>How to provide Google credentials for Kubernetes Python client properly?</strong></li> <li><strong>If I am on the right track then where can I find the host/port for using with Google Cloud?</strong></li> </ol> <p>Some snippets will be appreciated. </p>
<p>Finally, I found the solution myself.</p> <p>First, you need to get the Kubernetes configuration file. Go to the Google Cloud Platform <code>Kubernetes Engine</code> panel. Select the cluster you want to connect to and press the <code>connect</code> button. Select <code>Run in Cloud Shell</code> and, after you have logged into the shell, type the suggested string, like:</p> <pre><code>$ gcloud container clusters get-credentials ... </code></pre> <p>Then you can find the configuration file in the <code>~/.kube</code> folder. Save its content to a YAML file, which you should feed to the <code>kubernetes.config.load_kube_config</code> function:</p> <pre><code>os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.path.abspath('credentials.json') config.load_kube_config(os.path.abspath('config.yaml')) </code></pre>
<p>I am trying to implement a service mesh for a service on Kubernetes using Istio and Envoy. I was able to set up the service and istio-proxy, but I am not able to control the order in which the container and istio-proxy are started.</p> <p>My container is started first and tries to access an external resource via TCP, but at that time istio-proxy has not completely loaded and neither has the ServiceEntry for the external resource.</p> <p>I tried adding a panic in my service and also tried a sleep of 5 seconds before accessing the external resource.</p> <p><strong>Is there a way that I can control the order of these?</strong></p>
<p>I don't think you can control the order other than listing the containers in a particular order in your pod spec. So, I recommend you configure a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">Readiness Probe</a> so that your pod is not marked ready until your service can send some traffic to the outside.</p>
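<p>As a rough sketch of what that could look like (the <code>/healthz</code> path, port and timings below are assumptions; point the probe at whatever check in your service verifies the external TCP dependency is reachable):</p> <pre><code>containers:
- name: my-service            # assumed container name
  image: my-service:latest    # assumed image
  readinessProbe:
    httpGet:
      path: /healthz          # assumed endpoint that only succeeds once the external resource is reachable
      port: 8080              # assumed application port
    initialDelaySeconds: 5
    periodSeconds: 3
    failureThreshold: 3
</code></pre> <p>With this in place the pod only starts receiving traffic once the sidecar (and the ServiceEntry) are usable from the application's point of view, even though the container start order itself isn't controlled.</p>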
<p>What I need looks like this:</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: http spec: serviceName: "nginx-set" replicas: 2 template: metadata: labels: app: nginx spec: containers: - name: nginx image: gcr.io/google_containers/nginx-slim:0.8 ports: - containerPort: 80 name: http --- apiVersion: v1 kind: Service metadata: name: nginx-set labels: app: nginx spec: ports: - port: 80 name: http clusterIP: None selector: app: nginx </code></pre> <p>Here is the interesting part:</p> <pre><code>apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: test-ingress namespace: default spec: rules: - host: appscode.example.com http: paths: - path: '/testPath' backend: hostNames: - web-0 serviceName: nginx-set #! There is no extra service. This servicePort: '80' # is the Statefulset's Headless Service </code></pre> <p>I'm able to target a specific pod by setting the hostName as a function of the URL.</p> <p>Now I'd like to know if it's possible in Kubernetes to create a rule like this:</p> <pre><code>apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: test-ingress namespace: default spec: rules: - host: appscode.example.com http: paths: - path: '/connect/(\d+)' backend: hostNames: - web-(result of regex match with \d+) serviceName: nginx-set #! There is no extra service. This servicePort: '80' # is the Statefulset's Headless Service </code></pre> <p>or if I have to write a rule for each pod?</p>
<p>Sorry that isn't possible, the best solution is to create multiple paths, each one referencing one pod:</p> <pre><code>apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: test-ingress namespace: default spec: rules: - host: appscode.example.com http: paths: - path: '/connect/0' backend: hostNames: - web-0 serviceName: nginx-set servicePort: '80' - path: '/connect/1' backend: hostNames: - web-1 serviceName: nginx-set servicePort: '80' </code></pre>
<p>I am trying to create a Helm chart for kafka-connect. For the testing purpose and to find out where I am exactly wrong I am not using the secrets for my access key and secret access key.</p> <p>My helm chart is failing with the error:</p> <pre><code>helm install helm-kafka-0.1.0.tgz --namespace prod -f helm-kafka/values.yaml Error: release loping-grizzly failed: Deployment.apps "kafka-connect" is invalid: spec.template.spec.containers[0].env[15].name: Required value </code></pre> <p>Based on issue: <a href="https://github.com/kubernetes/kubernetes/issues/46861" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/46861</a></p> <p>I changed my number to be a string. But still, the issue persists.</p> <p>Can someone point me on how to troubleshoot/solve this?</p> <p>My template/deployment.yaml</p> <pre><code> spec: containers: - name: kafka-connect image: {{ .Values.image.repository }}:{{ .Values.image.tag }} env: - name: "CONNECT_LOG4J_LOGGERS" value: "org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR" - name: "CONNECT_OFFSET_STORAGE_TOPIC" value: "connect-offsets" - name: "CONNECT_PLUGIN_PATH" value: "/usr/share/java" - name: "CONNECT_PRODUCER_ACKS" value: "all" - name: "CONNECT_PRODUCER_COMPRESSION_TYPE" value: "snappy" - nane: "CONNECT_STATUS_STORAGE_TOPIC" value: "connect-status" </code></pre>
<p>In:</p> <pre><code>- nane: "CONNECT_STATUS_STORAGE_TOPIC" value: "connect-status" </code></pre> <p><code>nane:</code> should have an "m".</p> <p>When the error message says <code>spec.template.spec.containers[0].env[15].name</code> you can find the first (zero-indexed) container definition, and within that the sixteenth (zero-indexed) environment variable, which has this typo.</p>
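<p>With the typo fixed, that entry becomes:</p> <pre><code>- name: "CONNECT_STATUS_STORAGE_TOPIC"
  value: "connect-status"
</code></pre>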
<p>A Kubernetes admin can use <code>--cluster-domain</code> to customize the cluster domain instead of using the default <code>cluster.local</code>; see <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">Kubelet Configs</a>. </p> <p>So the question is, how does an application pod check this domain at runtime?</p>
<p>It needs to be configured on the DNS server.</p> <p>Either <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#kube-dns" rel="noreferrer">kube-dns</a> or <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns" rel="noreferrer">coredns</a> (Favored on newer K8s versions)</p> <p>kube-dns: it's a cli option <a href="https://github.com/kubernetes/dns/blob/master/cmd/kube-dns/app/options/options.go#L148" rel="noreferrer"><code>--domain</code></a></p> <p>core-dns: you can configure the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns" rel="noreferrer">K8s ConfigMap</a></p> <p>And you see <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction" rel="noreferrer">here</a>:</p> <blockquote> <p>The kubelet passes DNS to each container with the --cluster-dns= flag.</p> </blockquote> <p>If you'd like to know how a pod resolves <code>cluster.local</code> it does it through the <code>/etc/resolv.conf</code> that the kubelet mounts on every pod. The content is something like this:</p> <pre><code>$ cat /etc/resolv.conf nameserver 10.96.0.10 search &lt;namespace&gt;.svc.cluster.local svc.cluster.local cluster.local &lt;nod-domain&gt; options ndots:5 </code></pre> <p><code>10.96.0.10</code> is your <code>coredns</code> or <code>kube-dns</code> cluster IP address.</p>
<p>I'm having the following issue while trying to run <a href="https://spark.apache.org/docs/2.3.2/running-on-kubernetes.html" rel="noreferrer">Spark for kubernetes</a> when the app jar is stored in an Azure Blob Storage container:</p> <pre><code>2018-10-18 08:48:54 INFO DAGScheduler:54 - Job 0 failed: reduce at SparkPi.scala:38, took 1.743177 s Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 10.244.1.11, executor 2): org.apache.hadoop.fs.azure.AzureException: org.apache.hadoop.fs.azure.AzureException: No credentials found for account datasets83d858296fd0c49b.blob.core.windows.net in the configuration, and its container datasets is not accessible using anonymous credentials. Please check if the container exists first. If it is not publicly available, you have to provide account credentials. at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.createAzureStorageSession(AzureNativeFileSystemStore.java:1086) at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.initialize(AzureNativeFileSystemStore.java:538) at org.apache.hadoop.fs.azure.NativeAzureFileSystem.initialize(NativeAzureFileSystem.java:1366) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3242) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470) at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1897) at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:694) at org.apache.spark.util.Utils$.fetchFile(Utils.scala:476) at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:755) at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:747) at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:747) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:312) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.hadoop.fs.azure.AzureException: No credentials found for account datasets83d858296fd0c49b.blob.core.windows.net in the configuration, and its container datasets is not accessible using anonymous credentials. Please check if the container exists first. If it is not publicly available, you have to provide account credentials. 
at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.connectUsingAnonymousCredentials(AzureNativeFileSystemStore.java:863) at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.createAzureStorageSession(AzureNativeFileSystemStore.java:1081) ... 24 more </code></pre> <p>The command I use to launch the job is:</p> <pre><code>/opt/spark/bin/spark-submit --master k8s://&lt;my-k8s-master&gt; --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=&lt;my-image-built-with-wasb&gt; --conf spark.kubernetes.namespace=&lt;my-namespace&gt; --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.kubernetes.driver.secrets.spark=/opt/spark/conf --conf spark.kubernetes.executor.secrets.spark=/opt/spark/conf wasb://&lt;my-container-name&gt;@&lt;my-account-name&gt;.blob.core.windows.net/spark-examples_2.11-2.3.2.jar 10000 </code></pre> <p>I have a k8s secret named <code>spark</code> with the following content:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: spark labels: app: spark stack: service type: Opaque data: core-site.xml: |- {% filter b64encode %} &lt;configuration&gt; &lt;property&gt; &lt;name&gt;fs.azure.account.key.&lt;my-account-name&gt;.blob.core.windows.net&lt;/name&gt; &lt;value&gt;&lt;my-account-key&gt;&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;fs.AbstractFileSystem.wasb.Impl&lt;/name&gt; &lt;value&gt;org.apache.hadoop.fs.azure.Wasb&lt;/value&gt; &lt;/property&gt; &lt;/configuration&gt; {% endfilter %} </code></pre> <p>The driver pod manages to download the jar dependencies as stored in a container in Azure Blob Storage. As can be seen in this log snippet:</p> <pre><code>2018-10-18 08:48:16 INFO Utils:54 - Fetching wasb://&lt;my-container-name&gt;@&lt;my-account-name&gt;.blob.core.windows.net/spark-examples_2.11-2.3.2.jar to /var/spark-data/spark-jars/fetchFileTemp8575879929413871510.tmp 2018-10-18 08:48:16 INFO SparkPodInitContainer:54 - Finished downloading application dependencies. </code></pre> <p>How can I get the executor pods to get the credentials as stored in the <code>core-site.xml</code> file that's mounted from the k8s secret? What am I missing?</p>
<p>I solved it by adding the following config to <em>spark-submit</em></p> <pre><code>--conf spark.hadoop.fs.AbstractFileSystem.wasb.Impl=org.apache.hadoop.fs.azure.Wasb --conf spark.hadoop.fs.azure.account.key.${STORAGE_ACCOUNT_NAME}.blob.core.windows.net=${STORAGE_ACCOUNT_KEY} </code></pre>
<p>My objective: To expose a pod's(running angular image) <code>port</code> so that I can <strong>access it from the host machine's browser</strong>. </p> <p>service.yml: </p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-frontend-service spec: selector: app: MyApp ports: - protocol: TCP port: 8000 targetPort: 4200 </code></pre> <p>Pod's yml: </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: angular.frontend labels: app: MyApp spec: containers: - name: angular-frontend-demo image: angular-frontend-image ports: - name: nodejs-port containerPort: 4200 </code></pre> <p>Weird thing is that doing <code>kubectl port-forward pod/angular.frontend 8000:4200</code> works. However, my objective is to write that in <code>service.yml</code></p>
<p>Use NodePort:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-frontend-service spec: selector: app: MyApp type: NodePort ports: - protocol: TCP port: 8000 targetPort: 4200 nodePort: 30001 </code></pre> <p>Then you can access the service on NodePort 30001 on any node of the cluster. </p> <p>For example, if the machine name is node01, you can then do curl <a href="http://node01:30001" rel="nofollow noreferrer">http://node01:30001</a></p>
<p>I followed <a href="https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/" rel="nofollow noreferrer">this article to use k8s leader election</a> for HA of my app. But I ran into one issue. Has anyone had the same experience? For example, I have 4 pod replicas. One of the pods has already been selected as leader. When this leader pod goes down (e.g. the pod is killed manually), the scheduler will take 30–40 seconds to start a new pod, but the old, dead leader will keep renewing its lease for 10 or more seconds. Is there a way to update the leader immediately when the leader pod is dead? Or is there any setting I missed?</p> <p>In the article I'm referring to, it mentions the following, which is exactly the problem I have:</p> <blockquote> <p>Because pods in Kubernetes have a grace period before termination, this may take 30-40 seconds.</p> </blockquote> <p>Here is a demo YAML file I'm using: <a href="https://gist.githubusercontent.com/ginkgoch/563d8d8caf9e4dd99a0c8de323e9211c/raw/f1abb94647c60874e4625b1b94f8fa125bd1a5ea/k8s-leader-election.yaml" rel="nofollow noreferrer">https://gist.githubusercontent.com/ginkgoch/563d8d8caf9e4dd99a0c8de323e9211c/raw/f1abb94647c60874e4625b1b94f8fa125bd1a5ea/k8s-leader-election.yaml</a></p>
<p>The article explains this is due to the grace period. When the kill is issued the leader pod is not yet dead; it is just shutting down.</p> <p>You could shorten or skip the shutdown process with a <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#force-deletion" rel="nofollow noreferrer">force delete</a> or by changing <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#delete-pods" rel="nofollow noreferrer">the grace period</a> in the specification. The risk then is that the pod might shut down without cleaning up fully - you'll know whether this is relevant to your Pods.</p> <p>It should theoretically be possible to listen for the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">preStop hook</a> and begin leader-election as soon as a pod starts terminating. But then you risk having two leaders while the old leader is terminating (k8s should stop sending traffic to the old leader at this point but it might still be doing something important, depending upon your design). The <code>k8s.gcr.io/leader-elector</code> implementation seems to require waiting for the old leader to fully stop. It's possible there are other implementations out there that might support immediate election, but I've not found any with a quick search and I think waiting for the old leader to terminate is not unusual.</p>
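<p>For reference, a force delete of the old leader looks like this (a sketch; use it only if you accept the clean-up risk described above):</p> <pre><code># skip the grace period for the already-killed leader pod
kubectl delete pod &lt;leader-pod-name&gt; --grace-period=0 --force
</code></pre> <p>Alternatively, setting a smaller <code>terminationGracePeriodSeconds</code> in the pod spec shortens the window during which the old leader can still renew its lease.</p>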
<p>I'm new to kubernetes. I'm trying to list all the pods in a namespace from inside a pod/container via the javascript client. </p> <pre><code>import k8s = require('@kubernetes/client-node'); const kc = new k8s.KubeConfig(); kc.loadFromDefault(); const k8sApi = kc.makeApiClient(k8s.Core_v1Api); k8sApi.listNamespacedPod('development') .then((res) =&gt; { console.log(res.body); }).catch((err) =&gt; { console.log(err); }); </code></pre> <p>The response error when I look at my pod logs:</p> <pre><code>{ kind: 'Status', apiVersion: 'v1', metadata: {}, status: 'Failure', message: 'pods is forbidden: User "system:serviceaccount:default:default" cannot list pods in the namespace "development"', reason: 'Forbidden', details: { kind: 'pods' }, code: 403 } } </code></pre> <p>I believe I need to create a new User or add some permissions to a Role but I'm not sure where and how. Thanks</p>
<p>As @<a href="https://stackoverflow.com/users/2779488/robert-panzer">Robert Panzer</a> suggested in <a href="https://stackoverflow.com/questions/47813698/access-kubernetes-api-without-kubectl">Access Kubernetes API without kubectl</a>, you can create a role and a rolebinding to enable listing of pods with:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pod-reader rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "watch", "list"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: pod-reader subjects: - kind: ServiceAccount name: default namespace: default roleRef: kind: ClusterRole name: pod-reader apiGroup: rbac.authorization.k8s.io </code></pre>
<p>I have configured a Kubernetes master on CentOS 7 and a Kubernetes node on another CentOS 7 machine.</p> <p><strong>Services running on kube master:</strong></p> <ul> <li>kube-controller-manager</li> <li>kube-apiserver</li> <li>kube-scheduler </li> <li>etcd </li> <li>flanneld</li> </ul> <p><strong>Services running on kube node:</strong></p> <ul> <li>flanneld</li> <li>docker</li> <li>kube-proxy</li> <li>kubelet </li> </ul> <p>All services are up and running, and I can hit the API URL <a href="http://kube-master:8080" rel="nofollow">http://kube-master:8080</a> and successfully see all endpoints. However, when I run the command <code>kube get nodes</code>, I get the following error:</p> <p><code>skipping pod synchronization. container runtime is down</code></p> <p>I don't understand what this error means or how to resolve it. Please suggest.</p>
<p>I've seen this problem when docker got into some broken state where it was not able to remove a (specific) stopped container and was leaking zombie processes. Had to power-cycle the node in the end.</p> <p>CentOS 7 here too, still at Kubernetes 1.10.0 and Docker CE 18.03.</p>
<p>Basically, I'm creating a StatefulSet deployment with 2 pods (single host cluster), I would like to that each pod will be able to mount to a base folder in the host, and to a subfolder beneath it:</p> <p>Base folder mount: /mnt/disks/ssd</p> <p>Pod#1 - /mnt/disks/ssd/pod-1</p> <p>Pod#2 - /mnt/disks/ssd/pod-2</p> <p>I've managed only to mount the first pod to the base folder, but the 2nd folder cannot mount (as the volume is already taken)</p> <p>This is the volume:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: example-local-pv spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage local: path: /mnt/disks/ssd nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ubuntukuber </code></pre> <p>This is the usage in the stateful set:</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: app namespace: test-ns spec: serviceName: app replicas: 2 .... .... volumeMounts: - name: data mountPath: /var/lib/app/data volumeClaimTemplates: - metadata: name: data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: "local-storage" resources: requests: storage: 2Gi </code></pre> <p>So, i basically would like that each replica would use its own subfolder - how can one achieve it?</p> <p><strong>== EDIT ==</strong></p> <p>I've made some progress, i'm able to mount several replicas into the same mount, using the following YAMLs (the app i'm trying to do it on is rabbitmq - so i'll leave the app name as is)</p> <pre><code>--- kind: PersistentVolume apiVersion: v1 metadata: name: pv-local namespace: test-rabbitmq labels: type: local spec: storageClassName: local capacity: storage: 6Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/disks" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: hostpath-pvc namespace: test-rabbitmq spec: storageClassName: local accessModes: - ReadWriteOnce resources: requests: storage: 3Gi selector: matchLabels: type: local --- </code></pre> <p>In the StatefulSet i'm declaring this volume:</p> <pre><code> volumes: - name: rabbitmq-data persistentVolumeClaim: claimName: hostpath-pvc </code></pre> <p>And mounting "rabbitmq-data".</p> <p>Both pods mount to the same folder, but will not create subfolders - this is no terrible situation as by default there are rabbitmq's subfolders - i'll try to expand it into each pod to use a subfolder</p>
<p>I am able to achieve the above scenario, what you need is "claimRef" in your pv to bind your PVC. Please have a look at following pv json and statefulset json</p> <p>PV-0.json</p> <pre><code>{ "kind": "PersistentVolume", "apiVersion": "v1", "metadata": { "name": "pv-data-vol-0", "labels": { "type": "local" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "storageClassName": "local-storage", "local": { "path": "/prafull/data/pv-0" }, "claimRef": { "namespace": "default", "name": "data-test-sf-0" }, "nodeAffinity": { "required": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "kubernetes.io/hostname", "operator": "In", "values": [ "ip-10-0-1-46.ec2.internal" ] } ] } ] } } } } </code></pre> <p>PV-1.json</p> <pre><code>{ "kind": "PersistentVolume", "apiVersion": "v1", "metadata": { "name": "pv-data-vol-1", "labels": { "type": "local" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "storageClassName": "local-storage", "local": { "path": "/prafull/data/pv-1" }, "claimRef": { "namespace": "default", "name": "data-test-sf-1" }, "nodeAffinity": { "required": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "kubernetes.io/hostname", "operator": "In", "values": [ "ip-10-0-1-46.ec2.internal" ] } ] } ] } } } } </code></pre> <p>Statefulset.json</p> <pre><code>{ "kind": "StatefulSet", "apiVersion": "apps/v1beta1", "metadata": { "name": "test-sf", "labels": { "state": "test-sf" } }, "spec": { "replicas": 2, "template": { "metadata": { "labels": { "app": "test-sf" }, "annotations": { "pod.alpha.kubernetes.io/initialized": "true" } } ... ... }, "volumeClaimTemplates": [ { "metadata": { "name": "data" }, "spec": { "accessModes": [ "ReadWriteOnce" ], "storageClassName": "local-storage", "resources": { "requests": { "storage": "10Gi" } } } } ] } } </code></pre> <p>There will be two pods created test-sf-0 and test-sf-1 which in-turn will be created two PVC data-test-sf-0 and data-test-sf-1 which will be bound to PV-0 and Pv-1 respectively. Hence test-sf-0 will write to the location specified in PV-0 and test-sf-1 will write in location specified on PV-1. Hope this helps.</p>
<p>I need to setup an environment to host multiple websites (hundreds), each with their own domains and external IP addresses. I was thinking about using Kubernetes, Traefik and Letsencrypt for the SSL certs (in AWS). I have a couple of questions:</p> <ol> <li>Is the Traefik\Kubernetes combination suitable?</li> <li>Will I need a load balancer or can Traefik support multiple ingress IPs;</li> </ol>
<ol> <li><p>Opinion question, opinion answer: Yes. It's not as popular as the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx ingress controller</a> but a good solution nonetheless.</p> </li> <li><p>Generally speaking, if you want to support hundreds of external IPs you will need an additional load balancer that supports them in front of a single ingress controller, which you would expose either with a Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>.</p> <p>However, you could actually have multiple ingress controllers, one per IP address, each with its own <a href="https://docs.traefik.io/user-guide/kubernetes/#between-traefik-and-other-ingress-controller-implementations" rel="nofollow noreferrer">ingressClass</a> option (see the annotation sketch after this list). You'll have to check whether your cluster can scale up to hundreds of ingress controllers.</p> </li> </ol>
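<p>For illustration, the sketch mentioned above: an Ingress intended for a specific Traefik controller would typically carry the standard class annotation, with the value set to whatever class that controller is configured to watch:</p> <pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: "traefik"
</code></pre>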
<p>Playing with cluster autoscaler I've noticed that scale-down is not working due to standard k8s pods:</p> <pre><code>Fast evaluation: node aks-nodepool1-37748512-0 cannot be removed: non-daemonset, non-mirrored, non-pdb-assignedkube-system pod present: kube-dns-v20-8748686c5-27psn </code></pre> <p>What is a proper PodDisruptionBudget for kube-dns and are there any best practices for standard system POD PDBs? Why aren't they configured by default?</p>
<p>Inside Kubernetes docs about <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">Disruptions</a> we can read:</p> <blockquote> <p>An Application Owner can create a <code>PodDisruptionBudget</code> object (PDB) for each application. A PDB limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total.</p> </blockquote> <p>You can see examples of how to correctly enable, tune and disable a <code>PodDisruptionBudget</code> for kube-dns inside the Kubernetes docs for <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/" rel="nofollow noreferrer">Autoscale the DNS Service in a Cluster</a>.</p> <p>Also Marton Sereg wrote a good article about <a href="https://banzaicloud.com/blog/drain/" rel="nofollow noreferrer">Draining Kubernetes nodes</a>, in which he explains how draining works and what's happening inside the cluster.</p> <p>As for configuration defaults, I was able to find the discussion <a href="https://github.com/kubernetes/kubernetes/issues/35318" rel="nofollow noreferrer">Reasonable defaults with eviction and PodDisruptionBudget #35318</a>.</p>
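<p>As an illustrative sketch only (the label and numbers are assumptions; match them to how kube-dns is actually labelled and scaled in your cluster), a PDB for kube-dns could look like this:</p> <pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kube-dns-pdb
  namespace: kube-system
spec:
  minAvailable: 1              # assumption: always keep at least one DNS pod running
  selector:
    matchLabels:
      k8s-app: kube-dns        # assumption: the label your kube-dns pods carry
</code></pre> <p>With such a PDB in place, the cluster autoscaler should then be able to evict the kube-dns pod during scale-down as long as the budget is respected.</p>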
<p>I am trying to set up a new kubernetes cluster on one machine with kubespray (commit 7e84de2ae116f624b570eadc28022e924bd273bc).</p> <p>After running the playbook (on a fresh ubuntu 16.04), I open the dashboard and see those warning popups:</p> <pre><code>- configmaps is forbidden: User "system:serviceaccount:default:default" cannot list configmaps in the namespace "default" - persistentvolumeclaims is forbidden: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims in the namespace "default" - secrets is forbidden: User "system:serviceaccount:default:default" cannot list secrets in the namespace "default" - services is forbidden: User "system:serviceaccount:default:default" cannot list services in the namespace "default" - ingresses.extensions is forbidden: User "system:serviceaccount:default:default" cannot list ingresses.extensions in the namespace "default" - daemonsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list daemonsets.apps in the namespace "default" - pods is forbidden: User "system:serviceaccount:default:default" cannot list pods in the namespace "default" - events is forbidden: User "system:serviceaccount:default:default" cannot list events in the namespace "default" - deployments.apps is forbidden: User "system:serviceaccount:default:default" cannot list deployments.apps in the namespace "default" - replicasets.apps is forbidden: User "system:serviceaccount:default:default" cannot list replicasets.apps in the namespace "default" - jobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list jobs.batch in the namespace "default" - cronjobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list cronjobs.batch in the namespace "default" - replicationcontrollers is forbidden: User "system:serviceaccount:default:default" cannot list replicationcontrollers in the namespace "default" - statefulsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list statefulsets.apps in the namespace "default" </code></pre> <p>The kubectl commands seem fine (proxy works, listing pods etc. return no error, <code>/api</code> is reachable), however, the dashboard seem unable to fetch any useful information. How should I go about debugging that?</p>
<pre><code>kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default </code></pre> <p>seems to do the trick - I'd welcome an explanation though. (Is it an oversight in kubespray? Do I need to set up a variable there? Is it related to RBAC?)</p>
<p>I have a cluster where the free memory on the nodes recently dipped to %5. When this happens, the nodes CPU (load) spikes while it tries to free up some memory, from cache/buffer. One consequence of the high load, low memory is that I sometimes end up with Pods that get into an Error state or get stuck in Terminating. These Pods sit around until I manually intervene, which can further exacerbate the low memory issue that caused it.</p> <p>My question is why Kubernetes leaves these Pods stuck in this state? My hunch is that kubernetes didn’t get the right feedback from the Docker daemon and never tries again. I need to know how to have Kubernetes cleanup or repair Error and Terminating Pods. Any ideas?</p> <p>I'm currently on:</p> <pre><code>~ # kubectl version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p><strong>UPDATE:</strong> Here are some of the Events listed in pods. You can see that some of them sit around for days. You will also see that one shows a Warning, but the others show Normal.</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedKillPod 25m kubelet, k8s-node-0 error killing pod: failed to "KillContainer" for "kubectl" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" Normal Killing 20m (x2482 over 3d) kubelet, k8s-node-0 Killing container with id docker://docker:Need to kill Pod Normal Killing 15m (x2484 over 3d) kubelet, k8s-node-0 Killing container with id docker://maven:Need to kill Pod Normal Killing 8m (x2487 over 3d) kubelet, k8s-node-0 Killing container with id docker://node:Need to kill Pod Normal Killing 4m (x2489 over 3d) kubelet, k8s-node-0 Killing container with id docker://jnlp:Need to kill Pod Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Killing 56m (x125 over 5h) kubelet, k8s-node-2 Killing container with id docker://owasp-zap:Need to kill Pod Normal Killing 47m (x129 over 5h) kubelet, k8s-node-2 Killing container with id docker://jnlp:Need to kill Pod Normal Killing 38m (x133 over 5h) kubelet, k8s-node-2 Killing container with id docker://dind:Need to kill Pod Normal Killing 13m (x144 over 5h) kubelet, k8s-node-2 Killing container with id docker://maven:Need to kill Pod Normal Killing 8m (x146 over 5h) kubelet, k8s-node-2 Killing container with id docker://docker-cmds:Need to kill Pod Normal Killing 1m (x149 over 5h) kubelet, k8s-node-2 Killing container with id docker://pmd:Need to kill Pod Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Killing 56m (x2644 over 4d) kubelet, k8s-node-0 Killing container with id docker://openssl:Need to kill Pod Normal Killing 40m (x2651 over 4d) kubelet, k8s-node-0 Killing container with id docker://owasp-zap:Need to kill Pod Normal Killing 31m (x2655 over 4d) kubelet, k8s-node-0 Killing container with id docker://pmd:Need to kill Pod Normal Killing 26m (x2657 over 4d) kubelet, k8s-node-0 Killing container with id docker://kubectl:Need to kill Pod Normal Killing 22m (x2659 over 4d) kubelet, k8s-node-0 Killing container 
with id docker://dind:Need to kill Pod Normal Killing 11m (x2664 over 4d) kubelet, k8s-node-0 Killing container with id docker://docker-cmds:Need to kill Pod Normal Killing 6m (x2666 over 4d) kubelet, k8s-node-0 Killing container with id docker://maven:Need to kill Pod Normal Killing 1m (x2668 over 4d) kubelet, k8s-node-0 Killing container with id docker://jnlp:Need to kill Pod </code></pre>
<p>This is typically related to the <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#advanced-topics" rel="nofollow noreferrer">metadata.finalizers</a> on your objects (pod, deployment, etc)</p> <p>You can also read more about <a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#foreground-cascading-deletion" rel="nofollow noreferrer">Foreground Cascading Deleting</a> and how it uses metadata.finalizers.</p> <p>If not it could be a networking issue, you could check the kubelet logs, typically:</p> <pre><code>journalctl -xeu kubelet </code></pre> <p>You can also check the docker daemon logs, typically:</p> <pre><code>cat /var/log/syslog | grep dockerd </code></pre>
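<p>For completeness, two commands that are often used in this situation (hedged; only use them once you understand why the pod is stuck, since they bypass normal clean-up):</p> <pre><code># force-delete a pod stuck in Terminating
kubectl delete pod &lt;pod-name&gt; -n &lt;namespace&gt; --grace-period=0 --force

# clear finalizers that are blocking deletion
kubectl patch pod &lt;pod-name&gt; -n &lt;namespace&gt; -p '{"metadata":{"finalizers":null}}'
</code></pre>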
<p>We provide a Kubernetes cluster for many users, with applications separated by namespace.</p> <p>For deployment we use kubernetes-helm. There are situations when we need to block deploying an app into the cluster. One option is to change permissions for the default service account (which kubernetes-helm uses).</p> <p>How else can this be solved?</p>
<p>You'd use an <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">admission controller</a>.</p> <p>Unfortunately, this might involve writing some code to manage it. There are tools out there that help, like <a href="https://www.openpolicyagent.org/docs/kubernetes-admission-control.html" rel="nofollow noreferrer">Open Policy Agent</a></p>
<p>I'm trying to edit a kubernetes secret using:</p> <pre><code>kubectl edit secret mysecret -o yaml </code></pre> <p>And adding a new variable on <code>data</code>:</p> <pre><code>data: NEW_VAR: true </code></pre> <p>But I receive the error:</p> <blockquote> <p>cannot restore slice from bool</p> </blockquote> <p>If I try to use some number, like:</p> <pre><code>data: NEW_VAR: 1 </code></pre> <p>I receive another error after close the editor:</p> <blockquote> <p>cannot restore slice from int64</p> </blockquote> <p>What this error means?</p>
<p>This error happens when the variable is not a valid <strong>base64</strong> value.</p> <p>So, to use the value <code>true</code>, you need to use its base64 representation:</p> <pre><code>NEW_VAR: dHJ1ZQ== </code></pre>
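<p>You can generate the value on the command line; note the <code>-n</code>, which prevents a trailing newline from being encoded:</p> <pre><code>$ echo -n "true" | base64
dHJ1ZQ==
$ echo "dHJ1ZQ==" | base64 --decode
true
</code></pre> <p>Alternatively, the <code>stringData</code> field of a Secret manifest accepts plain (non-base64) values, if you apply the secret from a file instead of editing the <code>data</code> section in place.</p>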
<p>How can I create gcePD (google persistent disk) and link it as persistent volume (pv) to on-premise kubernetes cluster? It is required to resolve <code>persistentvolume-controller - no persistent volumes available for this claim and no storage class is set</code> message when I deployed some helm chart.</p> <p>Please explain the steps for that. Thank you.</p>
<p>I don't think it's possible, GCE persistent disks are not exposed outside of GCE. Your servers need to be in GCE.</p> <p>You could probably set a remote PV using <a href="https://learn.microsoft.com/en-us/azure/storage/files/storage-files-introduction" rel="nofollow noreferrer">Azure Files</a>, but unless you need them for low-performance storage, I'd be wary of the speed and latency if you don't have a direct connect pipe to the Azure cloud.</p> <p>You could also set up a GCE disk shared filesystem with <a href="https://ceph.com/" rel="nofollow noreferrer">Ceph</a>, <a href="https://en.wikipedia.org/wiki/Dell_EMC_ScaleIO" rel="nofollow noreferrer">ScaleIO</a>, etc, but again you would be going across the public cloud if you don't have a private direct connect.</p>
<p>I have a question about the behavior of Kubernetes when dealing with Attach a volume on a new node after a reschedule of a pod.</p> <p>A common behavior we have in our cluster is:</p> <ol> <li><p>A node n1 becomes unavailable</p></li> <li><p>A pod A with Volume v1 is rescheduled on node n2</p></li> <li><p>Volume v1 is being detached from node n1, this will take a few seconds</p></li> <li><p>kubelet on node n2 tries to Attach Volume v1 to pod A</p></li> <li><p>Because Volume v1 is not yet detached from node n1, the Attach call fails with:</p> <pre><code>Sep 27 11:43:24 node n2 kubelet-wrapper[787]: E0927 11:43:24.794713 787 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/cinder/volume_v1_id\"" failed. No retries permitted until 2018-09-27 11:43:25.294659022 +0000 UTC m=+1120541.835247469 (durationBeforeRetry 500ms). Error: "AttachVolume.Attach failed for volume \"volume v1\" (UniqueName: \"kubernetes.io/cinder/volume_2_id\") from node \"node n2\" : disk volume_v2_id path /dev/vdc is attached to a different instance (pod node n1)" </code></pre></li> <li><p>After this Attach error, the kubelet will eternally try to mount Volume v1 (which will fail because the Volume is not attached)</p> <pre><code>Sep 27 11:43:26 node n2 kubelet-wrapper[787]: E0927 11:43:26.870106 787 attacher.go:240] Error: could not find attached Cinder disk "volume_v1_id" (path: ""): &lt;nil&gt; </code></pre></li> </ol> <p>My question is: Why does k8s not try to Attach again before trying to Mount?</p> <p>The issue here is that when the detach is being done quickly enough we do not have any issues, but if the detach is not yet done when the Attach is called by the kubelet, we are stuck.</p> <p>When digging into the code it seems that the behavior is WaitForAttachAndMount. This will: 1/ Try Attach 2/ whatever the result of the attach, loop on Try Mount.</p> <p>Should the expected behavior be 1/ loop on Try Attach 2/ If at some point Attach is a success, loop on Try Mount?</p> <p>This question is related to <a href="https://github.com/kubernetes/kubernetes/issues/69158" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/69158</a></p>
<p>My take: it depends.</p> <ol> <li><p>It makes sense to continue trying to Attach indefinitely rather than fail and try to mount indefinitely if you want to make the volume provider (which could be EBS, Cinder, GCP, Ceph, etc) responsible for responding to an attach API. It may be that the provider is doing some maintenance and the APIs are failing temporarily. This is if you want to make your systems more automated.</p></li> <li><p>It makes sense to Attach -> fail and mount indefinitely if for some reason you want to let the user manually attach the volume and have a subsequent mount succeed. In my opinion, this should be made an option and 1. should be the default.</p></li> </ol>
<p>Why do we need to write the apiGroup key in this definition again and again, if it is the same every time:</p> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: web-rw-deployment namespace: some-web-app-ns subjects: - kind: User name: "[email protected]" apiGroup: rbac.authorization.k8s.io - kind: Group name: "webdevs" apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: web-rw-deployment apiGroup: rbac.authorization.k8s.io </code></pre> <ul> <li>this looks very redundant, repeating for every entry</li> <li>if we do need to write it, what are the other possible values?</li> <li>if there are no other values for the RBAC apiGroup field, then k8s should assume the value <code>apiGroup: rbac.authorization.k8s.io</code> automatically</li> </ul> <p>This makes the YAML too redundant. Is there any way to work around this? Can we just skip this key? Or can we declare it somewhere globally?</p>
<p>Good question. The rationale that I can think of is that there may be different APIs in the future that could be supported, for example, <code>rbacv2.authorization.k8s.io</code> and you wouldn't like to restrict references and subjects to just one for compatibility reasons.</p> <p>My take on this is that it would be nice to have yet another optional global field for <code>RoleBinding</code> besides 'subjects' called something like 'bindingApigroup'. Feel free to open an <a href="https://github.com/kubernetes/kubernetes/issues" rel="nofollow noreferrer">issue</a>: kind/feature, sig/auth and/or sig/api-machinery.</p> <p>Also, there might be more rationale/details in the <a href="https://github.com/kubernetes/community/tree/master/contributors/design-proposals/auth" rel="nofollow noreferrer">sig-auth</a> design proposals.</p>
<p>While trying to install "incubator/fluentd-cloudwatch" using helm on Amazon EKS, and setting user to root, I am getting below response.</p> <p>Command used : </p> <pre><code>helm install --name fluentd incubator/fluentd-cloudwatch --set awsRegion=eu-west-1,rbac.create=true --set extraVars[0]="{ name: FLUENT_UID, value: '0' }" </code></pre> <p>Error: </p> <pre><code>Error: YAML parse error on fluentd-cloudwatch/templates/daemonset.yaml: error converting YAML to JSON: yaml: line 38: did not find expected ',' or ']' </code></pre> <p>If we do not set user to root, then by default, fluentd runs with "fluent" user and its log shows:</p> <pre><code>[error]: unexpected error error_class=Errno::EACCES error=#&lt;Errno:: EACCES: Permission denied @ rb_sysopen - /var/log/fluentd-containers.log.pos&gt;` </code></pre>
<p>Based on <a href="https://github.com/helm/charts/blob/master/incubator/fluentd-cloudwatch/templates/daemonset.yaml#L38" rel="nofollow noreferrer">this</a>, it looks like it's just trying to convert <code>eu-west-1,rbac.create=true</code> to a JSON field, and there's an extra comma (,) there causing it to fail. </p> <p>And if you look at the <a href="https://github.com/helm/charts/blob/master/incubator/fluentd-cloudwatch/values.yaml" rel="nofollow noreferrer">values.yaml</a> you'll see the right separate options are <code>awsRegion</code> and <code>rbac.create</code>, so <code>--set awsRegion=eu-west-1 --set rbac.create=true</code> should fix the first error.</p> <p>With respect to the <code>/var/log/... Permission denied</code> error, you can see <a href="https://github.com/helm/charts/blob/master/incubator/fluentd-cloudwatch/templates/daemonset.yaml#L74" rel="nofollow noreferrer">here</a> that it's mounted as a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a>, so if you run: </p> <pre><code># (means read/write user/group/world) $ sudo chmod 444 /var/log </code></pre> <p>on all your nodes, the error should go away. Note that you need to run it on all the nodes because your pod can land anywhere in your cluster.</p>
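<p>Putting that together, the corrected install command (keeping your original <code>FLUENT_UID</code> override) would look something like:</p> <pre><code>helm install --name fluentd incubator/fluentd-cloudwatch \
  --set awsRegion=eu-west-1 \
  --set rbac.create=true \
  --set extraVars[0]="{ name: FLUENT_UID, value: '0' }"
</code></pre>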
<p>In the tutorial <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a>, it says </p> <blockquote> <p>For flannel to work correctly, you must pass <code>--pod-network-cidr=10.244.0.0/16</code> to <code>kubeadm init.</code>.</p> </blockquote> <p>How to pass other cidr, e.g., <code>--pod-network-cidr=192.168.0.0/16</code>?</p>
<p>Follow the same steps in the tutorial, except: </p> <p>(1) After <code>kubeadm reset</code>, clear earlier net interfaces on both master and slave nodes.</p> <pre><code>sudo ip link del cni0 sudo ip link del flannel.1 sudo systemctl restart network </code></pre> <p>(2) Run <code>kubeadm init --pod-network-cidr=192.168.0.0/16</code></p> <p>(3) Download the <code>kube-flannel.yml</code> file, change hard coded <code>10.244.0.0</code> to <code>192.168.0.0</code>, then do <code>kubectl create -f kube-flannel.yml</code>.</p> <p><strong>Test result</strong></p> <pre><code>$ k get po -o=wide NAME READY STATUS RESTARTS AGE IP NODE h2-75cb7756c6-r4gkj 1/1 Running 0 5m 192.168.1.14 slave1 h2-75cb7756c6-xfstk 1/1 Running 0 16m 192.168.0.5 master jobserver-58bf6985f9-77mdd 1/1 Running 0 16m 192.168.0.6 master jobserver-58bf6985f9-h9hlx 1/1 Running 0 5m 192.168.1.15 slave1 # ping pod on slave $ ping 192.168.1.14 PING 192.168.1.14 (192.168.1.14) 56(84) bytes of data. 64 bytes from 192.168.1.14: icmp_seq=1 ttl=63 time=0.454 ms # ping pod on master $ ping 192.168.0.5 PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data. 64 bytes from 192.168.0.5: icmp_seq=1 ttl=64 time=0.143 ms # ping docker container on the same node $ ping 172.18.0.2 PING 172.18.0.2 (172.18.0.2): 56 data bytes 64 bytes from 172.18.0.2: seq=0 ttl=241 time=21.580 ms </code></pre>
<p>I am trying to run a kubernetes cluster on mac os for some prototyping using docker (not vagrant or virtualbox). </p> <p>I found the instructions online at <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/docker.md" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/docker.md</a> but the instructions are 3 years old (Oct 2015).<br> The instructions refer to boot2docker but the present version of docker on mac (Docker Community Edition v 18.06.1-ce-mac73) doesn't have boot2docker. </p> <p>Can you point me to the latest instructions?</p>
<p>Since 2015, everything has been move to the <a href="https://github.com/kubernetes/website" rel="nofollow noreferrer">Kubernetes website GitHub repo</a>.</p> <p>The full installation/process page is now at <a href="https://kubernetes.io/docs/tasks/" rel="nofollow noreferrer"><code>kubernetes.io/docs/tasks/</code></a>.</p> <p>And since <a href="https://github.com/kubernetes/website/issues/7307" rel="nofollow noreferrer">issue 7307</a>, a Kubernetes installation on MacOs would no longer use xHyve, but, as <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">stated in the documentation</a>:</p> <blockquote> <p>macOS: <a href="https://www.virtualbox.org/wiki/Downloads" rel="nofollow noreferrer">VirtualBox</a> or <a href="https://www.vmware.com/products/fusion" rel="nofollow noreferrer">VMware Fusion</a>, or HyperKit.</p> </blockquote>
<p>I am trying to run a kubernetes cluster on mac os for some prototyping using docker (not vagrant or virtualbox). </p> <p>I found the instructions online at <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/docker.md" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/docker.md</a> but the instructions are 3 years old (Oct 2015).<br> The instructions refer to boot2docker but the present version of docker on mac (Docker Community Edition v 18.06.1-ce-mac73) doesn't have boot2docker. </p> <p>Can you point me to the latest instructions?</p>
<p>Current Docker for Mac has Kubernetes included, just enable it.</p> <p><a href="https://i.stack.imgur.com/YBiVa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YBiVa.png" alt="enter image description here"></a></p>
<p>I've created an "Istio-enabled" Kubernetes cluster, and my containers, by default, "<a href="https://archive.istio.io/v1.0/docs/tasks/traffic-management/egress/" rel="nofollow noreferrer">are unable to access URLs outside of the cluster</a>" (Istio v1.0.2). This is fine and matches my security requirements:</p> <blockquote> <p>By default, Istio-enabled services are unable to access URLs outside of the cluster because the pod uses iptables to transparently redirect all outbound traffic to the sidecar proxy...</p> </blockquote> <p>Now I'm trying to create an <a href="https://archive.istio.io/v1.0/docs/concepts/traffic-management/#service-entries" rel="nofollow noreferrer">Istio Service Entry</a> to <strong>allow my containers to requests my s3 buckets</strong> that are outside the Istio service mesh. </p> <p>As far I know, Amazon S3 does not have a specific "host" or a well-defined range of IP addresses. How can I do this? What protocol do I need to use?</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: enable-access-to-s3-buckets spec: hosts: - ???????? ports: - number: ??????? name: ?????? protocol: ?????? resolution: ????? </code></pre> <hr> <p><em>Note: <a href="https://istio.io/news/2019/announcing-1.2/#traffic-management" rel="nofollow noreferrer">Istio v1.2</a> changed the default outbound traffic policy to <code>ALLOW_ANY</code>.</em></p>
<p>Looking here you can get a list of the S3 endpoints, which might help: <a href="https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region" rel="nofollow noreferrer">https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region</a></p> <p>Another solution can be to create an S3 endpoint inside the same VPC as your K8S cluster and use that name to restrict access with private IP rules. See <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html</a> for detailed documentation on it.</p>
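<p>For illustration only, a hedged sketch of what such a ServiceEntry could look like; the hosts, port and resolution below are assumptions, so adjust them to the regional S3 endpoints you actually use:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: enable-access-to-s3-buckets
spec:
  hosts:
  - "s3.amazonaws.com"         # assumption: global/path-style endpoint
  - "*.s3.amazonaws.com"       # assumption: virtual-hosted-style bucket URLs
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
</code></pre>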
<p>Is there a way to use <code>kubectl</code> to list only the pods belonging to a deployment? Currently, I do this to get pods:</p> <p><code>kubectl get pods| grep hello</code></p> <p>But it seems an overkill to get ALL the pods when I am interested to know only the pods for a given deployment. I use the output of this command to see the status of all pods, and then possibly exec into one of them.</p> <p>I also tried <code>kc get -o wide deployments hellodeployment</code>, but it does not print the Pod names.</p>
<p>There's a <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">label</a> in the pod for the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">selector</a> in the deployment. That's how a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer">deployment</a> manages its pods. For example, for the label or selector <code>app=http-svc</code> you can do something like this and avoid using <code>grep</code> and listing all the pods (this becomes useful as your number of pods becomes very large).</p> <p>Here are some example command lines:</p> <pre><code># single label kubectl get pods -l=app=http-svc kubectl get pods --selector=app=http-svc # multiple labels kubectl get pods --selector key1=value1,key2=value2 </code></pre>
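<p>If you don't know the label value up front, you can read the selector straight off the deployment first (a small sketch; <code>hellodeployment</code> and the <code>app</code> key are assumptions, use whatever keys your deployment's selector actually defines):</p> <pre><code># read the deployment's selector, then list only its pods
SELECTOR=$(kubectl get deployment hellodeployment -o jsonpath='{.spec.selector.matchLabels.app}')
kubectl get pods -l app=$SELECTOR
</code></pre>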
<p>Does kubernetes provide an API in its client library to get the cluster-info dump? I went through its API <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#-strong-api-overview-strong-" rel="nofollow noreferrer">documentation</a> and could find any API which could actually do this.</p> <p><strong>What i do now:</strong></p> <pre><code>kubectl cluster-info dump --output-directory=&quot;dumpdir&quot; </code></pre> <p><strong>What i want:</strong></p> <p>Using client-go/kubernetes API libraries, make an API call to get this dump from a golang application. Is it possible?</p> <p><strong>What i know:</strong></p> <p>There are individual API's for each resource which can provide all the information provided by the cluster-info dump, but i want to do it will a single API call.</p> <p>For example, this Golang code will give me a list of nodes:</p> <pre><code>coreClient := kubernetesapi.CoreV1() nodeList, err := coreClient.Nodes().List(metav1.ListOptions{}) </code></pre> <p>Is there an API which returns what <code>kubectl cluster-info dump</code> would give, so that I can get all the details programmatically?</p>
<p>You can capture the API calls by running the <code>kubectl cluster-info</code> command with a verbose option and inspecting the output:</p> <p><code>kubectl cluster-info dump -v 9</code></p> <p>For example:</p> <blockquote> <p>curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.1 (linux/amd64) kubernetes/4ed3216" '<a href="https://10.142.0.3:6443/api/v1/namespaces/kube-system/events" rel="nofollow noreferrer">https://10.142.0.3:6443/api/v1/namespaces/kube-system/events</a>'</p> </blockquote> <p>Get the token for authorization purposes in your cluster:</p> <p><code>MY_TOKEN="$(kubectl get secret &lt;default-secret&gt; -o jsonpath='{$.data.token}' | base64 -d)"</code></p> <p>Now you can make the API call for the target resource:</p> <pre><code>curl -k -v -H "Authorization : Bearer $MY_TOKEN" https://10.142.0.3:6443/api/v1/namespaces/kube-system/events </code></pre> <p>Keep in mind that you may require a role binding in order to grant view permissions to your service account:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: default-view roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: default namespace: default </code></pre>
<p>After creating a new GKE cluster, creating a cluster role failed with the following error:</p> <pre><code>Error from server (Forbidden): error when creating &quot;./role.yaml&quot;: clusterroles.rbac.authorization.k8s.io &quot;secret-reader&quot; is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:[&quot;secrets&quot;], APIGroups:[&quot;&quot;], Verbs:[&quot;get&quot;]} PolicyRule{Resources:[&quot;secrets&quot;], APIGroups:[&quot;&quot;], Verbs:[&quot;watch&quot;]} PolicyRule{Resources:[&quot;secrets&quot;], APIGroups:[&quot;&quot;], Verbs:[&quot;list&quot;]}] user=&amp;{[email protected]  [system:authenticated] map[authenticator:[GKE]]} ownerrules=[PolicyRule{Resources:[&quot;selfsubjectaccessreviews&quot; &quot;selfsubjectrulesreviews&quot;], APIGroups:[&quot;authorization.k8s.io&quot;], Verbs:[&quot;create&quot;]} PolicyRule{NonResourceURLs:[&quot;/api&quot; &quot;/api/*&quot; &quot;/apis&quot; &quot;/apis/*&quot; &quot;/healthz&quot; &quot;/swagger-2.0.0.pb-v1&quot; &quot;/swagger.json&quot; &quot;/swaggerapi&quot; &quot;/swaggerapi/*&quot; &quot;/version&quot;], Verbs:[&quot;get&quot;]}] ruleResolutionErrors=[]
</code></pre> <p>My account has the following permissions in IAM:</p> <blockquote> <p>Kubernetes Engine Admin</p> <p>Kubernetes Engine Cluster Admin</p> <p>Owner</p> </blockquote> <p>This is my <code>role.yaml</code> (from the <a href="https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole" rel="noreferrer">Kubernetes docs</a>):</p> <pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: secret-reader
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;secrets&quot;]
  verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;]
</code></pre> <p>According to the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="noreferrer">RBAC docs of GCloud</a>, I need to</p> <blockquote> <p>create a RoleBinding that gives your Google identity a cluster-admin role before attempting to create additional Role or ClusterRole permissions.</p> </blockquote> <p>So I tried <a href="https://projectriff.io/docs/running-on-gke-with-rbac/" rel="noreferrer">this</a>:</p> <pre><code>export GCP_USER=$(gcloud config get-value account | head -n 1)
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$GCP_USER
</code></pre> <p>which succeeded, but I still get the same error when creating the cluster role.</p> <p>Any ideas what I might be doing wrong?</p>
<p>According to the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="noreferrer">Google Kubernetes Engine docs</a>, you must first create a RoleBinding that grants you all of the permissions included in the role you want to create.</p> <h1>Get your current Google identity</h1> <pre><code>$ gcloud info | grep Account
Account: [[email protected]]
</code></pre> <h1>Grant cluster-admin to your current identity</h1> <pre><code>$ kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=[email protected]
Clusterrolebinding "myname-cluster-admin-binding" created
</code></pre> <p>Now you can create your ClusterRole without any problem.</p> <p>I found the answer in the CoreOS <a href="https://coreos.com/operators/prometheus/docs/latest/troubleshooting.html" rel="noreferrer">FAQ / Troubleshooting</a>; check it out for more information.</p>
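<p>A quick way to sanity-check the binding before retrying (just a sketch — <code>role.yaml</code> refers to the file from the question):</p> <pre><code># Confirm the binding exists and points at your Google account
kubectl get clusterrolebinding myname-cluster-admin-binding -o yaml

# Ask the API server whether you may now create ClusterRoles
kubectl auth can-i create clusterroles

# Then retry creating the role
kubectl apply -f role.yaml
</code></pre>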
<p>I'm trying to follow <a href="https://github.com/kubernetes/dashboard#getting-started" rel="noreferrer">GitHub - kubernetes/dashboard: General-purpose web UI for Kubernetes clusters</a>.</p> <p>deploy/access:</p> <pre><code># export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
# kubectl proxy
Starting to serve on 127.0.0.1:8001
</code></pre> <p>curl:</p> <pre><code># curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}#
</code></pre> <p>Please advise.</p> <p>per @VKR</p> <pre><code>$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS              RESTARTS   AGE
kube-system   coredns-576cbf47c7-56vg7                          0/1     ContainerCreating   0          57m
kube-system   coredns-576cbf47c7-sn2fk                          0/1     ContainerCreating   0          57m
kube-system   etcd-wcmisdlin02.uftwf.local                      1/1     Running             0          56m
kube-system   kube-apiserver-wcmisdlin02.uftwf.local            1/1     Running             0          56m
kube-system   kube-controller-manager-wcmisdlin02.uftwf.local   1/1     Running             0          56m
kube-system   kube-proxy-2hhf7                                  1/1     Running             0          6m57s
kube-system   kube-proxy-lzfcx                                  1/1     Running             0          7m35s
kube-system   kube-proxy-rndhm                                  1/1     Running             0          57m
kube-system   kube-scheduler-wcmisdlin02.uftwf.local            1/1     Running             0          56m
kube-system   kubernetes-dashboard-77fd78f978-g2hts             0/1     Pending             0          2m38s
$
</code></pre> <p><code>logs</code>:</p> <pre><code>$ kubectl logs kubernetes-dashboard-77fd78f978-g2hts -n kube-system
$
</code></pre> <p><code>describe</code>:</p> <pre><code>$ kubectl describe pod kubernetes-dashboard-77fd78f978-g2hts -n kube-system
Name:               kubernetes-dashboard-77fd78f978-g2hts
Namespace:          kube-system
Priority:           0
PriorityClassName:  &lt;none&gt;
Node:               &lt;none&gt;
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=77fd78f978
Annotations:        &lt;none&gt;
Status:             Pending
IP:
Controlled By:      ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
  kubernetes-dashboard:
    Image:      k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --auto-generate-certificates
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  &lt;none&gt;
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-gp4l7 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  kubernetes-dashboard-token-gp4l7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-gp4l7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                      From               Message
  ----     ------            ----                     ----               -------
  Warning  FailedScheduling  4m39s (x21689 over 20h)  default-scheduler  0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
$
</code></pre>
<p>It would appear that you are attempting to deploy Kubernetes leveraging <code>kubeadm</code> but have skipped the step of <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network" rel="noreferrer">Installing a pod network add-on (CNI)</a>. Notice the warning:</p> <blockquote> <p>The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed. kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).</p> </blockquote> <p>Once you do this, the CoreDNS pods should come up healthy. This can be verified with: <code>kubectl -n kube-system -l=k8s-app=kube-dns get pods</code></p> <p>Then the <code>kubernetes-dashboard</code> pod should come up healthy as well.</p>
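<p>As a sketch of that missing step (assuming Flannel as the CNI and that the cluster was initialized with <code>--pod-network-cidr=10.244.0.0/16</code>, which Flannel expects — substitute the manifest of whichever network add-on you prefer):</p> <pre><code># Install a pod network add-on, e.g. Flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Watch CoreDNS (and then the dashboard) come up
kubectl -n kube-system -l=k8s-app=kube-dns get pods
kubectl -n kube-system get pods -w
</code></pre>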
<p>I have installed a Kubernetes cluster using this <a href="https://www.edureka.co/blog/install-kubernetes-on-ubuntu" rel="nofollow noreferrer">tutorial</a>. When I set it up on VirtualBox VMs, my host can reach the NodePort normally. When I tried it on Compute Engine VM instances, I can't reach the NodePort of the cluster from the host. Why is that?</p> <p>I have attached two pictures. Thank you for your support.</p> <p><a href="https://i.stack.imgur.com/jE2G5.png" rel="nofollow noreferrer">Kubernetes cluster (bare metal) on local VirtualBox VMs</a> <a href="https://i.stack.imgur.com/Pv5s3.png" rel="nofollow noreferrer">Kubernetes cluster (bare metal) on Google Cloud Platform VM instances</a></p>
<p>This took me a while to test but I finally have a result. It turns out the reason for your issue is Calico plus the GCP firewall. To be more specific, you have to add firewall rules before the connectivity will work. Following this <a href="https://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/gce" rel="nofollow noreferrer">document</a> on installing Calico for GCE:</p> <blockquote> <p>GCE blocks traffic between hosts by default; run the following command to allow Calico traffic to flow between containers on different hosts (where the source-ranges parameter assumes you have created your project with the default GCE network parameters - modify the address range if yours is different):</p> </blockquote> <p>So you need to allow the traffic to flow between containers:</p> <p><code>gcloud compute firewall-rules create calico-ipip --allow 4 --network "default" --source-ranges "10.128.0.0/9"</code></p> <p>Note that this address range should be adjusted. For test purposes you can use <code>10.0.0.0/8</code>, but that is far too wide a range, so <strong>please narrow it down to your needs.</strong></p> <p>Then proceed with setting up the instances for the master and nodes. You can actually skip most of the steps from the tutorial you posted, as connectivity is handled by the cloud provider. Here is a really simple script I use for kubeadm on VMs. You can also perform the same steps one by one.</p> <pre><code>#!/bin/bash
swapoff -a
echo net/bridge/bridge-nf-call-ip6tables = 1 &gt;&gt; /etc/ufw/sysctl.conf
echo net/bridge/bridge-nf-call-iptables = 1 &gt;&gt; /etc/ufw/sysctl.conf
echo net/bridge/bridge-nf-call-arptables = 1 &gt;&gt; /etc/ufw/sysctl.conf
apt-get install -y ebtables ethtool
apt-get update
apt-get install -y docker.io
apt-get install -y apt-transport-https
apt-get install -y curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre> <p>In my case I used the simple Redis application from the Kubernetes <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/#start-up-the-redis-master" rel="nofollow noreferrer">documentation</a>:</p> <pre><code>root@calico-master:/home/xxx# kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       &lt;none&gt;        443/TCP    29m
redis-master   ClusterIP   10.107.41.117   &lt;none&gt;        6379/TCP   26m

root@calico-master:/home/xxx# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE
redis-master-57fc67768d-5lx92   1/1     Running   0          27m   192.168.1.4   calico   &lt;none&gt;

root@calico-master:/home/xxx# ping 192.168.1.4
PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
64 bytes from 192.168.1.4: icmp_seq=1 ttl=63 time=1.48 ms
</code></pre> <p>Before adding the firewall rules and doing the regular Calico installation I was not able to ping or <code>wget</code> the service; after that there is no problem pinging the IP or the hostname, and <code>wget</code> works too:</p> <pre><code>root@calico-master:/home/xxx# wget http://10.107.41.117:6379
--2018-10-24 13:24:43--  http://10.107.41.117:6379/
Connecting to 10.107.41.117:6379... connected.
HTTP request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: ‘index.html.2’
</code></pre> <p>The steps above were also tested with <code>type: NodePort</code> and it works as well.</p> <p>Another way is to use <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">Flannel</a>, which I also tested and which worked out of the box for this scenario. Be sure to read more about CNIs so you can choose one that suits your needs. Hope this solves your problem.</p>
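<p>For the NodePort part of the question specifically, here is a rough sketch of the extra firewall rule you would typically need on GCE (the rule name is arbitrary, the source range should be narrowed to your client's IP, and it assumes your service has been switched to <code>type: NodePort</code>):</p> <pre><code># Allow the default Kubernetes NodePort range on the nodes' network
gcloud compute firewall-rules create k8s-nodeports \
    --allow tcp:30000-32767 \
    --network "default" \
    --source-ranges "0.0.0.0/0"

# Find the assigned node port, then hit any node's external IP on that port
kubectl get svc redis-master -o jsonpath='{.spec.ports[0].nodePort}'
</code></pre>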