<p>Created a <code>kubernetes</code> cluster with private topology on <code>aws</code> using <code>kops</code></p> <p>My application exposes several services. As expected, services communicate among each other using their names, i.e. the <code>name</code> field below:</p> <pre><code>kind: Service metadata: name: myservice namespace: staging_namespace </code></pre> <p>Here is the question:</p> <p>Assuming that I will deploy <strong>2</strong> version of my application (e.g. <code>testing</code> and <code>staging</code>) in <strong>different namespaces</strong>, will this prevent service name collision?</p> <p>Will namespace separation allow </p> <ul> <li><p><code>service1</code> reach the correct <code>myservice</code> in <code>staging_namespace</code> in my <code>staging</code> deployment</p></li> <li><p><code>service1</code> reach the correct <code>myservice</code> in <code>testing_namespace</code> in my <code>testing</code> deployment</p></li> </ul> <p>?</p> <p>Using </p> <pre><code>kops version Version 1.8.0 (git-5099bc5) </code></pre> <p>and </p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>The simple answer is yes, you can put resources with the same name into separate namespaces and there will be no collision.</p>
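<p>As a small illustration of how the DNS side works (assuming DNS-compatible namespace names such as <code>staging</code> and <code>testing</code>, since underscores are not valid in namespace names): a pod resolves the bare service name within its own namespace first, and reaches the other copy only by qualifying the name with the namespace.</p> <pre><code># from a pod running in the staging namespace
curl http://myservice/            # resolves to myservice.staging.svc.cluster.local
curl http://myservice.testing/    # explicitly targets the copy in the testing namespace
</code></pre> <p>So as long as each deployment only uses the short names, the two environments stay isolated from each other.</p>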
<p>I would like to have Kubernetes use the local SSD in my Google Kubernetes engine cluster without using alpha features. Is there a way to do this?</p> <p>Thanks in advance for any suggestions or your help.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/local-ssd" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/local-ssd</a> explains how to use local SSDs on your nodes in Google Kubernetes Engine. Based on the gcloud commands, the feature appears to be beta (not alpha) so I don't think you need to rely on any alpha features to take advantage of it. </p>
<p>I have a simple, yet irritating issue. I have a bunch of deployments, services, etc. written out.</p> <p>For the staging env (I'm using namespaces to separate staging/prod environments), I'm using images with the tag :latest.</p> <p>For the prod env I'd like a custom :tag. However, I'd like to avoid copy/pasting the .yml file and am unsure how best to structure my code to achieve this goal.</p>
<p>Been there once. I started with some simple templating, wrote my own template wrappers, and finally ended up evaluating and completely switching to <code>helm</code>, the "kubernetes package manager".</p> <p>I would strongly advise you to take the shortcut and go directly for helm; it really can help a lot, and writing a basic chart for what you have is a pretty simple and quick solution. That way you can install your chart (ergo manifests) with something like <code>helm install mychart --set defaulttag=latest</code> or <code>helm install mychart --set defaulttag=dev</code> and copy no manifests around.</p>
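<p>For illustration, a minimal chart might parameterize just the image tag; the names below (<code>myapp</code>, <code>defaulttag</code>) are hypothetical and only meant as a sketch:</p> <pre><code># values.yaml
defaulttag: latest

# templates/deployment.yaml (excerpt)
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: "myrepo/myapp:{{ .Values.defaulttag }}"
</code></pre> <p>With that in place, <code>helm install mychart --set defaulttag=v1.2.3</code> renders the prod variant of the same manifest without touching any .yml by hand.</p>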
<p>I'm trying to use the standalone <code>gsutil</code> tool from within a container running in a GKE cluster, but I cannot get it to work. I believe the cluster has adequate permissions (see below). However, running</p> <pre><code>./gsutil ls gs://my-bucket/ </code></pre> <p>yields</p> <pre><code>ServiceException: 401 Anonymous users does not have storage.objects.list access to bucket my-bucket. </code></pre> <p>Am I missing anything? I don't have a <code>.boto</code> file, as I believe it shouldn't be necessary—or is it? This is the list of scopes that the cluster and the node pool have:</p> <pre><code>- https://www.googleapis.com/auth/compute - https://www.googleapis.com/auth/devstorage.full_control - https://www.googleapis.com/auth/logging.write - https://www.googleapis.com/auth/monitoring.write - https://www.googleapis.com/auth/pubsub - https://www.googleapis.com/auth/servicecontrol - https://www.googleapis.com/auth/service.management.readonly - https://www.googleapis.com/auth/trace.append </code></pre>
<p>You can use gsutil inside a docker container on GKE with a <a href="https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating_a_service_account" rel="noreferrer">service account</a>, or with your own credentials.</p> <p><strong>Service Account</strong></p> <p><strong>1)</strong> Add the <code>service-account.json</code> file to your project.</p> <p><strong>2)</strong> Add a <code>.boto</code> file to your project pointing to the <code>service-account.json</code> file:</p> <pre><code>[Credentials] gs_service_key_file = /path/to/service-account.json </code></pre> <p><strong>3)</strong> In your Dockerfile, set the <code>BOTO_CONFIG</code> environment variable to point to this <code>.boto</code> file:</p> <pre><code>ENV BOTO_CONFIG=/path/to/.boto </code></pre> <p><br/><strong>Own Credentials</strong></p> <p><strong>1)</strong> Locally, run gcloud auth login. A <code>.boto</code> file will be created at <em>~/.config/gcloud/legacy_credentials/[email protected]/.boto</em> with the following structure:</p> <pre><code>[OAuth2] client_id = &lt;id&gt;.apps.googleusercontent.com client_secret = &lt;secret&gt; [Credentials] gs_oauth2_refresh_token = &lt;token&gt; </code></pre> <p><strong>2)</strong> Copy this <code>.boto</code> file into your project</p> <p><strong>3)</strong> In your Dockerfile, set the <code>BOTO_CONFIG</code> environment variable to point to this <code>.boto</code> file:</p> <pre><code>ENV BOTO_CONFIG=/path/to/.boto </code></pre> <p><br/><em>I installed standalone gsutil in the docker container using pip install gsutil</em></p>
<p>We are running our prod DBs within Docker which works out good.</p> <p>Now we are going into managed K8s and putting eg elasticsearch into it which does not feel good at all. After the issues with the volumes were solved (with PersistentVolumeClaimTemplates) clustering hit us hard. The nodes of the cluster simply do not find each other (after hours of fiddling around with using a headless service in the elasticsearch configs).</p> <p>So, I am guessing that it is not very wise to do that and we should keep DBs outside the K8s cluster on VMs managed eg by Ansible. </p> <p>What is your opinion about this?</p>
<p>Some of my clusters date back as far as Kubernetes 1.2-alpha; back then it was obvious that the really stateful services (a MySQL Galera cluster in my case was the primary one) needed to be kept outside of the kube cluster. That has not changed much for me: even with 1.8 installed, my DB is still external. But it is also large and separate (it makes sense to have just mysql on each of those hosts), and I would not use k8s features to upgrade it or limit its resources anyway.</p> <p>This is in my opinion still a perfectly viable option, especially for large data stores that make sense to isolate / reserve full node capacity for.</p> <p>On the other hand, if you have a wordpress blog to deploy, it can be perfectly reasonable to have its db as part of its helm chart. Even in the case above, while prod has a separate DB, the stage and dev envs have a <code>--set devdb.enabled=true</code> option that brings up the database inside the kube cluster instead of connecting to an external one. Another example I have is prometheus, which I deploy fully on kubernetes. Although in both cases I did not have to struggle with clustering.</p> <p>The bottom line is that what suits your case best is the right solution for you :)</p>
<p>Consider we have two different Docker images both exposing the same port (80, for example).</p> <p>Now we'd like to create a multi-container Pod in Kubernetes cluster that contains containers created from these images.</p> <p>Is there a way to setup such configuration without changing images? How to map ports on Pod level?</p>
<p>Containers inside a Kubernetes Pod share several Linux namespaces, for things like networking or IPC. <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#networking" rel="nofollow noreferrer">From the docs</a>:</p> <blockquote> <p>Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost</p> </blockquote> <p>So in the same way you can't have two processes listening on the same port on your machine, you can't have two containers that share the networking namespace listening on the same port.</p>
<p>I have a vm that sits in front of the cluster. Currently it is running HAProxy (with <code>use-proxy-protocol: "true"</code>). My end goal is to allow the pods associated with the default backend to be able to read the actual source client source IP. </p> <p>Here's a sample log of with <code>use-proxy-protocol</code> turned on:</p> <pre><code>10.244.0.0 - [10.244.0.0] - - [10/Jan/2018:23:06:42 +0000] "GET /platform/ping HTTP/1.1" 200 16 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko) Version/11.0.2 Safari/604.4.7" 367 0.002 [upstream-default-backend] 10.244.3.101:80 16 0.002 200 10.244.0.0 - [10.244.0.0] - - [10/Jan/2018:23:06:59 +0000] "GET /platform/ping HTTP/1.1" 200 16 "-" "curl/7.54.0" 91 0.074 [upstream-default-backend] 10.244.3.101:80 16 0.074 200 10.244.0.0 - [10.244.0.0] - - [10/Jan/2018:23:09:51 +0000] "PROXY TCP4 127.0.0.1 127.0.0.1 43088 80" 400 173 "-" "-" 0 0.001 [] - - - - 10.244.0.0 - [10.244.0.0] - - [10/Jan/2018:23:09:59 +0000] "PROXY TCP4 127.0.0.1 127.0.0.1 43092 80" 400 173 "-" "-" 0 0.001 [] - - - - 10.244.0.0 - [10.244.0.0] - - [10/Jan/2018:23:10:09 +0000] "PROXY TCP4 127.0.0.1 127.0.0.1 43096 80" 400 173 "-" "-" 0 0.002 [] - - - - I0110 23:11:42.050971 5 controller.go:211] backend reload required I0110 23:11:42.054732 5 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"7539f546-f599-11e7-bee6-fa163e2f1153", APIVersion:"v1", ResourceVersion:"127044", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap ingress-nginx/nginx-configuration I0110 23:11:42.138901 5 controller.go:220] ingress backend successfully reloaded... 127.0.0.1 - [127.0.0.1] - - [10/Jan/2018:23:11:56 +0000] "GET /platform/ping HTTP/1.1" 200 16 "-" "curl/7.47.0" 86 0.003 [upstream-default-backend] 10.244.3.101:80 16 0.003 200 142.xx.xxx.xx - [142.xx.xxx.xx] - - [10/Jan/2018:23:15:50 +0000] "GET / HTTP/1.1" 500 21 "-" "curl/7.47.0" 78 0.020 [upstream-default-backend] 10.244.3.101:80 21 0.020 500 142.xx.xxx.xx - [142.xx.xxx.xx] - - [10/Jan/2018:23:16:02 +0000] "GET /platform/bitcoin HTTP/1.1" 200 45 "-" "curl/7.47.0" 94 0.165 [upstream-default-backend] 10.244.3.101:80 45 0.165 200 216.249.49.20 - [216.249.49.20] - - [10/Jan/2018:23:16:16 +0000] "GET / HTTP/1.1" 500 21 "-" "curl/7.54.0" 78 0.002 [upstream-default-backend] 10.244.3.101:80 21 0.002 500 216.249.49.20 - [216.249.49.20] - - [10/Jan/2018:23:16:30 +0000] "GET /platform/bitcoin HTTP/1.1" 200 45 "-" "curl/7.54.0" 94 0.002 [upstream-default-backend] 10.244.3.101:80 45 0.002 200 216.249.49.20 - [216.249.49.20] - - [10/Jan/2018:23:16:43 +0000] "GET /platform/bitcoin HTTP/1.1" 200 45 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko) Version/11.0.2 Safari/604.4.7" 370 0.049 [upstream-default-backend] 10.244.3.101:80 45 0.049 200 216.249.49.20 - [216.249.49.20] - - [10/Jan/2018:23:16:44 +0000] "GET /favicon.ico HTTP/1.1" 404 9 "http://142.xx.xxx.xx/platform/bitcoin" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko) Version/11.0.2 Safari/604.4.7" 324 0.013 [upstream-default-backend] 10.244.3.101:80 9 0.013 404 216.249.49.20 - [216.249.49.20] - - [10/Jan/2018:23:17:04 +0000] "GET /platform/bitcoin HTTP/1.1" 200 45 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko) Version/11.0.2 Safari/604.4.7" 370 0.002 [upstream-default-backend] 10.244.3.101:80 45 0.002 200 216.249.49.20 - [216.249.49.20] - - 
[10/Jan/2018:23:17:07 +0000] "GET /platform/ping HTTP/1.1" 200 16 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/604.4.7 (KHTML, like Gecko) Version/11.0.2 Safari/604.4.7" 367 0.002 [upstream-default-backend] 10.244.3.101:80 16 0.002 200 216.249.49.20 - [216.249.49.20] - - [10/Jan/2018:23:17:56 +0000] "GET /platform/ping HTTP/1.1" 200 16 "-" "curl/7.54.0" 91 0.002 [upstream-default-backend] 10.244.3.101:80 16 0.002 200 Logs from 1/10/18 10:17 PM to 1/10/18 11:17 PM UTC </code></pre> <p><em>142.xx.xxx.xx is the IP of the HAProxy vm</em></p> <p>216.249.49.20 is an external IP coming from the university. As you can see, the ingress pod can read external IP's passed from HAProxy with <code>use-proxy-protocol: "true"</code> Just fine. </p> <p>But when I curl the address of HAProxy vm, I get:</p> <pre><code>demonfuse@Williams-MacBook-Pro ~/N/K/NGINX&gt; curl 142.xx.xxx.xx/platform/ping pong2 10.244.2.6 </code></pre> <p>10.244.2.6 is the IP of the ingress pod. <em><strong>I am confident ingress-nginx at this point has the real source IP.</strong></em></p> <p>Is there a way to forward the headers and real source IP to pods behind ingress-nginx via configmaps? From what I can tell <a href="https://github.com/kubernetes/ingress-nginx/pull/1851" rel="nofollow noreferrer">here</a> it most of it should be turned on by default.</p> <p><strong>How to reproduce</strong>:</p> <ol> <li>Install ingress-nginx on brand new cluster following the guide <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md" rel="nofollow noreferrer">here</a></li> <li>Redirect traffic from HAProxy / external load balancer to ingress-nginx</li> <li>Go script</li> </ol> <p>as follows:</p> <pre><code>import ( "github.com/kataras/iris" "github.com/kataras/iris/context" //... ) func main() { app := iris.New() app.Get("/platform/ping", func(ctx context.Context) { fmt.Println("connected with " + ctx.RemoteAddr() + "!") ctx.WriteString("pong2 " + ctx.RemoteAddr()) }) //... app.Run(iris.Addr(":80"), iris.WithoutServerError(iris.ErrServerClosed)) } </code></pre> <p><strong>Additional info:</strong> </p> <p>Environment: <code>Internet -&gt; Dedicated HAProxy VM -&gt; Bare metal OVH K8S Cluster (1 master, 2 worker)</code></p> <p>configmap.yaml</p> <pre><code>apiVersion: v1 data: proxy-set-headers: "ingress-nginx/custom-headers" use-proxy-protocol: "true" kind: ConfigMap metadata: name: nginx-configuration namespace: ingress-nginx labels: app: ingress-nginx </code></pre> <p>custom_headers.yaml</p> <pre><code>apiVersion: v1 data: X-Forwarded-For: "142.xx.xxx.xxx" kind: ConfigMap metadata: name: custom-headers namespace: ingress-nginx </code></pre> <p>haproxy config </p> <pre><code>global maxconn 4096 log 127.0.0.1 local0 notice maxconn 2000 user haproxy group haproxy defaults log global mode http retries 3 option redispatch maxconn 2000 timeout connect 5000 timeout client 50000 timeout server 50000 frontend TestServerTest bind 142.xx.xxx.xxx:80 mode tcp default_backend TestServernodes backend TestServernodes mode tcp server TestServer01 142.xx.xxx.xxx:80 send-proxy </code></pre> <p>Where and how did I made a mistake? </p> <p>I have attempted a combination of X-Forwaded-For with the internal ingress pod IP, the external IP associated with the ingress service, and the public IP of the HAProxy vm. So far curling the external IP of the HAProxy still returns <code>pong2 10.244.2.6</code> (internal IP of the ingress pod)</p>
<p>I figured it out! The problem lies in the Iris web framework and has little to nothing to do with ingress-nginx. </p> <p>The solution is to read the remote headers manually in <code>ctx.Application().ConfigurationReadOnly().GetRemoteAddrHeaders()</code>. By default the Iris framework does not check for <code>X-Forwarded-For</code> and <code>X-Real-Ip</code></p> <p>Hopefully this will be useful for those running reverse proxies to and from Kubernetes. </p>
<p>We are running our prod DBs within Docker which works out good.</p> <p>Now we are going into managed K8s and putting eg elasticsearch into it which does not feel good at all. After the issues with the volumes were solved (with PersistentVolumeClaimTemplates) clustering hit us hard. The nodes of the cluster simply do not find each other (after hours of fiddling around with using a headless service in the elasticsearch configs).</p> <p>So, I am guessing that it is not very wise to do that and we should keep DBs outside the K8s cluster on VMs managed eg by Ansible. </p> <p>What is your opinion about this?</p>
<p>Personally, I prefer to keep as much important state as possible outside of Kubernetes (k8s) or any other Container Orchestration Framework (referring to it as COF from here on) and most people I asked about this topic felt the same. In the end, COFs are software which dynamically manage your containers (and their dedicated drives if you must keep state ..). While this is very cool for stateless components, I do not feel easy about it when it comes to important state. The dynamic of COFs is achieved through an extra layer of complexity and I don't want extra complexity managing important state, as more complexity also means more bug surface. In contrast to configuration management tools like Ansible or SaltStack, which run in a controlled fashion on times that you decide, COF algorithms run independently all the time and can make decisions which might affect your database containers and drives too. This means that a bug in your COF configuration or inside the COF algorithm itself might have severe consequences at any time when you might not be prepared for it. Do I need that dynamic in my critical data layer? Separate machines with a controlled configuration management feel more reliable and simpler here. </p> <p>Concerning k8s, another point is when you run self-managed clusters. Upgrading the production cluster manually is quite an experience and it feels way more secure if you cannot destroy your whole state there in a worst-case scenario.</p> <p>In the end there is also a clash of philosophies here. I think that ideally, containers should be completely stateless and disposable, which is the complete opposite of the purpose of a database. Of course we do not live in an ideal world and sooner or later you reach the point where you have to keep some amount of state in your containers to make it work. We are offered to mount persistent volumes then and I think for non-critical data this is a good compromise. But should critical data be managed by something which was primarily designed for stateless concepts, even though it offers now ways for managing state too? Opinions differ here, but I'd say no.</p> <p>That being said, in our current project we are still running ES clusters in k8s in production and never experienced severe issues or data loss. We use the ES clusters for log/metric data and other non-critical data that could easily be re-imported in case of total failure. As ES offers easy replication and scaling, it does not feel completely wrong to use it inside k8s for non-critical data, if you keep the replication factor high. Strict master-slave databases like Postgres on the other hand I wouldn't use inside k8s in a production environment. We use Postgres containers in our k8s test clusters to save cost, but in production we use managed DBs outside of k8s. Also, we run Redis master instances inside k8s, but we use them for caching purposes only - so again no critical state contained there. </p>
<p>I have a database cluster that needs to set IP range into the whitelist. I set up a kubernetes cluster and run my app. How can I give/get the kubernetes cluster an IP address so that I can set it to my whitelist?</p> <p>Thank you~</p>
<p>If you can SSH into the cluster nodes, you can run <code>ip addr show</code> there to get their IP addresses.</p>
<p>This is more of a theoretical question. How do you guys create the structure of a Kubernetes deployments/services/pods that runs multiple applications?</p> <p>Let's say I want to run 3 Wordpress websites on my servers. For this I need: Nginx, MySQL, PHP-FPM and the Wordpress code base.</p> <ol> <li><p>Is it better to spin off separate pods/services for Nginx, MySQL, PHP-FPM that will serve all 3 Wordpress websites and create 3 Wordpress pods/services for the 3 websites?</p></li> <li><p>OR is it better to create a separate pods/service for each one of the websites, therefore the grouping would be: </p> <ul> <li>Pod1: Nginx, MySQL, PHP-FPM, Wordpress</li> <li>Pod2: Nginx, MySQL, PHP-FPM, Wordpress</li> <li>Pod3: Nginx, MySQL, PHP-FPM, Wordpress</li> </ul></li> </ol> <p>With option 2 I would need somehow to route the specific website traffic to the specific service/pod</p>
<p>Kubernetes is extremely flexible, as you are discovering, and allows you to architect your application in numerous ways. As a general rule of thumb, only run one process per container per pod. However, there are definitely valid use cases for running multiple containers in a pod. I think for your use case, you can use both approaches.</p> <p>Let me attempt to break down each of your components:</p> <p><strong>MySQL</strong><br> I would definitely run this in its own pod. I would wrap it in a StatefulSet and front it with its own Service.</p> <p><strong>Nginx + Wordpress</strong><br> In my opinion, whether you run these two processes in one pod or two depends on how you are using tls, if at all. As we know, Wordpress is very vulnerable to attacks. Hence, perhaps you have rules in your Nginx config to limit access to certain paths, methods, etc. If you run Nginx and Wordpress in the same pod, then you can expose only the Nginx port, and the only way traffic will get to the Wordpress container is if it goes through Nginx. If you run these containers as separate pods, then from a security standpoint you'll need some other way to make sure that inbound traffic to your Wordpress pod only comes from your Nginx pod. You can accomplish this with the NetworkPolicy resource (see the sketch below) or you can just use mutual TLS between these two pods.</p> <p>In summary, in a microservice architecture you want your processes to be as decoupled as possible so that they can be managed and deployed separately. Hence, a single process per container per Pod is attractive. However, there are instances that require you to run more than one container per Pod. In my example I used security as one such motivation.</p>
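<p>For reference, a rough sketch of the NetworkPolicy approach (labels and the port are assumptions, and enforcement needs a network plugin that supports NetworkPolicy, e.g. Calico): only pods labelled <code>app: nginx</code> may reach the Wordpress pods.</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wordpress-allow-nginx
spec:
  podSelector:
    matchLabels:
      app: wordpress        # applies to the Wordpress pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx        # only traffic from Nginx pods is allowed in
    ports:
    - protocol: TCP
      port: 9000            # php-fpm port, adjust to your setup
</code></pre>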
<p>I cannot find any articles answering question: Is it safe/right to deploy Spinnaker to same Kubernetes cluster which Spinnaker will manage? Mainly I mean for production, HA deployments.</p>
<p>I think the architectures of Spinnaker and Kubernetes complement each other very well, and running Spinnaker in the same K8s cluster it is managing is definitely safe.</p> <p>As per your comment in @mdirkse's answer, there is a codelab, which is official Spinnaker documentation, that explains how to create a set of basic pipelines for deploying code from a Github repo to a production Kubernetes cluster in the form of a Docker container.</p> <p>In this <a href="https://www.spinnaker.io/guides/tutorials/codelabs/kubernetes-source-to-prod/#configuring-spinnaker" rel="nofollow noreferrer">documentation</a>, it specifically states the following: </p> <blockquote> <p>We will be deploying Spinnaker to the same Kubernetes cluster it will be managing. ... </p> </blockquote> <p>Not sure if this is exactly what you are looking for though.</p>
<p>Does anyone know how could I proxy forward database traffic from localhost to an AWS database using kubernetes. Networkingwise, the pod has access to the db and it looks like below </p> <pre><code>+-----------+ +--------+ +-----------+ | | | | | | | Localhost +--&gt; | Pod +---&gt; | AWS RDS | | | | | | | +-----------+ +--------+ +-----------+ </code></pre> <p>If I had a normal ubuntu box instead of a pod I would have done something like this</p> <pre><code>ssh -L 5432:test.rds.amazonaws.com:5432 name-of-host </code></pre> <p>I tried to run the command below but that only proxy port 5432 to the pod not to RDS</p> <pre><code>kubectl port-forward pod-name 5432:5432 </code></pre> <p>Anyone knows how to tackle this issue?</p>
<p>What you refer to is ssh port forwarding, and kubernetes has something similar with <code>kubectl port-forward &lt;mypod&gt; &lt;localport&gt;:&lt;podport&gt;</code>, with one crucial difference: it forwards only to ports on the pod you point it at. What we do to achieve more or less what you ask for is run an HAProxy in a pod. Then you can port-forward to the proxying pod and HAProxy will pass your traffic on to RDS.</p>
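<p>For illustration, a minimal HAProxy configuration for such a proxying pod could look like this (the RDS hostname is a placeholder):</p> <pre><code>defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend rds_in
    bind *:5432
    default_backend rds_out

backend rds_out
    server rds test.rds.amazonaws.com:5432
</code></pre> <p>With that pod running, <code>kubectl port-forward &lt;haproxy-pod&gt; 5432:5432</code> gives you a local port that ends up on RDS.</p>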
<p>From the docs:</p> <blockquote> <p>Secrets must be created before they are consumed in pods as environment variables unless they are marked as optional. References to Secrets that do not exist will prevent the pod from starting.</p> </blockquote> <p>How to mark secret as optional? </p>
<p>What you're looking for is</p> <pre><code> - name: ENV_NAME valueFrom: secretKeyRef: name: &lt;secrets name&gt; key: &lt;secrets key&gt; optional: true </code></pre> <p>You can find type definition <a href="https://github.com/kubernetes/kubernetes/blob/e9e190c57c61a88a2ba96c4b0688b8eb3a8945a8/pkg/apis/core/types.go#L1762" rel="noreferrer">here</a></p> <p>Edit: similarly for <code>envFrom</code></p> <pre><code>envFrom: - secretRef: name: secname optional: true </code></pre>
<p>The storage size I specified in the persistent volume claim is ignored when using NFS as a storage backend.</p> <p>I want to attach a persistent volume to a container with the specified volume size. The following is the yaml file which I used to create the pvc.</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata:  name: test-claim  annotations:   volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" spec: accessModes:  - ReadWriteMany resources:   requests:   storage: 1Mi </code></pre> <p>The following is the result of the created pvc.<br></p> <pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE test-claim Bound pvc-bd0fdb84-f73c-11e7-bdd5-0050569b0869 1Mi RWX managed-nfs-storage 6m </code></pre> <p>Finally, I attached the created volume to a container and looked at the size of the mounted file system; it shows the total disk size that I export using NFS.</p> <p>Does anybody know how to make the requested storage size take effect? In other words, is there any way to specify the size of volumes when using NFS as a backend storage?</p>
<p>Simply put, no, it's not possible.</p> <p>The storage parameter is used for matching a PVC to a PV, and for auto-provisioning PVs when supported (e.g. adding an EBS volume on AWS). Kubernetes itself has no means of managing filesystem quota whatsoever.</p> <p>One thing that could help is to automatically provision the NFS export so that the particular mount point shared from the server is created with this limit (e.g. as a separate LVM LV, btrfs or zfs volume). You can also think about switching to something like GlusterFS with its provisioning API <a href="https://github.com/heketi/heketi" rel="nofollow noreferrer">heketi</a>.</p>
<p>I'm deploying drone.io on gke k8s using helm. It works great if I have LetsEncrypt off. But I really would like https support.</p> <p>Here's my service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: {{ template "drone_ci.fullname" . }}-external labels: name: server app: {{ template "drone_ci.name" . }} chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: type: LoadBalancer loadBalancerIP: {{ .Values.droneLoadBalancerIp}} ports: - name: http protocol: TCP port: 80 targetPort: 8000 - name: https protocol: TCP port: 443 targetPort: 443 selector: name: server </code></pre> <p>I have another service for port 9000 since that is only required for the drone agent.</p> <p>My drone-server deployment template looks like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: {{ template "drone_ci_server.fullname" . }} labels: app: {{ template "drone_ci.name" . }} chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: replicas: 1 template: metadata: labels: name: server app: {{ template "drone_ci.name" . }} release: {{ .Release.Name }} spec: containers: - name: server image: "{{ .Values.server.image.repository }}:{{ .Values.server.image.tag }}" imagePullPolicy: {{ .Values.server.image.pullPolicy }} env: - name: "DRONE_HOST" value: {{ .Values.droneHost }} - name: "DRONE_OPEN" value: "true" - name: "DRONE_GITLAB" value: "true" - name: DRONE_GITLAB_URL value: {{ .Values.droneGitlabUrl }} - name: DRONE_ADMIN value: {{ .Values.droneAdmin }} - name: DRONE_GITLAB_CLIENT valueFrom: secretKeyRef: name: {{ template "drone_ci.fullname" . }} key: DRONE_GITLAB_CLIENT - name: DRONE_GITLAB_SECRET valueFrom: secretKeyRef: name: {{ template "drone_ci.fullname" . }} key: DRONE_GITLAB_SECRET - name: DRONE_SECRET valueFrom: secretKeyRef: name: {{ template "drone_ci.fullname" . }} key: DRONE_SECRET - name: DRONE_LETS_ENCRYPT value: "true" volumeMounts: - mountPath: /var/lib/drone name: drone-lib-pv-storage volumes: - name: drone-lib-pv-storage persistentVolumeClaim: claimName: {{ template "drone_ci.fullname" . }} </code></pre> <p>When letsEncrypt is false then my site works and it connects to my gitlab instance just fine at the correct url. When letsEncrypt is true then:</p> <p>Navigating to my drone in chrome gives me "This site cant provide a secure connection". <a href="https://www.ssllabs.com/ssltest" rel="nofollow noreferrer">ssllab't test</a> tells me:</p> <pre><code>No secure protocols supported - if you get this message, but you know that the site supports SSL, wait until the cache expires on its own, then try again, making sure the hostname you enter uses the "www" prefix (e.g., "www.ssllabs.com", not just "ssllabs.com"). no more data allowed for version 1 certificate - the certificate is invalid; it is declared as version 1, but uses extensions, which were introduced in version 3. Browsers might ignore this problem, but our parser is strict and refuses to proceed. We'll try to find a different parser to avoid this problem. Failed to obtain certificate and Internal Error - errors of this type will often be reported for servers that use connection rate limits or block connections in response to unusual traffic. Problems of this type are very difficult to diagnose. If you have access to the server being tested, before reporting a problem to us, please check that there is no rate limiting or IDS in place. 
NetScaler issues - some NetScaler versions appear to reject SSL handshakes that do not include certain suites or handshakes that use a few suites. If the test is failing and there is a NetScaler load balancer in place, that's most likely the reason. Unexpected failure - our tests are designed to fail when unusual results are observed. This usually happens when there are multiple TLS servers behind the same IP address. In such cases we can't provide accurate results, which is why we fail. </code></pre> <p>Looking at my pod logs, every time I try and access drone via chrome I get:</p> <pre><code>http: TLS handshake error from x.x.x.x:53938: acme/autocert: no supported challenge type found http: TLS handshake error from y.y.y.y:53936: acme/autocert: missing certificate </code></pre> <p>My drone server image is:</p> <pre><code>image: repository: drone/drone tag: 0.8 pullPolicy: Always </code></pre> <p>What am I missing or doing wrong? </p>
<p>I would suggest switching from a LoadBalancer type service to a regular one and instead exposing it by means of Ingress. Coupling Ingress with kube-lego gives you very nice support for exposing anything you want easily, with a trivial way of enabling Let's Encrypt integration for the domains used, even if the software behind it has no built-in support for LE. This is in fact how my own instance of drone.io is set up.</p> <p>While this might not be an answer to the root cause of your problem, which probably needs some more debug information, it's a perfectly viable and verified solution :)</p> <p>As for the error itself, it seems from <a href="https://github.com/golang/crypto/blob/541b9d50ad47e36efd8fb423e938e59ff1691f68/acme/autocert/autocert.go#L472" rel="nofollow noreferrer">this code</a> that there is no support in drone for challenges other than the tls-sni-01/02 ones. Among other issues that might be at the cluster level, there is also <a href="https://community.letsencrypt.org/t/2018-01-09-issue-with-tls-sni-01-and-shared-hosting-infrastructure/49996" rel="nofollow noreferrer">this issue</a> with TLS-SNI now being disabled by LE.</p>
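<p>A hedged sketch of what the Ingress side of that setup looks like with kube-lego (hostname, secret and service names are placeholders); kube-lego watches for the <code>kubernetes.io/tls-acme</code> annotation and fills the referenced TLS secret:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: drone
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"      # picked up by kube-lego
spec:
  tls:
  - hosts:
    - drone.example.com
    secretName: drone-example-com-tls   # kube-lego creates/renews this secret
  rules:
  - host: drone.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: drone-server
          servicePort: 80
</code></pre> <p>With this in place you would also turn <code>DRONE_LETS_ENCRYPT</code> off, since TLS termination then happens at the ingress controller instead of inside drone itself.</p>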
<p>I am trying to create a cluster with Kops. In my script I take a directory called instancegroup and copy its content into my "state" S3 bucket. When the cluster is being created I can see in the Amazon console that all my nodes were deployed (master, 2 default nodes, and 5 nodes that I specified inside my instancegroup directory).</p> <p>The problem is that when typing:</p> <pre><code>kubectl get nodes </code></pre> <p>I only get the master machine and the 2 default nodes.</p> <p>Is that the right way to create such a cluster? And why can't I see my other nodes?</p>
<p><code>get nodes</code> only lists registered Node API objects. The other nodes must not be registering themselves with the API server. Check the logs for the kubelet process on those nodes to determine why they aren't registering themselves. </p>
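<p>As a sketch (the SSH user depends on the AMI; the default kops Debian images use <code>admin</code>), checking a non-registering node usually looks something like:</p> <pre><code>ssh admin@&lt;node-ip&gt;
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -n 100
</code></pre> <p>Common causes that show up there are networking/CNI problems or the node being unable to reach the API server.</p>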
<p>I have 9 pods running which are basically 9 different applications.</p> <p>Is it possible to have the same k8s service (LB | Public IP) for multiple pods such that I can access them by different ports, but the same service (LB) IP?</p> <p>E.g. like so:<br> LB-IP:80 -- In the backend an application is running, which I can access.<br> LB-IP:8080 (Same IP as previous) -- I will run another pod in the backend.</p> <p><strong>Selectors will be different for each pod.</strong></p>
<p>As for the pure service approach, no, it is not possible. A Service relates to only one selector, so you can't.</p> <p>Now, as you talk about an LB here, you might be talking about exposing the thing externally, and for that you can have an Ingress/IngressController. If you want, you can also just deploy a "gateway" service that does the job for you (e.g. HAProxy configured to expose the different ports you want).</p> <p>If your environment does not come with an ingress controller, you might want to deploy e.g. the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">Nginx Ingress Controller</a>, which does a great job as the point of entry to your services.</p>
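<p>To illustrate the Ingress route (service names are made up): a single external IP can fan out to several backends. Note that an Ingress routes HTTP traffic by host and path on that one IP rather than by port, so for raw "different port per app" exposure the gateway/HAProxy approach above is the one to use.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: apps.example.com
    http:
      paths:
      - path: /app1
        backend:
          serviceName: app1-svc
          servicePort: 80
      - path: /app2
        backend:
          serviceName: app2-svc
          servicePort: 8080
</code></pre>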
<p>I'd like to mount volume if it exists. For example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: "/etc/foo" volumes: - name: foo secret: secretName: mysecret </code></pre> <p>is the example from the documentation. However if the secret <code>mysecret</code> doesn't exist I'd like to skip mounting. That is optimistic/optional mount point.</p> <p>Now it stalls until the secret is created.</p>
<p>Secret and configmap volumes can be marked optional; they result in empty directories if the associated secret or configmap doesn't exist, rather than blocking pod startup:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: /etc/foo volumes: - name: foo secret: secretName: mysecret optional: true </code></pre>
<p>In my StatefulSet deployment specification, I have 'replicas' defined as 2. Now I want to use Persistent Volume (PV) and Persistent Volume Claims (PVC), for which I created one PV (dynamic provisioning using StorageClass) and one PVC which I then used in my deployment spec. I am testing the deployment on AWS.</p> <p>The problem is that only one node is able to get attached to the PV using the PVC. Even if I create multiple PVs and PVCs for each node, I am not sure how to use them in the deployment spec so that each node picks a different PV.</p> <p>Error:</p> <pre><code>Multi-Attach error for volume "pvc-ec99e704-f72e-11e7-87a6-065468f047a0" Volume is already exclusively attached to one node and can't be attached to another </code></pre> <p>Any pointer will help! </p>
<p>I think there are two issues you're facing here.</p> <p>For starters, an AWS EBS volume can be attached to only one node at a time, so you can't have the same PVC/PV pair for multiple pods (that would be the ROX or RWX access modes).</p> <p>Secondly, for scaled deployments/StatefulSets etc. there is a special, "dynamic" way to declare PVCs, called a volumeClaimTemplate.</p> <pre><code> volumeClaimTemplates: - metadata: name: myname spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 1Gi </code></pre> <p>With this, your pods will get matching PVCs created for them automatically as you scale up.</p> <p>This use case really calls for support of automatic PV provisioning to be really useful though, meaning you either need to be on a supported cloud provider or use another mechanism like GlusterFS with <a href="https://github.com/heketi/heketi" rel="nofollow noreferrer">Heketi</a>.</p>
<p>I'd like to edit my secrets. The only way I'm aware of is <code>kubectl edit secret mysecret</code>, which gets me a yaml blob to edit. However, all secrets are base64 encoded, which doesn't make them easy to edit.</p> <p>Can I mount secrets to a local volume somehow? Can I extract secrets to my localhost and edit them there? And lastly, is there some way to edit plaintext keys/values (or just one key) instead of base64 encoded values?</p> <p>P.S. Can I see secret keys easily with kubectl? With <code>edit</code> I see them, but I'm only interested in the keys, not the values.</p>
<p>Unfortunately no, the problem you describe is something you just have to deal with "on the side" by decoding/encoding the base64 content on your own.</p> <p>There are ways to simplify this by using templating for resources (e.g. via helm charts), but that involves storing the raw secret in some other way and applying changes from that "source" rather than doing an edit.</p>
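<p>A few commands that make the decode/encode round trip less painful (secret and key names are placeholders):</p> <pre><code># show only the data map (i.e. the keys and their base64 values)
kubectl get secret mysecret -o jsonpath='{.data}'

# decode a single value
kubectl get secret mysecret -o jsonpath='{.data.mykey}' | base64 --decode

# rewrite a secret from plaintext without hand-encoding anything
kubectl create secret generic mysecret --from-literal=mykey=newvalue \
  --dry-run -o yaml | kubectl apply -f -
</code></pre>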
<p>When deploying a Kubernetes cluster manually we use kubeadm,</p> <p><pre>kubeadm init ...</pre></p> <p>passing the parameter <pre>--apiserver-cert-extra-sans=&lt;FQDN&gt;</pre> to include the FQDN in the generated certificate.</p> <p>What approach can we use to achieve the same affect using Kubespray/Ansible?</p>
<p>I thought it was <a href="https://github.com/kubernetes-incubator/kubespray/blob/v2.3.0/inventory/group_vars/k8s-cluster.yml#L176" rel="nofollow noreferrer"><code>supplementary_addresses_in_ssl_keys</code></a> but <a href="https://github.com/kubernetes-incubator/kubespray/blob/v2.3.0/roles/kubernetes/secrets/templates/openssl.conf.j2#L32" rel="nofollow noreferrer">seeing it used</a> demonstrates they really mean "IP address" and not the more generic address concept.</p> <p>So I would suspect one of two paths: 1. update the <code>openssl.conf.j2</code> to distinguish between a <code>supplementary_address</code> which is an IP, versus a hostname; 2. cheat and make the <a href="https://github.com/kubernetes-incubator/kubespray/blob/v2.3.0/roles/kubernetes/secrets/templates/openssl.conf.j2#L15" rel="nofollow noreferrer"><code>kube-master</code> "hostnames"</a> in <a href="https://github.com/kubernetes-incubator/kubespray/blob/v2.3.0/inventory/inventory.example#L14-L15" rel="nofollow noreferrer">the inventory</a> match up with the actual SAN name you would like in the cert (since those identifiers in the inventory can be mapped to IP addresses via <a href="https://github.com/kubernetes-incubator/kubespray/blob/v2.3.0/inventory/inventory.example#L3" rel="nofollow noreferrer"><code>ansible_ssh_host</code></a> for the purposes of connecting to the Nodes)</p> <p>Arguably the change to <code>openssl.conf.j2</code> should go upstream in a PR, because your request certainly seems like a common and reasonable one</p>
<p>I am getting the error:</p> <blockquote> <p>error validating "mysql.yaml": error validating data: ValidationError(Deployment.spec.template.spec.volumes[0]): unknown field "path" in io.k8s.kubernetes.pkg.api.v1.Volume; )</p> </blockquote> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: mysql labels: app: mysql spec: replicas: 1 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql key: password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mapping-sandbox-test mountPath: /var/lib/mysql volumes: - name: mapping-sandbox-test path: gs://&lt;bucket-name&gt; </code></pre>
<p>Your Deployment object looks correct using name and path as keys. You can see an example on <a href="https://github.com/maciekrb/gcs-fuse-sample" rel="noreferrer">how to mount a GCS bucket on kubernetes here</a></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: gcsfuse-test spec: replicas: 1 template: metadata: labels: app: gcsfuse-test spec: containers: - name: gcsfuse-test image: gcr.io/some-repo/gcs-fuse:latest securityContext: privileged: true capabilities: add: - SYS_ADMIN lifecycle: postStart: exec: command: ["gcsfuse", "-o", "nonempty", "some-bucket", "/mnt/some-bucket"] preStop: exec: command: ["fusermount", "-u", "/mnt/some-bucket"] </code></pre> <p><a href="https://stackoverflow.com/questions/35966832/mount-google-storage-bucket-in-google-container">This Stack Overflow question might help too</a>.</p>
<p>I want to start monit process in docker.</p> <p>Since its getting daemonized, the container is getting completed once monit starts. What is the best way to run it as a foreground process?</p>
<p>From <a href="https://wiki.gentoo.org/wiki/Monit" rel="nofollow noreferrer">https://wiki.gentoo.org/wiki/Monit</a>:</p> <pre><code>Running monit in the foreground To run monit in the foreground and provide feedback on everything it is detecting, use the -Ivv option: root #monit -Ivv </code></pre>
<p>This is a pretty basic question that I cannot seem to find an answer to, but I cannot figure out how to set the concurrencyPolicy in a cronjob. I have tried variations of my current file config:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: job-newspaper spec: schedule: "* */3 * * *" jobTemplate: spec: template: spec: containers: - name: job-newspaper image: bdsdev.azurecr.io/job-newspaper:latest imagePullPolicy: Always resources: limits: cpu: "2048m" memory: "10G" requests: cpu: "512m" memory: "2G" command: ["spark-submit","/app/newspaper_job.py"] restartPolicy: OnFailure concurrencyPolicy: Forbid </code></pre> <p>When I run <code>kubectl create -f ./job.yaml</code> I get the following error:</p> <pre><code>error: error validating "./job.yaml": error validating data: ValidationError(CronJob.spec.jobTemplate.spec.template.spec): unknown field "concurrencyPolicy" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>I am probably either putting this property in the wrong place or calling it the wrong name, I just cannot find it in documentation. Thanks!</p>
<p>The property <code>concurrencyPolicy</code> is part of the CronJob spec, not the PodSpec. You can locally see the spec for a given object using <code>kubectl explain</code>, like</p> <pre><code>kubectl explain --api-version="batch/v1beta1" cronjobs.spec </code></pre> <p>There you can see the structure/spec of the CronJob object, which in your case should be</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: job-newspaper spec: schedule: "* */3 * * *" concurrencyPolicy: Forbid jobTemplate: spec: template: spec: containers: - name: job-newspaper image: bdsdev.azurecr.io/job-newspaper:latest imagePullPolicy: Always resources: limits: cpu: "2048m" memory: "10G" requests: cpu: "512m" memory: "2G" command: ["spark-submit","/app/newspaper_job.py"] restartPolicy: OnFailure </code></pre>
<p>The <strong>error</strong> I am getting after running <strong>kubectl cluster-info</strong></p> <p>Kubernetes master is running at <a href="https://xxx-xxx-aks-yyyy.hcp.westeurope.azmk8s.io:443" rel="nofollow noreferrer">https://xxx-xxx-aks-yyyy.hcp.westeurope.azmk8s.io:443</a></p> <p>To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. <strong>Unable to connect to the server: net/http: TLS handshake timeout</strong> Rifats-MacBook-Pro:~ rifaterdemsahin$ kubectl cluster-info</p>
<p>There seems to be an issue on the Azure side. Please refer to this similar <a href="https://github.com/Azure/AKS/issues/112" rel="nofollow noreferrer">issue</a>.</p> <p>I suggest you try again; if it still does not work, you can open a support ticket or give feedback to Azure.</p>
<p>I need to understand how to deal with common libraries that I have created which my application depends upon. When I create the jar for this app using maven it creates a package. But how do we maintain or configure the other common libraries which are listed in the pom.xml of this application?</p> <p>Should we also have maven as an image in the Dockerfile?</p> <p>Please explain in detail.</p> <p>My current progress is explained below: I have an application A which has other dependencies, like libraries B and C, which I have specified in pom.xml. When I run application A on my local system it uses the local repository that I have configured in the user settings for maven, so it works fine.</p> <p>So how do we maintain this in Kubernetes?</p>
<p>The images that you use to build the containers that run on Kubernetes should contain everything that's needed to run the application.</p> <p>When you create the JAR file for your application, this JAR should contain the dependencies. There are different ways to achieve this, using both Maven or Gradle. <a href="https://stackoverflow.com/questions/574594/how-can-i-create-an-executable-jar-with-dependencies-using-maven">This is an example using Maven</a> and its Apache Maven Assembly Plugin. <a href="http://www.baeldung.com/executable-jar-with-maven" rel="nofollow noreferrer">This is another good user guide on how to achieve it</a>.</p> <p>Then, you need to create a container image that can run that JAR file, something like</p> <pre><code>FROM openjdk:8-jdk-alpine EXPOSE 8080 WORKDIR /opt/app CMD ["java", "-jar", "app.jar"] COPY build/libs/app.jar /opt/app </code></pre> <p>Once this container image is published on a registry, whenever Kubernetes needs to schedule and create a container, it will just use that image: there is no need to re-compile the application and its dependencies.</p>
<h3>Setup</h3> <p>The pods of the deployment named frontend are exposed via a service named frontend.</p> <pre><code>NAME READY STATUS RESTARTS AGE po/frontend-b48b88789-dcxs6 1/1 Running 0 2h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/frontend 1 1 1 1 2h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/frontend ClusterIP 10.106.243.125 &lt;none&gt; 80/TCP 2h </code></pre> <h3>Problem</h3> <p>Querying via busybox the DNS name for the service frontend returns the services ip address. Querying via busybox the DNS name for the pod frontend-b48b88789-dcxs6 doesn't resolve.</p> <pre><code>/ # nslookup frontend Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local Name: frontend Address 1: 10.106.243.125 frontend.exam.svc.cluster.local / # nslookup frontend-b48b88789-dcxs6 Server: 10.96.0.10 Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local nslookup: can't resolve 'frontend-b48b88789-dcxs6' </code></pre> <hr> <h3>Question</h3> <p>What do I have to do to make a pod resolvable via the Kubernetes clusters DNS system?</p> <p><em>Sources I have been looking for an answer</em></p> <ul> <li><em><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/</a></em></li> <li><em><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></em></li> </ul>
<p>I think that if you really need this, then what you actually might be interested in is a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, which provides a somewhat static way of referencing the pods that are spun up by it. With a StatefulSet, your pods will have predictable names like <code>myapp-0</code>, <code>myapp-1</code> etc., and you will be able to resolve them as e.g. <code>myapp-0.mysvc</code> (where <code>mysvc</code> is the "governing" service for the StatefulSet).</p> <p>Predictable names allow for easier configuration of things that need to form a cluster, plus you get other gains like sequential scaling, mapping of persistent storage, or the ability to create services that always hit a given pod (as in pod number N) of the StatefulSet, meaning you are for example capable of creating a <code>myapp-0</code> service that always points to the <code>myapp-0.mysvc</code> pod.</p>
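<p>A minimal sketch of that pairing (all names are placeholders; on clusters older than 1.9 the StatefulSet apiVersion would be <code>apps/v1beta2</code> instead of <code>apps/v1</code>):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  clusterIP: None          # headless, so per-pod DNS records are created
  selector:
    app: myapp
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: mysvc       # the governing service above
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx
</code></pre> <p>From within the namespace, the pods are then resolvable as <code>myapp-0.mysvc</code>, <code>myapp-1.mysvc</code> and so on.</p>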
<p>I created a StatefulSet on GKE, and it provisioned a bunch of GCE disks that are attached to the pods that belong to that StatefulSet. Suppose I scale the StatefulSet to 0: the constituent pods are destroyed and the disks are released. When I scale back up, the disks are reattached and mounted inside the correct pods. </p> <p>My questions are:</p> <ul> <li>How does Kubernetes keep track of which GCE disk to reconnect to which StatefulSet pod?</li> <li>Suppose I want to restore a StatefulSet Pod's PV from a snapshot. How can I get Kubernetes to use the disk that was created from the snapshot, instead of old disk?</li> </ul>
<p>When you scale the StatefulSet to 0 replicas, the pods get destroyed but the persistent volumes and persistent volume claims are kept. The association with the GCE disk is written inside the PersistentVolume object. When you scale the StatefulSet up again, pods are assigned to the correct PVs and thus get the same volumes from GCE.</p> <p>In order to change the persistent volume to GCE disk association after a snapshot restore, you need to edit the PV object.</p>
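<p>A hedged sketch of the second part (disk, PV and claim names are placeholders): restore the snapshot to a new disk, then create a PV that points at that disk and pre-bind it to the claim the StatefulSet pod will use.</p> <pre><code>gcloud compute disks create restored-disk \
    --source-snapshot=my-snapshot --zone=us-central1-a
</code></pre> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: restored-disk      # the disk created from the snapshot
    fsType: ext4
  claimRef:                    # pre-bind to the StatefulSet pod's PVC
    namespace: default
    name: data-myapp-0
</code></pre> <p>The claim name follows the pattern <code>&lt;volumeClaimTemplate-name&gt;-&lt;statefulset-name&gt;-&lt;ordinal&gt;</code>, so you would delete the old PVC/PV for that ordinal first and let the recreated pod bind to the pre-bound volume.</p>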
<p>Suppose I have a RabbitMQ instance and a set of pods that pick messages from RabbitMQ and process them. How do I make Kubernetes increase the number of pods as the queue size increases?</p> <p>(I'm mentioning RabbitMQ, but that's just an example. Pick your favorite message queue software or load balancer if you wish.)</p>
<p>The top-level solution to this is quite straightforward:</p> <p>Set up a separate container that is connected to your queue and uses the Kubernetes API to scale the deployments.</p> <p>Some solutions to this problem already exist; they do not look actively maintained and production ready, but they might help (a rough sketch of rolling your own follows the list):</p> <ul> <li><a href="https://github.com/mbogus/kube-amqp-autoscale" rel="noreferrer">https://github.com/mbogus/kube-amqp-autoscale</a></li> <li><a href="https://github.com/mbogus/docker-kube-amqp-autoscale" rel="noreferrer">https://github.com/mbogus/docker-kube-amqp-autoscale</a></li> <li><a href="https://github.com/onfido/k8s-rabbit-pod-autoscaler" rel="noreferrer">https://github.com/onfido/k8s-rabbit-pod-autoscaler</a></li> </ul>
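<p>If you would rather roll something small yourself, the core loop is simple enough to sketch. This assumes the RabbitMQ management plugin is enabled, and all names, credentials and thresholds below are placeholders:</p> <pre><code>#!/bin/sh
# naive scaler: roughly one worker replica per 100 queued messages, capped at 10
while true; do
  messages=$(curl -s -u guest:guest \
    http://rabbitmq:15672/api/queues/%2f/work-queue | jq .messages)
  replicas=$(( messages / 100 + 1 ))
  [ "$replicas" -gt 10 ] && replicas=10
  kubectl scale deployment worker --replicas="$replicas"
  sleep 30
done
</code></pre> <p>Run in-cluster, the pod needs a service account that is allowed to scale the deployment; kubectl then picks up the in-cluster credentials automatically.</p>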
<p>Running Kubernetes on GKE.</p> <p>Installed the Nginx controller with the latest stable release using helm.</p> <p>Everything works well, except that adding the whitelist-source-range annotation results in me being completely locked out of my service.</p> <p>Ingress config</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: staging-ingress namespace: staging annotations: kubernetes.io/ingress.class: nginx ingress.kubernetes.io/whitelist-source-range: "x.x.x.x, y.y.y.y" spec: rules: - host: staging.com http: paths: - path: / backend: serviceName:staging-service servicePort: 80 </code></pre> <p>I connected to the controller pod and checked the nginx config and found this:</p> <pre><code># Deny for staging.com/ geo $the_real_ip $deny_5b3266e9d666401cb7ac676a73d8d5ae { default 1; x.x.x.x 0; y.y.y.y 0; } </code></pre> <p>It looks like it is locking me out instead of whitelisting these IPs. But it is also locking out all other addresses... I get a 403 when going to the staging.com host.</p>
<p>Yes. However, I figured it out by myself. Your service has to have <code>externalTrafficPolicy: Local</code> enabled. That means that the actual client IP is used instead of the internal cluster IP.</p> <p>To accomplish this, run <code>kubectl patch svc nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'</code></p>
<p>I'm trying to reach a DaemonSet listening on port 18081 via a service but unsucessfully so far.</p> <p>The pod that was started by the DaemonSet works correctly. I can port-forward to the pod and port 18081 and talk to exposed API on the port.</p> <p>The service for the DaemonSet is configured as follows:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: monerod-service spec: selector: name: monerod ports: - protocol: TCP port: 18081 </code></pre> <p>In the Kubernetes UI (kubectl proxy) the correct Pod is selected in the service, so the pod selectors seems to be fine.</p> <p>I can execute a ping on the pod that needs to connect to monerod-service and the correct IP is shown. But connection to the port via curl fails (same curl works in the port-forward test).</p> <p>What am I missing in the configuration. Is there a difference between DaemonSet/Deployment service creation?</p> <hr> <p>More playing around with Kubernetes</p> <p>I played around with the service and DaemonSet. I converted the DaemonSet to a "normal" Deployment, but the same behaviour is shown. So the behaviour has nothing to do with DaemonSets. It has to be something else with services/pods I do not understand.</p> <p>I created the service now with:</p> <pre><code>kubectl expose deployment monerod-deployment --type=ClusterIP </code></pre> <p>and this results in the following service:</p> <pre><code>{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "monerod-deployment", "namespace": "default", "labels": { "app": "monerod" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 18081, "targetPort": 18081 } ], "selector": { "app": "monerod" }, "clusterIP": "&lt;some-ip&gt;", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } } </code></pre> <p>That looks pretty good. That should expose port 18081 for other services only in the cluster.</p> <p>If I execute now a</p> <pre><code>curl -X POST http://monerod-deployment:18081/json_rpc ... </code></pre> <p>in the pod that should talk to the monerod service this results in</p> <pre><code>port 18081: Connection refused </code></pre> <p>In the monerod-deployment docker container the container is exposed (with EXPOSE 18081) and the the deployment has the following port definition:</p> <pre><code>"ports": [ { "containerPort": 18081, "protocol": "TCP" } ], </code></pre> <p>Doing a port-forward to the deployment with kubectl and executing the curl locally works perfectly fine. I do not understand, why the connection from the pod to the monerod-deployment cannot be established.</p>
<p>Found out what was wrong; it was nothing in Kubernetes.</p> <p>The monerod daemon only allows connections from localhost by default, for security reasons. Enabling external connections with '--confirm-external-bind' made it work.</p> <p>The link to <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#running-commands-in-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#running-commands-in-a-pod</a> from Baltazar was very helpful for finding out what is going on and eliminating all causes in Kubernetes one by one!</p>
<p>I need to implement logging for kubernetes ingress installation so all request will be logging with the following details:</p> <ul> <li>Request headers</li> <li>Request body</li> <li>Response headers</li> <li>Response body</li> </ul> <p>I understand that I can edit <code>log_format</code> of nginx using <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/configmap.md" rel="nofollow noreferrer">ConfigMap</a>, however, for logging response body, I need to use lua (something like <a href="https://gist.github.com/morhekil/1ff0e902ed4de2adcb7a" rel="nofollow noreferrer">this</a>). </p> <p>In this case, I should have possibility to add this lua code in every server section of nginx config. What is the most easy way to do it?</p> <p>Thank you!</p>
<p>I think I found the solution. I need to use a custom nginx template. In general, the steps should be as follows:</p> <ol> <li>Obtain the <code>nginx.tmpl</code> from the ingress-nginx controller that is located at <code>/etc/nginx/template/nginx.tmpl</code></li> <li>Edit it according to your needs</li> <li>Map it using volumes</li> </ol> <p>The detailed instructions are located <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/custom-template.md" rel="nofollow noreferrer">here</a>; a minimal sketch of step 3 follows below.</p>
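<p>For illustration, here is a rough sketch of step 3, assuming the controller runs as a Deployment named <code>nginx-ingress-controller</code> in the <code>ingress-nginx</code> namespace (these names are assumptions, adjust them to your installation). The edited template is first stored in a ConfigMap, e.g. with <code>kubectl create configmap nginx-template --from-file=nginx.tmpl --namespace ingress-nginx</code>, and then mounted over the controller's default template path:</p> <pre><code># snippet to add to the controller Deployment's pod spec
containers:
- name: nginx-ingress-controller
  # ...existing controller settings...
  volumeMounts:
  - name: nginx-template-volume
    mountPath: /etc/nginx/template
    readOnly: true
volumes:
- name: nginx-template-volume
  configMap:
    name: nginx-template
    items:
    - key: nginx.tmpl
      path: nginx.tmpl
</code></pre> <p>After the controller pods restart they render their configuration from the custom template, so any lua blocks you added end up in every generated server section.</p>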
<p>I am very new to Kubernetes so apologies for gaps in my understanding and possibly incorrect wording. I am developing on my local MacBook Pro, which is somewhat resource constrained. My actual payload is a database, which is already running in a Docker container, but obviously needs some sort of persistent storage. The individual containers also need to talk to each other over the network and some of them need a channel (port open) to the outside world. I would like to set up a single Kubernetes cluster for dev and testing purposes that I can later easily deploy to bare metal servers or a cloud vendor - Google and AWS.</p> <p>From reading so far it looks like I can, for example, use minikube and orchestrate that cluster on top of the VirtualBox that I am already running. How would that then map to an actual deployment in the cloud? What additional tools do I need to get it all running, especially with regards to persistent storage and network? Will it map easily to the cloud? What configuration management software would you recommend to maintain all that configuration?</p>
<p>A very short answer is that it's hard to do this properly.</p> <p>One of the best options I know of is <a href="https://github.com/linuxkit/kubernetes" rel="nofollow noreferrer">LinuxKit</a>, it allows you to build identical images that you can run on any of the popular cloud providers or in a data centre of your own, or desktop hypervisor. In fact, this is what Docker for Mac is based on.</p> <hr> <p>Disclaimer: I am one of the LinuxKit contributors.</p>
<p>I have a service which is long-running (in a <code>while 1</code> loop) and processes payloads via GCloud pub/sub, after which it writes the result to a DB.</p> <p>The service doesn't need to listen on any port.</p> <p><strong>What would the declarative YAML config look like for <code>Kind=Deployment</code>?</strong></p> <p>I understand <code>ClusterIP</code> is the default type, and the docs <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">go on to say</a> that a headless service just has to define <code>spec.clusterIP</code> as <code>None</code>.</p> <p>(A better practice would probably be to modify the worker to exit after a successful payload processing, and change the <code>Kind</code> to <code>Job</code>, but this is in the backlog)</p>
<p>What you're describing sounds more like a job or a deployment than a service. You can run a deployment (which creates a replicaset, which ensures a certain number of replicas are running) without creating a service. </p> <p>If your pod isn't exposing any network services for others to consume, there's very little reason to create a service. </p>
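<p>To make that concrete, a minimal sketch of such a Deployment, with no Service at all, could look like the following (the image name and labels are placeholders, not taken from the question):</p> <pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: pubsub-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pubsub-worker
  template:
    metadata:
      labels:
        app: pubsub-worker
    spec:
      containers:
      - name: worker
        image: gcr.io/my-project/pubsub-worker:latest
        # no ports are declared: the process only pulls from Pub/Sub
        # and writes to the DB, it never accepts inbound traffic
</code></pre> <p>Since nothing needs to reach the pod over the network, there is no Service, no ClusterIP and no headless service involved; the Deployment alone keeps the worker running.</p>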
<p>I have created a deployment for jenkins in Kubernetes. The pod is running fine, I've created a service to access jenkins on service-ip:8080 but it seems not to work. When I create an <code>ingress</code> above the service I can access it using the public ip.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: jenkins-ui namespace: jenkins spec: type: NodePort selector: app: jenkins ports: - protocol: TCP port: 8080 targetPort: 8080 name: ui </code></pre> <p>I created my service as described above:</p> <pre><code>$ kubectl get svc --namespace=jenkins NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE jenkins-ui NodePort 10.47.xx.xx &lt;none&gt; 8080:30960/TCP 1d </code></pre> <p>I tried to access: <code>10.47.xx.xx:8080</code> but I was not able to access the jenkins UI. What am I doing wrong? I also tried <code>10.47.xx.xx:30960</code></p> <p>I want to access my jenkins UI using a service but I want to keep it private in my cluster. (ingress makes it public).</p> <p>UPDATE:</p> <pre><code>$ kubectl describe svc jenkins-ui --namespace jenkins Name: jenkins-ui Namespace: jenkins Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=jenkins Type: NodePort IP: 10.47.xx.xx Port: ui 8080/TCP TargetPort: 8080/TCP NodePort: ui 30960/TCP Endpoints: 10.44.10.xx:8080 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>accessing the pod on 10.44.10.xx:8080 does not work too.</p>
<p>If I understand correctly, you want any container running in your cluster to be able to access your jenkins service, but you don't want your jenkins service to be accessible outside your cluster to something like your browser?</p> <p>In this case:</p> <pre><code>curl http://jenkins-ui.jenkins:8080 curl http://10.47.10.xx:8080 </code></pre> <p>will work perfectly fine from <strong><em>inside any container in your kubernetes cluster</em></strong> (the service is in the <code>jenkins</code> namespace, hence the <code>jenkins-ui.jenkins</code> DNS name).</p> <p>Also, you cannot access <code>10.47.10.xx:8080</code> from outside your cluster because that IP is only valid/available inside your kubernetes cluster.</p> <p>If you want to access it from outside the cluster, an ingress controller or connecting to <code>http://&lt;node-ip&gt;:30960</code> is the only way to connect to the <code>jenkins-ui</code> k8s service and thus the pod behind it.</p> <p><strong>EDIT:</strong> Use kubectl port-forward</p> <p>In development mode, if you want to access a container running internally, you can use <code>kubectl port-forward</code>:</p> <pre><code>kubectl port-forward &lt;jenkins-ui-pod&gt; 9090:8080 </code></pre> <p>This way, <code>http://localhost:9090</code> will show you the jenkins-ui screen because you have kubectl access.</p> <p><code>kubectl port-forward</code> doesn't work for services yet: <a href="https://github.com/kubernetes/kubernetes/issues/15180" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/15180</a></p>
<p>I've been having a really frustrating issue where Kubernetes services randomly stop being available on their cluster IPs after around a few hours of deployment. They almost seem to be ageing.</p> <p>My pods are having <code>hostNetwork: true</code> and <code>dnsPolicy: ClusterFirstWithHostNet</code>. Here is where things get interesting - I have two namespaces (staging and production) on the afflicted cluster. On another identical cluster with just one namespace, this issue hasn't seem to have appeared yet!</p> <p>On trying to look at the <code>kube-proxy</code> logs, here is what I see:</p> <pre><code>admin@gke ~ $ tail /var/log/kube-proxy.log E0115 12:13:01.669222 5 proxier.go:1372] can't open "nodePort for staging/foo:foo-sip-1" (:31765/tcp), skipping this nodePort: listen tcp :31765: bind: address already in use E0115 12:13:01.671353 5 proxier.go:1372] can't open "nodePort for staging/foo:http-api" (:30932/tcp), skipping this nodePort: listen tcp :30932: bind: address already in use E0115 12:13:01.671548 5 proxier.go:1372] can't open "nodePort for staging/our-lb:our-lb-http" (:32477/tcp), skipping this nodePort: listen tcp :32477: bind: address alrea dy in use E0115 12:13:01.671641 5 proxier.go:1372] can't open "nodePort for staging/foo:foo-sip-0" (:30130/tcp), skipping this nodePort: listen tcp :30130: bind: address already in use E0115 12:13:01.671710 5 proxier.go:1372] can't open "nodePort for default/foo:foo-sip-0" (:30132/tcp), skipping this nodePort: listen tcp :30132: bind: address already in use E0115 12:13:02.510177 5 proxier.go:1372] can't open "nodePort for default/our-lb:our-lb-http" (:31613/tcp), skipping this nodePort: listen tcp :31613: bind: address alrea dy in use E0115 12:13:06.577412 5 server.go:661] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use E0115 12:13:11.578446 5 server.go:661] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use E0115 12:13:16.580441 5 server.go:661] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use E0115 12:13:21.583691 5 server.go:661] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use </code></pre> <p>I've now deleted one namespace from the afflicted cluster and the remaining one seems to have fixed itself; but I am curious about why Kubernetes didn't warn me at the time of resource creation, and if it <em>wasn't</em> competing for resources, then why does it reassign them later on in a way that causes this issue? This can't be a DNS cache issue, because <code>getent hosts</code> shows me the right cluster IP for the service - that IP just isn't reachable! It really seems to me to be a <strong>bug in the Kubernetes networking setup.</strong></p> <p>Should I be creating an issue, or is there something obvious that I'm doing incorrectly?</p>
<p>It sounds like you have pods with <code>hostNetwork: true</code> and use services with <code>type: NodePort</code>, with a fixed node port number set to be the same as the one your pod will be using.</p> <p>Generally, unless you have a very compelling use-case, you should avoid <code>hostNetwork: true</code>. It's mostly for use with legacy applications or daemons that require access to the host network. If you do need to use a service along with your pods that are on the host network, you should use a service with <code>type: ClusterIP</code>.</p>
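<p>As a hedged sketch (the labels and ports here are placeholders, not taken from the cluster above), a ClusterIP service in front of host-network pods avoids reserving fixed host ports for NodePorts entirely:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: foo
  namespace: staging
spec:
  type: ClusterIP        # the default; no node port is allocated
  selector:
    app: foo             # must match the labels on your pods
  ports:
  - name: http-api
    port: 8080
    targetPort: 8080     # the port the host-network pod listens on
</code></pre> <p>Other workloads in the cluster then reach the pods through the service's cluster IP or DNS name instead of a fixed port on every node.</p>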
<p>I need a solution for the following:</p> <p>I have 2 node pools in Google Kubernetes Engine; the first is preemptible and autoscaling, the second is only autoscaling.</p> <p>Jobs should be started on the first one (with preemptible VMs), but when no resources are available on the first pool, Jobs should be started on the second one.</p> <p>How can I realize that, maybe with Taints and Tolerations?</p>
<p>I don't think you can get exactly what you want with the Cluster Autoscaler but I'll hopefully give you a couple options and pointers to further explore.</p> <ul> <li>The Cluster Autoscaler has the notion of <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders" rel="noreferrer">Expanders</a> which can help determine which node group to scale up when a scaling event happens. The <code>price</code> expander seems to be close to what you want, but based on the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/pricing.md" rel="noreferrer">description</a> of it, it doesn't look like it has support for preemptible VMs yet. You could explore that further, and possibly <a href="https://github.com/kubernetes/autoscaler/issues" rel="noreferrer">submit a feature request</a> to add support for preemptible node pools.</li> <li><p>When choosing a mixture of preemptible and non-preemptible nodes, whenever there is a stock out on GCP and preemptible nodes are not available, it's very likely that non-preemptible nodes will <strong>also</strong> not be available. In that case you may find yourself with a small number of non-preemptible nodes in the cluster and without the ability to create new ones.</p> <p>It may be a better idea to have a fixed minimum size of non-preemptible nodes, and auto-scale a preemptible node pool on top of that using the Cluster Autoscaler.</p></li> </ul>
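<p>If you go with the second option, a rough sketch of the two pools could look like this (cluster, zone, pool names and sizes are made-up values for illustration):</p> <pre><code># a fixed pool of regular VMs that is always there as a fallback
gcloud container node-pools create standard-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --num-nodes=2

# an autoscaled pool of preemptible VMs that takes most of the jobs
gcloud container node-pools create preemptible-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --preemptible --enable-autoscaling \
    --num-nodes=1 --min-nodes=1 --max-nodes=10
</code></pre> <p>You can then steer the jobs towards the preemptible pool with a preferred node affinity on the <code>cloud.google.com/gke-preemptible: "true"</code> label that GKE puts on preemptible nodes, while still allowing them to land on the standard pool when needed.</p>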
<p>We are planning to use Kube for Postgres deployments. Our applications will be microservices with separated schemas (or logical databases). For security's sake, we'd like to have a separate user for each schema/logical_db. </p> <p>I suppose that the db/schema&amp;user should be created by Kube, so the application itself does not need to have access to the DB admin account.</p> <p>In <a href="https://github.com/sorintlab/stolon" rel="nofollow noreferrer">Stolon</a> it seems there is only the possibility to create a single user and a single database, and this seems to be the case for other HA Postgres charts as well. </p> <p>Question: What is the preferred way in Microservices in Kube to create DB users?</p>
<p>When it comes to creating users, as you said, most charts and containers will have environment variables for creating a user at boot time. However, most of them do not consider the possibility of creating multiple users at boot time. </p> <p>What other containers do is, as you said, have the root credentials in k8s secrets so they access the database and create the proper schemas and users. This does not necessarily need to be done in the application logic but can be done, for example, using an init container that sets up the proper database for your application to run.</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers</a></p> <p>This way you would have a pod with two containers: one for your application and an init container for setting up the DB.</p>
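<p>A hedged sketch of that idea, with all names (images, secrets, script paths) being placeholders rather than anything from the question:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-microservice
spec:
  initContainers:
  - name: init-db
    image: postgres:10
    # creates the schema/user for this service using the admin credentials
    command: ["psql", "-h", "my-postgres", "-U", "postgres",
              "-f", "/scripts/create-schema-and-user.sql"]
    env:
    - name: PGPASSWORD
      valueFrom:
        secretKeyRef:
          name: postgres-admin
          key: password
    volumeMounts:
    - name: init-scripts
      mountPath: /scripts
  containers:
  - name: app
    image: my-registry/my-microservice:latest
  volumes:
  - name: init-scripts
    configMap:
      name: db-init-scripts
</code></pre> <p>The application container only ever sees its own, less privileged credentials, while the admin secret stays confined to the init container.</p>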
<p>I want to link my selenium/hub container to my chrome and firefox node containers in a POD.</p> <p>In docker, it was easily defined in the docker compose yaml file. I want to know how to achieve this linking in kubernetes.</p> <p>This is what appears in the log: <a href="https://i.stack.imgur.com/JJnKZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JJnKZ.png" alt="logs"></a><br> This is the error image: <a href="https://i.stack.imgur.com/DFjNC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DFjNC.png" alt="error"></a></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mytestingpod spec: containers: - name: seleniumhub image: selenium/hub ports: - containerPort: 4444 hostPort: 4444 - name: chromenode image: selenium/node-chrome-debug ports: - containerPort: 5901 links: seleniumhub:hub - name: firefoxnode image: selenium/node-firefox-debug ports: - containerPort: 5902 links: seleniumhub:hub </code></pre>
<p>You don't need to link them. The way Kubernetes works, <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication" rel="nofollow noreferrer">all the containers in the same Pod are already on the same networking namespace</a>, meaning that they can just talk to each other through <code>localhost</code> and the right port.</p> <blockquote> <p>The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports. Each pod has an IP address in a flat shared networking space that has full communication with other physical computers and pods across the network.</p> </blockquote> <p>If you want to access the <code>chromenode</code> container from the <code>seleniumhub</code> container, just send a request to localhost:5901.</p> <p>If you want to access the <code>seleniumhub</code> container from the <code>chromenode</code> container, just send a request to localhost:4444.</p>
<p>When a Kubernetes service is exposed via an <code>Ingress</code> object, is the load balancer "physically" deployed in the cluster, i.e. as some <code>pod</code> controller inside the cluster nodes, or is it just another managed service provisioned by the given cloud provider?</p> <p>Are there cloud provider specific differences? Is the above question true for Google Kubernetes Engine and Amazon Web Services?</p>
<p>By default, a kubernetes cluster has no <code>IngressController</code> at all. This means that you need to deploy one yourself if you are on premise.</p> <p>Some cloud providers do provide a default ingress controller in their kubernetes offering, though, and this is the case for GKE. In their case the ingress controller is provided "as a service", but I am unsure about where exactly it is deployed.</p> <p>Talking about AWS, if you deploy a cluster using kops you're on your own (you need to deploy an ingress controller yourself), but different deployment options on AWS could include an ingress controller deployment.</p>
<p>I´m setting up kubernetes on GKE as described in Kelsey Hightowers <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/</a></p> <p>Everything works fine except for setting up the DNS ClusterAddon <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/12-dns-addon.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/12-dns-addon.md</a></p> <p>When I start kube-dns like that:</p> <blockquote> <p>kubectl create -f <a href="https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml" rel="nofollow noreferrer">https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml</a></p> </blockquote> <p>I do get the expected output :</p> <pre><code> serviceaccount "kube-dns" created configmap "kube-dns" created service "kube-dns" created deployment "kube-dns" created </code></pre> <p>But checking state of the pods and the output of the kube-dns container I see errors:</p> <pre><code>kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE kube-dns-6c857864fb-cpvvr 2/3 CrashLoopBackOff 63 2h </code></pre> <p>and in the container log:</p> <pre><code>I0115 13:22:35.272492 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0115 13:22:35.772476 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0115 13:22:36.272406 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0115 13:22:36.772356 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0115 13:22:37.272386 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... E0115 13:22:37.273178 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.32.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.32.0.1:443: i/o timeout E0115 13:22:37.273340 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.32.0.1:443: i/o timeout </code></pre> <p>The URL <a href="https://10.32.0.1:443" rel="nofollow noreferrer">https://10.32.0.1:443</a> in the container log seems to be wrong, but I cannot find any place where I can specify a different URL and neither the place where this URL is set in the config file <a href="https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml" rel="nofollow noreferrer">https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml</a></p>
<p>The URL comes from internal kubernetes information (the service account token) and it should be fine (it should point to the first IP in the range assigned to the service network, and that should be the <code>kubernetes.default</code> service). What you need to check is whether your pod-to-pod networking and kube-proxy (it implements service ClusterIPs) work as expected.</p> <p>If you do <code>kubectl get svc kubernetes -o yaml</code> you should see a kubernetes service with that <code>10.32.0.1</code> IP, so confirm that as well (the apiserver registers its own IP for this svc, so doing <code>kubectl get endpoints kubernetes</code> should give you the API IP/PORT).</p>
<p>When a Kubernetes service is exposed via an <code>Ingress</code> object, is the load balancer "physically" deployed in the cluster, i.e. as some <code>pod</code> controller inside the cluster nodes, or is it just another managed service provisioned by the given cloud provider?</p> <p>Are there cloud provider specific differences? Is the above question true for Google Kubernetes Engine and Amazon Web Services?</p>
<p>I will answer with respect to Google Kubernetes Engine. </p> <p>Yes, every time you deploy a new ingress resource, a load balancer is created, which you can view from the section:</p> <pre><code>GCP Console --&gt; Network services --&gt; LoadBalancing </code></pre> <p>Clicking on the respective load balancer id gives you all the details, for example the external IP, the backend service, etc.</p>
<p>I am using Google Kubernetes Engine and would like my <code>StatefulSet</code> to use my previously created disk <code>my-app-disk</code> instead of dynamically creating new persistent disk for each replica in the set. My config is as follows:</p> <p><code>PersistentVolume</code> yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: my-app-data labels: app: my-app spec: capacity: storage: 60Gi accessModes: - ReadWriteOnce gcePersistentDisk: pdName: my-app-disk fsType: ext4 </code></pre> <p><code>PersistentVolumeClaim</code> yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-app-data labels: app: my-app spec: accessModes: - ReadWriteOnce resources: requests: storage: 60Gi selector: matchLabels: app: my-app </code></pre> <p><code>StatefulSet</code> yaml:</p> <pre><code>apiVersion: apps/v1beta2 kind: StatefulSet metadata: name: my-stateful-set spec: serviceName: my-stateful-set replicas: 1 selector: matchLabels: app: my-app updateStrategy: type: RollingUpdate template: metadata: labels: app: my-app spec: containers: - name: my-app image: gcr.io/projectname/my-app:0.0.3 ports: - containerPort: 80 name: http volumeMounts: - name: my-app-data mountPath: /data volumeClaimTemplates: - metadata: name: my-app-data spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 50Gi </code></pre>
<p><code>StatefulSet</code> will create it's own <code>PersistentVolumeClaim</code> for each pod so you don't have to create one yourself. A <code>PersistentVolume</code> and a <code>PersistentVolumeClaim</code> will bind exclusively <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding" rel="noreferrer">one to one</a>. Your PVC is binding to your volume so any PVCs created by the <code>StatefulSet</code> can't bind to your volume so it won't be used.</p> <p>In your case your <code>PersistentVolume</code> and the <code>StatefulSet</code> below should do the trick. Make sure to delete the <code>PersistentVolumeClaim</code> you created so that it's not bound to your <code>PersistentVolume</code>. Also, make sure the storage class name is set properly below on your PV and in <code>volumeClaimTemplates</code> on your <code>StatefulSet</code> below or the PVC made by the <code>StatefulSet</code> may not bind to your volume.</p> <p><code>PersistentVolume.yaml</code>:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: my-app-data labels: app: my-app spec: capacity: storage: 60Gi storageClassName: standard accessModes: - ReadWriteOnce gcePersistentDisk: pdName: my-app-disk fsType: ext4 </code></pre> <p><code>StatefulSet.yaml</code>:</p> <pre><code>apiVersion: apps/v1beta2 kind: StatefulSet metadata: name: my-stateful-set spec: serviceName: my-stateful-set replicas: 1 selector: matchLabels: app: my-app updateStrategy: type: RollingUpdate template: metadata: labels: app: my-app spec: containers: - name: my-app image: gcr.io/projectname/my-app:0.0.3 ports: - containerPort: 80 name: http volumeMounts: - name: my-app-data mountPath: /data volumeClaimTemplates: - metadata: name: my-app-data spec: selector: matchLabels: app: my-app storageClassName: standard accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 50Gi </code></pre>
<p>Minikube version v0.24.1 </p> <p>kubernetes version 1.8.0</p> <p>The problem that I am facing is that I have several <code>statefulsets</code> created in minikube each with one pod. </p> <p>Sometimes when I start up minikube my pods will start up initially then keep being restarted by kubernetes. They will go from the creating container state, to running, to terminating over and over. </p> <p>Now I've seen kubernetes kill and restart things before if kubernetes detects disk pressure, memory pressure, or some other condition like that, but that's not the case here as these flags are not raised and the only message in the pod's event log is "Need to kill pod". </p> <p>What's most confusing is that this issue doesn't happen all the time, and I'm not sure how to trigger it. My minikube setup will work for a week or more without this happening then one day I'll start minikube up and the pods for my <code>statefulsets</code> just keep restarting. So far the only workaround I've found is to delete my minikube instance and set it up again from scratch, but obviously this is not ideal. </p> <p>Seen here is a sample of one of the <code>statefulsets</code> whose pod keeps getting restarted. Seen in the logs kubernetes is deleting the pod and starting it again. This happens repeatedly. I'm unable to figure out why it keeps doing that and why it only gets into this state sometimes.</p> <pre><code>$ kubectl describe statefulsets mongo --namespace=storage Name: mongo Namespace: storage CreationTimestamp: Mon, 08 Jan 2018 16:11:39 -0600 Selector: environment=test,role=mongo Labels: name=mongo Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1beta1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"name":"mongo"},"name":"mongo","namespace":"storage"},"... Replicas: 1 desired | 1 total Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: environment=test role=mongo Containers: mongo: Image: mongo:3.4.10-jessie Port: 27017/TCP Command: mongod --replSet rs0 --smallfiles --noprealloc Environment: &lt;none&gt; Mounts: /data/db from mongo-persistent-storage (rw) mongo-sidecar: Image: cvallance/mongo-k8s-sidecar Port: &lt;none&gt; Environment: MONGO_SIDECAR_POD_LABELS: role=mongo,environment=test KUBERNETES_MONGO_SERVICE_NAME: mongo Mounts: &lt;none&gt; Volumes: &lt;none&gt; Volume Claims: Name: mongo-persistent-storage StorageClass: Labels: &lt;none&gt; Annotations: volume.alpha.kubernetes.io/storage-class=default Capacity: 5Gi Access Modes: [ReadWriteOnce] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulDelete 23m (x46 over 1h) statefulset delete Pod mongo-0 in StatefulSet mongo successful Normal SuccessfulCreate 3m (x62 over 1h) statefulset create Pod mongo-0 in StatefulSet mongo successful </code></pre>
<p>After some more digging there seems to have been a bug which can affect statefulsets that creates multiple controllers for the same statefulset:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/56355" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/56355</a></p> <p>This issue seems to have been fixed and the fix seems to have been backported to version 1.8 of kubernetes and included in version 1.9, but minikube doesn't yet have the fixed version. A workaround if your system enters this state is to list the controller revisions like so:</p> <pre><code>$ kubectl get controllerrevisions --namespace=storage NAME CONTROLLER REVISION AGE mongo-68bd5cbcc6 StatefulSet/mongo 1 19h mongo-68bd5cbcc7 StatefulSet/mongo 1 7d </code></pre> <p>and delete the duplicate controllers for each statefulset.</p> <pre><code>$ kubectl delete controllerrevisions mongo-68bd5cbcc6 --namespace=storage </code></pre> <p>or to simply use version 1.9 of kubernetes or above that includes this bug fix.</p>
<p>Similar question on SO has 10 answers as 'force delete the pod' -_-</p> <p>Of course this is unacceptable as it causes problems on the cluster - too many pods are stuck on 'terminating', and many times if you try to delete a random pod it also gets stuck. It happens fairly randomly.</p> <p>So how to determine, first why are 'termination' commands issued and second how to find the culprit behind the freezes.</p> <p>Is it the CNI? Core components like kubelet, controllermanager?</p> <p>Logs don't show anything useful, nor does 'describe pod'.</p>
<p>If your pods get terminated with apparently no cause, it could be that:</p> <ul> <li>the node is under stress (memory, cpu)</li> <li>the liveness condition is not respected</li> </ul> <p>For these reasons, the kubelet kills or evicts some pods; the commands below can help narrow it down.</p> <p>How to determine the precise cause? If you found the 'logs' and 'describe' commands useless, a monitoring system could be useful (e.g. influxdb+grafana: <a href="https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb" rel="nofollow noreferrer">https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb</a>).</p>
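<p>A few basic checks, independent of any monitoring stack, can already reveal node pressure or probe failures (these are generic commands, not specific to the cluster in the question):</p> <pre><code># node conditions such as MemoryPressure / DiskPressure
kubectl describe nodes | grep -A 6 Conditions

# recent cluster events (evictions, failed probes, kills), sorted by time
kubectl get events --all-namespaces --sort-by='.lastTimestamp'

# current resource usage, if Heapster / metrics are available
kubectl top nodes
kubectl top pods --all-namespaces
</code></pre> <p>If the events show evictions, the node conditions usually tell you which resource ran out; if they show failed liveness probes, the probe settings or the application's health endpoint are the place to look.</p>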
<p>How do I access the canonical kubernetes dashboard from an external network/IP? Is there a way to expose the dashboard service externally, rather than accessing it from a localhost browser on the canonical k8s cluster node?</p>
<p><a href="https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above" rel="noreferrer">The documentation has a guide</a> on how to do it.</p> <h2>Using kubectl proxy</h2> <p><code>kubectl proxy</code> creates proxy server between your machine and Kubernetes API server. By default it is only accessible locally (from the machine that started it). Start local proxy server:</p> <pre><code>$ kubectl proxy Starting to serve on 127.0.0.1:8001 </code></pre> <p>Once proxy server is started you should be able to access Dashboard from your browser.</p> <p>To access HTTPS endpoint of dashboard go to: <code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</code></p> <p>NOTE: Dashboard should not be exposed publicly using kubectl proxy command as it only allows HTTP connection. For domains other than localhost and 127.0.0.1 it will not be possible to sign in. Nothing will happen after clicking Sign in button on login page.</p> <h2>Using NodePort</h2> <p>This way of accessing Dashboard is only recommended for development environments in a single node setup. Edit <code>kubernetes-dashboard</code> service.</p> <pre><code>$ kubectl -n kube-system edit service kubernetes-dashboard </code></pre> <p>You should see yaml representation of the service. Change type: ClusterIP to type: NodePort and save file. If it's already changed go to next step.</p> <pre><code># Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # apiVersion: v1 ... name: kubernetes-dashboard namespace: kube-system resourceVersion: &quot;343478&quot; selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard-head uid: 8e48f478-993d-11e7-87e0-901b0e532516 spec: clusterIP: 10.100.124.90 externalTrafficPolicy: Cluster ports: - port: 443 protocol: TCP targetPort: 8443 selector: k8s-app: kubernetes-dashboard sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <p>Next we need to check port on which Dashboard was exposed.</p> <pre><code>$ kubectl -n kube-system get service kubernetes-dashboard NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard 10.100.124.90 &lt;nodes&gt; 443:31707/TCP 21h </code></pre> <p>Dashboard has been exposed on port 31707 (HTTPS). Now you can access it from your browser at: <code>https://&lt;master-ip&gt;:31707</code>. <code>master-ip</code> can be found by executing <code>kubectl cluster-info</code>. Usually it is either 127.0.0.1 or IP of your machine, assuming that your cluster is running directly on the machine, on which these commands are executed.</p> <p>In case you are trying to expose Dashboard using NodePort on a multi-node cluster, then you have to find out IP of the node on which Dashboard is running to access it. Instead of accessing <code>https://&lt;master-ip&gt;:&lt;nodePort&gt;</code> you should access <code>https://&lt;node-ip&gt;:&lt;nodePort&gt;</code>.</p> <h2>API Server</h2> <p>In case Kubernetes API server is exposed and accessible from outside you can directly access dashboard at: <code>https://&lt;master-ip&gt;:&lt;apiserver-port&gt;/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</code></p> <h2>Ingress</h2> <p>Dashboard can be also exposed using Ingress resource. 
For example</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kubernetes-dashboard-ingress namespace: kube-system spec: rules: - host: kubernetes http: paths: - path: /ui backend: serviceName: kubernetes-dashboard servicePort: 80 </code></pre>
<p>I am attempting to install postgres via helm using the latest <a href="https://github.com/kubernetes/charts/tree/master/stable/postgresql" rel="noreferrer">stable</a> and it isn't installing the persistent volume properly. I am installing it in Minikube and for some reason it doesn't appear to be able to hostMount properly.</p> <p><strong>Error (on the deployment, pod, and replica set)</strong></p> <blockquote> <p>PersistentVolumeClaim is not bound: "postgres-postgresql" Error: lstat /tmp/hostpath-provisioner/pvc-c713429d-e2a3-11e7-9ca9-080027231d54: no such file or directory Error syncing pod</p> </blockquote> <p>When I look at the persistent volume it appears to be running properly. In case it helps here is my persistent volume yaml:</p> <pre><code>{ "kind": "PersistentVolume", "apiVersion": "v1", "metadata": { "name": "pvc-c713429d-e2a3-11e7-9ca9-080027231d54", "selfLink": "/api/v1/persistentvolumes/pvc-c713429d-e2a3-11e7-9ca9-080027231d54", "uid": "c71850e1-e2a3-11e7-9ca9-080027231d54", "resourceVersion": "396568", "creationTimestamp": "2017-12-16T20:57:50Z", "annotations": { "hostPathProvisionerIdentity": "8979806c-dfba-11e7-862f-080027231d54", "pv.kubernetes.io/provisioned-by": "k8s.io/minikube-hostpath" } }, "spec": { "capacity": { "storage": "8Gi" }, "hostPath": { "path": "/tmp/hostpath-provisioner/pvc-c713429d-e2a3-11e7-9ca9-080027231d54", "type": "" }, "accessModes": [ "ReadWriteOnce" ], "claimRef": { "kind": "PersistentVolumeClaim", "namespace": "default", "name": "postgres-postgresql", "uid": "c713429d-e2a3-11e7-9ca9-080027231d54", "apiVersion": "v1", "resourceVersion": "396550" }, "persistentVolumeReclaimPolicy": "Delete", "storageClassName": "standard" }, "status": { "phase": "Bound" } } </code></pre> <p>Persistent Volume Claim Yaml:</p> <pre><code>{ "kind": "PersistentVolumeClaim", "apiVersion": "v1", "metadata": { "name": "postgres-postgresql", "namespace": "default", "selfLink": "/api/v1/namespaces/default/persistentvolumeclaims/postgres-postgresql", "uid": "c713429d-e2a3-11e7-9ca9-080027231d54", "resourceVersion": "396588", "creationTimestamp": "2017-12-16T20:57:50Z", "labels": { "app": "postgres-postgresql", "chart": "postgresql-0.8.3", "heritage": "Tiller", "release": "postgres" }, "annotations": { "control-plane.alpha.kubernetes.io/leader": "{\"holderIdentity\":\"897980a2-dfba-11e7-862f-080027231d54\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2017-12-16T20:57:50Z\",\"renewTime\":\"2017-12-16T20:57:52Z\",\"leaderTransitions\":0}", "pv.kubernetes.io/bind-completed": "yes", "pv.kubernetes.io/bound-by-controller": "yes", "volume.beta.kubernetes.io/storage-provisioner": "k8s.io/minikube-hostpath" } }, "spec": { "accessModes": [ "ReadWriteOnce" ], "resources": { "requests": { "storage": "8Gi" } }, "volumeName": "pvc-c713429d-e2a3-11e7-9ca9-080027231d54", "storageClassName": "standard" }, "status": { "phase": "Bound", "accessModes": [ "ReadWriteOnce" ], "capacity": { "storage": "8Gi" } } } </code></pre> <p>Any assistance would be appreciated.</p>
<p>You may be running into this issue: <a href="https://github.com/kubernetes/minikube/issues/2256" rel="noreferrer">https://github.com/kubernetes/minikube/issues/2256</a></p> <p>The problem is there's a bug in the hostpath volume provisioner that encounters an error when the 'subPath' field is present in the Deployment resource (event if the field has an empty value).</p> <p>Here's a workaround that worked for me - unpack the postgresql chart and comment out the following line in deployment.yaml:</p> <pre><code> # subPath: {{ .Values.persistence.subPath }} </code></pre> <p>Then redeploy the modified chart. If you're reliant on the 'subPath' field, this workaround won't work for you.</p> <p>Note: This issue is also present on Kubernetes on Docker-for-Mac (which is where I've encountered it).</p>
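<p>In practice the workaround can be done with a couple of commands (Helm 2 syntax; the release name is just an example):</p> <pre><code># fetch and unpack the chart locally
helm fetch stable/postgresql --untar

# edit postgresql/templates/deployment.yaml and comment out the subPath line,
# then install from the local directory
helm install ./postgresql --name postgres
</code></pre> <p>Once the upstream fix for the hostpath provisioner lands you can switch back to installing the chart straight from the repository.</p>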
<p>Can I create two pods where containers are running on same ports in one kubernetes cluster? considering that will create a separate service for both.</p> <p>Something like this : </p> <p>-- Deployment 1</p> <pre><code>kind: Deployment spec: containers: - name: &lt;name&gt; image: &lt;image&gt; imagePullPolicy: Always ports: - containerPort: 8080 </code></pre> <p>-- Service 1</p> <pre><code>kind: Service spec: type: LoadBalancer ports: - port: 8081 targetPort: 8080 </code></pre> <p>-- Deployment 2</p> <pre><code>kind: Deployment spec: containers: - name: &lt;name&gt; image: &lt;image&gt; imagePullPolicy: Always ports: - containerPort: 8080 </code></pre> <p>-- Service 2</p> <pre><code>kind: Service spec: type: LoadBalancer ports: - port: 8082 targetPort: 8080 </code></pre> <p>but this approach is not working.</p>
<p>Sure you can. Every <code>POD</code> (which is the basic workload unit in k8s) is isolated from the others in terms of networking (as long as you don't mess with advanced networking options) so you can have as many pods as you want that bind the same port. You can't have two <code>containers</code> inside the same <code>POD</code> that bind the same port, though.</p>
<p>I'm trying to use the Jenkins/Kubernetes plugin to orchestrate docker slaves with Jenkins. </p> <p>I'm using this plugin: <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">https://github.com/jenkinsci/kubernetes-plugin</a></p> <p>My problem is that all the slaves are offline so the job can't execute:</p> <p><a href="https://i.stack.imgur.com/2dLZc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2dLZc.png" alt="Slave status"></a></p> <p><a href="https://i.stack.imgur.com/Go4Sw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Go4Sw.png" alt="enter image description here"></a></p> <p>I have tried this on my local box using minikube, and on a K8 Cluster hosted by our ops group. I've tried both Jenkins 1.9 and Jenkins 2. I always get the same result. The screenshots are from Jenkins 1.642.4, K8 v1.2.0</p> <p>Here is my configuration... note that when I click 'test connection' I get a success. Also note I didn't need any credentials (this is the only difference I can see vs the documented example).</p> <p><a href="https://i.stack.imgur.com/9gcxQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9gcxQ.png" alt="Jenkins System Configuration"></a></p> <p>The Jenkins log shows the following over and over:</p> <pre><code> Waiting for slave to connect (11/100): docker-6b55f1b7fafce Jul 20, 2016 5:01:06 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call Waiting for slave to connect (12/100): docker-6b55f1b7fafce Jul 20, 2016 5:01:07 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call Waiting for slave to connect (13/100): docker-6b55f1b7fafce Jul 20, 2016 5:01:08 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud$ProvisioningCallback call </code></pre> <p>When I run <code>kubectl get events</code> I see this:</p> <pre><code>24s 24s 1 docker-6b3c2ff27dad3 Pod Normal Scheduled {default-scheduler } Successfully assigned docker-6b3c2ff27dad3 to 96.xxx.xx.159 24s 23s 2 docker-6b3c2ff27dad3 Pod Warning MissingClusterDNS {kubelet 96.xxx.xx.159} kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy. 
23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Pulled {kubelet 96.xxx.xx.159} Container image "jenkinsci/jnlp-slave" already present on machine 23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Created {kubelet 96.xxx.xx.159} Created container with docker id 82fcf1bd0328 23s 23s 1 docker-6b3c2ff27dad3 Pod spec.containers{slave} Normal Started {kubelet 96.xxx.xx.159} Started container with docker id 82fcf1bd0328 </code></pre> <p>Any ideas?</p> <p>UPDATE: more log info as suggested by csanchez</p> <pre><code> ➜ docker git:(master) ✗ kubectl get pods --namespace default -o wide NAME READY STATUS RESTARTS AGE NODE docker-6bb647254a2a4 1/1 Running 0 1m 96.x.x.159 ➜ docker git:(master) ✗ kubectl log docker-6bafbac10b392 Jul 20, 2016 6:45:10 PM hudson.remoting.jnlp.Main$CuiListener status INFO: Connecting to 96.x.x.159:50000 (retrying:10) java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) </code></pre> <p>I'll have to look at what this port 50000 is used for??</p>
<p>When running jenkins in Kubernetes, the service name is resolvable by both the jenkins master and the slaves.</p> <p>The best way to configure this is to use the internal DNS and set the jenkins url to:</p> <pre><code>http://jenkins:8080 </code></pre> <p>(assuming you called your service jenkins, and your port on the service is 8080)</p> <p>No tunnel is required.</p> <p>The benefit of this approach is that it will survive restarts of your jenkins without reconfiguration.</p> <p>A secondary benefit is that you would not have to expose Jenkins to the outside world, thus limiting security risks.</p>
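<p>A minimal sketch of such a service, assuming the Jenkins pods carry an <code>app: jenkins</code> label (the label and the JNLP port are assumptions; 50000 matches the port the slaves try to reach in the logs above):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    app: jenkins
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: jnlp          # the slave agents connect back on this port
    port: 50000
    targetPort: 50000
</code></pre> <p>With this in place the Jenkins URL can be set to <code>http://jenkins:8080</code> and the agents reach the JNLP port through the same service name.</p>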
<p>I have configured a kubernetes ingress service but it only works when the path is /</p> <p>I have tried all manner of different values for the path including:</p> <pre><code>/* /servicea /servicea/ /servicea/* </code></pre> <p>This is my ingress configuration (that works)</p> <pre><code>- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: boardingservice annotations: ingress.kubernetes.io/rewrite-target: / spec: rules: - host: my.url.com http: paths: - path: / backend: serviceName: servicea-nodeport servicePort: 80 </code></pre> <p>This is my nodeport service</p> <pre><code>- apiVersion: v1 kind: Service metadata: name: servicea-nodeport spec: type: NodePort ports: - port: 80 targetPort: 8081 nodePort: 30124 selector: app: servicea </code></pre> <p>And this is my deployment</p> <pre><code>- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: servicea spec: replicas: 1 template: metadata: name: ervicea labels: app: servicea spec: containers: - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/servicea name: servicea ports: - containerPort: 8080 protocol: TCP - image: 350329402011.dkr.ecr.eu-west-2.amazonaws.com/serviceb name: serviceab ports: - containerPort: 8081 protocol: TCP </code></pre> <p>If the path is / then I can do this <a href="http://my.url.com/api/ping" rel="noreferrer">http://my.url.com/api/ping</a> but as I will have multiple services I want to do this: <a href="http://my.url.com/servicea/api/ping" rel="noreferrer">http://my.url.com/servicea/api/ping</a> but when I set the path to /servicea I get a 404.</p> <p>I am running kubernetes on AWS with an ingress-nginx ingress controller</p> <p>Any idea?</p>
<p>You are not using kubernetes Pods as they are intended to be used. <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="noreferrer">A Pod</a></p> <blockquote> <p>it contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine.</p> </blockquote> <p>If you have two applications, <code>servicea</code> and <code>serviceb</code>, they should be running on different Pods: one pod for <code>servicea</code> and another one for <code>serviceb</code>. This has many benefits: you can deploy them separately, scale them independently, etc. As the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#understanding-pods" rel="noreferrer">docs say</a></p> <blockquote> <p>A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.</p> </blockquote> <p>These Pods can be created using <code>Deployments</code>, as you were already doing. That's fine and recommended.</p> <p>Once you have the <code>Deployments</code> running, you'd create a different <code>Service</code> that would balance traffic between all the <code>Pod</code>s for a given <code>Deployment</code>.</p> <p>And finally, you want to hit <code>servicea</code> or <code>serviceb</code> depending on the request URL. That can be done with <code>Ingress</code>, as you were trying, but <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="noreferrer">mapping each path to different services</a>. For example</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test annotations: ingress.kubernetes.io/rewrite-target: / spec: rules: - host: my.url.com http: paths: - path: /servicea backend: serviceName: servicea servicePort: 80 - path: /serviceb backend: serviceName: serviceb servicePort: 80 </code></pre> <p>That way, requests going to your ingress controller using the /servicea path would be served by the Pods behind the <code>servicea</code> Service. And requests going to your ingress controller using the /serviceb path would be served by the Pods behind the <code>serviceb</code> Service.</p>
<p>Through a stroke of luck I've been given an extremely powerful server in my office - I'd love to somehow set up a replica of our staging Kubernetes environment on it. Our staging Kube environment is 5 nodes running on AWS that each have different configurations. I can't find much in the way of best practice guides (probably because this is a very weird use case) for this configuration.</p> <p>My gut feel is this:</p> <ul> <li>Install some kind of bare metal OS on the machine</li> <li>Set up multiple VMs on the machine each configured to mirror a node from staging</li> <li>Install the Kube master on one of the machines</li> <li>Enrol each of the other VMs as a node under kubernetes</li> <li>Run my deployments</li> </ul> <p>Is there any better way for me to configure this or any potential issues I may hit/roadblocks if I follow this approach?</p>
<p>If you want to have everything in one machine, I would also go for the multi-VM option. With Vagrant you could try to make the process simpler. This could help you:</p> <p><a href="https://github.com/pires/kubernetes-vagrant-coreos-cluster" rel="nofollow noreferrer">https://github.com/pires/kubernetes-vagrant-coreos-cluster</a></p> <p>After setting up the cluster you could adapt it to mimic the state of your staging cluster. </p> <p>The only issue that comes to mind is that of overlay networking and external access. If you configure NAT networking you would have issues with external access and probably no issue with the network overlay. On the other hand, I am not 100% certain how the overlay network would work in a bridged setting. </p>
<p>We have created 2 different Kubernetes clusters on Google Cloud Platform, one for Development and the other for Production. Our team members have the "editor" role (so they can create, update, delete and list pods)</p> <p>We want to limit access to the production cluster by using RBAC authorization provided by Kubernetes. I've created a <code>ClusterRole</code> and a <code>ClusterRoleBinding</code>, as follows:</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: prod-all rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: access-prod-all subjects: - kind: User name: [email protected] apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: prod-all apiGroup: rbac.authorization.k8s.io </code></pre> <p>But the users already have an "editor" role (complete access to all the clusters). So I don't know if we should assign a simple "viewer" role then extend it using kubernetes RBAC. </p> <p>I also want to know if there is a way to completely hide the production cluster from some users. (our clusters are in the same project)</p>
<p>If you are in an initial phase, or you can manage to move your testing cluster, I would advise you to set up the clusters in two different projects.</p> <p>This will create two completely separate environments, so you will not have any kind of issues in the future: you automatically forbid access to half of your resources and you don't have to fear that something is misconfigured and your production is still reachable. When you need to grant something, you simply add that person to the project with the corresponding role.</p> <p>You might succeed in blocking the cluster access using IAM and RBAC, but then you would still need to deal with securing access to the networking components, load balancers, firewalls, Compute Engine, etc.</p> <p>Maybe at the beginning it is a lot of work, but in the long run it will save you a lot of issues.</p> <p>This is the <a href="https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod" rel="nofollow noreferrer">link</a> for the official Google Cloud documentation about how to set up two clusters, one of which is in production.</p>
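<p>For illustration, granting a team member access to just the staging project is a single IAM binding (the project id is made up; the user is reused from the question, and the role is one example of the predefined container roles):</p> <pre><code>gcloud projects add-iam-policy-binding my-staging-project \
    --member='user:[email protected]' \
    --role='roles/container.developer'
</code></pre> <p>Since the binding lives on the staging project only, the same user has no visibility into the production project unless you add an equivalent binding there.</p>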
<p>Is there a way to monitor kube cronjob?</p> <p>I have a kube cronjob which runs every 10mins on my cluster. Is there a way to collect metrics every time my cronjob fails due to some error or notify when my cronjob has not been completed after a certain period of time?</p>
<p>I'm using these rules with <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/cronjob-metrics.md" rel="noreferrer">kube-state-metrics</a>:</p> <pre><code>groups: - name: job.rules rules: - alert: CronJobRunning expr: time() -kube_cronjob_next_schedule_time &gt; 3600 for: 1h labels: severity: warning annotations: description: CronJob {{$labels.namespaces}}/{{$labels.cronjob}} is taking more than 1h to complete summary: CronJob didn't finish after 1h - alert: JobCompletion expr: kube_job_spec_completions - kube_job_status_succeeded &gt; 0 for: 1h labels: severity: warning annotations: description: Job completion is taking more than 1h to complete cronjob {{$labels.namespaces}}/{{$labels.job}} summary: Job {{$labels.job}} didn't finish to complete after 1h - alert: JobFailed expr: kube_job_status_failed &gt; 0 for: 1h labels: severity: warning annotations: description: Job {{$labels.namespaces}}/{{$labels.job}} failed to complete summary: Job failed </code></pre>
<p>Update:</p> <p>Found <a href="https://webhookrelay.com/v1/examples/relay-ingress.html" rel="noreferrer">this</a>, but is that the right way?</p> <p>I can see that I can do a port-forward to a pod, like:</p> <pre><code>kubectl port-forward hello-nginx 8080:80 </code></pre> <p>But I want to play and try to scale and access a service. I have started the service:</p> <pre><code>kubectl expose deployment hello-nginx --type=NodePort service “hello-nginx” exposed </code></pre> <p>and then:</p> <pre><code>kubectl get services rolling-sponge-hello-world ClusterIP 10.104.12.39 &lt;none&gt; 80/TCP 3d </code></pre> <p>Then how do you proxy to the kubernetes service?</p> <p>Note: Running Docker for Mac (Edge).</p>
<p>The best approach would be using a Ingress, as you mentioned. That way, you could send requests to your local machine IP and the Ingress controller would send the traffic to the right <code>Pods</code>.</p> <p>For that you need to:</p> <ul> <li>Create <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress rules</a> that describe how you want the Ingress controller to route traffic to your <code>Pods</code>. These rules can redirect traffic to specific <code>Pods</code> based on the path or the host in the request.</li> <li>Deploy an Ingress controller that will follow those rules, like the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx Ingress controller</a>. You can learn how to deploy it in <a href="https://medium.com/@gokulc/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">this blog post</a>.</li> </ul> <p><a href="https://www.datawire.io/docker-mac-kubernetes-ingress/" rel="nofollow noreferrer">This blog post</a> talks specifically about Docker for mac and Ingress.</p>
<p>I have three clusters in Google Kubernetes Engine (I call them Cluster A, B and C). In the three clusters, I deployed some services.</p> <p>I'm trying to call an example API with the flow: the API calls Service A1 in cluster A (Gke-A), service A1 calls service B1 in Gke-B, then service B1 calls service C1 in Gke-C. I'm using nginx ingress and this flow worked with public traffic.</p> <p>Now I want service A1 to call service B1, and service B1 to call service C1, over the internal network. I'm using VPC peering between Cluster B and Cluster C. From one node of cluster B, I tried to call service C1 and it worked. But when I call service A1 in cluster Gke-A, it's not working. </p> <p>I checked the logs and saw that the request from B1 to service C1 is not working.</p> <p>What is happening? Thanks!</p>
<p>This question is a little vague as it is, but in general there is no proper support for accessing services running in another Kubernetes cluster inside GCE, yet.</p> <p>One thing that could work would be to use an internal GCE load balancer and a headless service pointing to this LB's IPs. But I'm not sure if/how that works with an nginx Ingress.</p>
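<p>As a rough, unverified sketch of the first half of that idea: the service in the target cluster can be exposed through an internal load balancer by annotating it (the labels and ports are placeholders, and the annotation depends on your GKE version supporting internal load balancing):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-b1-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: service-b1
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>The internal IP that GCP assigns to this load balancer is then what the calling cluster would point at, for example via a manually maintained Service plus Endpoints object, as long as the VPCs are peered and the firewall rules allow the traffic.</p>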
<p>When creating a service, I can either specify a static IP address from the cluster IP range or not specify any IP address, in which case such an address will be dynamically assigned. </p> <p>But when specifying a static IP address, how can I make sure that it will not conflict with an existing dynamically assigned IP address? I could, for example, programmatically query whether such an IP address is already in use. Or, what I would prefer, is to specify an IP range that is reserved cluster-wide for manual allocation. For example</p> <ul> <li>Service cluster IP range: 10.20.0.0/16</li> <li>Service cluster IP manual range: 10.20.5.0/24</li> </ul> <p>Now, I can manage IP addresses in the range 10.20.5.0-10.20.5.255 myself and kubernetes can use the remaining pool for dynamic allocation. Sort of like how DHCP/static IP ranges usually work on home routers.</p> <p>Is this scenario possible in kubernetes?</p>
<p>The service ip you manually select has to be part of the selected range or you'll receive an <code>invalid</code> (422) response from kubernetes. The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address" rel="nofollow noreferrer">kubernetes documentation</a> has a choosing your own IP address section for services. If you have admin rights to the cluster, the easiest option is to run <code>kubectl get services --all-namespaces</code>, which will show you every service provisioned in your cluster with its corresponding CLUSTER-IP.</p>
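<p>For reference, a hedged example of pinning a service to a specific address inside your manually managed slice of the range (the IP and labels are illustrative only):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: 10.20.5.10   # must be free and inside the service cluster IP range
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>Kubernetes itself does not understand a reserved sub-range, so keeping the manually assigned IPs in a block like 10.20.5.0/24 is purely a convention you enforce yourself; the API server only checks that the address is inside the overall service range and not already taken.</p>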
<p>I want to expose multiple services through a single load balancer. Each service points to exactly one pod.</p> <p>So far I tried to:</p> <pre><code>kubectl expose &lt;podName&gt; --port=7000 </code></pre> <p>And in the Azure portal to manually set either load balancing rules or Inbound NAT rules, pointing to the exposed pod. So far I can connect to the pod using the external IP and the specified port.</p>
<p>Depends on how you want to separate services on the same IP. The two ways that come to my mind are:</p> <ul> <li>use NodePort services and then map some ports from your LB to that port on your cluster nodes. This gives separation by port.</li> <li>way more interesting in my opinion is to use an Ingress/IngressController. You would expose only the IC on standard ports like 80 &amp; 443 and it will then map to your services by hostname and URI (see the sketch below).</li> </ul>
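<p>A minimal sketch of the second option, with hostnames and service names made up for illustration:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - backend:
          serviceName: app1-service
          servicePort: 7000
  - host: app2.example.com
    http:
      paths:
      - backend:
          serviceName: app2-service
          servicePort: 7000
</code></pre> <p>Only the ingress controller itself needs to be reachable through the load balancer; every additional service is just another rule in the Ingress rather than another public endpoint.</p>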
<p>I want to expose multiple services through a single load balancer. Each service points to exactly one pod.</p> <p>So far I tried to:</p> <pre><code>kubectl expose &lt;podName&gt; --port=7000 </code></pre> <p>And in the Azure portal to manually set either load balancing rules or Inbound NAT rules, pointing to the exposed pod. So far I can connect to the pod using the external IP and the specified port.</p>
<p>In Azure container service, Azure will use a Load Balancer to expose k8s services, like this:</p> <pre><code>root@k8s-master-E27AE453-0:~# kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE jasonnginx 10.0.41.194 52.226.33.200 8080:32011/TCP 4m kubernetes 10.0.0.1 &lt;none&gt; 443/TCP 11m mynginx 10.0.144.49 40.71.230.60 80:32366/TCP 5m yournginx 10.0.147.28 40.71.226.23 80:32289/TCP 4m root@k8s-master-E27AE453-0:~# </code></pre> <p>Via the Azure portal, check the Azure load balancer frontend IP configuration (different IP address):</p> <p><a href="https://i.stack.imgur.com/JdtHs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JdtHs.png" alt="enter image description here"></a></p> <p>ACS will create <code>Load Balancer rules</code> and add the <code>frontend IP</code> address <strong>automatically</strong>.</p> <blockquote> <p>How to expose multiple kubernetes services through a single azure load balancer?</p> </blockquote> <p>ACS exposes k8s services through that Azure Load Balancer; do you mean you want to expose k8s services with <strong>a single Public IP address</strong>?</p> <p>If you want to expose k8s services with a single public IP address, as Radek said, maybe you should use the <strong>Nginx Ingress Controller</strong>.</p> <p>The Ingress Controller works like this:</p> <p><a href="https://i.stack.imgur.com/S90s7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S90s7.png" alt="enter image description here"></a></p>
<p>I am trying to use a persistent disk from GCE on a GKE project.</p> <p>Here are the steps I used:</p> <pre><code>gcloud compute disks create --size 50GB XXX
</code></pre> <p>And here is the response:</p> <pre><code>WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Created [https://www.googleapis.com/compute/v1/projects/XXX/zones/us-central1-a/disks/DISKNAME].
NAME      ZONE           SIZE_GB  TYPE         STATUS
DISKNAME  us-central1-a  50       pd-standard  READY

New disks are unformatted. You must format and mount a disk before it can be used. You can find instructions on how to do this at:
https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
</code></pre> <p>Here is a snippet of my deployment file:</p> <pre><code>      volumeMounts:
      - mountPath: /data/db
        name: GKEDISK
      - mountPath: /data/configdb
        name: GKEDISK
      restartPolicy: Always
      volumes:
      - name: GKEDISK
        gcePersistentDisk:
          pdName: DISKNAME
          fsType: ext4
</code></pre> <p>Questions:</p> <p>1) Do I need to format the disk myself or will GKE do this for me based on the fsType I selected?</p> <p>2) After I create the disk, I can see it on gcloud, but kubectl get persistentvolumes returns "No Resources Found". Are there any extra steps that need to be executed to allow GKE to see the disk created on GCE?</p>
<p>If you go the manual route, you need to both create the disk in gcloud AND create a PV in kubernetes; the GCE disk by itself will not show up as a PersistentVolume.</p> <p>Although, why not make use of the flexibility kube gives you with this cloud provider and stick with <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">dynamic PV provisioning</a>? That way you just create a PVC and the volume/PV is created for you "magically".</p>
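<p>A minimal sketch of the dynamic route on GKE (the claim name is arbitrary; GKE ships a default StorageClass backed by GCE PD, so none needs to be specified):</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gke-disk-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
</code></pre> <p>The deployment then references it with <code>persistentVolumeClaim: {claimName: gke-disk-claim}</code> instead of the <code>gcePersistentDisk</code> block, and the disk is created (and formatted on first mount) for you.</p>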
<p>I'm starting to write helm charts for our services.</p> <p>There are two things I'm not sure how they are supposed to work or what to do with them.</p> <p>First: the release name. When installing a chart, you specify a name which helm uses to create a release. This release name is often referenced within a chart to properly isolate chart installs from each other? For example the postgres chart contains:</p> <pre><code>{{- define &quot;postgresql.fullname&quot; -}} {{- $name := default .Chart.Name .Values.nameOverride -}} {{- printf &quot;%s-%s&quot; .Release.Name $name | trunc 63 | trimSuffix &quot;-&quot; -}} {{- end -}} </code></pre> <p>Which is then used for the service:</p> <pre><code>metadata: name: {{ template &quot;postgresql.fullname&quot; . }} </code></pre> <p>It does look like &quot;myrelease-postgresql&quot; in the end in kubernetes. I wonder what a good release name is? What is typically used for this? A version? Or some code-name like the ubuntu releases?</p> <p>Second: referencing values.</p> <p>My chart uses postgresql as a sub-chart. I'd like to not duplicate the way the value for the name of the postgresql service is created (see snipped above).</p> <p>Is there a way I can reference the service name of a sub-chart or that template define {{ template &quot;postgresql.fullname&quot; . }} in the parent chart? I need it to pass it into my service as database host (which works if I hardcode everything but that cannot be the meaning of this).</p> <p>I tried:</p> <pre><code> env: - name: DB_HOST value: {{ template &quot;mychart.postgresql.fullname&quot; . }} </code></pre> <p>But that lead into an error message:</p> <pre><code>template &quot;mychart.postgresql.fullname&quot; not defined </code></pre> <p>I've seen examples of Charts doing similar things, like the <a href="https://github.com/kubernetes/charts/blob/master/stable/odoo/templates/_helpers.tpl" rel="nofollow noreferrer">odoo</a> chart. But in here that logic how the postgresql host name is created is copied and an own define in the template is created.</p> <p>So is there a way to access sub-chart names? Or values or template defines?</p> <p>Thanks!</p> <p><strong>Update after some digging:</strong> According to <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="nofollow noreferrer">Subcharts and Globals</a> the templates are shared between charts.</p> <p>So what I can do is this:</p> <p>In my chart in _helpers.tpl I add (overwrite) the postgres block:</p> <pre><code>{{- define &quot;postgresql.fullname&quot; -}} {{- $name := .Values.global.name -}} {{- printf &quot;%s-%s&quot; $name &quot;postgresql&quot; | trunc 63 | trimSuffix &quot;-&quot; -}} {{- end -}} </code></pre> <p>So this value is used when the sub-chart is deployed. I cannot reference all values or the chart name in here as it will be different in the sub-chart - so I used a global value.</p> <p>Like this I know the value of the service that is created in the sub-chart.</p> <p>Not sure if this is the best way to do this :-/</p>
<p>Are you pulling in <code>postgresql</code> as a subchart of your chart (via your chart's <code>requirements.yaml</code>)? If so, both the <code>postgresql</code> (sub) chart and your chart will have the <strong>same</strong> <code>.Release.Name</code> - thus, you could specify your container's environment as</p> <pre><code> env: - name: DB_HOST value: {{ printf "%s-postgresql" .Release.Name }} </code></pre> <p>if you override <code>postgresql</code>'s name by adding the following to your chart's <code>values.yaml</code>:</p> <pre><code>postgresql: nameOverride: your-postgresql </code></pre> <p>then your container's env would be:</p> <pre><code> env: - name: DB_HOST value: {{ printf "%s-%s" .Release.Name .Values.postgresql.nameOverride }} </code></pre>
<p>I am using Kubernetes Engine on the Google Cloud Platform. I have a pod running a process in a Docker scratch container. I also have a load balancer service that gives me access to the pod from the outside world.</p> <p>The process running in the pod needs to know what its external IP address is. How can I get this?</p> <p>Prior to using Kubernetes Engine I was using Compute Engine and could find the external IP address by the following:</p> <pre><code>curl -H "Metadata-Flavor: Google" http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip </code></pre> <p>Are there any internal tools I can use that would be available to my process? Or would I need the process to call an external site that can mirror back the IP address?</p>
<p>Every Pod (unless configured not to do so) has valid kubernetes credentials in <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">as described here</a>, so the answer is to use the kubernetes API to ask the <code>Service</code> in front of the Pod(s) for its <code>status:loadBalancer:ingress:ip:</code> <a href="https://v1-8.docs.kubernetes.io/docs/api-reference/v1.8/#loadbalanceringress-v1-core" rel="nofollow noreferrer">as described here</a>, which I have every reason to believe GKE will keep up to date with any changes to the load balancer. The kubernetes API is always(?) reachable at <code>https://kubernetes</code> (that's normally enough; <code>https://kubernetes.default.svc.cluster.local</code> is its full name), so there is very little configuration the Pod needs in order to carry out the lookup.</p> <p>The one caveat is that you must provide the name of the Service to the Pod(s) sitting behind it, because (for the most part) there is no way for the Pod to know how many Services point to it.</p>
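<p>A minimal sketch of that lookup from inside the Pod (it assumes the Service name is injected, e.g. as a <code>SERVICE_NAME</code> environment variable, and that the Pod's service account is allowed to read Services):</p> <pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     "https://kubernetes/api/v1/namespaces/$NAMESPACE/services/$SERVICE_NAME"
# the external address is in the response under status.loadBalancer.ingress[0].ip
</code></pre>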
<p>I attempted to upgrade to 1.7 to 1.9 using kubeadm, kube-dns was crashloopig. I removed the deployment and applied the a new deployment using the latest yaml for kube-dns (replacing the clusterip with 10.96.0.10, domain with cluster.local).</p> <p>The kubedns container fails after not being able to get a valid response from the api server. The 10.96.0.1 ip does respond to a wget on the 443 port from all servers in the cluster (403 forbidden response).</p> <pre><code>E0104 21:51:42.732805 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0104 21:51:42.732971 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout </code></pre> <p>Is this a connection issue, configuration issue, or a security model change that is causing the errors in the log?</p> <p>Thanks.</p> <pre><code> $ kubectl get nodes NAME STATUS ROLES AGE VERSION ubuntu80 Ready master 165d v1.9.1 ubuntu81 Ready &lt;none&gt; 165d v1.9.1 ubuntu82 Ready &lt;none&gt; 165d v1.9.1 ubuntu83 Ready &lt;none&gt; 163d v1.9.1 $ kubectl get all --namespace=kube-system NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE ds/kube-flannel-ds 4 4 4 0 4 beta.kubernetes.io/arch=amd64 165d ds/kube-proxy 4 4 4 4 4 &lt;none&gt; 165d ds/traefik-ingress-controller 3 3 3 3 3 &lt;none&gt; 165d NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/kube-dns 1 1 1 0 1h deploy/tiller-deploy 1 1 1 1 163d NAME DESIRED CURRENT READY AGE rs/kube-dns-6c857864fb 1 1 0 1h rs/tiller-deploy-3341511835 1 1 1 105d NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE ds/kube-flannel-ds 4 4 4 0 4 beta.kubernetes.io/arch=amd64 165d ds/kube-proxy 4 4 4 4 4 &lt;none&gt; 165d ds/traefik-ingress-controller 3 3 3 3 3 &lt;none&gt; 165d NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deploy/kube-dns 1 1 1 0 1h deploy/tiller-deploy 1 1 1 1 163d NAME DESIRED CURRENT READY AGE rs/kube-dns-6c857864fb 1 1 0 1h rs/tiller-deploy-3341511835 1 1 1 105d NAME READY STATUS RESTARTS AGE po/etcd-ubuntu80 1/1 Running 1 16d po/kube-apiserver-ubuntu80 1/1 Running 1 2h po/kube-controller-manager-ubuntu80 1/1 Running 1 2h po/kube-dns-6c857864fb-grhxp 1/3 CrashLoopBackOff 52 1h po/kube-flannel-ds-07npj 2/2 Running 32 165d po/kube-flannel-ds-169lh 2/2 Running 26 165d po/kube-flannel-ds-50c56 2/2 Running 27 163d po/kube-flannel-ds-wkd7j 2/2 Running 29 165d po/kube-proxy-495n7 1/1 Running 1 2h po/kube-proxy-9g7d2 1/1 Running 1 2h po/kube-proxy-d856z 1/1 Running 0 2h po/kube-proxy-kzmcc 1/1 Running 0 2h po/kube-scheduler-ubuntu80 1/1 Running 1 2h po/tiller-deploy-3341511835-m3x26 1/1 Running 2 58d po/traefik-ingress-controller-51r7d 1/1 Running 4 105d po/traefik-ingress-controller-sf6lc 1/1 Running 4 105d po/traefik-ingress-controller-xz1rt 1/1 Running 5 105d NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc/kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 1h svc/kubernetes-dashboard ClusterIP 10.101.112.198 &lt;none&gt; 443/TCP 165d svc/tiller-deploy ClusterIP 10.98.117.242 &lt;none&gt; 44134/TCP 163d svc/traefik-web-ui ClusterIP 10.110.215.194 &lt;none&gt; 80/TCP 165d $ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns I0104 21:51:12.730927 1 dns.go:48] version: 1.14.6-3-gc36cb11 I0104 21:51:12.731643 1 server.go:69] Using configuration read from directory: 
/kube-dns-config with period 10s I0104 21:51:12.731673 1 server.go:112] FLAG: --alsologtostderr="false" I0104 21:51:12.731679 1 server.go:112] FLAG: --config-dir="/kube-dns-config" I0104 21:51:12.731683 1 server.go:112] FLAG: --config-map="" I0104 21:51:12.731686 1 server.go:112] FLAG: --config-map-namespace="kube-system" I0104 21:51:12.731688 1 server.go:112] FLAG: --config-period="10s" I0104 21:51:12.731693 1 server.go:112] FLAG: --dns-bind-address="0.0.0.0" I0104 21:51:12.731695 1 server.go:112] FLAG: --dns-port="10053" I0104 21:51:12.731713 1 server.go:112] FLAG: --domain="cluster.local." I0104 21:51:12.731717 1 server.go:112] FLAG: --federations="" I0104 21:51:12.731723 1 server.go:112] FLAG: --healthz-port="8081" I0104 21:51:12.731726 1 server.go:112] FLAG: --initial-sync-timeout="1m0s" I0104 21:51:12.731729 1 server.go:112] FLAG: --kube-master-url="" I0104 21:51:12.731733 1 server.go:112] FLAG: --kubecfg-file="" I0104 21:51:12.731735 1 server.go:112] FLAG: --log-backtrace-at=":0" I0104 21:51:12.731740 1 server.go:112] FLAG: --log-dir="" I0104 21:51:12.731743 1 server.go:112] FLAG: --log-flush-frequency="5s" I0104 21:51:12.731746 1 server.go:112] FLAG: --logtostderr="true" I0104 21:51:12.731748 1 server.go:112] FLAG: --nameservers="" I0104 21:51:12.731751 1 server.go:112] FLAG: --stderrthreshold="2" I0104 21:51:12.731753 1 server.go:112] FLAG: --v="2" I0104 21:51:12.731756 1 server.go:112] FLAG: --version="false" I0104 21:51:12.731761 1 server.go:112] FLAG: --vmodule="" I0104 21:51:12.731798 1 server.go:194] Starting SkyDNS server (0.0.0.0:10053) I0104 21:51:12.731979 1 server.go:213] Skydns metrics enabled (/metrics:10055) I0104 21:51:12.731987 1 dns.go:146] Starting endpointsController I0104 21:51:12.731991 1 dns.go:149] Starting serviceController I0104 21:51:12.732457 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0] I0104 21:51:12.732467 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0] I0104 21:51:13.232355 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:51:13.732395 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:51:14.232389 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:51:14.732389 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:51:15.232369 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:51:42.732629 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... E0104 21:51:42.732805 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0104 21:51:42.732971 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0104 21:51:43.232257 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:51:51.232379 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:51:51.732371 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:51:52.232390 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... 
I0104 21:52:11.732376 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0104 21:52:12.232382 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... F0104 21:52:12.732377 1 dns.go:167] Timeout waiting for initialization $ kubectl describe po/kube-dns-6c857864fb-grhxp --namespace=kube-system Name: kube-dns-6c857864fb-grhxp Namespace: kube-system Node: ubuntu82/10.80.82.1 Start Time: Fri, 05 Jan 2018 01:55:48 +0530 Labels: k8s-app=kube-dns pod-template-hash=2741342096 Annotations: scheduler.alpha.kubernetes.io/critical-pod= Status: Running IP: 10.244.2.12 Controlled By: ReplicaSet/kube-dns-6c857864fb Containers: kubedns: Container ID: docker://3daa4233f54fa251abdcdfe73d2e71179356f5da45983d19fe66a3f18bab8d13 Image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:f5bddc71efe905f4e4b96f3ca346414be6d733610c1525b98fff808f93966680 Ports: 10053/UDP, 10053/TCP, 10055/TCP Args: --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2 State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 255 Started: Fri, 05 Jan 2018 03:21:12 +0530 Finished: Fri, 05 Jan 2018 03:22:12 +0530 Ready: False Restart Count: 26 Limits: memory: 170Mi Requests: cpu: 100m memory: 70Mi Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3 Environment: PROMETHEUS_PORT: 10055 Mounts: /kube-dns-config from kube-dns-config (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-cpzzw (ro) dnsmasq: Container ID: docker://a40a34e6fdf7176ea148fdb1f21d157c5d264e44bd14183ed9d19164a742fb65 Image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:6cfb9f9c2756979013dbd3074e852c2d8ac99652570c5d17d152e0c0eb3321d6 Ports: 53/UDP, 53/TCP Args: -v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 State: Running Started: Fri, 05 Jan 2018 03:24:44 +0530 Last State: Terminated Reason: Error Exit Code: 137 Started: Fri, 05 Jan 2018 03:17:33 +0530 Finished: Fri, 05 Jan 2018 03:19:33 +0530 Ready: True Restart Count: 27 Requests: cpu: 150m memory: 20Mi Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: &lt;none&gt; Mounts: /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-cpzzw (ro) sidecar: Container ID: docker://c05b33a08344f15b0d1a1e8fee39cc05b6d9de6a24db6d2cd05e92c2706fc03c Image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7 Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:f80f5f9328107dc516d67f7b70054354b9367d31d4946a3bffd3383d83d7efe8 Port: 10054/TCP Args: --v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV State: Running Started: Fri, 05 Jan 2018 02:09:25 +0530 Last State: Terminated Reason: Error Exit Code: 2 Started: Fri, 05 Jan 2018 01:55:50 +0530 Finished: Fri, 05 Jan 2018 02:08:20 +0530 Ready: True 
Restart Count: 1 Requests: cpu: 10m memory: 20Mi Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-cpzzw (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: kube-dns-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: kube-dns Optional: true kube-dns-token-cpzzw: Type: Secret (a volume populated by a Secret) SecretName: kube-dns-token-cpzzw Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: CriticalAddonsOnly node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Unhealthy 46m (x57 over 1h) kubelet, ubuntu82 Readiness probe failed: Get http://10.244.2.12:8081/readiness: dial tcp 10.244.2.12:8081: getsockopt: connection refused Warning Unhealthy 36m (x42 over 1h) kubelet, ubuntu82 Liveness probe failed: HTTP probe failed with statuscode: 503 Warning BackOff 31m (x162 over 1h) kubelet, ubuntu82 Back-off restarting failed container Normal Killing 26m (x13 over 1h) kubelet, ubuntu82 Killing container with id docker://dnsmasq:Container failed liveness probe.. Container will be killed and recreated. Normal SuccessfulMountVolume 21m kubelet, ubuntu82 MountVolume.SetUp succeeded for volume "kube-dns-token-cpzzw" Normal SuccessfulMountVolume 21m kubelet, ubuntu82 MountVolume.SetUp succeeded for volume "kube-dns-config" Normal Pulled 21m kubelet, ubuntu82 Container image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7" already present on machine Normal Started 21m kubelet, ubuntu82 Started container Normal Created 21m kubelet, ubuntu82 Created container Normal Started 19m (x2 over 21m) kubelet, ubuntu82 Started container Normal Created 19m (x2 over 21m) kubelet, ubuntu82 Created container Normal Pulled 19m (x2 over 21m) kubelet, ubuntu82 Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7" already present on machine Warning Unhealthy 19m (x4 over 20m) kubelet, ubuntu82 Liveness probe failed: HTTP probe failed with statuscode: 503 Warning Unhealthy 16m (x22 over 21m) kubelet, ubuntu82 Readiness probe failed: Get http://10.244.2.12:8081/readiness: dial tcp 10.244.2.12:8081: getsockopt: connection refused Normal Killing 6m (x6 over 19m) kubelet, ubuntu82 Killing container with id docker://dnsmasq:Container failed liveness probe.. Container will be killed and recreated. Warning BackOff 1m (x65 over 20m) kubelet, ubuntu82 Back-off restarting failed container </code></pre>
<p>Kubedns 1.14.7 does not work well with kubernetes 1.9.1. In my case, kubedns was trying to connect to the apiserver on port 443 instead of the configured 6443.</p> <p>When I changed the image version to 1.14.8 (the newest, see the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns" rel="nofollow noreferrer">kubedns github</a>), kubedns recognized the apiserver port properly. No problems any more:</p> <pre><code>kubectl edit deploy kube-dns --namespace=kube-system
# change the image version to 1.14.8 and it works
</code></pre>
<p>I have a working cluster with services and PODs/replicas running. If I am not wrong, when a service is created with Type=NodePort/LoadBalancer, kube-proxy opens the NodePort on all the nodes and creates an iptables rule. If I add a new node to the cluster, does kube-proxy create the NodePort on the newly added node as well?</p>
<p>Yes, it does. kube-proxy runs on every node (typically as a DaemonSet), so when a new node joins the cluster it opens the same NodePort and programs the corresponding iptables rules there as well. That's what kube-proxy is meant to do.</p>
<p>I'm running Jenkins on Kubernetes with the git plugin installed. Now I want to use git commands in my script, which fails with the log: </p> <pre><code>script.sh: line 1: git: not found </code></pre> <p>My script: </p> <pre><code>stage('Package Helm Chart'){
    sh """
    #!/bin/bash
    echo "Pushing to remote Repository.."
    git checkout master
    git add &lt;myfilehere&gt;
    git commit -m "[Jenkins] Adding Artifact ${env.BUILD_NUMBER} to repository"
    git push
    echo "Successfully pushed artifact to repository"
    """
}
</code></pre> <p>Any idea on how to fix this? </p> <p>Cheers, Jst </p>
<p>The sh command in question should run on a jenkins node inside a node block. This command will then run in a shell on that node. To use git in the sh tag of a pipeline script you need to have git installed and on the PATH on the node that you want to use.</p> <p>If you are using Kubernetes, then I assume you are running the Jenkins master or the node from a docker image, thus this image will need git installed and on the PATH.</p> <p>Once this is done the shell will be able to find git.</p>
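<p>As a sketch (the base image is an assumption; use whatever image your Jenkins agents actually run), a derived agent image with git could look like this:</p> <pre><code># Hypothetical agent image with git installed on top of a Debian-based Jenkins agent
FROM jenkins/jnlp-slave:latest
USER root
RUN apt-get update \
 &amp;&amp; apt-get install -y --no-install-recommends git \
 &amp;&amp; rm -rf /var/lib/apt/lists/*
USER jenkins
</code></pre> <p>Point the Kubernetes plugin's pod template at that image and the <code>sh</code> steps will find git on the PATH.</p>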
<p>I'm trying to deploy my Lagom accessservices on Kubernetes.</p> <p>To do that I tried to containerize my service using fabric8’s docker-maven-plugin.</p> <p>So, I added the following plugin settings to the root project pom.xml to register the fabric8 Maven plugin:</p> <pre><code>&lt;plugin&gt; &lt;groupId&gt;io.fabric8&lt;/groupId&gt; &lt;artifactId&gt;docker-maven-plugin&lt;/artifactId&gt; &lt;version&gt;0.20.1&lt;/version&gt; &lt;configuration&gt; &lt;skip&gt;true&lt;/skip&gt; &lt;images&gt; &lt;image&gt; &lt;name&gt;%g/%a:%l&lt;/name&gt; &lt;build&gt; &lt;from&gt;openjdk:8-jre-alpine&lt;/from&gt; &lt;tags&gt; &lt;tag&gt;latest&lt;/tag&gt; &lt;tag&gt;${project.version}&lt;/tag&gt; &lt;/tags&gt; &lt;assembly&gt; &lt;descriptorRef&gt;artifact-with-dependencies&lt;/descriptorRef&gt; &lt;/assembly&gt; &lt;/build&gt; &lt;/image&gt; &lt;/images&gt; &lt;/configuration&gt; &lt;/plugin&gt; </code></pre> <p>And then, I Added the following plugin settings on the pom.xml under the application’s module directory:</p> <pre><code>&lt;plugin&gt; &lt;groupId&gt;io.fabric8&lt;/groupId&gt; &lt;artifactId&gt;docker-maven-plugin&lt;/artifactId&gt; &lt;configuration&gt; &lt;skip&gt;false&lt;/skip&gt; &lt;images&gt; &lt;image&gt; &lt;build&gt; &lt;entryPoint&gt; java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -cp '/maven/*' -Dhttp.address="$(eval "echo $ACCESSSERVICE_BIND_IP")" -Dhttp.port="$(eval "echo $ACCESSSERVICE_BIND_PORT")" -Dakka.remote.netty.tcp.hostname="$(eval "echo $AKKA_REMOTING_HOST")" -Dakka.remote.netty.tcp.bind-hostname="$(eval "echo $AKKA_REMOTING_BIND_HOST")" -Dakka.remote.netty.tcp.port="$(eval "echo $AKKA_REMOTING_PORT")" -Dakka.remote.netty.tcp.bind-port="$(eval "echo $AKKA_REMOTING_BIND_PORT")" $(IFS=','; I=0; for NODE in $AKKA_SEED_NODES; do echo "-Dakka.cluster.seed-nodes.$I=akka.tcp://accessservice@$NODE"; I=$(expr $I + 1); done) play.core.server.ProdServerStart &lt;/entryPoint&gt; &lt;/build&gt; &lt;/image&gt; &lt;/images&gt; &lt;/configuration&gt; &lt;/plugin&gt; </code></pre> <p>After that, I build my project using:</p> <pre><code>eval $(minikube docker-env) clean package docker:build </code></pre> <p>And I think that it was succeeded because when I executed "docker images", I had:</p> <p><a href="https://i.stack.imgur.com/AIYQU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AIYQU.png" alt="enter image description here"></a></p> <p>But my problem is when I tried to deploy my services, I got this error:</p> <blockquote> <p>Container image is not present with pull policy of NeverError syncing pod <a href="https://i.stack.imgur.com/TTVXc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TTVXc.png" alt="enter image description here"></a></p> </blockquote> <p>Do you have any explication for that? please.</p> <p>*** Edit 1 ****</p> <blockquote> <p>kubectl describe po accessservice-0 <a href="https://i.stack.imgur.com/X0G64.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X0G64.png" alt="enter image description here"></a></p> </blockquote>
<p>You need to use an <code>imagePullPolicy</code> other than <code>Never</code>, otherwise kubernetes will never try to pull the image for your container. <a href="https://kubernetes.io/docs/concepts/configuration/overview/#container-images" rel="nofollow noreferrer">You can choose</a> between Always and IfNotPresent; the latter downloads the image only if it is not already present on the node. For example:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app-image
    imagePullPolicy: Always
</code></pre>
<p>I am trying to access the kubernetes Dashboard using the config file. On the authentication screen, when I select the config file, it gives ‘<code>Not enough data to create auth info structure</code>.’ But the same config file works with the kubectl command.</p> <p><a href="https://i.stack.imgur.com/VEzb9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VEzb9.png" alt="enter image description here"></a></p> <p>Here is my config file.</p> <pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://kubemaster:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
</code></pre> <p>Any help to resolve this issue?</p> <p>Thanks, SR</p>
<p>After looking at this answer <a href="https://stackoverflow.com/questions/46664104/how-to-sign-in-kubernetes-dashboard">How to sign in kubernetes dashboard?</a> and the source code, I figured out the kubeconfig authentication.</p> <p>After the kubeadm install, get the <strong>default</strong> service account token on the master server and add it to the config file. Then use that config file to authenticate.</p> <p>You can use this to add the token:</p> <pre><code>#!/bin/bash
TOKEN=$(kubectl -n kube-system describe secret default| awk '$1=="token:"{print $2}')
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"
</code></pre> <p>Your config file should then look like this:</p> <pre><code>kubectl config view |cut -c1-50|tail -10
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.ey
</code></pre>
<p>Update:</p> <p>I got the NodePort to work: <code>kubectl get services</code></p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 7d my-release-nginx-ingress-controller NodePort 10.105.64.135 &lt;none&gt; 80:32706/TCP,443:32253/TCP 10m my-release-nginx-ingress-default-backend ClusterIP 10.98.230.24 &lt;none&gt; 80/TCP 10m </code></pre> <p>Do I port-forward then?</p> <p>Installing Ingress using Helm on Docker for Mac(Edge with Kubernetes)</p> <p><a href="https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress</a></p> <p>Will this work on localhost - and if so, how to access a service?</p> <p>Steps:</p> <ol> <li><code>helm install stable/nginx-ingress</code></li> </ol> <p>Output:</p> <pre><code>NAME: washing-jackal LAST DEPLOYED: Thu Jan 18 12:57:40 2018 NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==&gt; v1/ConfigMap NAME DATA AGE washing-jackal-nginx-ingress-controller 1 1s ==&gt; v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE washing-jackal-nginx-ingress-controller LoadBalancer 10.105.122.1 &lt;pending&gt; 80:31494/TCP,443:32136/TCP 1s washing-jackal-nginx-ingress-default-backend ClusterIP 10.103.189.14 &lt;none&gt; 80/TCP 1s ==&gt; v1beta1/Deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE washing-jackal-nginx-ingress-controller 1 1 1 0 0s washing-jackal-nginx-ingress-default-backend 1 1 1 0 0s ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE washing-jackal-nginx-ingress-controller-5b4d86c948-xxlrt 0/1 ContainerCreating 0 0s washing-jackal-nginx-ingress-default-backend-57947f94c6-h4sz6 0/1 ContainerCreating 0 0s NOTES: The nginx-ingress controller has been installed. It may take a few minutes for the LoadBalancer IP to be available. You can watch the status by running 'kubectl --namespace default get services -o wide -w washing-jackal-nginx-ingress-controller' An example Ingress that makes use of the controller: apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx name: example namespace: foo spec: rules: - host: www.example.com http: paths: - backend: serviceName: exampleService servicePort: 80 path: / # This section is only required if TLS is to be enabled for the Ingress tls: - hosts: - www.example.com secretName: example-tls If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided: apiVersion: v1 kind: Secret metadata: name: example-tls namespace: foo data: tls.crt: &lt;base64 encoded cert&gt; tls.key: &lt;base64 encoded key&gt; type: kubernetes.io/tls </code></pre>
<p>As far as I can tell from the output you posted, everything should be running smoothly in your local kubernetes cluster.</p> <p>However, your ingress controller is exposed using a <code>LoadBalancer Service</code>, as you can tell from the following portion of the output you posted:</p> <pre><code>==&gt; v1/Service
NAME                                      TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)                     AGE
washing-jackal-nginx-ingress-controller  LoadBalancer  10.105.122.1  &lt;pending&gt;    80:31494/TCP,443:32136/TCP  1s
</code></pre> <p>Services of type LoadBalancer require support from the underlying infrastructure, and will not work in your local environment.</p> <p>However, a LoadBalancer service is also a <code>NodePort</code> Service. In fact you can see in the above snippet of output that your ingress controller is listening on the following ports:</p> <pre><code>80:31494/TCP,443:32136/TCP
</code></pre> <p>This means you should be able to reach your ingress controller on ports 31494 and 32136 on your node's ip address.</p> <p>You could make your ingress controller listen on more standard ports, such as 80 and 443, but you'll probably have to manually edit the resources created by the helm chart to do so.</p>
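<p>For example, on Docker for Mac the single node answers on localhost, so with the node ports shown above something like this should reach the ingress controller (the host header and ports are taken from the example output and will differ per install):</p> <pre><code># plain HTTP through the controller's NodePort; the Host header selects the Ingress rule
curl -H "Host: www.example.com" http://localhost:31494/

# HTTPS NodePort (use -k for the controller's self-signed certificate)
curl -k -H "Host: www.example.com" https://localhost:32136/
</code></pre>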
<p>I am using Kubernetes 1.8.6 on Google Kubernetes Engine and have a pod running Alpine as part of a <code>StatefulSet</code>.</p> <p>I have logged into my pod using <code>kubectl exec -it my-pod-0 -- /bin/sh</code> and then run the following commands at the prompt:</p> <pre><code>$ CA_CERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
$ curl --cacert $CA_CERT -H "Authorization: Bearer $TOKEN" "https://kubernetes/api/v1/namespaces/$NAMESPACE/services/"
</code></pre> <p>Unfortunately a 403 Forbidden error is returned:</p> <pre><code>{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": { },
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:default:default\" cannot list services in the namespace \"default\": Unknown user \"system:serviceaccount:default:default\"",
  "reason": "Forbidden",
  "details": { "kind": "services" },
  "code": 403
}
</code></pre> <p>What am I doing wrong?</p>
<p>You're not doing anything wrong. That pod's service account (specified in the pod's serviceAccountName) simply doesn't have any API permissions. </p> <p>You can grant a view role to that service account like this:</p> <pre><code>kubectl create rolebinding default-viewer \ --clusterrole=view \ --serviceaccount=default:default \ --namespace=default </code></pre> <p>See <a href="https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions" rel="noreferrer">https://kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions</a> for more details about granting permissions to service accounts. </p>
<p>With <code>helm install</code> you can set a value when installing a chart, like:</p> <pre><code>helm install --set favoriteDrink=slurm ./mychart
</code></pre> <p>Now I want to set a value like:</p> <pre><code>helm install --set aws.subnets=&quot;subnet-123456, subnet-654321&quot; ./mychart
</code></pre> <p>But it failed:</p> <pre><code>Error: failed parsing --set data: key &quot; subnet-654321&quot; has no value
</code></pre> <p>It seems that <code>helm</code>'s <code>--set</code> treats the comma <code>,</code> as a separator and parses the next string as a key. So it can't be used in this case to set such a string?</p> <hr /> <h1>Tested this way</h1> <pre><code>helm install charts/mychart \
  --set aws.subnets={subnet-123456,subnet-654321}
</code></pre> <p>Got error:</p> <pre><code>Error: This command needs 1 argument: chart name
</code></pre> <h1>This way works</h1> <pre><code>helm install charts/mychart \
  --set aws.subnets=&quot;subnet-123456\,subnet-654321&quot;
</code></pre> <h2>Reference</h2> <blockquote> <p><a href="https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set" rel="noreferrer">https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set</a></p> </blockquote>
<p>According to <a href="https://github.com/kubernetes/helm/issues/1987#issuecomment-280497496" rel="noreferrer">https://github.com/kubernetes/helm/issues/1987#issuecomment-280497496</a>, you set multiple values using curly braces, for example:</p> <pre><code>--set foo={a,b,c} </code></pre> <p>So, in your case it would be like this</p> <pre><code>--set aws.subnets={subnet-123456,subnet-654321} </code></pre>
<p>I did nginx ingress controller tutorial from <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md" rel="noreferrer">github</a> and exposed kubernetes dashboard</p> <pre><code>kubernetes-dashboard NodePort 10.233.53.77 &lt;none&gt; 443:31925/TCP 20d </code></pre> <p>created ingress </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.kubernetes.io/ssl-passthrough: "true" nginx.org/ssl-backends: "kubernetes-dashboard" kubernetes.io/ingress.allow-http: "false" name: dashboard-ingress namespace: kube-system spec: tls: - hosts: - serverdnsname secretName: kubernetes-dashboard-certs rules: - host: serverdnsname http: paths: - path: /dashboard backend: serviceName: kubernetes-dashboard servicePort: 443 </code></pre> <hr> <pre><code>ingress-nginx ingress-nginx NodePort 10.233.21.200 &lt;none&gt; 80:30827/TCP,443:32536/TCP 5h </code></pre> <p><a href="https://serverdnsname:32536/dashboard" rel="noreferrer">https://serverdnsname:32536/dashboard</a> but dashboard throws error </p> <pre><code>2018/01/18 14:42:51 http: TLS handshake error from ipWhichEndsWith.77:52686: tls: first record does not look like a TLS handshake </code></pre> <p>and ingress controller logs</p> <pre><code>2018/01/18 14:42:51 [error] 864#864: *37 upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: 10.233.82.1, server: serverdnsname, request: "GET /dashboard HTTP/2.0", upstream: "http://ipWhichEndsWith.249:8443/dashboard", host: "serverdnsname:32536" 10.233.82.1 - [10.233.82.1] - - [18/Jan/2018:14:42:51 +0000] "GET /dashboard HTTP/2.0" 009 7 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36 OPR/49.0.2725.64" 25 0.001 [kube-system-kubernetes-dashboard-443] ipWhichEndsWith.249:8443 7 0.001 200 </code></pre> <p>On my mind it is related with nginx redirection to upstream: "<a href="http://ipWhichEndsWith.249:8443/dashboard" rel="noreferrer">http://ipWhichEndsWith.249:8443/dashboard</a>" . tried to update controller image version to 0.9.0-beta.19 - didnt help</p> <p>Thank you for any help.</p>
<p>As you pointed out, looks like nginx is proxying your https request to <code>ipWhichEndsWith.249:8443</code>, which is an HTTPS endpoint, using <code>http</code> as protocol. </p> <p>You should add the following annotation to your PodSpec:</p> <blockquote> <p><strong>LATEST</strong></p> <p>This annotation was added to replace the deprecated annotation since 0.18.0</p> <p><a href="https://github.com/kubernetes/ingress-nginx/pull/2871" rel="noreferrer">#2871</a> Add support for AJP protocol</p> </blockquote> <pre><code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" </code></pre> <hr> <blockquote> <p><strong>DEPRECATED</strong> </p> <p>This annotation was deprecated in 0.18.0 and removed after the release of 0.20.0</p> <p><a href="https://github.com/kubernetes/ingress-nginx/pull/3203" rel="noreferrer">#3203</a> Remove annotations grpc-backend and secure-backend already deprecated</p> </blockquote> <pre><code>nginx.ingress.kubernetes.io/secure-backends: "true" </code></pre> <hr> <p>This should make nginx forward your request to the pods with https.</p> <p>Source: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#backend-protocol" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#backend-protocol</a></p> <p>Docs: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol</a></p>
<p>I'm quite new to kubernetes and just wondering about several things.</p> <p>Consider I have two repositories (front-end, back-end). Each of them builds into a docker image. </p> <p>I want to manage these two images with kubernetes and run them inside the cluster. </p> <p>What is the best practice for storing the deployment *.yml files in such a case? A separate repo? </p>
<p>IMHO the best practice for this is <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a>, the Kubernetes package manager. In short, it allows you to use k8s manifests, such as deployment, service, persistence, etc., with the option to parameterize these with variables.</p> <p>You ask if it is better to keep these files in-repo or outside. Both ways have their pros and cons.</p> <h3>the helm way</h3> <p>When using helm, the best practice is to make a separate repo for just the helm charts (chart = kubernetes package). The advantage here is that you can build your own central repository, where all your "packages" are collected, including their version history.</p> <h3>in-repo</h3> <p>This makes the work for other developers easier, as everything related to your project is in its repository.</p> <h3>no helm, just the yaml files</h3> <p>You can just use the raw manifests, which is simpler but offers fewer options. My personal practice here is to keep the plain config files in-repo until I promote them to the helm level and place them in a central chart.</p>
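<p>A quick way to see what such a chart looks like is to let helm scaffold one (the chart name is arbitrary):</p> <pre><code>helm create front-end
# front-end/
#   Chart.yaml      # chart metadata and version
#   values.yaml     # image tag, replica count, ... overridable per environment
#   templates/      # deployment.yaml, service.yaml, ... with {{ .Values.* }} placeholders
#   charts/         # optional sub-charts
</code></pre> <p>The back-end service gets its own chart the same way, and both can be pushed to a central chart repository or kept next to their code.</p>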
<p>I'm using Openshift and Kubernetes as cloud platform for my application. For test purposes I need to intercept incoming http requests to my pods. Is this possible to do that with Kubernetes client library or maybe it can be configured with yaml?</p>
<p>Simple answer is no, you can't.</p> <p>One of the ways to overcome this is to exec into your container (<code>kubectl exec -it &lt;pod&gt; bash</code>), install tcpdump and run something like <code>tcpdump -i eth0 -n</code>.</p> <p>A more reasonable way to have it solved on infra level is to use some tracing tool like Jaeger/Zipkin</p>
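<p>A rough sketch of that workaround (the pod name and port are placeholders, and the install command depends on the image's package manager):</p> <pre><code>kubectl exec -it &lt;pod-name&gt; -- sh -c 'apt-get update &amp;&amp; apt-get install -y tcpdump || apk add --no-cache tcpdump'
kubectl exec -it &lt;pod-name&gt; -- tcpdump -i eth0 -n port 8080
</code></pre>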
<p>I have created self-signed user certificates for my kubernetes cluster and now want to distribute respective kubeconfig files to the users. </p> <p>How to I transform the .crt and .key files I used for the process to kubeconfig inline format? </p> <p>Here is a redacted sample inline representation of the crt file:</p> <pre><code>LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMrekNDQWVPZ0F3SUJBZ0lNRlFwTllCZ2hwSWFBclNJYU1BMEdDU3FHU0liM0RRRUJDd1VBTUJVeEV6QVIKQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13SGhjTk1UZ3dNVEUwTVRNeU9ERTVXaGNOTWpnd01URTBNVE15T0RFNQpXakFyTVJjd0ZRWURWUVFLRXc1emVYTjBaVzA2YldGemRHVnljekVRTUE0R0ExVUVBeE1IYTNWaVpXTm1aekNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWkraGFFb1d6Mk8yYUN5d3JhOHo4UHAKem9jUTBHK2JnTVFGQSttZzZCQkhRWCt1UFhQbVBpM2FOSjBmcXBsM0sySjkvbkNodVo4T0JRT1ZHa3ZGaDdIbApWQmR6WG9qOHZ6ZUplUko4SFBlNDV5NXJYQnZtUkRUYUhGSWJ1ZWdGYzlYRFNCemtob21jYTlKOHdXSS9nUHdpCnNaaTczd2o1TVdtbnk3MlRyQ1RuZktUTzVXY1IyT2txRGNCalhPb2thWTFqckkrSlpSdmpIa1FobnB2bTNrWW4KVnNo-----------xdTZiS1k1WVFYYUwvNXNRcTFKNXMyVnh1ckRaQ2c5anA4ZlQrMEUzbWwvM1lkTEF0MjI0NG1Ec1MKdFFjM2k5Nk8rM0xFeU90REsraW5vKzB1WnZWZ0lrazZhOG9LS0hxaks5b21oOWg3WjE1UjVwWkxIa3dTVmluYwpLbHZGOUt4WXJrekdidmFiRUZZd0p1ejFTdW02ZkJ4dDQ5THkvWDFKQkZ1K1pnWDFPWjdnM2ZPeGt3WWdtVVBnCmt5Mmx1Zk1MZWI2SzdwOVdkaUxsUFAyRWk4aG9CWTNXQk9UQk1kcXY1Wm01VWpUUm9sZkIrTXZwTEp0ZlFOST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= </code></pre>
<p>Simply encode it with Base64: <code>cat mycert.crt | base64 -w0</code> and paste the output into the corresponding <code>client-certificate-data</code> / <code>client-key-data</code> field of the kubeconfig.</p>
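<p>Alternatively, <code>kubectl</code> can embed the files for you; a small sketch (the user name and file paths are placeholders):</p> <pre><code># writes base64-encoded client-certificate-data / client-key-data into the kubeconfig
kubectl config set-credentials myuser \
  --client-certificate=mycert.crt \
  --client-key=mykey.key \
  --embed-certs=true
</code></pre>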
<p>I'm trying to run pod with Cassandra database, below is its deployment description:</p> <pre><code>- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: cassandra namespace: test spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 2 selector: matchLabels: app: cassandra template: metadata: labels: app: cassandra spec: containers: - env: - name: MAX_HEAP_SIZE value: 1024M - name: HEAP_NEWSIZE value: 1024M image: cassandra:3.10 name: cassandra ports: - containerPort: 9042 protocol: TCP </code></pre> <p>The pod gets created and then goes into CrashLoopBackOff. When I try <code>kubectl describe</code> here's what I see:</p> <pre><code>Name: cassandra-6b5f5c46cf-zpwlx Namespace: test Node: minikube/192.168.99.102 Start Time: Thu, 18 Jan 2018 15:26:05 +0200 Labels: app=cassandra pod-template-hash=2619170279 Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"test","name":"cassandra-6b5f5c46cf","uid":"22f28f45-fc53-11e7-ae64-08002798f... Status: Running IP: 172.17.0.7 Controlled By: ReplicaSet/cassandra-6b5f5c46cf Containers: cassandra: Container ID: docker://b3477788391622145350e870c00e19561ee662946aa5a307cc8bea28fc874544 Image: cassandra:3.10 Image ID: docker-pullable://cassandra@sha256:af21476b230507c6869d758e4dec134886210bd89d56deade90bc835a1c0af37 Port: 9042/TCP State: Terminated Reason: Error Exit Code: 137 Started: Thu, 18 Jan 2018 15:26:26 +0200 Finished: Thu, 18 Jan 2018 15:26:28 +0200 Last State: Terminated Reason: Error Exit Code: 137 Started: Thu, 18 Jan 2018 15:26:11 +0200 Finished: Thu, 18 Jan 2018 15:26:14 +0200 Ready: False Restart Count: 2 Environment: MAX_HEAP_SIZE: 1024M HEAP_NEWSIZE: 1024M Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-77lfg (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-77lfg: Type: Secret (a volume populated by a Secret) SecretName: default-token-77lfg Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 28s default-scheduler Successfully assigned cassandra-6b5f5c46cf-zpwlx to minikube Normal SuccessfulMountVolume 28s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-77lfg" Normal Pulled 7s (x3 over 27s) kubelet, minikube Container image "cassandra:3.10" already present on machine Normal Created 7s (x3 over 27s) kubelet, minikube Created container Normal Started 6s (x3 over 27s) kubelet, minikube Started container Warning BackOff 4s (x2 over 18s) kubelet, minikube Back-off restarting failed container Warning FailedSync 4s (x2 over 18s) kubelet, minikube Error syncing pod </code></pre> <p>The error reporting is completely useless: it's just some generic messages that tell nothing about the problem.</p> <p>There's a suspicious paragraph in pod's description: volumes. I didn't ask to mount any volumes for this container. However, after some web search, I think that whatever is mounted in this container is just some technical aspect of how Kubernetes works and has no actual meaning.</p> <p>Whatever the case: how can I get more information from minikube about what it was trying to do, and what failed?</p>
<p>Your pod is in <code>CrashLoopBackoff</code> state. This means that the container inside your pod is terminating its execution, kubernetes is trying to run it again, but it terminates again, giving you a <code>Crash Loop</code>.</p> <p>I suggest you to take a look at the container's output by running:</p> <pre><code>kubectl -n test logs -f cassandra-6b5f5c46cf-zpwlx </code></pre> <p>That should be cassandra's output and should explain the reason cassandra is not running.</p>
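<p>If the container has already been restarted, the current logs may be empty; in that case ask for the previous instance's output:</p> <pre><code>kubectl -n test logs --previous cassandra-6b5f5c46cf-zpwlx
</code></pre>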
<p>Istio and gRPC seem complementary and I'd like to use both in the clusters.</p> <p>The thing is that they both add an extra container which receives/proxy communication between pods / microservices.</p> <p>Is it advised or not to use both in parallel in all pods?</p> <p>Are there particular adaptations to do if one uses both?</p>
<p>Istio and gRPC do work well together. When declaring your services' ports to Istio, just make sure to name them <code>grpc-something</code> so the proxy knows it is h2/gRPC traffic and routes it properly.</p> <p>You mention that gRPC adds an extra container - why not have your service speak gRPC natively? </p> <p>We do have future plans for protocol transcoding and rich integrated gRPC/Istio libraries that would skip layers, but that's not there yet.</p>
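<p>A minimal sketch of such a declaration (the service and app names are placeholders); the <code>grpc-</code> prefix on the port name is what matters:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service
spec:
  selector:
    app: my-grpc-app
  ports:
  - name: grpc-api        # the "grpc-" prefix tells the Istio proxy to treat this as gRPC/h2 traffic
    port: 50051
    targetPort: 50051
</code></pre>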
<p>Not really a programming question, but I am quite curious to know how Kubernetes or Minikube manages secrets and uses them on multiple nodes/pods.</p> <p>Let's say I create a secret to pull an image with <code>kubectl</code> as below - </p> <pre><code>$ kubectl create secret docker-registry regsecret --docker-server=https://index.docker.io/v1/ --docker-username=$USERNM --docker-password=$PASSWD [email protected] </code></pre> <p>What processes occur in the backend and how will k8s or Minikube use the secret on multiple nodes/pods?</p>
<p>All data in Kubernetes is managed by the <code>API Server</code> component that performs CRUD operations on the data store (currently the only option is <code>etcd</code>). </p> <p>When you submit a <code>secret</code> with <code>kubectl</code> to the <code>API Server</code> it stores the resource and data in <code>etcd</code>. It is recommended to enable encryption for secrets in the API Server (through setting the right flags) so that the data is encrypted at rest, otherwise anyone with access to <code>etcd</code> will be able to read your <code>secrets</code> in plain text.</p> <p>When the <code>secret</code> is needed for either mounting in a <code>Pod</code> or, in your example, for pulling a Docker image from a private registry, it is requested from the <code>API Server</code> by the node-local <code>kubelet</code> and kept in <code>tmpfs</code> so it never touches any hard disk unencrypted.</p> <p>Here another security recommendation comes into play, which is called Node Authorization (again set up by setting the right flags and distributing certificates to the API Server and Kubelets). With Node Authorization enabled you can make sure that a <code>kubelet</code> can only request resources (incl. <code>secrets</code>) that are meant to be run on that specific node, so a hacked node just exposes the resources on that single node and not everything.</p>
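<p>For reference, the encryption-at-rest configuration handed to the API server looks roughly like this (the exact flag name and the kind/apiVersion differ between Kubernetes versions, so treat this as a sketch):</p> <pre><code>kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: &lt;base64-encoded 32-byte key&gt;
      - identity: {}
</code></pre>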
<p>I'm trying to use PodSecurityPolicy to harden my cluster. I created 2 Pod Security Policies, one is default which allows minimal privilege and the other one is restricted which allows all privileges of a pod can request by SecurityContext. The default one can be used by any service account, the restricted one can only be used by some service accounts.</p> <p>As said in the official document, "most Kubernetes pods are not created directly by users. Instead, they are typically created indirectly as part of a Deployment, ReplicaSet, or other templated controller via the controller manager. Granting the controller access to the policy would grant access for all pods created by that the controller, so the preferred method for authorizing policies is to grant access to the pod’s service account". But it seems any user has permissions to create deployment can specify any service account in the yaml by spec.ServiceAccountName. That means as long as some naughty one knows the service account name that has access to the restricted PodSecurityPolicy, he can create a pod with that service account which will allow this pod to get escalated privileges. Then he can do anything in the pod.</p> <p>So is there a way to prevent misuse of service account in a pod? For example, if a user wants to create a pod with a service account which he doesn't have permission to use, the apiserver would block the request.</p> <p>Can anyone give any ideas? Thanks!</p>
<p>Namespaces allow you to limit a user to a set of service accounts. Untrusted users can be limited to namespaces containing only low-privileged service accounts.</p>
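<p>Concretely, access to a PodSecurityPolicy is granted with RBAC's <code>use</code> verb, and a RoleBinding scopes that grant to service accounts in a single namespace (the names below are placeholders; older clusters use the <code>extensions</code> API group for PSPs):</p> <pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: use-restricted-psp
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: restricted-psp-users
  namespace: trusted-namespace
subjects:
- kind: ServiceAccount
  name: privileged-sa
  namespace: trusted-namespace
roleRef:
  kind: ClusterRole
  name: use-restricted-psp
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>Users who cannot create pods in <code>trusted-namespace</code> then have no way to ride on that service account.</p>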
<p>For a Rails app in production, is it good practice, besides the autoscaler, to set up several puma workers per pod? Or is it better just to have more running pods?</p>
<p>In my experience, it is better to have a higher number of smaller (in terms of resource occupation) pods rather than a smaller number of bigger pods.</p> <p>The reasons why I came to this thinking are:</p> <p>1) a smaller pod is quicker to spin up and to be moved around by the kube controller;</p> <p>2) the failure of a pod instance has less impact on the system's overall performance (because there is a higher number of other replicas running);</p> <p>3) a bigger pod could require the cluster autoscaler to spin up a new node more frequently (it needs more resources to be available on a node in order to be scheduled).</p> <p>That's my thought, I'd love to hear other opinions though.</p>
<p>I used <code>helm</code> to install <code>Prometheus</code> and <code>Grafana</code> on a local <code>minikube</code>:</p> <pre><code>$ helm install stable/prometheus
$ helm install stable/grafana
</code></pre> <p>The Prometheus server, alertmanager and Grafana run fine after setting up port-forwards:</p> <pre><code>$ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 9090

$ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 9093

$ export POD_NAME=$(kubectl get pods --namespace default -l "app=excited-crocodile-grafana,component=grafana" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace default port-forward $POD_NAME 3000
</code></pre> <p><a href="https://i.stack.imgur.com/sD6Di.png" rel="noreferrer"><img src="https://i.stack.imgur.com/sD6Di.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/UPXUF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UPXUF.png" alt="enter image description here"></a></p> <p>When adding the Data Source in Grafana, I got an <code>HTTP Error Bad Gateway</code> error:</p> <p><a href="https://i.stack.imgur.com/Cnn1B.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Cnn1B.png" alt="enter image description here"></a></p> <p>I imported dashboard 315 from:</p> <blockquote> <p><a href="https://grafana.com/dashboards/315" rel="noreferrer">https://grafana.com/dashboards/315</a></p> </blockquote> <p>Then, when checking <code>Kubernetes cluster monitoring (via Prometheus)</code>, I got a <code>Templating init failed</code> error:</p> <p><a href="https://i.stack.imgur.com/Wm2kt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Wm2kt.png" alt="enter image description here"></a></p> <p>Why?</p>
<p>In the HTTP settings of Grafana you set <code>Access</code> to <code>Proxy</code>, which means that Grafana wants to access Prometheus. Since Kubernetes uses an overlay network, it is a different IP.</p> <p>There are two ways of solving this:</p> <ol> <li>Set <code>Access</code> to <code>Direct</code>, so the browser directly connects to Prometheus.</li> <li>Use the Kubernetes-internal IP or domain name. I don't know about the Prometheus Helm-chart, but assuming there is a <code>Service</code> named <code>prometheus</code>, something like <code>http://prometheus:9090</code> should work.</li> </ol>
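<p>To find the right in-cluster address for the data source, look up the Prometheus server Service created by the chart (the labels below match the ones used in the port-forward commands above; the exact service name depends on the release name):</p> <pre><code>kubectl get svc -l "app=prometheus,component=server"
# then use its cluster-internal DNS name as the Grafana data source URL, e.g.
# http://&lt;release-name&gt;-prometheus-server.default.svc.cluster.local
</code></pre>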
<p>I am pretty new to Kubernetes and wanted to setup Kafka and zookeeper with it. I was able to setup Apache Kafka and Zookeeper in Kubernetes using StatefulSets. I followed <a href="https://github.com/kow3ns/kubernetes-zookeeper" rel="noreferrer">this</a> and <a href="https://github.com/kow3ns/kubernetes-kafka" rel="noreferrer">this</a> to build my manifest file. I made 1 replica of kafka and zookeeper each and also used persistent volumes. All pods are running and ready.</p> <p>I tried to expose kafka and used <code>Service</code> for this by specifying a nodePort(30010). Seemingly this would expose kafka to the outside world where they can send messages to the kafka broker and also consume from it.</p> <p>But in my Java application, I made a consumer and added the bootstrapServer as <code>&lt;ip-address&gt;:30010</code>, the following logs were displayed:</p> <pre><code>INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 2147483647 rack: null) for group workerListener. INFO o.a.k.c.c.i.AbstractCoordinator - Marking the coordinator kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 2147483647 rack: null) dead for group workerListener </code></pre> <p>Interestingly, when I tested the cluster using <code>kubectl</code> commands, I was able to produce and consume messages:</p> <pre><code>kubectl run -ti --image=gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 produce --restart=Never --rm \ -- kafka-console-producer.sh --topic test --broker-list kafka-0.kafka-hs.default.svc.cluster.local:9093 done; kubectl run -ti --image=gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 consume --restart=Never --rm -- kafka-console-consumer.sh --topic test --bootstrap-server kafka-0.kafka-hs.default.svc.cluster.local:9093 </code></pre> <p>Can someone point me in the right direction why it is marking the coordinator as dead?</p> <p><strong>kafka.yml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka-hs labels: app: kafka spec: ports: - port: 9093 name: server clusterIP: None selector: app: kafka --- apiVersion: v1 kind: Service metadata: name: kafka-cs labels: app: kafka spec: type: NodePort ports: - port: 9093 nodePort: 30010 protocol: TCP selector: app: kafka --- apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: kafka spec: serviceName: kafka-hs replicas: 1 podManagementPolicy: Parallel updateStrategy: type: RollingUpdate template: metadata: labels: app: kafka spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app" operator: In values: - kafka topologyKey: "kubernetes.io/hostname" podAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: labelSelector: matchExpressions: - key: "app" operator: In values: - zk topologyKey: "kubernetes.io/hostname" terminationGracePeriodSeconds: 300 containers: - name: k8skafka imagePullPolicy: Always image: gcr.io/google_containers/kubernetes-kafka:1.0-10.2.1 resources: requests: memory: "1Gi" cpu: "0.5" ports: - containerPort: 9093 name: server command: - sh - -c - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \ --override listeners=PLAINTEXT://:9093 \ --override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \ --override log.dir=/var/lib/kafka \ --override auto.create.topics.enable=true \ --override auto.leader.rebalance.enable=true \ --override background.threads=10 \ --override 
compression.type=producer \ --override delete.topic.enable=false \ --override leader.imbalance.check.interval.seconds=300 \ --override leader.imbalance.per.broker.percentage=10 \ --override log.flush.interval.messages=9223372036854775807 \ --override log.flush.offset.checkpoint.interval.ms=60000 \ --override log.flush.scheduler.interval.ms=9223372036854775807 \ --override log.retention.bytes=-1 \ --override log.retention.hours=168 \ --override log.roll.hours=168 \ --override log.roll.jitter.hours=0 \ --override log.segment.bytes=1073741824 \ --override log.segment.delete.delay.ms=60000 \ --override message.max.bytes=1000012 \ --override min.insync.replicas=1 \ --override num.io.threads=8 \ --override num.network.threads=3 \ --override num.recovery.threads.per.data.dir=1 \ --override num.replica.fetchers=1 \ --override offset.metadata.max.bytes=4096 \ --override offsets.commit.required.acks=-1 \ --override offsets.commit.timeout.ms=5000 \ --override offsets.load.buffer.size=5242880 \ --override offsets.retention.check.interval.ms=600000 \ --override offsets.retention.minutes=1440 \ --override offsets.topic.compression.codec=0 \ --override offsets.topic.num.partitions=50 \ --override offsets.topic.replication.factor=3 \ --override offsets.topic.segment.bytes=104857600 \ --override queued.max.requests=500 \ --override quota.consumer.default=9223372036854775807 \ --override quota.producer.default=9223372036854775807 \ --override replica.fetch.min.bytes=1 \ --override replica.fetch.wait.max.ms=500 \ --override replica.high.watermark.checkpoint.interval.ms=5000 \ --override replica.lag.time.max.ms=10000 \ --override replica.socket.receive.buffer.bytes=65536 \ --override replica.socket.timeout.ms=30000 \ --override request.timeout.ms=30000 \ --override socket.receive.buffer.bytes=102400 \ --override socket.request.max.bytes=104857600 \ --override socket.send.buffer.bytes=102400 \ --override unclean.leader.election.enable=true \ --override zookeeper.session.timeout.ms=6000 \ --override zookeeper.set.acl=false \ --override broker.id.generation.enable=true \ --override connections.max.idle.ms=600000 \ --override controlled.shutdown.enable=true \ --override controlled.shutdown.max.retries=3 \ --override controlled.shutdown.retry.backoff.ms=5000 \ --override controller.socket.timeout.ms=30000 \ --override default.replication.factor=1 \ --override fetch.purgatory.purge.interval.requests=1000 \ --override group.max.session.timeout.ms=300000 \ --override group.min.session.timeout.ms=6000 \ --override inter.broker.protocol.version=0.10.2-IV0 \ --override log.cleaner.backoff.ms=15000 \ --override log.cleaner.dedupe.buffer.size=134217728 \ --override log.cleaner.delete.retention.ms=86400000 \ --override log.cleaner.enable=true \ --override log.cleaner.io.buffer.load.factor=0.9 \ --override log.cleaner.io.buffer.size=524288 \ --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \ --override log.cleaner.min.cleanable.ratio=0.5 \ --override log.cleaner.min.compaction.lag.ms=0 \ --override log.cleaner.threads=1 \ --override log.cleanup.policy=delete \ --override log.index.interval.bytes=4096 \ --override log.index.size.max.bytes=10485760 \ --override log.message.timestamp.difference.max.ms=9223372036854775807 \ --override log.message.timestamp.type=CreateTime \ --override log.preallocate=false \ --override log.retention.check.interval.ms=300000 \ --override max.connections.per.ip=2147483647 \ --override num.partitions=1 \ --override producer.purgatory.purge.interval.requests=1000 \ --override 
replica.fetch.backoff.ms=1000 \ --override replica.fetch.max.bytes=1048576 \ --override replica.fetch.response.max.bytes=10485760 \ --override reserved.broker.max.id=1000 " env: - name: KAFKA_HEAP_OPTS value : "-Xmx512M -Xms512M" - name: KAFKA_OPTS value: "-Dlogging.level=INFO" volumeMounts: - name: kafka-pv-volume mountPath: /var/lib/kafka readinessProbe: exec: command: - sh - -c - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093" securityContext: runAsUser: 0 fsGroup: 1000 volumeClaimTemplates: - metadata: name: kafka-pv-volume spec: storageClassName: manual accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 1Gi </code></pre> <p><strong>zookeeper.yml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: zk-hs labels: app: zk spec: ports: - port: 2888 name: server - port: 3888 name: leader-election clusterIP: None selector: app: zk --- apiVersion: v1 kind: Service metadata: name: zk-cs labels: app: zk spec: ports: - port: 2181 name: client selector: app: zk --- apiVersion: apps/v1 kind: StatefulSet metadata: name: zk spec: selector: matchLabels: app: zk serviceName: zk-hs replicas: 1 updateStrategy: type: RollingUpdate podManagementPolicy: Parallel template: metadata: labels: app: zk spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app" operator: In values: - zk topologyKey: "kubernetes.io/hostname" containers: - name: kubernetes-zookeeper imagePullPolicy: Always image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10" resources: requests: memory: "1Gi" cpu: "0.5" ports: - containerPort: 2181 name: client - containerPort: 2888 name: server - containerPort: 3888 name: leader-election command: - sh - -c - "start-zookeeper \ --servers=1 \ --data_dir=/var/lib/zookeeper/data \ --data_log_dir=/var/lib/zookeeper/data/log \ --conf_dir=/opt/zookeeper/conf \ --client_port=2181 \ --election_port=3888 \ --server_port=2888 \ --tick_time=2000 \ --init_limit=10 \ --sync_limit=5 \ --heap=512M \ --max_client_cnxns=60 \ --snap_retain_count=3 \ --purge_interval=12 \ --max_session_timeout=40000 \ --min_session_timeout=4000 \ --log_level=INFO" readinessProbe: exec: command: - sh - -c - "zookeeper-ready 2181" initialDelaySeconds: 10 timeoutSeconds: 5 livenessProbe: exec: command: - sh - -c - "zookeeper-ready 2181" initialDelaySeconds: 10 timeoutSeconds: 5 volumeMounts: - name: pv-volume mountPath: /var/lib/zookeeper securityContext: runAsUser: 0 fsGroup: 1000 volumeClaimTemplates: - metadata: name: pv-volume spec: storageClassName: manual accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 1Gi </code></pre> <p><strong>EDIT:</strong></p> <p>I changed log level to TRACE. 
These are the logs I got</p> <pre><code>2018-01-11 18:56:24,617 TRACE o.a.k.c.NetworkClient - Completed receive from node -1, for key 3, received {brokers=[{node_id=0,host=kafka-0.kafka-hs.default.svc.cluster.local,port=9093,rack=null}],cluster_id=LwSLmJpTQf6tSKPsfvriIg,controller_id=0,topic_metadata=[{topic_error_code=0,topic=mdm.worker.request,is_internal=false,partition_metadata=[{partition_error_code=0,partition_id=0,leader=0,replicas=[0],isr=[0]}]}]} 2018-01-11 18:56:24,621 DEBUG o.a.k.c.Metadata - Updated cluster metadata version 2 to Cluster(id = LwSLmJpTQf6tSKPsfvriIg, nodes = [kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 0 rack: null)], partitions = [Partition(topic = mdm.worker.request, partition = 0, leader = 0, replicas = [0], isr = [0])]) 2018-01-11 18:56:24,622 TRACE o.a.k.c.NetworkClient - Completed receive from node -1, for key 10, received {error_code=0,coordinator={node_id=0,host=kafka-0.kafka-hs.default.svc.cluster.local,port=9093}} 2018-01-11 18:56:24,624 DEBUG o.a.k.c.c.i.AbstractCoordinator - Received GroupCoordinator response ClientResponse(receivedTimeMs=1515678984622, latencyMs=798, disconnected=false, requestHeader={api_key=10,api_version=0,correlation_id=0,client_id=consumer-1}, responseBody=FindCoordinatorResponse(throttleTimeMs=0, errorMessage='null', error=NONE, node=kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 0 rack: null))) for group workerListener 2018-01-11 18:56:24,625 INFO o.a.k.c.c.i.AbstractCoordinator - Discovered coordinator kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 2147483647 rack: null) for group workerListener. 2018-01-11 18:56:24,625 DEBUG o.a.k.c.NetworkClient - Initiating connection to node 2147483647 at kafka-0.kafka-hs.default.svc.cluster.local:9093. 2018-01-11 18:56:24,633 DEBUG o.a.k.c.NetworkClient - Error connecting to node 2147483647 at kafka-0.kafka-hs.default.svc.cluster.local:9093: java.io.IOException: Can't resolve address: kafka-0.kafka-hs.default.svc.cluster.local:9093 at org.apache.kafka.common.network.Selector.connect(Selector.java:195) at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:762) at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:224) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.tryConnect(ConsumerNetworkClient.java:462) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$GroupCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:598) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$GroupCoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:579) at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204) at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167) at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:488) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:348) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184) at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:214) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:200) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286) at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:614) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.UnresolvedAddressException: null at sun.nio.ch.Net.checkAddress(Net.java:101) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) at org.apache.kafka.common.network.Selector.connect(Selector.java:192) ... 22 common frames omitted 2018-01-11 18:56:24,634 INFO o.a.k.c.c.i.AbstractCoordinator - Marking the coordinator kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 2147483647 rack: null) dead for group workerListener 2018-01-11 18:56:24,735 TRACE o.a.k.c.NetworkClient - Found least loaded node kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 0 rack: null) 2018-01-11 18:56:24,735 DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending GroupCoordinator request for group workerListener to broker kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 0 rack: null) 2018-01-11 18:56:24,735 DEBUG o.a.k.c.NetworkClient - Initiating connection to node 0 at kafka-0.kafka-hs.default.svc.cluster.local:9093. 2018-01-11 18:56:24,736 DEBUG o.a.k.c.NetworkClient - Error connecting to node 0 at kafka-0.kafka-hs.default.svc.cluster.local:9093: java.io.IOException: Can't resolve address: kafka-0.kafka-hs.default.svc.cluster.local:9093 at org.apache.kafka.common.network.Selector.connect(Selector.java:195) at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:762) at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:224) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.trySend(ConsumerNetworkClient.java:408) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:223) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:214) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:200) at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286) at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078) at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:614) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) Caused by: java.nio.channels.UnresolvedAddressException: null at 
sun.nio.ch.Net.checkAddress(Net.java:101) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) at org.apache.kafka.common.network.Selector.connect(Selector.java:192) ... 15 common frames omitted 2018-01-11 18:56:24,737 TRACE o.a.k.c.NetworkClient - Removing node kafka-0.kafka-hs.default.svc.cluster.local:9093 (id: 0 rack: null) from least loaded node selection: is-blacked-out: true, in-flight-requests: 0 2018-01-11 18:56:24,737 TRACE o.a.k.c.NetworkClient - Least loaded node selection failed to find an available node 2018-01-11 18:56:24,738 DEBUG o.a.k.c.NetworkClient - Give up sending metadata request since no node is available </code></pre>
<p>I had the same problem as you last week and solved it, so it's possible to expose Kafka outside Kubernetes!</p> <p><strong>Solution:</strong> In your Kafka <code>broker-config.yaml</code> you should map cluster external IP to your local DNS</p> <pre><code>kafka-I.kafka-hs.default.svc.cluster.local:9093 </code></pre> <p><strong>How To:</strong></p> <p>add those to your <em>server.properties</em> file:</p> <pre><code>listener.security.protocol.map=INTERNAL_PLAINTEXT:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT inter.broker.listener.name=INTERNAL_PLAINTEXT </code></pre> <p>if you have an <em>init</em> which run before <em>server.properties</em> you should add those:</p> <pre><code># add unique label to each pod kubectl label pods ${HOSTNAME} kafka-set-component=${HOSTNAME} EXTERNAL_LISTENER_IP=&lt;YOUR_KUBERNETES_CLUSTER_EXTERNAL_IP&gt; EXTERNAL_LISTENER_PORT=$((30093 + ${HOSTNAME##*-})) sed -i "s/#listeners=PLAINTEXT:\/\/:9092/listeners=INTERNAL_PLAINTEXT:\/\/0.0.0.0:9092,EXTERNAL_PLAINTEXT:\/\/0.0.0.0:9093/" /etc/kafka/server.properties sed -i "s/#advertised.listeners=PLAINTEXT:\/\/your.host.name:9092/advertised.listeners=INTERNAL_PLAINTEXT:\/\/$HOSTNAME.broker.kafka.svc.cluster.local:9092,EXTERNAL_PLAINTEXT:\/\/$EXTERNAL_LISTENER_IP:$EXTERNAL_LISTENER_PORT/" /etc/kafka/server.properties </code></pre> <p>otherwise you should find a way to add replace configurations in your <em>server.properties</em> at runtime.</p> <p><strong>Notice</strong> that you <strong>must</strong> have those lines commented in your server.properties file</p> <pre><code>#listeners=PLAINTEXT://:9092 #advertised.listeners=PLAINTEXT://your.host.name:9092 </code></pre> <p><strong>Services:</strong> Create headless service to map local DNS and a service for each broker you have:</p> <pre><code># A headless service to create DNS records --- apiVersion: v1 kind: Service metadata: name: broker namespace: kafka spec: ports: - port: 9092 # [podname].broker.kafka.svc.cluster.local clusterIP: None selector: app: kafka --- apiVersion: v1 kind: Service metadata: name: broker-0 namespace: kafka spec: type: NodePort ports: - port: 9093 nodePort: 30093 selector: kafka-set-component: kafka-0 --- apiVersion: v1 kind: Service metadata: name: broker-1 namespace: kafka spec: type: NodePort ports: - port: 9093 nodePort: 30094 selector: kafka-set-component: kafka-1 --- apiVersion: v1 kind: Service metadata: name: broker-2 namespace: kafka spec: type: NodePort ports: - port: 9093 nodePort: 30095 selector: kafka-set-component: kafka-2 </code></pre> <p><strong>Notes:</strong> - If you are running on GKE:</p> <ol> <li><em>YOUR_KUBERNETES_CLUSTER_EXTERNAL_IP</em> which declared in the <em>server.properties</em> init can be found via <code>gcloud compute instances list</code></li> <li>Also you must give permission to the firewall <code>gcloud compute firewall-rules create kafka-external --allow tcp:30093,tcp:30094,tcp:30095</code></li> </ol>
<p>I'm quite new to Kubernetes and just wondering about several things.</p> <p>Consider that I have two repositories (front-end, back-end). Each of them builds into a Docker image.</p> <p>I want to manage these two images with Kubernetes and run them inside the cluster.</p> <p>What is the best practice for storing the deployment *.yml files in such a case? A separate repo?</p>
<p>Similar answer to a similar question here: <a href="https://stackoverflow.com/questions/47168381/best-practices-for-storing-kubernetes-configuration-in-source-control">Best practices for storing kubernetes configuration in source control</a></p> <p>Like David says, you can always try using <a href="https://github.com/kubernetes/helm" rel="nofollow noreferrer">helm</a>. However, I would recommend against helm for simple projects or if you're just starting out. I find it complex and hesitate to run another stateful component on my k8s cluster. </p> <p>I'm listing out some other options below.</p> <p><strong>TL;DR: (In order of personal preference)</strong></p> <ul> <li>Monorepo with all source code and yaml files in one place</li> <li>yaml files in a separate k8s repo</li> <li>yaml files in source code repos (next to the Dockerfile that you presumably have in each repo)</li> </ul> <hr> <p><strong>1. Monorepo</strong></p> <ul> <li>Separate folders for each microservice.</li> <li>Microservice-specific k8s yamls (deployment.yml, service.yml) go inside the microservice folders</li> <li>Folder for cluster-wide configuration and k8s resources (ingress/api-gateway)</li> <li>Folder for integration tests</li> </ul> <p>(A rough layout sketch for this option is shown at the end of this answer.)</p> <p>If you're worried about monorepos, do read <a href="https://medium.com/@maoberlehner/monorepos-in-the-wild-33c6eb246cb9" rel="nofollow noreferrer">about</a> <a href="https://developer.atlassian.com/blog/2015/10/monorepos-in-git/" rel="nofollow noreferrer">them</a> before you decide to ditch them because they don't sound elegant ;)</p> <p>Protip: Add some client-side jinja templating if you have multiple k8s clusters and you don't want to write a different k8s file per cluster.</p> <p><strong>2. Separate k8s repo</strong></p> <p>Same as 1, but without the source code. </p> <p><strong>3. k8s files inside separate source repos</strong></p> <p>Keep the deployment and service yml files within the repos. This is probably best if both microservices are completely decoupled, can be independently tested and don't really need any common k8s resources that need to be created. I've never come across this in practice though. I've had to merge repos of 7 different microservices into one repo for release sanity.</p>
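<p>For option 1 (the monorepo), the layout described above might look roughly like this; all names are placeholders, adapt them to your front-end/back-end services:</p> <pre><code>repo-root/
  frontend/
    Dockerfile
    src/
    k8s/
      deployment.yml
      service.yml
  backend/
    Dockerfile
    src/
    k8s/
      deployment.yml
      service.yml
  cluster/              # cluster-wide resources (ingress, api-gateway, ...)
  integration-tests/
</code></pre>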
<p>Update: OK, I am not alone: <a href="https://github.com/docker/for-mac/issues/2445" rel="nofollow noreferrer">https://github.com/docker/for-mac/issues/2445</a></p> <p>Following <a href="https://github.com/kubernetes/examples.git" rel="nofollow noreferrer">this</a>, he gets an External-IP:</p> <p>I got the YAML files from GitHub.</p> <pre><code>wordpress LoadBalancer 10.108.161.250 &lt;pending&gt; 80:30806/TCP </code></pre> <p>Why is my LoadBalancer just pending when his (in the video) is not?</p> <p>And please note that he does have a LoadBalancer that exposes an external IP for the service.</p>
<p><code>LoadBalancer</code> services require support from the underlying infrastructure. They work automatically if you deploy them on supported providers such as AWS or GKE.</p> <p>They don't work if you deploy them locally on your Mac or with minikube.</p> <p>Details here: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#external-load-balancer-providers" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#external-load-balancer-providers</a></p>
<p>I am trying to create a Service Account with 'roles/container.admin' and I get an error saying that the role is not supported for this resource.</p> <pre><code>$ gcloud iam service-accounts add-iam-policy-binding [email protected] --member='serviceAccount:[email protected]' --role='roles/container.admin' ERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) INVALID_ARGUMENT: Role roles/container.admin is not supported for this resource. </code></pre> <p>If I create a Service Account from the CONSOLE UI I can add this role without a problem.</p>
<p>You have to use <code>gcloud projects</code> to add roles for a service account at a project level as shown <a href="https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource" rel="noreferrer">here</a>.</p> <p>This works for me:</p> <pre><code>gcloud projects add-iam-policy-binding PROJECT_ID \ --member serviceAccount:[email protected] \ --role roles/container.admin </code></pre>
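<p>If you want to double-check that the binding took effect, you can dump the project's IAM policy and look for <code>roles/container.admin</code> in the bindings for your service account:</p> <pre><code>gcloud projects get-iam-policy PROJECT_ID
</code></pre>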
<p>Has anyone tried to run VMs for production on a Kubernetes cluster? Is there a way to run a KVM instance inside a pod? I know that Google runs all of its VMs inside containers; is something like that planned for Kubernetes? Thank you</p>
<p>Another option is KubeVirt: <a href="https://github.com/kubevirt/" rel="nofollow noreferrer">https://github.com/kubevirt/</a></p> <p>An add-on to Kubernetes to run virtual machines, in the sense of classical virtual machines, as you can run them on VMware, oVirt, or OpenStack.</p> <p>The goal is to support migration of (currently) virtual machine workloads to containers, as well as having the ability to keep workloads virtualized if needed - but keep them close to (as in: on) the container infrastructure.</p> <p>KubeVirt provides an explicit API around virtualization features, see <a href="https://kubevirt.gitbooks.io/user-guide/" rel="nofollow noreferrer">https://kubevirt.gitbooks.io/user-guide/</a>.</p> <p>Only nit: it's still pretty much a work in progress, but it should be usable soon.</p>
<p>I am trying to develop a Helm chart for an application to ease release management and deployment of the application to Kubernetes. In order to do so, I have written a pre-install hook in the Helm chart. </p> <pre><code> apiVersion: batch/v1 kind: Job metadata: name: px-etcd-preinstall-hook labels: heritage: {{.Release.Service | quote }} release: {{.Release.Name | quote }} chart: "{{.Chart.Name}}-{{.Chart.Version}}" annotations: "helm.sh/hook": pre-install "helm.sh/hook-weight": "-5" "helm.sh/hook-delete-policy": hook-succeeded, hook-failed spec: backoffLimit: 2 template: spec: restartPolicy: Never containers: - name: pre-install-job imagePullPolicy: Always image: "hrishi/px-etcd-preinstall-hook:v1" command: ['/bin/sh'] args: ['/usr/bin/etcdStatus.sh',"{{ .Values.etcdEndPoint }}"] </code></pre> <p>This Docker container just checks whether an ETCD endpoint is accessible or not. The idea is for it to wait a few seconds, try a few times, and then exit.<br> Here is the initial shell script which runs as part of this container. </p> <pre><code>set -x echo "Initializing..." svcname=$1 echo $svcname etcdURL=$(echo "$svcname" | awk -F: '{ st = index($0,":");print substr($0,st+1)}') echo $etcdURL response=$(curl --write-out %{http_code} --silent --output /dev/null "$etcdURL/version") echo $response if [[ "$response" != 200 ]] then echo "Provided etcd url is not reachable. Exiting.." exit 1 fi </code></pre> <p>All is well and fine if the ETCD URL is accessible, but if the URL is inaccessible then I get an error stating <code>Error: Job failed: BackoffLimitExceeded</code>.</p> <p>I want to check if there is a way of setting a user-friendly error message stating that the URL isn't accessible, or something like that. It seems there isn't a way to do it right now, not that I know of. I tried making this just a <code>Pod</code> instead of a <code>Job</code>, and that doesn't work either. </p> <p>I looked at the docs for Helm but couldn't seem to find any information regarding this. </p>
<p>I don't think that is possible. But I'd take a different approach.</p> <p>If your application requires ETCD, why don't you check whether ETCD is accessible in one of your <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">Pod probes</a>, like liveness or readiness? That way, if there is no connectivity between your application and ETCD, your application won't start and you'll know that the probe failed when describing your Pod, in a more <em>Kubernetes way</em>.</p> <p>Furthermore, you can even make <code>helm install</code> <a href="https://github.com/kubernetes/helm/blob/master/docs/using_helm.md#helpful-options-for-installupgraderollback" rel="nofollow noreferrer">wait until all the Pods are <code>Ready</code></a>, meaning that the <code>helm install</code> command would fail if your application couldn't connect to ETCD.</p>
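<p>A minimal sketch of such a probe; the image name, etcd URL and timings are placeholders, and it assumes <code>curl</code> is available inside the container:</p> <pre><code>containers:
  - name: my-app
    image: my-app:latest                 # placeholder image
    readinessProbe:
      exec:
        command:
          - sh
          - -c
          # the pod stays NotReady (and helm install --wait fails) while etcd is unreachable
          - "curl --silent --fail http://my-etcd:2379/version"
      initialDelaySeconds: 10
      periodSeconds: 15
</code></pre>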
<p>We are using the kubernetes python client (4.0.0) in combination with Google's kubernetes engine (master + nodepools run k8s 1.8.4) to periodically schedule workloads on kubernetes. The simplified version of the script we use to create the pod, attach to the logs and report the end status of the pod looks as follows:</p> <pre><code>config.load_kube_config(persist_config=False) v1 = client.CoreV1Api() v1.create_namespaced_pod(body=pod_specs_dict, namespace=args.namespace) logging_response = v1.read_namespaced_pod_log( name=pod_name, namespace=args.namespace, follow=True, _preload_content=False ) for line in logging_response: line = line.rstrip() logging.info(line) status_response = v1.read_namespaced_pod_status(pod_name, namespace=args.namespace) print("Pod ended in status: {}".format(status_response.status.phase)) </code></pre> <p>Everything works pretty well; however, we are experiencing some authentication issues. Authentication happens through the default <code>gcp</code> auth-provider, for which I obtained the initial access token by running <code>gcloud container clusters get-credentials</code> manually on the scheduler. At seemingly random times, some API calls result in a 401 response from the API server. My guess is that this happens whenever the access token is expired and the script tries to obtain a new access token. However, multiple scripts can be running concurrently on the scheduler, resulting in the access token being obtained multiple times, of which only one is still valid. I tried out multiple ways to fix the issue (use <code>persist_config=True</code>, retry 401's after reloading the config,...) without any success. As I am not completely aware of how the gcp authentication and the kubernetes python client config work (and docs for both are rather scarce), I am a bit left in the dark. </p> <p>Should we use another authentication method instead of the <code>gcp</code> auth-provider? Is this a bug in the kubernetes python client? Should we use multiple config files?</p>
<p>In the end we solved this by using bearer token authentication, instead of relying on the default gcloud authentication method.</p> <p>Here are the steps I took to achieve this.</p> <p>First create a service account in the desired namespace, by creating a file with the following content.</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: &lt;name_of_service_account&gt; </code></pre> <p>Then use this file to create the service account:</p> <pre><code>kubectl create -f &lt;path_to_file&gt; --namespace=&lt;namespace_name&gt; </code></pre> <p>Each service account has a bearer token linked to it, which can be used for authentication. This bearer token is automatically mounted as a secret into the namespace. To find out what this token is, first find the name of the secret (it is of the form <code>&lt;service_account_name&gt;-token-&lt;random_string&gt;</code>) and then use that name to get its content.</p> <pre><code># To find our service account's token secret name kubectl get secrets --namespace=&lt;namespace_name&gt; # To read the token value kubectl describe secret/&lt;secret_name&gt; </code></pre> <p>After this you should find out the IP address of the API server and the <strong>Cluster CA certificate</strong> of the kubernetes cluster. This can be done by going to the Kubernetes Engine detail page in the Google Cloud console. Copy the content of the certificate into a local file.</p> <p>You can now use the bearer token to authenticate via the kubernetes python client, as follows:</p> <pre><code>from kubernetes import client configuration = client.Configuration() configuration.api_key["authorization"] = '&lt;bearer_token&gt;' configuration.api_key_prefix['authorization'] = 'Bearer' configuration.host = 'https://&lt;ip_of_api_server&gt;' configuration.ssl_ca_cert = '&lt;path_to_cluster_ca_certificate&gt;' v1 = client.CoreV1Api(client.ApiClient(configuration)) </code></pre>
<p>When I run an image in Kubernetes with <code>kubectl run</code>, environment variables are injected into the container.</p> <p>My problem is that the values are wrong. I do not have anything running at <code>10.0.0.1</code>. I believe the correct value there would be <code>10.1.0.1</code>. This misconfiguration causes, as far as I know, among other things, the error from kube-dns reproduced below.</p> <p>I would like to ask how are these variables injected into the container, preferably for a link into the code which takes care of this (I could not find anything). Also, some hints where the value 10.0.0.1 could be coming from.</p> <p>pod variables:</p> <pre><code>$ kubectl run -i --image=busybox --restart=Never -t busybox If you don't see a command prompt, try pressing enter. / # env KUBERNETES_SERVICE_PORT=443 KUBERNETES_PORT=tcp://10.0.0.1:443 HOSTNAME=busybox SHLVL=1 HOME=/root TERM=xterm KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_PROTO=tcp KUBERNETES_SERVICE_PORT_HTTPS=443 KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443 KUBERNETES_SERVICE_HOST=10.0.0.1 PWD=/ </code></pre> <p>kube-dns error:</p> <pre><code>$ kubectl --namespace kube-system logs kube-dns-2190035132-gxf80 kubedns [...] E0119 10:04:05.271499 55 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.0.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.0.0.1:443: i/o timeout I0119 10:04:05.771477 55 dns.go:174] Waiting for services and endpoints to be initialized from apiserver... </code></pre> <p>The closest thing to <code>10.0.0.1</code> that I have in my config is <code>--service-cluster-ip-range=10.0.0.0/24</code> parameter I am giving to <code>kube-apiserver</code>.</p> <p>I have the IP <code>10.0.0.1</code> in my etcd, in</p> <pre><code># ETCDCTL_API=3 etcdctl get "" --from-key [...] /registry/services/specs/default/kubernetes k8s v1Service kubernetes▒default"*$b198bc22-fcff-11e7-83a9-185e0fec8ce528B Z component apiserverZ provider kuberneteszC ▒ httpsTCP▒▒(10.0.0.1" ClusterIPClientIPBRZ`▒ ▒" /registry/services/specs/kube-system/kubernetes-dashboard k8s v1Service kubernetes-dashboard▒ kube-system"*$b9f0daef-fcff-11e7-83a9-185e0fec8ce528B ԾZ, addonmanager.kubernetes.io/mode ReconcileZ ppkubernetes-dashboardZ* kubernetes.io/minikube-addons dashboardZ3 &amp;kubernetes.io/minikube-addons-endpoint dashboardb 0kubectl.kubernetes.io/last-applied-configuration{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard","kubernetes.io/minikube-addons-endpoint":"dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"nodePort":30000,"port":80,"targetPort":9090}],"selector":{"app":"kubernetes-dashboard"},"type":"NodePort"}} z_ TCP▒PG▒( ppkubernetes-dashboard▒ 10.0.0.82NodePort:NoneBRZCluster`▒ ▒" </code></pre>
<p>These variables are injected by the kubelet: <a href="https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/kubelet/envvars/envvars.go#L45-L48" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/kubelet/envvars/envvars.go#L45-L48</a>, which I found via <code>git grep SERVICE_PORT</code>.</p> <p>It's possible that if your <code>kubernetes.default.svc.cluster.local</code> Service is pointing to the wrong IP, then running <code>kubectl --namespace=kube-system edit svc kubernetes</code> and changing the <code>ClusterIP</code> would sort that out; I don't have a cluster in front of me to test it, though.</p>
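<p>To inspect what the apiserver <code>Service</code> currently advertises (read-only, nothing is changed by this):</p> <pre><code>kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
</code></pre>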
<p>I used <code>Helm</code> to install <code>Prometheus</code> and <code>Grafana</code> in a Kubernetes cluster:</p> <pre><code>helm install stable/prometheus helm install stable/grafana </code></pre> <p>It has an <code>alertmanager</code> service.</p> <p><a href="https://i.stack.imgur.com/ePQ9w.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ePQ9w.png" alt="enter image description here"></a></p> <p>But I saw a blog post that introduced how to set up the alertmanager config with yaml files:</p> <blockquote> <p><a href="http://blog.wercker.com/how-to-setup-alerts-on-prometheus" rel="noreferrer">http://blog.wercker.com/how-to-setup-alerts-on-prometheus</a></p> </blockquote> <p>Is it possible, with the current setup (installed by Helm), to set some <code>alert rules</code> and config for <code>CPU</code> and <code>memory</code> and send email, without creating other yaml files?</p> <p>I saw an introduction to the k8s <code>configmap</code> files for <code>alertmanager</code>:</p> <blockquote> <p><a href="https://github.com/kubernetes/charts/tree/master/stable/prometheus#configmap-files" rel="noreferrer">https://github.com/kubernetes/charts/tree/master/stable/prometheus#configmap-files</a></p> </blockquote> <p>But it's not clear how to use it.</p> <hr> <h1>Edit</h1> <p>I downloaded the source code of <code>stable/prometheus</code> to see what it does. From the <code>values.yaml</code> file I found:</p> <pre><code>serverFiles: alerts: "" rules: "" prometheus.yml: |- rule_files: - /etc/config/rules - /etc/config/alerts scrape_configs: - job_name: prometheus static_configs: - targets: - localhost:9090 </code></pre> <blockquote> <p><a href="https://github.com/kubernetes/charts/blob/master/stable/prometheus/values.yaml#L600" rel="noreferrer">https://github.com/kubernetes/charts/blob/master/stable/prometheus/values.yaml#L600</a></p> </blockquote> <p>So I think I should write to this config file myself to define the alert <code>rules</code> and <code>alertmanager</code> config here. But I'm not clear about this block:</p> <pre><code> rule_files: - /etc/config/rules - /etc/config/alerts </code></pre> <p>Maybe it means the path inside the container. But there isn't any file there now. 
Should add here:</p> <pre><code>serverFiles: alert: "" rules: "" </code></pre> <h1>Edit 2</h1> <p>After set <code>alert rules</code> and <code>alertmanager</code> configuration in <code>values.yaml</code>:</p> <pre><code>## Prometheus server ConfigMap entries ## serverFiles: alerts: "" rules: |- # # CPU Alerts # ALERT HighCPU IF ((sum(node_cpu{mode=~"user|nice|system|irq|softirq|steal|idle|iowait"}) by (instance, job)) - ( sum(node_cpu{mode=~"idle|iowait"}) by (instance,job) ) ) / (sum(node_cpu{mode=~"user|nice|system|irq|softirq|steal|idle|iowait"}) by (instance, job)) * 100 &gt; 95 FOR 10m LABELS { service = "backend" } ANNOTATIONS { summary = "High CPU Usage", description = "This machine has really high CPU usage for over 10m", } # TEST ALERT APIHighRequestLatency IF api_http_request_latencies_second{quantile="0.5"} &gt;1 FOR 1m ANNOTATIONS { summary = "High request latency on {{$labels.instance }}", description = "{{ $labels.instance }} has amedian request latency above 1s (current value: {{ $value }}s)", } </code></pre> <p>Ran <code>helm install prometheus/</code> to install it.</p> <p>Start <code>port-forward</code> for <code>alertmanager</code> component:</p> <pre><code>export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}") kubectl --namespace default port-forward $POD_NAME 9093 </code></pre> <p>Then access browser to <code>http://127.0.0.1:9003</code>, got these messages:</p> <pre><code>Forwarding from 127.0.0.1:9093 -&gt; 9093 Handling connection for 9093 Handling connection for 9093 E0122 17:41:53.229084 7159 portforward.go:331] an error occurred forwarding 9093 -&gt; 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31237.140275133073152] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused Handling connection for 9093 E0122 17:41:53.243511 7159 portforward.go:331] an error occurred forwarding 9093 -&gt; 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31238.140565602109184] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused E0122 17:41:53.246011 7159 portforward.go:331] an error occurred forwarding 9093 -&gt; 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:54 socat[31239.140184300869376] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused Handling connection for 9093 Handling connection for 9093 E0122 17:41:53.846399 7159 portforward.go:331] an error occurred forwarding 9093 -&gt; 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:55 socat[31250.140004515874560] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused E0122 17:41:53.847821 7159 portforward.go:331] an error occurred forwarding 9093 -&gt; 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 08:37:55 socat[31251.140355466835712] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused Handling connection for 9093 E0122 17:41:53.858521 7159 portforward.go:331] an error occurred forwarding 9093 -&gt; 9093: error forwarding port 9093 to pod 6614ee96df545c266e5fff18023f8f7c87981f3340ee8913acf3d8da0e39e906, uid : exit status 1: 2018/01/22 
08:37:55 socat[31252.140268300003072] E connect(5, AF=2 127.0.0.1:9093, 16): Connection refused </code></pre> <p>Why?</p> <p>When I check <code>kubectl describe po illocutionary-heron-prometheus-alertmanager-587d747b9c-qwmm6</code>, got:</p> <pre><code>Name: illocutionary-heron-prometheus-alertmanager-587d747b9c-qwmm6 Namespace: default Node: minikube/192.168.99.100 Start Time: Mon, 22 Jan 2018 17:33:54 +0900 Labels: app=prometheus component=alertmanager pod-template-hash=1438303657 release=illocutionary-heron Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"illocutionary-heron-prometheus-alertmanager-587d747b9c","uid":"f... Status: Running IP: 172.17.0.10 Created By: ReplicaSet/illocutionary-heron-prometheus-alertmanager-587d747b9c Controlled By: ReplicaSet/illocutionary-heron-prometheus-alertmanager-587d747b9c Containers: prometheus-alertmanager: Container ID: docker://0808a3ecdf1fa94b36a1bf4b8f0d9d2933bc38afa8b25e09d0d86f036ac3165b Image: prom/alertmanager:v0.9.1 Image ID: docker-pullable://prom/alertmanager@sha256:ed926b227327eecfa61a9703702c9b16fc7fe95b69e22baa656d93cfbe098320 Port: 9093/TCP Args: --config.file=/etc/config/alertmanager.yml --storage.path=/data State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Mon, 22 Jan 2018 17:55:24 +0900 Finished: Mon, 22 Jan 2018 17:55:24 +0900 Ready: False Restart Count: 9 Readiness: http-get http://:9093/%23/status delay=30s timeout=30s period=10s #success=1 #failure=3 Environment: &lt;none&gt; Mounts: /data from storage-volume (rw) /etc/config from config-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5b8l (ro) prometheus-alertmanager-configmap-reload: Container ID: docker://b4a349bf7be4ea78abe6899ad0173147f0d3f6ff1005bc513b2c0ac726385f0b Image: jimmidyson/configmap-reload:v0.1 Image ID: docker-pullable://jimmidyson/configmap-reload@sha256:2d40c2eaa6f435b2511d0cfc5f6c0a681eeb2eaa455a5d5ac25f88ce5139986e Port: &lt;none&gt; Args: --volume-dir=/etc/config --webhook-url=http://localhost:9093/-/reload State: Running Started: Mon, 22 Jan 2018 17:33:56 +0900 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /etc/config from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5b8l (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: illocutionary-heron-prometheus-alertmanager Optional: false storage-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: illocutionary-heron-prometheus-alertmanager ReadOnly: false default-token-h5b8l: Type: Secret (a volume populated by a Secret) SecretName: default-token-h5b8l Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 29m (x2 over 29m) default-scheduler PersistentVolumeClaim is not bound: "illocutionary-heron-prometheus-alertmanager" Normal Scheduled 29m default-scheduler Successfully assigned illocutionary-heron-prometheus-alertmanager-587d747b9c-qwmm6 to minikube Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "config-volume" Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "pvc-fa84b197-ff4e-11e7-a584-0800270fb7fc" 
Normal SuccessfulMountVolume 29m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-h5b8l" Normal Started 29m kubelet, minikube Started container Normal Created 29m kubelet, minikube Created container Normal Pulled 29m kubelet, minikube Container image "jimmidyson/configmap-reload:v0.1" already present on machine Normal Started 29m (x3 over 29m) kubelet, minikube Started container Normal Created 29m (x4 over 29m) kubelet, minikube Created container Normal Pulled 29m (x4 over 29m) kubelet, minikube Container image "prom/alertmanager:v0.9.1" already present on machine Warning BackOff 9m (x91 over 29m) kubelet, minikube Back-off restarting failed container Warning FailedSync 4m (x113 over 29m) kubelet, minikube Error syncing pod </code></pre> <h1>Edit 3</h1> <p><code>alertmanager</code> config in <code>values.yaml</code> file:</p> <pre><code>## alertmanager ConfigMap entries ## alertmanagerFiles: alertmanager.yml: |- global: resolve_timeout: 5m smtp_smarthost: smtp.gmail.com:587 smtp_from: [email protected] smtp_auth_username: [email protected] smtp_auth_password: sender_password receivers: - name: default-receiver email_configs: - to: [email protected] route: group_wait: 10s group_interval: 5m receiver: default-receiver repeat_interval: 3h </code></pre> <p>Not work. Got errors above.</p> <pre><code>alertmanagerFiles: alertmanager.yml: |- global: # slack_api_url: '' receivers: - name: default-receiver # slack_configs: # - channel: '@you' # send_resolved: true route: group_wait: 10s group_interval: 5m receiver: default-receiver repeat_interval </code></pre> <p>Works without any error.</p> <p>So, the problem was the <code>email_configs</code> config method.</p>
<p>The <code>alerts</code> and <code>rules</code> keys in the <code>serverFiles</code> group of the <code>values.yaml</code> file are mounted in the Prometheus container in the <code>/etc/config</code> folder. You can put the configuration you want in there (for example, take inspiration from the blog post you linked) and it will be used by Prometheus to handle the alerts.</p> <p>For example, a simple rule could be set like this:</p> <pre><code>serverFiles: alerts: | ALERT cpu_threshold_exceeded IF (100 * (1 - avg by(job)(irate(node_cpu{mode='idle'}[5m])))) &gt; 80 FOR 300s LABELS { severity = "warning", } ANNOTATIONS { summary = "CPU usage &gt; 80% for {{ $labels.job }}", description = "CPU usage avg for last 5m: {{ $value }}", } </code></pre>
<p>I am using Google Kubernetes Engine 1.8.6 and have a load balancer configured as below:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-load-balancer spec: ports: - port: 19222 name: my-test-port protocol: TCP targetPort: 19222 - port: 9222 name: my-prod-port protocol: TCP targetPort: 9222 selector: app: test-app-stateful-set type: LoadBalancer </code></pre> <p>This allows someone connecting to the load balancer's external IP to be routed to my <code>test-app-stateful-set</code> when they use port 19222. However, I would like connections to port 9222 on the same external IP to be sent to my <code>prod-app-stateful-set</code> instead. How can I configure my service to do this?</p>
<p>You can't do this directly, because a <code>Service</code> has a single pod <code>selector</code> that is valid for all its ports.</p> <p>One option you have is to run a third pod that acts as a proxy. It receives connections on both ports and forwards the traffic to the right backend based on the port the incoming traffic came in on.</p> <p>You could use <code>nginx</code> for this proxy pod.</p> <p>A sample configuration for your nginx could be the following:</p> <pre><code>stream { server { listen 19222; proxy_pass &lt;test-service-name&gt;:19222; } server { listen 9222; proxy_pass &lt;prod-service-name&gt;:9222; } } </code></pre> <p>Of course, your load balancer service has to be adjusted as well so that its selector matches your new nginx pod, and you need to create two different services for your production and test pods.</p>
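<p>A rough sketch of those two backend <code>Service</code>s; the names are placeholders (they are what <code>&lt;test-service-name&gt;</code> and <code>&lt;prod-service-name&gt;</code> refer to above) and the selectors assume your pods are labelled <code>app: test-app-stateful-set</code> and <code>app: prod-app-stateful-set</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-test-service          # &lt;test-service-name&gt; in the nginx config
spec:
  ports:
    - port: 19222
      targetPort: 19222
  selector:
    app: test-app-stateful-set
---
apiVersion: v1
kind: Service
metadata:
  name: my-prod-service          # &lt;prod-service-name&gt; in the nginx config
spec:
  ports:
    - port: 9222
      targetPort: 9222
  selector:
    app: prod-app-stateful-set
</code></pre>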
<p>List installed <code>Helm</code> releases:</p> <pre><code>$ helm ls NAME REVISION UPDATED STATUS CHART NAMESPACE myresource1 1 Fri Jan 19 10:00:02 2018 DEPLOYED my-chart-1.0.0 default myresource2 1 Sat Jan 20 10:01:01 2018 DEPLOYED my-chart-2.0.0 default myresource3 1 Sun Jan 21 10:02:02 2018 DEPLOYED my-chart-3.0.0 default </code></pre> <p>There is a way to delete a single release:</p> <blockquote> <p><a href="https://github.com/kubernetes/helm/blob/master/docs/using_helm.md#helm-delete-deleting-a-release" rel="nofollow noreferrer">https://github.com/kubernetes/helm/blob/master/docs/using_helm.md#helm-delete-deleting-a-release</a></p> </blockquote> <p>Is it possible to delete all releases at once?</p>
<p>I don't think there is a built-in way of doing it, but you can use some good old bash</p> <pre><code>helm list --short | xargs -L1 helm delete </code></pre> <p>Remember to add the <code>--purge</code> flag if you want to delete everything related to the release.</p>
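<p>For example, to also purge the release history in the same pass:</p> <pre><code>helm list --short | xargs -L1 helm delete --purge
</code></pre>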
<p>I'd like to know how I can add a ConfigMap to a specific pod spawned by a StatefulSet.</p> <p>I've looked at the spec and I can't figure out how to ensure that a pod only mounts a ConfigMap with a specific label. I've already got a workaround ready in the form of a container. However I'd still like to know if this is possible out of the box.</p>
<p>You can't treat a pod inside the same StatefulSet as a special pod, simply because you can only specify a single <code>PodSpec</code> that's valid for the whole set.</p> <p>What you could do, although it is still a hack, is mount all versions of the config files in every pod and run an entrypoint script that picks a different config file based on the pod name. It should be quite easy; a rough sketch follows.</p>
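<p>A minimal entrypoint sketch; the file names and binary are hypothetical, and it assumes all ConfigMap variants are mounted under <code>/config</code> and relies on the stable pod names a StatefulSet gives you (e.g. <code>myset-0</code>, <code>myset-1</code>):</p> <pre><code>#!/bin/sh
# extract the ordinal from the StatefulSet pod name, e.g. "myset-2" gives "2"
ORDINAL="${HOSTNAME##*-}"

# pick the config variant for this pod, fall back to a default one
CONFIG="/config/app-${ORDINAL}.conf"
[ -f "$CONFIG" ] || CONFIG="/config/app-default.conf"

exec /usr/local/bin/my-app --config "$CONFIG"   # placeholder binary and flag
</code></pre>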
<p>I'm trying to use nginx to reverse proxy into kubernetes pods running various web apps. My issue is that I can only proxy when location is set to <code>/</code> and not <code>/someplace</code></p> <p>I know the internal IPs are working because both web apps load successfully when I use them with <code>location /</code>, and I can curl the webpages internally. </p> <p>What I would like to happen</p> <p><a href="http://ServerIP/app1" rel="nofollow noreferrer">http://ServerIP/app1</a> to route to <a href="http://Pod1IP:3000" rel="nofollow noreferrer">http://Pod1IP:3000</a> <a href="http://ServerIP/app2" rel="nofollow noreferrer">http://ServerIP/app2</a> to route to <a href="http://Pod2IP:80" rel="nofollow noreferrer">http://Pod2IP:80</a></p> <p>In this manner I could easily run all my apps on the same port. </p> <p>What I believe is happening <a href="http://ServerIP/app1" rel="nofollow noreferrer">http://ServerIP/app1</a> --> httpL//Pod1IP:3000/app1</p> <p>I tried solving this by doing a rewrite of the URI like below, that resulted in a blank page loading when I tried to access <code>/app1</code> </p> <pre><code>server { listen 80; location = /app1/ { rewrite ^.*$ / break; proxy_pass http://10.82.5.80:80; } location / { proxy_pass http://10.106.228.213:15672; } } </code></pre> <p>Any ideas where I messed up?</p> <p>The webapp I am trying to use is called RabbitMQ UI. To try and debug my situation I curled three different URLs to see what nginx was responding with.</p> <p>Curling the <code>/</code> loads the correct html page, and it works in the browser.</p> <p><code>curl http://10.82.5.80/</code></p> <pre><code>&lt;!doctype html&gt; &lt;meta http-equiv="X-UA-Compatible" content="IE=edge" /&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;RabbitMQ Management&lt;/title&gt; &lt;script src="js/ejs-1.0.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/jquery-1.12.4.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/jquery.flot-0.8.1.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/jquery.flot-0.8.1.time.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/sammy-0.7.6.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/json2-2016.10.28.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/base64.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/global.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/main.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/prefs.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/formatters.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/charts.js" type="text/javascript"&gt;&lt;/script&gt; &lt;link href="css/main.css" rel="stylesheet" type="text/css"/&gt; &lt;link href="favicon.ico" rel="shortcut icon" type="image/x-icon"/&gt; &lt;!--[if lte IE 8]&gt; &lt;script src="js/excanvas.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;link href="css/evil.css" rel="stylesheet" type="text/css"/&gt; &lt;![endif]--&gt; &lt;/head&gt; &lt;body&gt; &lt;div id="outer"&gt;&lt;/div&gt; &lt;div id="debug"&gt;&lt;/div&gt; &lt;div id="scratch"&gt;&lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Curling <code>/rabbitmq</code> returns not found or Moved Permanently</p> <p><code>curl http://10.82.5.80/rabbitmq</code></p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;title&gt;301 Moved Permanently&lt;/title&gt;&lt;/head&gt; &lt;body bgcolor="white"&gt; &lt;center&gt;&lt;h1&gt;301 Moved 
Permanently&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.10.3 (Ubuntu)&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Curling the <code>/rabbitmq/</code> location gives the correct page, but loads a blank in the browser. I think this is because the browser cannot reference the js files present in the html doc? </p> <p>curl <a href="http://10.82.5.80/rabbitmq/" rel="nofollow noreferrer">http://10.82.5.80/rabbitmq/</a></p> <pre><code>&lt;!doctype html&gt; &lt;meta http-equiv="X-UA-Compatible" content="IE=edge" /&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;RabbitMQ Management&lt;/title&gt; &lt;script src="js/ejs-1.0.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/jquery-1.12.4.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/jquery.flot-0.8.1.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/jquery.flot-0.8.1.time.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/sammy-0.7.6.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/json2-2016.10.28.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/base64.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/global.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/main.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/prefs.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/formatters.js" type="text/javascript"&gt;&lt;/script&gt; &lt;script src="js/charts.js" type="text/javascript"&gt;&lt;/script&gt; &lt;link href="css/main.css" rel="stylesheet" type="text/css"/&gt; &lt;link href="favicon.ico" rel="shortcut icon" type="image/x-icon"/&gt; &lt;!--[if lte IE 8]&gt; &lt;script src="js/excanvas.min.js" type="text/javascript"&gt;&lt;/script&gt; &lt;link href="css/evil.css" rel="stylesheet" type="text/css"/&gt; &lt;![endif]--&gt; &lt;/head&gt; &lt;body&gt; &lt;div id="outer"&gt;&lt;/div&gt; &lt;div id="debug"&gt;&lt;/div&gt; &lt;div id="scratch"&gt;&lt;/div&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>
<p>Try adding the URL in your proxy location like this:</p> <pre><code>location = /app1/ { proxy_pass http://10.82.5.80:80/; } </code></pre> <p>Adding a trailing <code>/</code> at the end of the proxy_pass forces nginx to strip the <code>/app1</code> prefix from your requests while sending them to the backend.</p> <p>Explanation of how it works on the official nginx page: <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass" rel="nofollow noreferrer">http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass</a></p> <blockquote> <p>If the proxy_pass directive is specified with a URI, then when a request is passed to the server, the part of a normalized request URI matching the location is replaced by a URI specified in the directive.</p> </blockquote>
<p>I am trying to use the <code>post</code> steps with the Jenkins Kubernetes plugin. Does anyone have an idea?</p> <pre><code>java.lang.NoSuchMethodError: No such DSL method 'post' found among steps </code></pre> <p>My pipeline:</p> <pre><code>podTemplate( label: 'jenkins-pipeline', cloud: 'minikube', volumes: [ hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'), ]) { node('jenkins-pipeline') { stage('test') { container('maven') { println 'do some testing stuff' } } post { always { println "test" } } } } </code></pre>
<p>As of this writing, <a href="https://issues.jenkins-ci.org/browse/JENKINS-44160" rel="nofollow noreferrer"><code>Post</code> is only supported in declarative pipelines</a>. </p> <p>You could have a look at their <a href="https://github.com/jenkinsci/kubernetes-plugin/blob/master/examples/declarative.groovy" rel="nofollow noreferrer">declarative example</a> if you absolutely must use <code>post</code>.</p> <pre><code>pipeline { agent { kubernetes { //cloud 'kubernetes' label 'mypod' containerTemplate { name 'maven' image 'maven:3.3.9-jdk-8-alpine' ttyEnabled true command 'cat' } } } stages { stage('Run maven') { steps { container('maven') { sh 'mvn -version' } } } } } </code></pre>
<p>I am new to Kubernetes and have a question about it.</p> <p>When we create a StatefulSet, it gets associated with its PVC, and the PVC will be associated with a storageclass.</p> <p>So when we execute the command "kubectl delete namespace", should it delete the storageclasses as well?</p> <p>P.S. The cluster is running on AWS.</p>
<p><strong>Not All Objects are in a Namespace</strong></p> <p>Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespaces. And low-level resources, such as nodes and persistentVolumes, are not in any namespace. <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#a-note-on-namespaces" rel="nofollow noreferrer">Source.</a></p> <p>The storage class is not a namespaced object. Try to run <code>$ kubectl get storageclass --all-namespaces</code> and you will notice that there is not even an indication of the namespace:</p> <pre><code>$ kubectl get storageclass --all-namespaces NAMESPACE NAME PROVISIONER slow kubernetes.io/gce-pd standard (default) kubernetes.io/gce-pd </code></pre> <p>Therefore I have never paid attention to it, but I believe that if you delete a namespace nothing will happen to the StorageClass objects.</p> <p><strong>Update:</strong></p> <p>I created a namespace &quot;paolo&quot; and the following StorageClass in it:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow namespace: paolo provisioner: kubernetes.io/gce-pd parameters: type: pd-standard zones: us-central1-a, us-central1-b </code></pre> <p>I didn't receive any error; I deleted the namespace paolo and, as expected, the StorageClass was still there.</p> <p>My test was performed on Google Cloud Platform.</p>
<p>I will try to explain my problem below:</p> <ol> <li><p>Create a pod using a deployment and then apply an update to it using <code>kubectl apply -f sampledep.yaml</code>.</p></li> <li><p>The pod name will have changed if we do <code>kubectl get pods</code>.</p></li> <li><p>So whatever logs we had in our previous pod no longer exist, or at least cannot be retrieved.</p></li> </ol> <p>My questions are:</p> <ol> <li>Is there a way to retrieve the old pod's logs for that application?</li> <li>Is there a way to configure the size of logs that can be accumulated for a pod?</li> <li>What happens to a pod if logs accumulate too much and there is no space left on it?</li> <li>What is the recommended way to view/manage logs for pods deployed in Kubernetes?</li> </ol>
<ol> <li><p>Try to run <code>kubectl get pods --show-all</code>. If you can find your pod there, you can just use <code>kubectl logs &lt;pod name&gt;</code>. If not, I don't think you can retrieve the logs anymore.</p></li> <li><p>The recommended way to manage logs in k8s is to use an addon like <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">fluentd-elasticsearch</a>. This way you never save logs on the pod's filesystem itself: you just print logs from your container to STDOUT and fluentd automatically ships them to Elasticsearch, which you can later query with Kibana. There's no need to limit the logs accumulated for a pod, since they are never accumulated on the pod itself.</p></li> </ol>
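<p>For completeness, these are the <code>kubectl</code> commands I would reach for first (the pod and container names are just placeholders):</p> <pre><code># also list terminated pods, so an old pod's logs may still be reachable
kubectl get pods --show-all

# logs of the pod's current container
kubectl logs &lt;pod-name&gt;

# pick a specific container in a multi-container pod
kubectl logs &lt;pod-name&gt; -c &lt;container-name&gt;

# logs of the previous container instance, if it restarted in place
kubectl logs &lt;pod-name&gt; --previous
</code></pre> <p>Note that <code>--previous</code> only helps when a container restarted inside the same pod; once a Deployment replaces the pod with a new one, the old logs are only available if the old pod object still exists or if they were shipped to an external store such as Elasticsearch.</p>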
<p>In the kubernetes go client, what is a <code>clientset</code>?</p> <p>It is defined in multiple places. </p> <ol> <li><p>In the <code>client-go</code> package. <a href="https://github.com/kubernetes/client-go/blob/62b2cb756b8cea8fba00764ff123993eb44dbd48/kubernetes/clientset.go#L120" rel="noreferrer">https://github.com/kubernetes/client-go/blob/62b2cb756b8cea8fba00764ff123993eb44dbd48/kubernetes/clientset.go#L120</a></p></li> <li><p>In the <code>kubernetes</code> package <a href="https://github.com/kubernetes/kubernetes/blob/80e344644e2b6222296f2f03551a8d0273c7cbce/pkg/client/clientset_generated/internalclientset/clientset.go#L64" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/80e344644e2b6222296f2f03551a8d0273c7cbce/pkg/client/clientset_generated/internalclientset/clientset.go#L64</a></p></li> </ol> <p>The documentation says the same thing for both of them:</p> <blockquote> <p>Clientset contains the clients for groups. Each group has exactly one version included in a Clientset.</p> </blockquote> <p>This is confusing. What is a group?</p>
<p>Every resource type in Kubernetes (Pods, Deployments, Services and so on) is a member of an <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-groups" rel="noreferrer">API group</a>. These logically "group" the different types. Some examples of groups are</p> <ul> <li><code>core</code></li> <li><code>extensions</code></li> <li><code>batch</code></li> <li><code>apps</code></li> <li><code>authentication</code></li> <li><code>autoscaling</code></li> </ul> <p><a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning" rel="noreferrer">Groups also contain versions</a>. Versions allow developers to introduce breaking changes to APIs, and manage them as they do. Some examples of versions inside a group</p> <ul> <li><code>core/v1</code></li> <li><code>extensions/v1beta</code></li> <li><code>apps/v1beta1</code></li> <li><code>batch/v1</code>, <code>batch/v2alpha1</code> (notice the two versions inside the same group)</li> <li><code>authentication/v1</code>, <code>authentication/v1beta1</code></li> <li><code>autoscaling/v1</code>, <code>autoscaling/v2alpha1</code></li> </ul> <p>So the client documentation is saying that it's creating a different client for every group.</p>
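<p>To make this more concrete, the group/version pair is exactly what goes into the <code>apiVersion</code> field of a manifest; the <code>core</code> group is special-cased and written without a prefix. The kinds below are just illustrations:</p> <pre><code># "core" group, version v1 (the core group has no prefix)
apiVersion: v1
kind: Pod
---
# group "apps", version "v1beta1"
apiVersion: apps/v1beta1
kind: Deployment
---
# group "batch", version "v1"
apiVersion: batch/v1
kind: Job
</code></pre> <p>A clientset, in turn, simply bundles one typed client per group/version, for example <code>clientset.CoreV1()</code> or <code>clientset.BatchV1()</code> in client-go, so each group above shows up as one or more typed clients on the clientset.</p>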
<p>I'm going to upgrade my Kubernetes cluster to version <code>1.8.7</code>. Does anybody know which Docker version works best with it?</p> <p>This is what I found on the official Kubernetes page, but I suppose it might apply to the latest k8s release (<code>1.9</code>)?</p> <blockquote> <p>On each of your machines, install Docker. Version v1.12 is recommended, but v1.11, v1.13 and 17.03 are known to work as well. Versions 17.06+ might work, but have not yet been tested and verified by the Kubernetes node team.</p> </blockquote> <p>Thank you!</p>
<p>According to the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md/#external-dependencies" rel="nofollow noreferrer">Kubernetes v1.8.0 changelog</a>:</p> <blockquote> <p>Continuous integration builds use Docker versions 1.11.2, 1.12.6, 1.13.1, and 17.03.2. These versions were validated on Kubernetes 1.8.</p> </blockquote> <p>So any of these versions should work fine.</p>
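<p>If you are not sure what is currently installed on your nodes, a quick way to check before upgrading is:</p> <pre><code># prints just the Docker daemon version, e.g. 17.03.2-ce
docker version --format '{{.Server.Version}}'
</code></pre>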
<p>I need to be able to run a shell script (my script is for initializing my db cluster) to initialize my pods in Kubernetes.</p> <p>I don't want to put the script inside my Dockerfile, because I get my image directly from the web and I don't want to touch it.</p> <p>So I want to know if there is a way to get my script into one of my volumes so I can execute it like this:</p> <pre><code>spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["./init.sh"]
  restartPolicy: OnFailure
</code></pre>
<p>It depends on what exactly your init script does, but <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init containers</a> should be helpful in such cases. Init containers run before the main application container is started and can do preparation work such as creating configuration files.</p> <p>You may still need your own Docker image, but it doesn't have to be the same image as the database one.</p>
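<p>To sketch what this could look like without building a custom image at all, one option is to ship the script in a ConfigMap and run it from an init container. This is only an illustration: the names, the <code>debian</code> image and the placeholder script body are assumptions, not taken from your setup.</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: init-script
data:
  init.sh: |
    #!/bin/sh
    echo "initialising the db cluster..."   # replace with your real init logic
---
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  restartPolicy: OnFailure
  initContainers:
  - name: init-db
    image: debian
    # files mounted from a ConfigMap are not executable by default,
    # so run the script through the shell
    command: ["sh", "/scripts/init.sh"]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
  containers:
  - name: command-demo-container
    image: debian
    command: ["sleep", "3600"]
  volumes:
  - name: scripts
    configMap:
      name: init-script
</code></pre> <p>If the script instead has to run inside the main container itself, you can mount the same ConfigMap there and point its <code>command</code> at the script in the same way.</p>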
<p>I have created a development deployment for an application using kops, kubectl, and EC2.</p> <p>When I set up this deployment, I created a Kops IAM user as specified in <a href="https://github.com/kubernetes/kops/blob/master/docs/aws.md" rel="nofollow noreferrer">this</a> guide. Everything has worked fine for me managing this deployment. </p> <p>I am now leaving the project for another job and have to allow someone else to take over this deployment. I tried having them use <code>aws configure</code> and enter the appropriate kops IAM user creds, but the kops user still does not show up for this person when they run <code>aws iam list-users</code>. </p> <p><em>What is the best way to share this IAM user with this new developer?</em></p> <p>I have stumbled upon <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html" rel="nofollow noreferrer">this guide</a> which states I can Delegate Access Across AWS Accounts Using IAM Roles, but I am not sure if this is the correct solution? Shouldn't the new developer just be able to enter the Kops IAM user cred info to access its resources? </p> <p>Forgive me, for I am not very experienced with aws-cli and this deployment process. I just took on this responsibility on our team because no one else was confident they could do it.</p> <p>Thanks!</p>
<p>I think the best way to handle this would be to log into the AWS Console as the root user. Go to IAM and select the kops user. In the Security credentials tab, create a new access key and share the credentials with the other developer by forwarding them the csv file. Once they have the csv, have them run <code>aws configure</code> and enter the new access credentials. Let me know if this works!</p>
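<p>If you prefer the CLI over the console, the equivalent would be something like the commands below; this assumes the IAM user is literally named <code>kops</code>, as in the guide, and that you run them with credentials that have IAM admin rights:</p> <pre><code># see which access keys already exist for the user
aws iam list-access-keys --user-name kops

# create a new access key; the output contains the AccessKeyId and SecretAccessKey
aws iam create-access-key --user-name kops
</code></pre> <p>The new developer then feeds that key pair into <code>aws configure</code> on their machine.</p>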
<p>I'm running:</p> <ul> <li><strong>OpenShift Master:</strong> v3.6.0+c4dd4cf</li> <li><strong>Kubernetes Master:</strong> v1.6.1+5115d708d7</li> </ul> <p>I had to restart the master node, and some pods are failing to start.</p> <p>When I <code>describe</code> the problematic pods I see:</p> <pre><code>Events:
  FirstSeen  LastSeen  Count  From                    SubObjectPath  Type     Reason      Message
  ---------  --------  -----  ----                    -------------  -------  ------      -------
  3m         3m        1      default-scheduler                      Normal   Scheduled   Successfully assigned mds-3-build to apps.teammachine.us
  3m         8s        15     kubelet, my.domain.com                 Warning  FailedSync  Error syncing pod
</code></pre> <p>Nowhere in the <code>describe</code> output does it provide any useful information besides the <code>Error syncing pod</code> message.</p> <p>What can I do to troubleshoot and fix this issue?</p>
<p>run this command: </p> <p><code>oc get events</code> </p> <p>That should give you more useful information.</p>
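<p>A couple of variations that tend to help narrow things down (the project and pod names are placeholders, and <code>--sort-by</code> requires a reasonably recent <code>oc</code> client):</p> <pre><code># events for the project the pod lives in, newest last
oc get events -n &lt;project&gt; --sort-by='.lastTimestamp'

# container logs, if the pod got far enough to start one
oc logs &lt;pod-name&gt; -n &lt;project&gt;
</code></pre>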