<p>I was just wondering why it is useful to run an etcd cluster <a href="https://github.com/kubernetes/charts/tree/master/incubator/etcd" rel="nofollow noreferrer">inside Kubernetes</a>, when Kubernetes itself <a href="https://kubernetes.io/docs/concepts/overview/components/#etcd" rel="nofollow noreferrer">depends on etcd</a>.</p> <p>It just does not make sense to me: if I have an HA Kubernetes cluster, I am also forced to have an HA etcd cluster outside of it. Hence I see little reason to install etcd again inside the cluster...</p>
<p>If you're using Kubernetes via <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">GKE</a>, the underlying Etcd cluster is not exposed in any way.</p>
<p>Is there any way I can run a container in k8s as the root user or as another user, like in Docker:</p> <pre><code>docker run --user &lt;user&gt; &lt;image&gt; </code></pre> <p>Is there any YAML configuration for running a container with a specific user?</p>
<p>As described in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container" rel="noreferrer">kubernetes docs</a>, you can set the security context for the container and set the <code>runAsUser</code> property as such:</p> <pre><code>containers:
- name: ...
  image: ...
  securityContext:
    runAsUser: 0
</code></pre> <p>This will make the container execute internally as the root user.</p>
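<p>Conversely, to mimic <code>docker run --user</code> with a non-root account, the same field takes a numeric UID. Below is a minimal sketch; the pod name, image and UID <code>1000</code> are placeholders, not values from the question:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: my-image        # placeholder image
    securityContext:
      runAsUser: 1000      # run the container process as UID 1000 instead of root
</code></pre>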
<p>Is it possible to create a service that points only to a single pod created by a StatefulSet?</p> <p>The solutions I can think of would be:</p> <ul> <li>Pointing the service at the pod by name. </li> <li>Dynamic labels with the name of the pod.</li> </ul>
<p>As per Kubernetes 1.9 you can use: statefulset.kubernetes.io/pod-name</p> <p>From the documentation:</p> <blockquote> <p>"When the StatefulSet controller creates a Pod, it adds a label, statefulset.kubernetes.io/pod-name, that is set to the name of the Pod. This label allows you to attach a Service to a specific Pod in the StatefulSet."</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label</a></p>
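<p>For illustration, a Service that targets only the first pod of a hypothetical StatefulSet named <code>web</code> could look like the sketch below; the StatefulSet name and port are assumptions, not details from the question:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: web-0-svc
spec:
  selector:
    # label added automatically by the StatefulSet controller (Kubernetes 1.9+)
    statefulset.kubernetes.io/pod-name: web-0
  ports:
  - port: 80
    targetPort: 80
</code></pre>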
<p>I'm very new to Kubernetes. I have deployed a cluster, created a deployment, and set its pod count to 2. I didn't create an HPA for this deployment.</p> <p>I'm using Google Cloud for this, and I enabled autoscaling for the cluster (min is 2 and max is 30 nodes).</p> <p>I got the <strong>OOMKilled</strong> error in my deployment.</p> <p>So the question is:</p> <p>Is it correct that only an HPA can increase/decrease the pod count? In that case, an HPA based on memory and CPU is a must for every deployment.</p> <p>Please correct me if I'm wrong.</p>
<p>You can use <code>kubectl</code> to change the number of pods running:</p> <p><code>kubectl scale deployment &lt;DEPLOYMENT NAME&gt; --replicas=&lt;#PODS&gt;</code> <br> <code>kubectl scale deployment student-app --replicas=2</code></p> <p>You can find more info at the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#scale" rel="noreferrer">docs page</a></p>
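<p>If you do want the pod count to adjust automatically, you can attach an HPA to the same deployment. A minimal sketch, reusing the <code>student-app</code> deployment name from above (the thresholds are placeholders):</p> <pre><code># scale student-app between 2 and 10 replicas based on average CPU usage
kubectl autoscale deployment student-app --min=2 --max=10 --cpu-percent=80
</code></pre>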
<p>I have worked on Kubernetes and am currently reading about Service Fabric. I know Service Fabric provides microservice framework models like stateful, stateless and actor services, but beyond that it also supports <code>GuestExecutables</code> and <code>Containers</code>, which overlaps with what Kubernetes does: managing/orchestrating containers. Can anyone explain the detailed differences between the two? </p>
<p>Caveat: as noted by <a href="https://stackoverflow.com/users/163341/joniba">joniba</a> in the comments, the original answer (see below) presents Fabric and Kubernetes are somehow similar, with the differences being nuanced.</p> <p>That contasts with <a href="https://twitter.com/benmorrisuk/" rel="nofollow noreferrer">Ben Morris</a>'s take, which asked in Feb. 2019: &quot;<a href="https://www.ben-morris.com/azure-service-fabric-kubernetes/" rel="nofollow noreferrer">Now that Kubernetes is on Azure, what is Service Fabric for?</a>&quot;:</p> <blockquote> <p>One of the sticking points of Service Fabric has always been the difficulty involved in managing a cluster with patchy documentation and a tendency towards unhelpful error messages.<br /> Deploying a cluster with Azure Service Fabric spares you some of the operational pain around node management, but it doesn't change the experience of building applications using the underlying SDKs.</p> </blockquote> <p>For the &quot;nuances&quot; differences, read on (original answer):</p> <hr /> <p>Original answer:</p> <p>You can see in this project <a href="https://github.com/paolosalvatori/service-fabric-acs-kubernetes-multi-container-app" rel="nofollow noreferrer"><code>paolosalvatori/service-fabric-acs-kubernetes-multi-container-app</code></a>the same containers implemented both in Service Fabric, and in Kubernetes.</p> <p>Their &quot;service&quot; (for external ingress access) is different, with Kubernetes being a bit more complete and diverse: see <a href="https://github.com/paolosalvatori/service-fabric-acs-kubernetes-multi-container-app#services" rel="nofollow noreferrer">Services</a>.</p> <p>The reality is: there are &quot;two slightly different offering&quot; because of <strong>market pressure</strong>.<br /> The <a href="https://en.wikipedia.org/wiki/Microsoft_Azure" rel="nofollow noreferrer">Microsoft Azure platform</a>, initially released in 2010, has implemented its own Microsoft Azure Fabric Controller, in order to ensure the services and environment do not fail if one or more of the servers fails <em>within the Microsoft data center</em>, and which also provides the management of the user's Web application such as memory allocation and load balancing.</p> <p>But in order to attract other clients on their own Microsoft Data Center, they had to adapt to <strong><a href="https://en.wikipedia.org/wiki/Kubernetes" rel="nofollow noreferrer">Kubernetes</a></strong>, released initially in 2014, which is now (2018) either adopted or closely considered by... 
pretty much everybody (as <a href="https://www.sdxcentral.com/articles/news/how-kubernetes-conquered-2017-and-is-positioned-for-2018/2017/12/" rel="nofollow noreferrer">reported in late December</a>)<br /> (That does not mean one is &quot;better&quot; than the other,<br /> only that the &quot;other&quot; is more &quot;visible&quot; than the first ;) )</p> <p>So it is less about &quot;a detailed difference between the two&quot;, and more about the ability to integrate Kubernetes-based system on Microsoft Data Centers.</p> <p>This is in line (source: <a href="http://www.zdnet.com/article/kubernetes-will-rule-the-hyperscale-data-center-in-2018/" rel="nofollow noreferrer">detailed here</a>) with Microsoft continued its unprecedented shift toward an open (read: non-proprietary) staging platform for Azure (<a href="https://deis.com/blog/2017/deis-to-join-microsoft/" rel="nofollow noreferrer">with Deis</a>).<br /> And <a href="http://www.zdnet.com/article/kubernetes-orchestrator-now-available-on-microsofts-azure-container-service/" rel="nofollow noreferrer">Kubernetes orchestrator is available on Microsoft's Azure Container Service since February 2017</a>.</p> <hr /> <p>You can see other differences in their architecture of a deployed application:</p> <p>Service Fabric:</p> <p><a href="https://i.stack.imgur.com/0IKfQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0IKfQ.png" alt="https://github.com/paolosalvatori/service-fabric-acs-kubernetes-multi-container-app/raw/master/Images/ServiceFabricArchitecture.png" /></a></p> <p>Vs. Kubernetes:</p> <p><a href="https://i.stack.imgur.com/eqvFr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eqvFr.png" alt="https://github.com/paolosalvatori/service-fabric-acs-kubernetes-multi-container-app/raw/master/Images/KubernetesArchitecture.png" /></a></p> <hr /> <p><a href="https://stackoverflow.com/users/5157351/thieme">thieme</a> mentions <a href="https://stackoverflow.com/questions/48415057/difference-between-kubernetes-and-service-fabric/48415605?noredirect=1#comment90812372_48415605">in the comments</a> the article &quot;<strong><a href="https://blogs.msdn.microsoft.com/azuredev/2018/08/15/service-fabric-and-kubernetes-comparison-part-1-distributed-systems-architecture/" rel="nofollow noreferrer">Service Fabric and Kubernetes comparison, part 1 – Distributed Systems Architecture</a></strong>&quot;, from <a href="https://github.com/mkosieradzk" rel="nofollow noreferrer">Marcin Kosieradzki</a>.</p>
<ul> <li>I have multiple kubernetes clusters running on GKE (let's say clusterA and clusterB)</li> <li>I want to access both of those clusters from client-go in an app that is running in one of those clusters (e.g. access clusterB from an app that is running on clusterA)</li> </ul> <p>I general for authenticating with kubernetes clusters from client-go I see that I have two options:</p> <ul> <li>InCluster config</li> <li>or from kube config file</li> </ul> <p>So it is easy to access clusterA from clusterA but not clusterB from clusterA.</p> <p>What are my options here? It seems that I just cannot pass <code>GOOGLE_APPLICATION_CREDENTIALS</code> and hope that client-go will take care of itself.</p> <p>So my thinking:</p> <ul> <li>create a dedicated IAM service account</li> <li>create kube config with tokens for both clusters by doing <code>gcloud container clusters get-credentials clusterA</code> and <code>gcloud container clusters get-credentials clusterB</code></li> <li>use that kube config file in client-go via <code>BuildConfigFromFlags</code> on clusterA</li> </ul> <p>Is this the correct approach, or is there a simpler way? I see that tokens have an expiration date?</p> <p><strong>Update:</strong></p> <p>It seems I can also use <code>CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True gcloud beta container clusters get-credentials clusterB --zone</code>. Which would add certificates to kube conf which I could use. But AFAIK those certificates cannot be revoked</p>
<p>client-go needs to know about:</p> <ol> <li>cluster master’s IP address</li> <li>cluster’s CA certificate</li> </ol> <p>(If you're using GKE, you can see these info in <code>$HOME/.kube/config</code>, populated by <code>gcloud container clusters get-credentials</code> command).</p> <p>I recommend you to either:</p> <ol> <li>Have a kubeconfig file that contains these info for clusters A &amp; B</li> <li>Use GKE API to retrieve these info for clusters A &amp; B (<a href="https://ahmet.im/blog/gke-api/" rel="noreferrer">example here</a>) (You'll need a service account to do this, explained below.)</li> </ol> <p>Once you can create a <code>*rest.Config</code> object in client-go, client-go will use the auth plugin that's specified in the kubeconfig file (or its in-memory equivalent you constructed). In <code>gcp</code> auth plugin, it knows how to retrieve a token.</p> <p>Then, <a href="https://cloud.google.com/iam/docs/creating-managing-service-accounts" rel="noreferrer">Create a Cloud IAM Service Account</a> and give it "Container Developer" role. Download its key.</p> <p>Now, you have two options:</p> <h3>Option 1: your program uses gcloud</h3> <pre><code>gcloud auth activate-service-account --key-file=key.json KUBECONFIG=a.yaml gcloud container clusters get-credentials clusterA KUBECONFIG=b.yaml gcloud container clusters get-credentials clusterB </code></pre> <p>Then create 2 different <code>*rest.Client</code> objects, one created from <code>a.yaml</code>, another from <code>b.yaml</code> in your program.</p> <p>Now your program will rely on <code>gcloud</code> binary to retrieve token every time your token expires (every 1 hour).</p> <h3>Option 2: use GOOGLE_APPLICATION_CREDENTIALS</h3> <ol> <li>Don't install gcloud to your program’s environment.</li> <li>Set your key.json to GOOGLE_APPLICATION_CREDENTIALS environment variable for your program.</li> <li>Figure out a way to get cluster IP/CA (explained above) so you can construct two different <code>*rest.Config</code> objects for cluster A &amp; B.</li> <li>Now your program will use the specified key file to get an access_token to Google API every time it expires (every 1h).</li> </ol> <p>Hope this helps.</p> <p>P.S. do not forget to <code>import _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"</code> in your Go program. This loads the gcp auth plugin!</p>
<p>In Kubernetes 1.8, when I create a deployment, for example</p> <pre><code>apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
</code></pre> <p>and then do a</p> <pre><code>kubectl get deploy nginx-deployment -o yaml
</code></pre> <p>I get</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-01-24T01:01:01Z
....
</code></pre> <p>Why is the apiVersion extensions/v1beta1 instead of apps/v1beta2?</p>
<p>When you create a deployment, the apiserver persists it and is capable of converting the persisted deployment into any supported version. </p> <p><code>kubectl get deployments</code> actually requests the extensions/v1beta1 version (you can see this by adding --v=6)</p> <p>To get apps/v1beta2 deployments, do <code>kubectl get deployments.v1beta2.apps</code></p>
<p>I want to use an existing AWS ALB for my Kubernetes setup, i.e. I don't want the alb-ingress-controller to create or update any existing AWS resources (target groups, roles, etc.). </p> <p>How can I make the ALB communicate with the Kubernetes cluster, passing requests to the existing services and getting the responses back to the ALB to display in the front end?</p> <p>I tried <a href="https://github.com/coreos/alb-ingress-controller/" rel="nofollow noreferrer">this</a> but it creates a new ALB for each new Ingress resource. I want to use the existing one.</p>
<p>You basically have to open a node port on the instances where the Kubernetes Pods are running. Then you need to let the ALB point to those instances. There are two ways of configuring this: either via Pods or via Services.</p> <p>To configure it via a <strong>Service</strong> you need to specify <code>.spec.ports[].nodePort</code>. In the default setup the port needs to be between <code>30000</code> and <code>32767</code>. This port gets opened on every node and will be redirected to the specified Pods (which might be on any other node). This has the downside that there is another hop, which also can cost money when using a multi-AZ setup. An example Service could look like this:</p> <pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
  - port: 8080
    nodePort: 30082
</code></pre> <p>To configure it via a <strong>Pod</strong> you need to specify <code>.spec.containers[].ports[].hostPort</code>. This can be any port number, but it has to be free on the node where the Pod gets scheduled. This means that there can only be one such Pod per node and it might conflict with ports from other applications. This has the downside that not all instances will be healthy from an ALB point-of-view, since only nodes with that Pod accept traffic. You could add a sidecar container which registers the current node on the ALB, but this would mean additional complexity. An example could look like this:</p> <pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      name: my-frontend
      labels:
        app: my-frontend
    spec:
      containers:
      - name: nginx
        image: "nginx"
        ports:
        - containerPort: 80
          hostPort: 8080
</code></pre>
<p>I'm new to Kubernetes. With the help of Kubernetes documentation , I installed <code>minikube</code>(v0.24.1) and <code>kubectl</code> in my Windows machine. VirtualBox(Version 5.1.18) is also installed in my machine.</p> <p>Before starting the <code>minikube</code>, i have executed <code>set HTTP_PROXY=xx.xx.xx:8080</code> and <code>set NO_PROXY=localhost,127.0.0.0/8,192.0.0.0/8</code> in Windows command prompt</p> <p>Started <code>minikube</code> by passing Proxy Details :</p> <pre><code>C:\minikube&gt;minikube start --memory 4096 --vm-driver=virtualbox --docker-env http_proxy=xx.xx.xx:8080 --docker-env https_proxy=xx.xx.xx:8080 --docker-env no_proxy=localhost,127.0.0.0/8,192.0.0.0/8 --cache-images=false Starting local Kubernetes v1.9.0 cluster... Starting VM... Getting VM IP address... Moving files into cluster... Setting up certs... Connecting to cluster... Setting up kubeconfig... Starting cluster components... Kubectl is now configured to use the cluster. Loading cached images from config file. </code></pre> <p>Error from Kubectl : </p> <pre><code>C:\minikube&gt;kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"} Unable to connect to the server: dial tcp 192.168.99.100:8443: connectex: An attempt was made to access a socket in a way forbidden by its access permissions. </code></pre> <p>Minikube Logs:</p> <pre><code>Feb 01 08:48:35 minikube localkube[2941]: E0201 08:48:35.223594 2941 proxier.go:1701] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking file not found in $PATH Feb 01 08:48:38 minikube localkube[2941]: I0201 08:48:38.738404 2941 node_controller.go:857] Controller detected that some Nodes are Ready. Exiting master disruption mode. Feb 01 08:48:45 minikube localkube[2941]: W0201 08:48:45.765543 2941 conversion.go:110] Could not get instant cpu stats: different number of cpus Feb 01 08:48:55 minikube localkube[2941]: W0201 08:48:55.776172 2941 conversion.go:110] Could not get instant cpu stats: different number of cpus Feb 01 08:49:24 minikube localkube[2941]: E0201 08:49:24.338731 2941 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address Feb 01 08:50:24 minikube localkube[2941]: E0201 08:50:24.341890 2941 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address Feb 01 08:51:24 minikube localkube[2941]: E0201 08:51:24.342845 2941 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address Feb 01 08:52:24 minikube localkube[2941]: E0201 08:52:24.344497 2941 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address Feb 01 08:53:24 minikube localkube[2941]: E0201 08:53:24.349095 2941 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address Feb 01 08:54:24 minikube localkube[2941]: E0201 08:54:24.351143 2941 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address </code></pre> <p>How to fix this issue?</p>
<p>As @ivthillo already pointed out, the issue may occur because you are behind a proxy. This link explains how to configure minikube when you are behind a proxy: <a href="https://github.com/kubernetes/minikube/issues/530#issuecomment-250801735" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/530#issuecomment-250801735</a></p> <p>Try starting minikube like this:</p> <pre><code>minikube start --docker-env HTTP_PROXY=http://$YOURPROXY:PORT \
               --docker-env HTTPS_PROXY=https://$YOURPROXY:PORT
</code></pre> <p>This should configure Docker to use your proxy. </p> <p>Other solutions are also proposed here: <a href="https://github.com/kubernetes/minikube/issues/530#issuecomment-347718692" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/530#issuecomment-347718692</a></p>
<p>I want to run my automation test script on kubernetes via the selenium hub and the chrome node containers. My test script is also in the form of a container and running as a pod. My test script uses localhost:4444 to connect to grid. But the grid has the NodePort of 31376 and it keeps changing everytime i create a new selenium grid service.</p> <p>Is there any way i can keep a constant NodePort for my selenium hub so that my script could run.</p> <p>Selenium hub service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: selenium-hub labels: app: selenium-hub spec: ports: - port: 4444 targetPort: 4444 name: port0 selector: app: selenium-hub type: NodePort sessionAffinity: None </code></pre> <p>I don't want to change the link to selenium hub every time I execute my command.</p> <p>This is my service description :-</p> <pre><code>C:\KUBE&gt;kubectl describe service selenium-hub Name: selenium-hub Namespace: default Labels: app=selenium-hub Annotations: &lt;none&gt; Selector: app=selenium-hub Type: NodePort IP: 10.106.49.182 Port: port0 4444/TCP TargetPort: 4444/TCP NodePort: port0 31376/TCP Endpoints: Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>Thanks.</p>
<p>According to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">the documentation on <code>NodePort</code> services</a></p> <blockquote> <p>If you want a specific port number, you can specify a value in the nodePort field, and the system will allocate you that port or else the API transaction will fail (i.e. you need to take care about possible port collisions yourself). The value you specify must be in the configured range for node ports.</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#exposing-the-service" rel="nofollow noreferrer">Here is an example</a>, which in your case would look like</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  ports:
  - port: 4444
    targetPort: 4444
    name: port0
    nodePort: &lt;your-desired-port&gt;
  selector:
    app: selenium-hub
  type: NodePort
  sessionAffinity: None
</code></pre>
<p>Using google Kubernetes engine:</p> <pre><code>kubectl cluster-info kubernetes-dashboard is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy </code></pre> <p>If I go to the link:</p> <p>I get to a forbidden page and if I accepts I get the following:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "services \"kubernetes-dashboard\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\": Unknown user \"system:anonymous\"", "reason": "Forbidden", "details": { "name": "kubernetes-dashboard", "kind": "services" }, "code": 403 } </code></pre> <p>Is it not possible to access the dashboard?</p>
<p>That url points to the Kubernetes API that requires authentication, and it's not the place to access the dashboard via web.</p> <p>If you want to access the kubernetes dashboard, there are different options</p> <ul> <li>Use <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui" rel="nofollow noreferrer">kubectl proxy</a> to access the dashboard on <a href="http://localhost:8001/ui" rel="nofollow noreferrer">http://localhost:8001/ui</a>.</li> <li>Use port-forwarding to access the Pod that's running the dashboard. Useful while developing.</li> <li>Expose the dashboard using a service of type <code>NodePort</code>. Then you can access the dashboard on <code>node_public_ip:NodePort</code>.</li> <li>Deploy an ingress controller and define an ingress rule that exposes the dashboard on a custom domain.</li> </ul>
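<p>As a rough sketch of the first two options (the dashboard pod name is a placeholder, and port 9090 is an assumption about the dashboard's HTTP port, so verify both with <code>kubectl -n kube-system get pods</code> and <code>kubectl -n kube-system describe pod ...</code>):</p> <pre><code># Option 1: proxy the API server locally, then browse http://localhost:8001/ui
kubectl proxy

# Option 2: forward a local port directly to the dashboard pod
kubectl -n kube-system port-forward &lt;dashboard-pod-name&gt; 9090:9090
</code></pre>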
<p>I have the following system in mind: A master program that polls a list of tasks to see if they should be launched (based on some trigger information). The tasks themselves are container images in some repository. Tasks are executed as jobs on a Kubernetes cluster to ensure that they are run to completion. The master program is a container executing in a pod that is kept running indefinitely by a replication controller.</p> <p>However, I have not stumbled upon this pattern of launching jobs from a pod. Every tutorial seems to be assuming that I just call kubectl from outside the cluster. Of course I could do this but then I would have to ensure the master program's availability and reliability through some other system. So am I missing something? Launching one-off jobs from inside an indefinitely running pod seems to me as a perfectly valid use case for Kubernetes.</p>
<p>Your master program can utilize the Kubernetes client libraries to perform operations on a cluster. Find a complete example <a href="https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration" rel="nofollow noreferrer">here</a>.</p>
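<p>For illustration, the Job object such a master program would create through the client library (rather than with <code>kubectl</code>) might look roughly like this; the names and image are placeholders, not details from the question:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: task-example              # generated per task by the master program
spec:
  template:
    spec:
      containers:
      - name: task
        image: my-registry/task-image:latest   # the task's container image
      restartPolicy: Never        # let the Job controller decide about retries
</code></pre>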
<p>I have set up my Kubernetes 1.3.4 cluster on GCE with </p> <p><code>export KUBE_ENABLE_CLUSTER_MONITORING=google</code></p> <p>This works quite nicely, I get application logs (for some reason in the <em>Container Engine</em> section, but well) and also pod and node metrics.</p> <p>The only thing that is missing are the node memory metrics, only CPU is shown (see screenshot)</p> <p><a href="http://i.stack.imgur.com/fhmYH.png" rel="noreferrer">No memory metrics</a></p> <p>In the heapster logs I see tons of lines like this</p> <pre><code>{ metadata: { severity: "ERROR" projectId: "&lt;project-id&gt;" serviceName: "container.googleapis.com" zone: "europe-west1-d" labels: { container.googleapis.com/cluster_name: "production" compute.googleapis.com/resource_type: "instance" compute.googleapis.com/resource_name: "fluentd-cloud-logging-production-minion-group-p0w8" container.googleapis.com/instance_id: "6772154497331326454" container.googleapis.com/pod_name: "heapster-v1.1.0-2102007506-23b3e" compute.googleapis.com/resource_id: "6772154497331326454" container.googleapis.com/stream: "stderr" container.googleapis.com/namespace_name: "kube-system" container.googleapis.com/container_name: "heapster" } timestamp: "2016-09-13T14:40:08.000Z" projectNumber: "930564692351" } textPayload: "E0913 14:40:08.665035 1 gcm.go:179] Error while sending request to GCM googleapi: Error 400: Timeseries 76, point: start is not older than end, for a cumulative metric, invalidParameter " insertId: "pt5bo7g132r266" log: "heapster" } </code></pre> <p>Not sure if this is related. </p> <p>Any ideas?</p>
<p>If you are running your cluster on GCE instead of GKE, you should install the <a href="https://cloud.google.com/monitoring/agent/install-agent#linux-install" rel="nofollow noreferrer">stackdriver agent</a> and verify the credentials the agent is using to communicate with Stackdriver (<a href="https://cloud.google.com/monitoring/agent/troubleshooting#verify-project" rel="nofollow noreferrer">link</a>). </p> <p>If you are using Linux you can install the agent by executing:</p> <pre><code>curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh
</code></pre> <p>and you can check your credentials by running the following commands:</p> <pre><code>sudo cat $GOOGLE_APPLICATION_CREDENTIALS
sudo cat /etc/google/auth/application_default_credentials.json
</code></pre>
<p>When a Kubernetes service is exposed via an <code>Ingress</code> object, is the load balancer "physically" deployed in the cluster, i.e. as some <code>pod</code>-based controller running on the cluster nodes, or is it just another managed service provisioned by the given cloud provider?</p> <p>Are there cloud-provider-specific differences? Does the answer differ between Google Kubernetes Engine and Amazon Web Services?</p>
<p>I would like to make some clarifications concerning the Google Ingress Controller, starting from its definition:</p> <blockquote> <blockquote> <p>An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.</p> </blockquote> </blockquote> <p>First of all, if you want to understand its behaviour better, I suggest you read the official Kubernetes GitHub <a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">description</a> of this resource.</p> <p>In particular, notice that:</p> <ul> <li><p>It is a Daemon</p></li> <li><p>It is deployed in a pod</p></li> <li><p>It is in the kube-system namespace</p></li> <li><p>It is <a href="https://github.com/kubernetes/contrib/issues/1733#issuecomment-246410234" rel="nofollow noreferrer">hidden</a> from the customer</p></li> </ul> <p>However, you will not be able to "see" this resource, for example by running <code>kubectl get all --all-namespaces</code>, because it runs on the master and is not shown to the customer, since it is a managed resource considered essential for the operation of the platform itself. As stated in the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="nofollow noreferrer">official</a> documentation:</p> <blockquote> <p>GCE/Google Kubernetes Engine deploys an ingress controller on the master</p> </blockquote> <p>Note that the master of any Google Cloud Kubernetes cluster is not accessible to the user and is completely managed.</p>
<p>I am new to jaeger and I am facing issues with finding the services list in the jaeger UI.</p> <p>Below are the .yaml configurations I prepared to run jaeger with my spring boot app on Kubernetes using minikube locally.</p> <p><code>kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/elasticsearch.yml --namespace=kube-system</code></p> <p><code>kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/jaeger-production-template.yml --namespace=kube-system</code></p> <p>Created deployment for my spring boot app and jaeger agent to run on the same container</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: tax-app-deployment spec: template: metadata: labels: app: tax-app version: latest spec: containers: - image: tax-app name: tax-app imagePullPolicy: IfNotPresent ports: - containerPort: 8080 - image: jaegertracing/jaeger-agent imagePullPolicy: IfNotPresent name: jaeger-agent ports: - containerPort: 5775 protocol: UDP - containerPort: 5778 - containerPort: 6831 protocol: UDP - containerPort: 6832 protocol: UDP command: - "/go/bin/agent-linux" - "--collector.host-port=jaeger-collector.jaeger-infra.svc:14267" </code></pre> <p>And the spring boot app service yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: tax labels: app: tax-app jaeger-infra: tax-service spec: ports: - name: tax-port port: 8080 protocol: TCP targetPort: 8080 clusterIP: None selector: jaeger-infra: jaeger-tax </code></pre> <p>I am getting </p> <blockquote> <p>No service dependencies found</p> </blockquote>
<p>Service graph data must be generated in Jaeger. Currently this is possible via a Spark job: <a href="https://github.com/jaegertracing/spark-dependencies" rel="nofollow noreferrer">https://github.com/jaegertracing/spark-dependencies</a></p>
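<p>As a rough sketch of how that job is commonly run against an Elasticsearch-backed Jaeger install (the environment variable names below are assumptions recalled from that project's README, so verify them against the repository before use):</p> <pre><code># one-off run; point it at the same Elasticsearch instance Jaeger uses
docker run --rm \
  --env STORAGE=elasticsearch \
  --env ES_NODES=http://elasticsearch:9200 \
  jaegertracing/spark-dependencies
</code></pre>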
<p>I am defining a kubernetes service like this:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: de-identity-svc labels: app: api-identity environment: de product: api annotations: service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "app=api-identity,environment=de,product=api" service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 service.beta.kubernetes.io/aws-load-balancer-type: nlb spec: type: LoadBalancer selector: app: api-identity environment: de ports: - port: 80 protocol: TCP </code></pre> <p>However, when the load balancer is created in AWS, it is created with type <code>Classic</code> instead of the expected <code>network</code>.</p> <hr> <h3>Edit</h3> <p>The kubernetes version info is this:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.11", GitCommit:"b13f2fd682d56eab7a6a2b5a1cab1a3d2c8bdd55", GitTreeState:"clean", BuildDate:"2017-11-25T17:51:39Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <h3>Edit 2</h3> <p>As <a href="https://stackoverflow.com/users/5054939/vdmeent">@vdMeent</a> notes, this feature was added in Kubernetes 1.9 (<a href="https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/</a>)</p>
<p>You should upgrade your server Kubernetes version to 1.9 or above, as NLB is only available for <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#aws" rel="nofollow noreferrer">Kubernetes 1.9 and up</a>. Please note that NLB is still in alpha, so you shouldn't use it for anything substantial like production environments.</p>
<p>In this repository <a href="https://github.com/mappedinn/kubernetes-nfs-volume-on-gke" rel="nofollow noreferrer">https://github.com/mappedinn/kubernetes-nfs-volume-on-gke</a> I am trying to share a volume through NFS service on GKE. The NFS file sharing is successful if hard coded IP address is used. </p> <p>But, in my point of view, it would be better to use DNS name in stead of hard coded IP address.</p> <p>Below is the declaration of the NFS service being used for sharing a volume in Google Cloud Platform:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nfs-server spec: ports: - name: nfs port: 2049 - name: mountd port: 20048 - name: rpcbind port: 111 selector: role: nfs-server </code></pre> <p>Below is the definition of the PersistentVolume with hard coded IP address:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: wp01-pv-data spec: capacity: storage: 5Gi accessModes: - ReadWriteMany nfs: server: 10.247.248.43 # with hard coded IP, it works path: "/" </code></pre> <p>Below is the definition of the PersistentVolume with DNS name:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: wp01-pv-data spec: capacity: storage: 5Gi accessModes: - ReadWriteMany nfs: server: nfs-service.default.svc.cluster.local # with DNS, it does not works path: "/" </code></pre> <p>I am using this <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a> for getting the DNS of the service. Is there any thing missed?</p> <p>Thanks</p>
<p>The problem is in DNS resolution on the node itself. Mounting the NFS share into the pod is a job of the kubelet, which runs on the node. Hence the DNS resolution happens according to /etc/resolv.conf on the node itself as well. Adding a <code>nameserver &lt;your_kubedns_service_ip&gt;</code> entry to the node's <code>/etc/resolv.conf</code> could suffice, but it can become a somewhat chicken-and-egg problem in some corner cases.</p>
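<p>A common workaround is to put the NFS service's ClusterIP, which stays stable for the lifetime of the service, into the PersistentVolume instead of the DNS name. A sketch, assuming the <code>nfs-server</code> service from the question:</p> <pre><code># look up the stable ClusterIP of the NFS service
kubectl get svc nfs-server -o jsonpath='{.spec.clusterIP}'
</code></pre> <p>and then use that IP in the PV's <code>nfs.server</code> field.</p>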
<p>How can an application running in a pod on a Kubernetes cluster find the number of currently running pods of the same image (instances of the same image)? Is there also a way to uniquely identify each pod in a collection of pods of the same type?</p> <p>For example, if I have 3 pods of the same image running in my Kubernetes cluster, I want my application running in those pods to know that there are 3 instances running at the moment, and possibly to be able to identify them as 0, 1 or 2 within the collection of 3 pods, based on, say, start time.</p>
<p>If your application needs to know that information, you will need the following:</p> <ul> <li>Access the Kubernetes API from within your application and ask for the information. Depending on the language you are using, you can find different client libraries: <a href="https://github.com/kubernetes-client" rel="nofollow noreferrer">https://github.com/kubernetes-client</a></li> <li>An easy way to count the pods of the same collection would be using labels. However, if you are using the API, you can directly parse the information in your application and filter whatever you need.</li> <li>In order to access the information from the API, the pod will need a service account with the proper privileges, otherwise the RBAC default directives will not allow your application to retrieve that information. This link will help you: <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/</a></li> </ul>
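<p>As a sketch of the RBAC part mentioned in the last point, a namespaced Role that only allows reading pods, bound to the default service account, might look like this (namespace and names are placeholders):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
</code></pre> <p>The application can then list pods filtered by a shared label (e.g. <code>app=my-app</code>) and count the results.</p>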
<p>I'm stepping through Kubernetes in Action to get more than just familiarity with Kubernetes.</p> <p>I already had a Docker Hub account that I've been using for Docker-specific experiments.</p> <p>As described in chapter 2 of the book, I built the toy "kubia" image, and I was able to push it to Docker Hub. I verified this again by logging into Docker Hub and seeing the image.</p> <p>I'm doing this on Centos7.</p> <p>I then run the following to create the replication controller and pod running my image:</p> <pre><code>kubectl run kubia --image=davidmichaelkarr/kubia --port=8080 --generator=run/v1 </code></pre> <p>I waited a while for statuses to change, but it never finishes downloading the image, when I describe the pod, I see something like this:</p> <pre><code> Normal Scheduled 24m default-scheduler Successfully assigned kubia-25th5 to minikube Normal SuccessfulMountVolume 24m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-x5nl4" Normal Pulling 22m (x4 over 24m) kubelet, minikube pulling image "davidmichaelkarr/kubia" Warning Failed 22m (x4 over 24m) kubelet, minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>So I then constructed the following command:</p> <pre><code>curl -v -u 'davidmichaelkarr:**' 'https://registry-1.docker.io/v2/' </code></pre> <p>Which uses the same password I use for Docker Hub (they should be the same, right?).</p> <p>This gives me the following:</p> <pre><code>* About to connect() to proxy *** port 8080 (#0) * Trying **.**.**.**... * Connected to *** (**.**.**.**) port 8080 (#0) * Establish HTTP proxy tunnel to registry-1.docker.io:443 * Server auth using Basic with user 'davidmichaelkarr' &gt; CONNECT registry-1.docker.io:443 HTTP/1.1 &gt; Host: registry-1.docker.io:443 &gt; User-Agent: curl/7.29.0 &gt; Proxy-Connection: Keep-Alive &gt; &lt; HTTP/1.1 200 Connection established &lt; * Proxy replied OK to CONNECT request * Initializing NSS with certpath: sql:/etc/pki/nssdb * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: * subject: CN=*.docker.io * start date: Aug 02 00:00:00 2017 GMT * expire date: Sep 02 12:00:00 2018 GMT * common name: *.docker.io * issuer: CN=Amazon,OU=Server CA 1B,O=Amazon,C=US * Server auth using Basic with user 'davidmichaelkarr' &gt; GET /v2/ HTTP/1.1 &gt; Authorization: Basic *** &gt; User-Agent: curl/7.29.0 &gt; Host: registry-1.docker.io &gt; Accept: */* &gt; &lt; HTTP/1.1 401 Unauthorized &lt; Content-Type: application/json; charset=utf-8 &lt; Docker-Distribution-Api-Version: registry/2.0 &lt; Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io" &lt; Date: Wed, 24 Jan 2018 18:34:39 GMT &lt; Content-Length: 87 &lt; Strict-Transport-Security: max-age=31536000 &lt; {"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]} * Connection #0 to host *** left intact </code></pre> <p>I don't understand why this is failing auth.</p> <p><strong>Update</strong>:</p> <p>Based on the first answer and the info I got from this <a href="https://stackoverflow.com/questions/40288077/how-to-pass-image-pull-secret-while-using-kubectl-run-command">other question</a>, I edited the description of the service account, adding the "imagePullSecrets" key, then 
I deleted the replicationcontroller again and recreated it. The result appeared to be identical.</p> <p>This is the command I ran to create the secret:</p> <pre><code>kubectl create secret docker-registry regsecret --docker-server=registry-1.docker.io --docker-username=davidmichaelkarr --docker-password=** --docker-email=** </code></pre> <p>Then I obtained the yaml for the serviceaccount, added the key reference for the secret, then set that yaml as the settings for the serviceaccount.</p> <p>This are the current settings for the service account:</p> <pre><code>$ kubectl get serviceaccount default -o yaml apiVersion: v1 imagePullSecrets: - name: regsecret kind: ServiceAccount metadata: creationTimestamp: 2018-01-24T00:05:01Z name: default namespace: default resourceVersion: "81492" selfLink: /api/v1/namespaces/default/serviceaccounts/default uid: 38e2882c-009a-11e8-bf43-080027ae527b secrets: - name: default-token-x5nl4 </code></pre> <p>Here's the updated events list from the describe of the pod after doing this:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 7m default-scheduler Successfully assigned kubia-f56th to minikube Normal SuccessfulMountVolume 7m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-x5nl4" Normal Pulling 5m (x4 over 7m) kubelet, minikube pulling image "davidmichaelkarr/kubia" Warning Failed 5m (x4 over 7m) kubelet, minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Normal BackOff 4m (x6 over 7m) kubelet, minikube Back-off pulling image "davidmichaelkarr/kubia" Warning FailedSync 2m (x18 over 7m) kubelet, minikube Error syncing pod </code></pre> <p>What else might I be doing wrong?</p> <p><strong>Update</strong>:</p> <p>I think it's likely that all these issues with authentication are unrelated to the real issue. The key point is what I see in the pod description (breaking into multiple lines to make it easier to see):</p> <pre><code>Warning Failed 22m (x4 over 24m) kubelet, minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>The last line seems like the most important piece of information at this point. It's not failing authentication, it's timing out the connection. In my experience, something like this is usually caused by issues getting through a firewall/proxy. We do have an internal proxy, and I have those environment variables set in my environment, but what about the "serviceaccount" that kubectl is using to make this connection? Do I have to somehow set a proxy configuration in the serviceaccount description?</p>
<p>You need to make sure the Docker daemon running in the Minikube VM uses your corporate proxy by starting minikube along these lines:</p> <p><code>minikube start --docker-env http_proxy=http://proxy.corp.com:port --docker-env https_proxy=http://proxy.corp.com:port --docker-env no_proxy=192.168.99.0/24</code></p>
<p>I created a Pod that has <code>@EnableTaskLauncher</code> with <code>spring-cloud-deployer-kubernetes</code>. It receives task requests through <code>spring-cloud-stream</code> and launches the tasks.</p> <p>Everything is working perfectly, except that I want the task to be launched as <code>Kind: Job</code> instead of <code>Kind: Deployment</code>.</p> <p>I could not find any configuration or property in <code>spring-cloud-deployer-kubernetes</code> that does this, or whether it is available at all.</p>
<p>We moved away from Jobs to the bare-pods model for Spring Cloud Task (in SCDF) in order to better control the task's lifecycle, such as cleanly shutting down the container when the SCT operation is complete.</p> <p>However, there's <a href="https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/pull/163" rel="nofollow noreferrer">spring-cloud/spring-cloud-deployer-kubernetes#163</a>, which adds an option to choose between Jobs and Pods for Tasks. Please try it out and give us feedback on the PR. </p>
<p>I have a pod whose purpose is to take incoming data and write it to the host volume. I'm running this pod on all the minions.</p> <p>Now when I set up a NodePort service for these pods, traffic goes to one pod at a time. </p> <p>But how do I send a request to all of these pods on the different minions? How do I bypass the load balancing here? I want that data to be available in every minion's host volume.</p>
<p>Here's a method that works as long as you can send the requests from a container inside the k8s network (this may not match the OP's desire exactly, but I'm guessing this may work for someone googling this).</p> <p>You have to look up the pods somehow. Here I'm finding all pods in the <code>staging</code> namespace with the label <code>app=hot-app</code>:</p> <pre><code>kubectl get pods -l app=hot-app -n staging -o json | jq -r '.items[].status.podIP' </code></pre> <p>this example uses the awesome jq tool to parse the resulting json and grab the pod ips, but you can parse the json in other ways, including with kubectl itself.</p> <p>this returns something like this:</p> <pre><code>10.245.4.253 10.245.21.143 </code></pre> <p>you can find the internal port like this (example has just one container, so one unique port):</p> <pre><code>kubectl get pods -l app=hot-app -n staging -o json | jq -r '.items[].spec.containers[].ports[].containerPort' | sort | uniq 8080 </code></pre> <p>then you get inside a container in your k8s cluster with curl, combine the ips and port from the previous commands, and hit the pods like this:</p> <pre><code>curl 10.245.4.253:8080/hot-path curl 10.245.21.143:8080/hot-path </code></pre>
<p>I'm attempting to create a cluster on Google Kubernetes Engine that runs nginx, RStudio server and two Shiny apps, following and adapting <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="noreferrer">this guide</a>.</p> <p>I have 4 workloads that are all green in the UI, deployed via:</p> <pre><code>kubectl run nginx --image=nginx --port=80 kubectl run rstudio --image gcr.io/gcer-public/persistent-rstudio:latest --port 8787 kubectl run shiny1 --image gcr.io/gcer-public/shiny-googleauthrdemo:latest --port 3838 kubectl run shiny5 --image=flaviobarros/shiny-wordcloud --port=80 </code></pre> <p>They were then all exposed as node ports via:</p> <pre><code>kubectl expose deployment nginx --target-port=80 --type=NodePort kubectl expose deployment rstudio --target-port=8787 --type=NodePort kubectl expose deployment shiny1 --target-port=3838 --type=NodePort kubectl expose deployment shiny5 --target-port=80 --type=NodePort </code></pre> <p>..that are all green in the UI.</p> <p>I then deployed this Ingress backend</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: r-ingress spec: rules: - http: paths: - path: / backend: serviceName: nginx servicePort: 80 - path: /rstudio/ backend: serviceName: rstudio servicePort: 8787 - path: /shiny1/ backend: serviceName: shiny1 servicePort: 3838 - path: /shiny5/ backend: serviceName: shiny5 servicePort: 80 </code></pre> <p>The result is that the nginx routing works great, I can see "Welcome to nginx" webpage from home, but the three other paths I get:</p> <ul> <li>/rstudio/ - <code>Input/output error</code></li> <li>/shiny1/ - Page not found (the Shiny 404 page)</li> <li>/shiny5/ - Page not found (the Shiny 404 page)</li> </ul> <p>The RStudio and Shiny workloads both work when exposing via the single load balancer, mapped to 8787 and 3838 respectively.</p> <p>Can anyone point to where I'm going wrong?</p> <p>Qs:</p> <ul> <li>Do the Dockerfiles need to be adapted so they all give a 200 status on port 80 when requesting "/"? Do I need to change the health checker? I tried changing the RStudio login page (that 302 to /auth-sign-in if you are not logged in) but no luck</li> <li>Both RStudio and Shiny need websockets - does this affect this?</li> <li>Does session affinity need to be on? I tried adding that with IP but same errors. </li> </ul>
<p>The most likely problem is that with the Ingress you attached, the request URI is different than with direct access (<code>/shiny1/</code> vs <code>/</code>), so your app receives a path it does not know and has no content for that URI.</p> <p>With the Nginx Ingress Controller you can use the <code>ingress.kubernetes.io/rewrite-target: /</code> annotation to mitigate this and make sure that <code>/</code> is accessed even when there is a subfolder in the ingress path.</p>
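<p>A sketch of how that could look for one of the paths from the question, assuming an nginx ingress controller is deployed in the cluster (newer controller versions use the <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotation prefix instead):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: r-ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"      # handled by the nginx controller, not the GCE one
    ingress.kubernetes.io/rewrite-target: /   # strip the matched path before proxying
spec:
  rules:
  - http:
      paths:
      - path: /shiny1
        backend:
          serviceName: shiny1
          servicePort: 3838
</code></pre>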
<p>I am using the Mirantis kubeadm-dind-cluster repository (<a href="https://github.com/Mirantis/kubeadm-dind-cluster" rel="nofollow noreferrer">https://github.com/Mirantis/kubeadm-dind-cluster</a>) as my Kubernetes install; I came across this error when attempting to run a container -</p> <pre><code>panic: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:default:default" cannot create customresourcedefinitions.apiextensions.k8s.io at the cluster scope </code></pre> <p>So I attempted to add cluster-admin permissions to my account:</p> <pre><code>kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts </code></pre> <p>And get the following error:</p> <pre><code>Error: unknown flag: --clusterrole </code></pre> <p>Why is this? How do I fix this or get around it? I'm not sure how to convert the command into a YAML file to "kubectl create -f" to but it seems like that might be the way to go. </p> <p>All three nodes are on version 1.8.6.</p>
<p>What version of kubectl are you using? Be sure you are using a version that includes the <code>kubectl create clusterrolebinding</code> command.</p> <p>If your version of kubectl does not support that command, you can try creating it directly via a yaml file (though I'm not sure whether 1.5.x kubectl was happy submitting versions of API objects it didn't know about):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: serviceaccounts-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
</code></pre>
<p>We are building workflow scheduling application. We found Airflow as a good option for workflow manager and Kubernetes as good option for Cluster manager. Thus, flow would be, </p> <ol> <li>We will submit workflow DAG to Airflow.</li> <li>Airflow should submit the tasks of a given DAG to Kubernetes by specifying docker image.</li> <li>Kubernetes should execute the task by running docker container on an available EC2 worker node of a cluster. </li> </ol> <p>On searching, we found, Airflow has Operators for integrating with ECS, Mesos but not for Kubernetes. However, we found a request for Kubernetes Operator on <a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71013666" rel="noreferrer">Airflow wiki</a>, but not any further update on it. </p> <p>So, the question to be simply put is, how to integrate Airflow with Kubernetes?</p>
<p>This is in flight right now. You just can follow along with <a href="https://issues.apache.org/jira/browse/AIRFLOW-1314" rel="noreferrer">this major jira ticket</a></p> <p>One of the more stable branches (work is being led by a lot of this team) is located in the <a href="https://github.com/bloomberg/airflow/tree/airflow-kubernetes-executor" rel="noreferrer">bloomberg fork on github in the airflow-kubernetes-executor branch</a> though it is in the process of being rebased off of a constantly moving airflow master.</p> <p>I have a branch on my fork that addresses many of the short term issues and runs well enough called <a href="https://github.com/jzucker2/airflow/tree/frankensteins-monster" rel="noreferrer">frankensteins-monster</a>. Use this at your own risk though it works for me right now. I am building a docker image using the <code>build.sh</code> script located in <code>scripts/ci/kubernetes/docker</code>.</p> <p>Good luck!</p>
<p>From what I understand, using multiple replicas as well as auto-scaling is supposed to help in the case that lots of people visit your website and make calls to services provided by your Kubernetes cluster.<br> </p> <p>How do the replicas help with scaling?<br></p> <p>Aren't these extra pods all just running on the same computer with constant resources?<br> That would mean that they're all limited by a constant amount of CPU and memory.</p>
<p>Kubernetes has a couple of scaling mechanisms, the Horizontal Pod Autoscaler being the most basic, but not the only one.</p> <p>With the HPA you can spin up additional pods according to some metrics (most commonly CPU and memory). At some point you will hit a moment when your cluster nodes do not have enough resources to satisfy the resource requirements of your pods (you will have pods in the <code>Pending</code> state due to a lack of nodes available for scheduling).</p> <p>At that point a Cluster Autoscaler can kick in and, for example, scale an AWS Auto Scaling Group (or a similar cloud node pool) to add a new node to the cluster and make room for the pending pod(s).</p>
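<p>For reference, a minimal HPA manifest targeting a hypothetical deployment could look like the sketch below (the deployment name and thresholds are placeholders):</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add pods when average CPU exceeds 80%
</code></pre>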
<p>I'm attempting to run a 3-node Kubernetes cluster. I have the cluster up and running sufficiently that I have services running on different nodes. Unfortunately, I don't seem to be able to get NodePort based services to work correctly (as I understand correctness anyway...). My issue is that any NodePort services I define are available externally only on the node where their pod is running, and my understanding is that they should be available externally on any node in the cluster.</p> <p>One example is a local Jira service, which should be running on port 8082 (internally) and on 32760 externally. Here is the service definition (just the service part):</p> <pre><code>apiVersion: v1 kind: Service metadata: name: jira namespace: wittlesouth spec: ports: - port: 8082 selector: app: jira type: NodePort </code></pre> <p>Here's the output of kubectl get service --namespace wittlesouth</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE jenkins NodePort 10.100.119.22 &lt;none&gt; 8081:31377/TCP 3d jira NodePort 10.105.148.66 &lt;none&gt; 8082:32760/TCP 9h ws-mysql ExternalName &lt;none&gt; mysql.default.svc.cluster.local 3306/TCP 1d </code></pre> <p>The pod for this service has a HostPort set for 8082. The three nodes in the cluster are nuc1, nuc2, nuc3:</p> <pre><code>Eric:~ eric$ kubectl get nodes NAME STATUS ROLES AGE VERSION nuc1 Ready master 3d v1.9.2 nuc2 Ready &lt;none&gt; 2d v1.9.2 nuc3 Ready &lt;none&gt; 2d v1.9.2 </code></pre> <p>Here are the results of trying to access the Jira instance via both the host and node ports:</p> <pre><code>Eric:~ eric$ curl https://nuc1.wittlesouth.com:8082/ curl: (7) Failed to connect to nuc1.wittlesouth.com port 8082: Connection refused Eric:~ eric$ curl https://nuc2.wittlesouth.com:8082/ curl: (7) Failed to connect to nuc2.wittlesouth.com port 8082: Connection refused Eric:~ eric$ curl https://nuc3.wittlesouth.com:8082/ curl: (51) SSL: no alternative certificate subject name matches target host name 'nuc3.wittlesouth.com' Eric:~ eric$ curl https://nuc3.wittlesouth.com:32760/ curl: (51) SSL: no alternative certificate subject name matches target host name 'nuc3.wittlesouth.com' Eric:~ eric$ curl https://nuc2.wittlesouth.com:32760/ ^C Eric:~ eric$ curl https://nuc1.wittlesouth.com:32760/ curl: (7) Failed to connect to nuc1.wittlesouth.com port 32760: Operation timed out </code></pre> <p>Based on my reading, it appears that kube-proxy is not doing what it is supposed to. I tried reading through the documentation for troubleshooting kube-proxy; it appears to be slightly out of date (when I grep for hostname in iptables-save, it finds nothing). Here is the Kubernetes version information:</p> <pre><code>Eric:~ eric$ kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>It appears that kube-proxy is running:</p> <pre><code>eric@nuc2:~$ ps waux | grep kube-proxy root 1963 0.5 0.1 54992 37556 ? 
Ssl 21:43 0:02 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf eric 3654 0.0 0.0 14224 1028 pts/0 S+ 21:52 0:00 grep --color=auto kube-proxy </code></pre> <p>and</p> <pre><code>Eric:~ eric$ kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE calico-etcd-6vspc 1/1 Running 3 2d calico-kube-controllers-d669cc78f-b67rc 1/1 Running 5 3d calico-node-526md 2/2 Running 9 3d calico-node-5trgt 2/2 Running 3 2d calico-node-r9ww4 2/2 Running 3 2d etcd-nuc1 1/1 Running 6 3d kube-apiserver-nuc1 1/1 Running 7 3d kube-controller-manager-nuc1 1/1 Running 6 3d kube-dns-6f4fd4bdf-dt5fp 3/3 Running 12 3d kube-proxy-8xf4r 1/1 Running 1 2d kube-proxy-tq4wk 1/1 Running 4 3d kube-proxy-wcsxt 1/1 Running 1 2d kube-registry-proxy-cv8x9 1/1 Running 4 3d kube-registry-proxy-khpdx 1/1 Running 1 2d kube-registry-proxy-r5qcv 1/1 Running 1 2d kube-registry-v0-wcs5w 1/1 Running 2 3d kube-scheduler-nuc1 1/1 Running 6 3d kubernetes-dashboard-845747bdd4-dp7gg 1/1 Running 4 3d </code></pre> <p>It appears that kube-proxy is creating iptables entries for my service:</p> <pre><code>eric@nuc1:/var/lib$ sudo iptables-save | grep hostnames eric@nuc1:/var/lib$ sudo iptables-save | grep jira -A KUBE-NODEPORTS -p tcp -m comment --comment "wittlesouth/jira:" -m tcp --dport 32760 -j KUBE-MARK-MASQ -A KUBE-NODEPORTS -p tcp -m comment --comment "wittlesouth/jira:" -m tcp --dport 32760 -j KUBE-SVC-MO7XZ6ASHGM5BOPI -A KUBE-SEP-LP4GHTW6PY2HYMO6 -s 192.168.124.202/32 -m comment --comment "wittlesouth/jira:" -j KUBE-MARK-MASQ -A KUBE-SEP-LP4GHTW6PY2HYMO6 -p tcp -m comment --comment "wittlesouth/jira:" -m tcp -j DNAT --to-destination 192.168.124.202:8082 -A KUBE-SERVICES ! -s 10.5.0.0/16 -d 10.105.148.66/32 -p tcp -m comment --comment "wittlesouth/jira: cluster IP" -m tcp --dport 8082 -j KUBE-MARK-MASQ -A KUBE-SERVICES -d 10.105.148.66/32 -p tcp -m comment --comment "wittlesouth/jira: cluster IP" -m tcp --dport 8082 -j KUBE-SVC-MO7XZ6ASHGM5BOPI -A KUBE-SVC-MO7XZ6ASHGM5BOPI -m comment --comment "wittlesouth/jira:" -j KUBE-SEP-LP4GHTW6PY2HYMO6 </code></pre> <p>Unfortunately, I know nothing about iptables at this point, so I don't know if those entries look correct or not. I'm suspicious that my non-default network setting during kubeadm init may be related to this, as I was trying to set up Kubernetes to not use the same IP address range as my network (which is 192.168 based). The kubeadm init statement I used was:</p> <pre><code>kubeadm init --pod-network-cidr=10.5.0.0/16 --apiserver-cert-extra-sans ['kubemaster.wittlesouth.com','192.168.5.10' </code></pre> <p>In case you've noticed that I'm using Calico, which defaults to a pod network pool of 192.168.0.0: I modified the pod network pool setting for Calico when I created the Calico service (not sure if that is related or not).</p> <p>At this point, I'm concluding either I don't understand how NodePort services are supposed to work, or there is something wrong with my cluster configuration. Any suggestions on next steps to diagnose would be greatly appreciated!</p>
<p>When you define a NodePort service there are actually three ports in play:</p> <ul> <li>The container port: this is the port your pod is actually listening on, and it's only available when directly hitting your container from within the cluster, pod to pod (JIRA's default port would be 8080). You set the <code>targetPort</code> in your service to this port.</li> <li>The service port: this is the load balanced port the service itself exposes internally in the cluster. With a single pod there's no load balancing at play, but it's still the entry point to your service. The <code>port</code> in your service definition defines this. If you don't specify a <code>targetPort</code> then it assumes <code>port</code> and <code>targetPort</code> are the same.</li> <li>The node port: The port exposed on each worker node that routes to your service. This is a port in the NodePort range, by default 30000-32767 (depending on how your cluster is configured). This is the only port that you would be able to access from outside the cluster. This is defined with <code>nodePort</code>.</li> </ul> <p>Assuming that you are running JIRA on the standard port, you would want a service definition something like:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: jira namespace: wittlesouth spec: ports: - port: 80 # this is the service port, can be anything targetPort: 8080 # this is the container port (must match the port your pod is listening on) nodePort: 32000 # if you don't specify this it randomly picks an available port in your NodePort range selector: app: jira type: NodePort </code></pre> <p>So, if you use that configuration, an incoming request to your NodePort service goes: NodePort (32000) -> service (80) -> pod (8080). (Internally it might actually bypass the service, I'm not 100% sure about that, but you can conceptually think about it in this way).</p> <p>It also appears that you're trying to hit JIRA directly with HTTPS. Did you configure a certificate in your JIRA pod? If so, you need to make sure it's a valid cert for <code>nuc1.wittlesouth.com</code> or tell curl to ignore certificate validation errors with <code>curl -k</code>.</p>
<p>I am trying to install Kubernetes on Ubuntu 16.04. I am able to install the other Kubernetes components, but I don't know whether kube-proxy is installed. Should I get a separate binary package for it, or does it come prepackaged with the Kubernetes apt-get installation?</p>
<p>In the regular apt-get packages you would normally find kubectl, kubeadm and kubelet. If you use kubeadm to create the cluster, it will automatically prepare kube-proxy as well (in the form of a container, like the rest of the elements of the Kubernetes control plane). Therefore, you wouldn't need to install it separately. </p> <p>If you use the official Kubernetes tarball and try to manually install the cluster by yourself, you will need to configure kube-proxy just like the rest of the elements, but the binaries will be included in the tarball. This documentation shows the essential options to configure it: <a href="https://kubernetes.io/docs/getting-started-guides/scratch/#kube-proxy" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/scratch/#kube-proxy</a>. Another resource is Kubernetes the hard way: <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md</a></p>
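<p>As a quick sanity check on a kubeadm-based cluster, you can look for the kube-proxy DaemonSet and its pods in the kube-system namespace (assuming the default labels kubeadm applies):</p> <pre><code>kubectl get daemonset kube-proxy -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-proxy
</code></pre>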
<p>We have successfully created the pods, services and replication controllers according to our project requirements. Now we are planning to set up persistent storage in AWS using Kubernetes. I have created the YAML file to create an EBS volume in AWS, and it works as expected. I am able to claim the volume and successfully mount it to my pod (this is for a single replica only).</p> <p>But when I try to create more than one replica, my pods are not created successfully. The volume is created in only one availability zone, so if a pod is scheduled on a node in a different zone, it cannot mount the volume and fails to start. How do I create volumes in different zones for the same application? How do I make this work with replicas? How should I create my persistent volume claims?</p> <pre><code>--- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: mongo-pvc labels: type: amazonEBS spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi --- apiVersion: v1 kind: ReplicationController metadata: labels: name: mongo-pp name: mongo-controller-pp spec: replicas: 2 template: metadata: labels: name: mongo-pp spec: containers: - image: mongo name: mongo-pp ports: - name: mongo-pp containerPort: 27017 hostPort: 27017 volumeMounts: - mountPath: "/opt/couchbase/var" name: mypd1 volumes: - name: mypd1 persistentVolumeClaim: claimName: mongo-pvc </code></pre>
<p>When you are using ReadWriteOnce volumes (ones that cannot be mounted to multiple pods at the same time), simple PV/PVC creation will not cut it.</p> <p>Both PV and PVC are pretty "singular", in the sense that if you refer to a particular claim name in a Deployment or ReplicationController, all of your pods will try to get the same claim and the same PV bound to that claim, resulting in a race condition where only the first pod is allowed to mount that RWO storage.</p> <p>To mitigate this, you should not use a PVC directly but rather volumeClaimTemplates (a StatefulSet feature) that create a PVC dynamically for every new pod scaled, like below:</p> <pre><code> volumeClaimTemplates: - metadata: name: claimname spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 1Gi </code></pre>
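<p>Since volumeClaimTemplates only exist on StatefulSets, the ReplicationController would have to be replaced. A rough sketch of how the template fits in (names, image and size here are illustrative, not taken from your manifest) — each replica then gets its own PVC, e.g. <code>data-mongo-0</code>, <code>data-mongo-1</code>:</p> <pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 2
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
</code></pre>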
<p>Is there a way to find the pod name for a given Docker container ID?</p> <p>I can do it the other way round "kubectl describe pods" but then I have to run it on all the pods.</p>
<p>Yes, you can get the pod name given a container ID using the following <code>kubectl</code> request:</p> <pre><code>kubectl get pod -o jsonpath='{range .items[?(@.status.containerStatuses[].containerID=="docker://&lt;container_id&gt;")]}{.metadata.name}{end}' -n &lt;namespace&gt; </code></pre> <p>where <code>&lt;container_id&gt;</code> is the full Docker container ID and <code>&lt;namespace&gt;</code> is the namespace, which can be omitted if the pod is in the default namespace.</p> <p>For example:</p> <pre><code>kubectl get pod -o jsonpath='{range .items[?(@.status.containerStatuses[].containerID=="docker://686bc30be6e870023dcf611f7a7808516e041c892a236e565ba2bd3e0569ff7a")]}{.metadata.name}{end}' nginx-deployment-569477d6d8-xtf42 </code></pre>
<p>I am trying to create an OrientDB deployment on a Kubernetes cluster using the following YAML file and the orientdb:2.1.25 Docker image from Docker Hub.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: orientdb namespace: default labels: name: orientdb spec: replicas: 2 revisionHistoryLimit: 100 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 minReadySeconds: 5 template: metadata: labels: service: orientdb spec: containers: # Custom pod name. - name: orientdb-node image: orientdb:2.1.25 imagePullPolicy: Always ports: - name: http-port containerPort: 2480 # WEB port number. - name: binary-port containerPort: 2424 livenessProbe: httpGet: path: / port: http-port initialDelaySeconds: 60 timeoutSeconds: 30 readinessProbe: httpGet: path: / port: http-port initialDelaySeconds: 5 timeoutSeconds: 5 </code></pre> <p>But I am getting the following message</p> <pre><code>Readiness probe errored: gzip: invalid header Liveness probe errored: gzip: invalid header </code></pre> <p>How do I fix the readiness and liveness probes for OrientDB?</p>
<p>The OrientDB web application on port 2480 returns a gzip-compressed HTTP response, so you should add a custom <code>Accept-Encoding</code> header to the <code>httpGet</code> livenessProbe and readinessProbe to support this:</p> <pre><code>livenessProbe: httpGet: path: / port: http-port httpHeaders: - name: Accept-Encoding value: gzip initialDelaySeconds: 60 timeoutSeconds: 30 readinessProbe: httpGet: path: / port: http-port httpHeaders: - name: Accept-Encoding value: gzip initialDelaySeconds: 5 timeoutSeconds: 5 </code></pre>
<p>We have a GKE cluster with:</p> <ul> <li>master nodes with version 1.6.13-gke.0</li> <li>2 node pools with version 1.6.11-gke.0</li> </ul> <p>We have Stackdriver Monitoring and Logging activated.</p> <p>On 2018-01-22, masters were upgraded by Google to version 1.7.11-gke.1.</p> <p>After this upgrade, we have a lot of errors like these:</p> <pre><code>I 2018-01-25 11:35:23 +0000 [error]: Exception emitting record: No such file or directory @ sys_fail2 - (/var/log/fluentd-buffers/kubernetes.system.buffer..b5638802e3e04e72f.log, /var/log/fluentd-buffers/kubernetes.system.buffer..q5638802e3e04e72f.log) I 2018-01-25 11:35:23 +0000 [warn]: emit transaction failed: error_class=Errno::ENOENT error="No such file or directory @ sys_fail2 - (/var/log/fluentd-buffers/kubernetes.system.buffer..b5638802e3e04e72f.log, /var/log/fluentd-buffers/kubernetes.system.buffer..q5638802e3e04e72f.log)" tag="docker" I 2018-01-25 11:35:23 +0000 [warn]: suppressed same stacktrace </code></pre> <p>Those messages are flooding our logs with ~25 GB of logs each day, and are generated by pods managed by a DaemonSet called fluentd-gcp-v2.0.9.</p> <p>We found that it's a <a href="https://github.com/kubernetes/kubernetes/issues/56653" rel="nofollow noreferrer">bug</a> fixed in 1.8 and <a href="https://github.com/kubernetes/kubernetes/pull/57048" rel="nofollow noreferrer">backported to 1.7.12</a>.</p> <p>My questions are:</p> <ol> <li>Should we upgrade masters to version 1.7.12? Is it safe to do it? OR</li> <li>Is there any other alternative to test before upgrading?</li> </ol> <p>Thanks in advance.</p>
<p>First of all, the answer to question 2.</p> <p>As alternatives we could have:</p> <ul> <li>filtered fluentd to ignore logs from fluentd-gcp pods OR</li> <li>deactivate Stackdriver monitoring and logging</li> </ul> <p>To answer question 1:</p> <p>We upgraded to 1.7.12 in a <strong>test environment</strong>. The process took 3 minutes. During this period of time, we could not edit our cluster nor access it with kubectl (as expected).</p> <p>After the upgrade, we <strong>deleted</strong> all our pods called <strong>fluentd-gcp-*</strong> and the flood stopped instantly:</p> <pre><code>for pod in $(kubectl get pods -nkube-system | grep fluentd-gcp | awk '{print $1}'); do \ kubectl -nkube-system delete pod $pod; \ sleep 20; \ done; </code></pre>
<p>I am trying to install Kubernetes on Ubuntu 16.04. I am able to install the other Kubernetes components, but I don't know whether kube-proxy is installed. Should I get a separate binary package for it, or does it come prepackaged with the Kubernetes apt-get installation?</p>
<p>In most cases, installing kube-proxy on the node itself is not required, as a common pattern is running kube-proxy as a DaemonSet in your Kubernetes cluster.</p>
<p>To start of I have tested the tutorial at <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a></p> <p>which works fine. I also tested the same tutorial but added a tls secret as well to test https which also worked fine.</p> <p>My problems arise when I create my own image. Here is the steps I take:</p> <ol> <li>The Dockerfile:</li> </ol> <pre> # We label our stage as "builder" FROM node:9.4.0-alpine as builder COPY package.json package-lock.json ./ ## Storing node modules on a separate layer will prevent unnecessary npm installs at each build RUN npm i && mkdir /srv/cs-ui && cp -R ./node_modules ./srv/cs-ui WORKDIR /srv/cs-ui COPY . . ## Build the angular app in production mode and store the artifacts in dist folder RUN $(npm bin)/ng build --environment "prod" FROM nginx ## Copy our default nginx config COPY nginx/default.conf /etc/nginx/conf.d/ ## Remove default nginx website RUN rm -rf /usr/share/nginx/html/* ## From "builder" stage copy over the artifacts in dist folder to default nginx nginx public folder COPY --from=builder /srv/cs-ui/dist /usr/share/nginx/html/ </pre> <ol start="2"> <li>The Dockerfile is run with docker-compose file that looks like this:</li> </ol> <pre> version: '2' services: cs-ui: image: "gcr.io/cs-micro/cs-ui:v1" container_name: "cs-ui" tty: true build: . ports: - "80:80" </pre> <ol start="3"> <li>Locally this works without any issues. The next thing I do is to push it to the Container Registry.</li> </ol> <pre>gcloud docker -- push gcr.io/cs-micro/cs-ui:v1</pre> <ol start="4"> <li>After that I create a container:</li> </ol> <pre>kubectl run cs-ui --image=gcr.io/cs-micro/cs-ui:v1 --port=80</pre> <ol start="5"> <li>Then I expose it:</li> </ol> <pre>kubectl expose deployment cs-ui --target-port=80 --type=NodePort</pre> <ol start="6"> <li>Then I run the following ingress file:</li> </ol> <pre> apiVersion: extensions/v1beta1 kind: Ingress metadata: name: basic-ingress spec: tls: - secretName: tls-certificate backend: serviceName: cs-ui servicePort: 80 </pre> <p>with command:</p> <pre>kubectl apply -f test.yaml</pre> <ol start="7"> <li>kubectl describe service</li> </ol> <pre> Name: cs-ui Namespace: default Labels: run=cs-ui Annotations: Selector: run=cs-ui Type: NodePort IP: 10.35.244.124 Port: 80/TCP TargetPort: 80/TCP NodePort: 30272/TCP Endpoints: 10.32.0.32:80 Session Affinity: None External Traffic Policy: Cluster Events: Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP: 10.35.240.1 Port: https 443/TCP TargetPort: 443/TCP Endpoints: 35.195.192.28:443 Session Affinity: ClientIP Events: </pre> <ol start="8"> <li>kubectl describe deployment</li> </ol> <pre> Name: cs-ui Namespace: default CreationTimestamp: Thu, 25 Jan 2018 12:27:59 +0100 Labels: run=cs-ui Annotations: deployment.kubernetes.io/revision=1 Selector: run=cs-ui Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 1 max unavailable, 1 max surge Pod Template: Labels: run=cs-ui Containers: cs-ui: Image: gcr.io/cs-micro/cs-ui:v1 Port: 80/TCP Environment: Mounts: Volumes: Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable OldReplicaSets: NewReplicaSet: cs-ui-2929390783 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 9m 
deployment-controller Scaled up replica set cs-ui-2929390783 to 1 </pre> <ol start="9"> <li>kubectl describe ing</li> </ol> <pre> Name: basic-ingress Namespace: default Address: 35.227.220.186 Default backend: cs-ui:80 (10.32.0.32:80) TLS: tls-certificate terminates Rules: Host Path Backends ---- ---- -------- * * cs-ui:80 (10.32.0.32:80) Annotations: https-forwarding-rule: k8s-fws-default-basic-ingress--f5fde3efbfa51336 https-target-proxy: k8s-tps-default-basic-ingress--f5fde3efbfa51336 ssl-cert: k8s-ssl-default-basic-ingress--f5fde3efbfa51336 target-proxy: k8s-tp-default-basic-ingress--f5fde3efbfa51336 url-map: k8s-um-default-basic-ingress--f5fde3efbfa51336 backends: {"k8s-be-30272--f5fde3efbfa51336":"UNHEALTHY"} forwarding-rule: k8s-fw-default-basic-ingress--f5fde3efbfa51336 static-ip: k8s-fw-default-basic-ingress--f5fde3efbfa51336 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ADD 12m loadbalancer-controller default/basic-ingress Normal CREATE 11m loadbalancer-controller ip: 35.227.220.186 Normal Service 6m (x4 over 11m) loadbalancer-controller default backend set to cs-ui:30272 </pre> <ol start="10"> <li>After 3-5 minutes I get Unhealthy and I have no clue why because the setup is almost exactly the same as with their setup.</li> </ol> <p>I have read countless of threads on what to do when you get the backend status of Unhealthy, but none of them have helped. One mentioned to add a firewall rule mention in this tutorial: <a href="https://cloud.google.com/compute/docs/load-balancing/health-checks" rel="noreferrer">https://cloud.google.com/compute/docs/load-balancing/health-checks</a> which I have added, but did not help.</p> <p>If you have any suggestions I will gladly test them.</p>
<p>It turned out our Angular application had a redirect on '/', which returned a 302 response. The GCE health check expects a 200, so it fails and the backend ends up in an UNHEALTHY state.</p> <p>As soon as we set up a custom health check it worked.</p>
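<p>For anyone hitting the same issue: one way to set this up is to serve a path that always returns a plain 200 and point the pod's readinessProbe at it, since the GCE ingress controller derives its backend health check from the readiness probe. The <code>/healthz</code> path and the nginx snippet below are assumptions for illustration, not something from the question:</p> <pre><code>location /healthz {
  return 200 'ok';
}
</code></pre> <p>and in the deployment:</p> <pre><code>readinessProbe:
  httpGet:
    path: /healthz
    port: 80
</code></pre>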
<p>I'm new to Kubernetes and I'm learning. I have my Windows 8 machine where I installed Vagrant. Using vagrant I'm running ubuntu VM and inside that VM I'm running 3 docker containers.</p> <p>Vagrant file:</p> <pre><code>Vagrant.configure(2) do |config| config.vm.box = "test" config.vm.network "public_network" config.vm.network "forwarded_port", guest: 8080, host: 8080 config.vm.network "forwarded_port", guest: 50000, host: 50000 config.vm.network "forwarded_port", guest: 8081, host: 8089 config.vm.network "forwarded_port", guest: 9000, host: 9000 config.vm.network "forwarded_port", guest: 3306, host: 3306 config.vm.provider "virtualbox" do |v| v.memory = 2048 v.cpus = 2 end config.vm.provider "virtualbox" do |v| v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"] v.customize ["modifyvm", :id, "--natdnsproxy1", "on"] end end </code></pre> <p>Container in Ubuntu VM :</p> <pre><code>root@vagrant-ubuntu-trusty:~/docker-containers# docker images REPOSITORY TAG IMAGE ID CREATED SIZE dockercontainers_jenkins latest bb1142706601 4 days ago 1.03GB dockercontainers_sonar latest 3f021a73750c 4 days ago 1.61GB dockercontainers_nexus latest ddc31d7ad052 4 days ago 1.06GB jenkins/jenkins lts 279f21046a63 4 days ago 813MB openjdk 8 7c57090325cc 5 weeks ago 737MB </code></pre> <p>In same VM now I installed minikube and kubectl as mentioned in this <a href="https://github.com/kubernetes/minikube" rel="noreferrer">link</a></p> <p>minikube version:</p> <pre><code>minikube version: v0.24.1 </code></pre> <p>kubectl version:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Minikube successfully started in my ubuntu VM. 
I have created <code>pod.yml</code> file.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: testsonaralm labels: app: sonar_alm spec: containers: - name: alm-sonar image: dockercontainers_sonar:latest imagePullPolicy: IfNotPresent ports: - containerPort: 9000 </code></pre> <p>Using this yml file, I created a pod in minikube</p> <pre><code>root@vagrant-ubuntu-trusty:~/docker-containers# kubectl create -f test_pod.yml pod "testsonaralm" created </code></pre> <p>Now I created a service using <code>kubectl</code> command.</p> <pre><code>root@vagrant-ubuntu-trusty:~/docker-containers# kubectl expose pod testsonaralm --port=9000 --target-port=9000 --name almsonar service "almsonar" exposed root@vagrant-ubuntu-trusty:~/docker-containers# kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE almsonar ClusterIP 10.102.86.193 &lt;none&gt; 9000/TCP 10s kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3d </code></pre> <p>When I tried to access the URL from my Host machine, I'm getting "Network Error".</p> <pre><code>root@vagrant-ubuntu-trusty:~/docker-containers# kubectl describe svc almsonar Name: almsonar Namespace: default Labels: app=sonar_alm Annotations: &lt;none&gt; Selector: app=sonar_alm Type: ClusterIP IP: 10.101.237.223 Port: &lt;unset&gt; 9000/TCP TargetPort: 9000/TCP Endpoints: 172.17.0.1:9000 Session Affinity: None Events: &lt;none&gt; root@vagrant-ubuntu-trusty:~/docker-containers# minikube ip 127.0.0.1 </code></pre> <p>When I execute the <code>minikube service almsonar --url</code> command, I get an Empty response. So I deleted the service and created a new service with modified command.</p> <pre><code>root@vagrant-ubuntu-trusty:~/docker-containers# kubectl expose pod testsonaralm --type=NodePort --name almsonar service "almsonar" exposed </code></pre> <p>Now when I run <code>minikube service almsonar --url</code> command,I got an URL as </p> <pre><code>root@vagrant-ubuntu-trusty:~/docker-containers# minikube service almsonar --url http://127.0.0.1:31209 root@vagrant-ubuntu-trusty:~/docker-containers# kubectl describe svc almsonar Name: almsonar Namespace: default Labels: app=sonar_alm Annotations: &lt;none&gt; Selector: app=sonar_alm Type: NodePort IP: 10.101.192.1 Port: &lt;unset&gt; 9000/TCP TargetPort: 9000/TCP NodePort: &lt;unset&gt; 30600/TCP Endpoints: 172.17.0.1:9000 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; root@vagrant-ubuntu-trusty:~/docker-containers# minikube ip 127.0.0.1 </code></pre> <p>I'm unable to access this URL in my Ubuntu VM, </p> <pre><code>root@vagrant-ubuntu-trusty:~/docker-containers# curl http://127.0.0.1:31209 &lt;HTML&gt; &lt;HEAD&gt;&lt;TITLE&gt;Redirection&lt;/TITLE&gt;&lt;/HEAD&gt; &lt;BODY&gt;&lt;H1&gt;Redirect&lt;/H1&gt;&lt;/BODY&gt; </code></pre> <p>When I read the Kubernetes document, the minikube service URL will have a vaild IP. But in my case URL contains localhost IP address.</p>
<p>From what I see, you executed <code>minikube start</code> inside the Ubuntu VM. By default, this would try to download the minikube ISO and launch another VM (a VM inside a VM) where minikube would be running. </p> <p>I think that this nested virtualization is causing issues with your installation. If, for some reason, minikube was started with <code>--vm-driver=none</code> to avoid the VM inside the VM (I think this is the case, looking at the 127.0.0.1 IP), there are some features that are in beta (at the time this answer was written), which could explain the weird behaviour. Because of this, my first advice would be to execute minikube on the Windows host so it spins up a VM inside VirtualBox. Inside it you could reproduce the container environment you previously had, and execute the kubectl commands from the Windows host. Most of the documentation assumes this setup, so you would not see oddities like 127.0.0.1 as the minikube IP.</p> <p>In any case, in order to have full redirection and check if the service is running, try executing the curl command like this:</p> <pre><code> curl -L http://127.0.0.1:31209 </code></pre>
<p>Is it possible to expose service ports with different policies, like NodePort or ClusterIP?</p> <p>For example, we have </p> <ul> <li>a public XMPP API that should be accessible to end users</li> <li>an internal cluster API to communicate with other micro-services</li> </ul> <p>Right now I can create different services of types NodePort and ClusterIP, but is it possible to handle this with a single k8s 'Service'?</p>
<p>No; using two distinct Services, each of its own type, is the right way to do it, and there is no harm or overhead in doing it like this.</p>
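<p>As an illustration only (the labels and ports below are placeholders, not taken from your setup), the two Services could select the same pods like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: xmpp-public
spec:
  type: NodePort
  selector:
    app: xmpp
  ports:
  - port: 5222
    targetPort: 5222
---
apiVersion: v1
kind: Service
metadata:
  name: xmpp-internal
spec:
  type: ClusterIP
  selector:
    app: xmpp
  ports:
  - port: 8080
    targetPort: 8080
</code></pre>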
<p>I am trying to use the Kubernetes 1.7.12 fluentd-elasticsearch addon: <a href="https://github.com/kubernetes/kubernetes/tree/v1.7.12/cluster/addons/fluentd-elasticsearch" rel="noreferrer">https://github.com/kubernetes/kubernetes/tree/v1.7.12/cluster/addons/fluentd-elasticsearch</a></p> <p>ElasticSearch starts up and can respond with:</p> <pre><code>{ "name" : "0322714ad5b7", "cluster_name" : "kubernetes-logging", "cluster_uuid" : "_na_", "version" : { "number" : "2.4.1", "build_hash" : "c67dc32e24162035d18d6fe1e952c4cbcbe79d16", "build_timestamp" : "2016-09-27T18:57:55Z", "build_snapshot" : false, "lucene_version" : "5.5.2" }, "tagline" : "You Know, for Search" } </code></pre> <p>But Kibana is still unable to connect to it. The connection error starts out with:</p> <pre><code>{"type":"log","@timestamp":"2018-01-23T07:42:06Z","tags":["warning","elasticsearch"],"pid":6,"message":"Unable to revive connection: http://elasticsearch-logging:9200/"} {"type":"log","@timestamp":"2018-01-23T07:42:06Z","tags":["warning","elasticsearch"],"pid":6,"message":"No living connections"} </code></pre> <p>And after ElasticSearch is up, the error changes to:</p> <pre><code>{"type":"log","@timestamp":"2018-01-23T07:42:08Z","tags":["status","plugin:[email protected]","error"],"pid":6,"state":"red","message":"Status changed from red to red - Service Unavailable","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch-logging:9200."} </code></pre> <p>So it seems as though, Kibana is finally able to get a response from ElasticSearch, but a connection still cannot be established.</p> <p>This is what the Kibana dashboard looks like: <a href="https://i.stack.imgur.com/DafCG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DafCG.png" alt="enter image description here"></a></p> <p>I tried to get the logs to output more information, but do not have enough knowledge about Kibana and ElasticSearch to know what else I can try next.</p> <p>I am able to reproduce the error locally using this <code>docker-compose.yml</code>:</p> <pre><code>version: '2' services: elasticsearch-logging: image: gcr.io/google_containers/elasticsearch:v2.4.1-2 ports: - "9200:9200" - "9300:9300" kibana-logging: image: gcr.io/google_containers/kibana:v4.6.1-1 ports: - "5601:5601" depends_on: - elasticsearch-logging environment: - ELASTICSEARCH_URL=http://elasticsearch-logging:9200 </code></pre> <p>It doesn't look like there should be much involved based on what I can tell from this question: <a href="https://stackoverflow.com/questions/40341346/kibana-on-docker-cannot-connect-to-elasticsearch">Kibana on Docker cannot connect to Elasticsearch</a> and this blog: <a href="https://gunith.github.io/docker-kibana-elasticsearch/" rel="noreferrer">https://gunith.github.io/docker-kibana-elasticsearch/</a></p> <p>But I can't figure out what I'm missing.</p> <p>Any ideas what else I might be able to try?</p> <p>Thank you for your time. 
:)</p> <p>Update 1:</p> <p><code>curl</code>ing <code>http://elasticsearch-logging</code> on the Kubernetes cluster resulted in the same output:</p> <pre><code>{ "name" : "elasticsearch-logging-v1-68km4", "cluster_name" : "kubernetes-logging", "cluster_uuid" : "_na_", "version" : { "number" : "2.4.1", "build_hash" : "c67dc32e24162035d18d6fe1e952c4cbcbe79d16", "build_timestamp" : "2016-09-27T18:57:55Z", "build_snapshot" : false, "lucene_version" : "5.5.2" }, "tagline" : "You Know, for Search" } </code></pre> <p><code>curl</code>ing <code>http://elasticsearch-logging/_cat/indices?pretty</code> on the Kubernetes cluster timed out because of a proxy rule. Using the <code>docker-compose.yml</code> and <code>curl</code>ing locally (e.g. <code>curl localhost:9200/_cat/indices?pretty</code>) results in:</p> <pre><code>{ "error" : { "root_cause" : [ { "type" : "master_not_discovered_exception", "reason" : null } ], "type" : "master_not_discovered_exception", "reason" : null }, "status" : 503 } </code></pre> <p>The <code>docker-compose</code> logs show:</p> <pre><code>[2018-01-23 17:04:39,110][DEBUG][action.admin.cluster.state] [ac1f2a13a637] no known master node, scheduling a retry [2018-01-23 17:05:09,112][DEBUG][action.admin.cluster.state] [ac1f2a13a637] timed out while retrying [cluster:monitor/state] after failure (timeout [30s]) [2018-01-23 17:05:09,116][WARN ][rest.suppressed ] path: /_cat/indices, params: {pretty=} MasterNotDiscoveredException[null] at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:234) at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:236) at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:804) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) </code></pre> <p>Update 2: Running <code>kubectl --namespace kube-system logs -c kubedns po/kube-dns-667321983-dt5lz --tail 50 --follow</code> yields:</p> <pre><code>I0124 16:43:33.591112 5 dns.go:264] New service: kibana-logging I0124 16:43:33.591225 5 dns.go:264] New service: nginx I0124 16:43:33.591251 5 dns.go:264] New service: registry I0124 16:43:33.591274 5 dns.go:264] New service: sudoe I0124 16:43:33.591295 5 dns.go:264] New service: default-http-backend I0124 16:43:33.591317 5 dns.go:264] New service: kube-dns I0124 16:43:33.591344 5 dns.go:462] Added SRV record &amp;{Host:kube-dns.kube-system.svc.cluster.local. Port:53 Priority:10 Weight:10 Text: Mail:false Ttl:30 TargetStrip:0 Group: Key:} I0124 16:43:33.591369 5 dns.go:462] Added SRV record &amp;{Host:kube-dns.kube-system.svc.cluster.local. Port:53 Priority:10 Weight:10 Text: Mail:false Ttl:30 TargetStrip:0 Group: Key:} I0124 16:43:33.591390 5 dns.go:264] New service: kubernetes I0124 16:43:33.591409 5 dns.go:462] Added SRV record &amp;{Host:kubernetes.default.svc.cluster.local. Port:443 Priority:10 Weight:10 Text: Mail:false Ttl:30 TargetStrip:0 Group: Key:} I0124 16:43:33.591429 5 dns.go:264] New service: elasticsearch-logging </code></pre> <p>Update 3:</p> <p>I'm still trying to get everything to work, but with the help of others, I am confident it is a RBAC issue. 
I'm not completely sure, but it looks like the elasticsearch nodes were not able to connect with the master (which I never knew was even needed) due to permissions.</p> <p>Here are some steps that helped, in case it helps others starting out:</p> <p>with RBAC on:</p> <pre><code># kubectl --kubeconfig kubeconfig.yaml --namespace kube-system logs po/elasticsearch-logging-v1-wkwcs F0119 00:18:44.285773 9 elasticsearch_logging_discovery.go:60] kube-system namespace doesn't exist: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "kube-system". (get namespaces kube-system) goroutine 1 [running]: k8s.io/kubernetes/vendor/github.com/golang/glog.stacks(0x1f7f600, 0xc400000000, 0xee, 0x1b2) vendor/github.com/golang/glog/glog.go:766 +0xa5 k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).output(0x1f5f5c0, 0xc400000003, 0xc42006c300, 0x1ef20c8, 0x22, 0x3c, 0x0) vendor/github.com/golang/glog/glog.go:717 +0x337 k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).printf(0x1f5f5c0, 0xc400000003, 0x16949d6, 0x1e, 0xc420579ee8, 0x2, 0x2) vendor/github.com/golang/glog/glog.go:655 +0x14c k8s.io/kubernetes/vendor/github.com/golang/glog.Fatalf(0x16949d6, 0x1e, 0xc420579ee8, 0x2, 0x2) vendor/github.com/golang/glog/glog.go:1145 +0x67 main.main() cluster/addons/fluentd-elasticsearch/es-image/elasticsearch_logging_discovery.go:60 +0xb53 [2018-01-19 00:18:45,273][INFO ][node ] [elasticsearch-logging-v1-wkwcs] version[2.4.1], pid[5], build[c67dc32/2016-09-27T18:57:55Z] [2018-01-19 00:18:45,275][INFO ][node ] [elasticsearch-logging-v1-wkwcs] initializing ... </code></pre> <pre><code># kubectl --kubeconfig kubeconfig.yaml --namespace kube-system exec kibana-logging-2104905774-69wgv curl elasticsearch-logging.kube-system:9200/_cat/indices?pretty { "error" : { "root_cause" : [ { "type" : "master_not_discovered_exception", "reason" : null } ], "type" : "master_not_discovered_exception", "reason" : null }, "status" : 503 } </code></pre> <p>With RBAC off:</p> <pre><code># kubectl --kubeconfig kubeconfig.yaml --namespace kube-system log elasticsearch-logging-v1-7shgk [2018-01-26 01:19:52,294][INFO ][node ] [elasticsearch-logging-v1-7shgk] version[2.4.1], pid[5], build[c67dc32/2016-09-27T18:57:55Z] [2018-01-26 01:19:52,294][INFO ][node ] [elasticsearch-logging-v1-7shgk] initializing ... [2018-01-26 01:19:53,077][INFO ][plugins ] [elasticsearch-logging-v1-7shgk] modules [reindex, lang-expression, lang-groovy], plugins [], sites [] </code></pre> <pre><code># kubectl --kubeconfig kubeconfig.yaml --namespace kube-system exec elasticsearch-logging-v1-7shgk curl http://elasticsearch-logging:9200/_cat/indices?pretty % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 40 100 40 0 0 2 0 0:00:20 0:00:15 0:00:05 10 green open .kibana 1 1 1 0 6.2kb 3.1kb </code></pre> <p>Thanks everyone for your help :)</p>
<p>A few troubleshooting tips:</p> <p>1) ensure ElasticSearch is running fine.</p> <p>Enter the container running elasticsearch and run:</p> <p><code>curl localhost:9200</code></p> <p>You should get a JSON, with some data about elasticsearch.</p> <p>2) ensure ElasticSearch is reachable from the kibana container</p> <p>Enter the kibana container and run:</p> <p><code>curl &lt;elasticsearch_service_name&gt;:9200</code></p> <p>You should get the same output as above.</p> <p>3) Ensure your ES indices are fine.</p> <p>Run the following command from the elasticsearch container:</p> <p><code>curl localhost:9200/_cat/indices?pretty</code></p> <p>You should get a table with all indices in your ES cluster and their status (which should be green or yellow in case you only have one ES replica).</p> <p>If one of the above points fails, check the logs of your ES container for any error messages and try to solve them.</p>
<p>How do I create groups in kubernetes? What are the default groups created in kubernetes?</p> <p>In my kubernetes installation, this is the subject section of a ClusterRoleBinding:</p> <pre><code>subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: default - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:default - kind: ServiceAccount name: default namespace: kube-system - apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes - apiGroup: rbac.authorization.k8s.io kind: Group name: federation-system - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:federation-system </code></pre> <p>How can I see the components of the groups "default" for example. kubectl get xxx? How can I create my own groups?</p> <p>Any pointer to documentation specific to groups, not about RBAC or Authorization (I always fall in that k8s documentation and it does not explain groups) will be appreciated.</p> <p>Thank you.</p>
<p>You don't create groups in Kubernetes; the groups are defined in the identity provider.</p> <p>For example, I use Keystone from OpenStack as the identity provider. I use a bearer token to log in, and Kubernetes validates the token by connecting to Keystone, which answers with my username and my project UUID in OpenStack. The project UUID is then the group in Kubernetes.</p> <p>I can write the following:</p> <pre><code>--- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-pods namespace: default subjects: - kind: Group name: &lt;openstack_project_uuid&gt; apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: pod-reader apiGroup: rbac.authorization.k8s.io </code></pre> <p>or, if I want to use the user:</p> <pre><code>--- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-pods namespace: default subjects: - kind: User name: &lt;openstack_user_name&gt; apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: pod-reader apiGroup: rbac.authorization.k8s.io </code></pre> <p>Probably someone can post a similar answer with an LDAP example. I have only ever used OpenStack Keystone, but I hope I answered your question.</p>
<p>I'm interested in getting started with Kubernetes, but my needs are simple and it does not look simple. I have a number of containerized applications that I deploy to container servers. I use nginx as a reverse proxy to expose these applications.</p> <p>As far as I can tell, Kubernetes is meant to simplify management of setups like this. But I'm not sure the setup investment is worth it, given that I only realistically need one instance of each app running.</p> <p>What is the simplest reasonable Kubernetes setup that I can deploy a few containerized applications to?</p> <p>EDIT: If I start using Kubernetes, it will be using only on-site servers. The applications in question are ones I’ve developed for my employer, who requires that everything stays on-site.</p>
<p>As you want to get started with Kubernetes, I assume you want a setup for local development; <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">minikube</a> is the best candidate for this purpose. You can also take a look at the interactive <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/" rel="nofollow noreferrer">tutorials</a> on the official Kubernetes website, which I find very helpful.</p>
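<p>For reference, a typical first session with minikube looks something like the following; the image and names here are only examples:</p> <pre><code>minikube start
kubectl run hello --image=nginx --port=80
kubectl expose deployment hello --port=80 --type=NodePort
minikube service hello --url
</code></pre>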
<p>According to <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">Traefik's Kubernetes guide</a>, I should use a Service of type <strong>NodePort</strong> to expose Traefik (YAML file <a href="https://github.com/containous/traefik/blob/master/examples/k8s/traefik-deployment.yaml" rel="nofollow noreferrer">here</a>). However, if I do that, how do I expose one of the IPs from my cluster to the outside world?</p> <p>At this moment, I have edited the example <a href="https://github.com/containous/traefik/blob/master/examples/k8s/traefik-deployment.yaml" rel="nofollow noreferrer">traefik-deployment.yaml</a> file to be a <strong>LoadBalancer</strong> instead, and it seems to work just fine with the IP that the load balancer provides. What am I missing?</p>
<p>Both are valid approaches. The Kubernetes guide basically makes one proposal to set things up (with somewhat of a focus on the local minikube configuration). If you have found a path that's more convenient for you, by all means feel free to stick to it.</p> <p>FWIW, one concern I sometimes hear raised is that cloud-based LoadBalancers may become expensive much faster, whereas with a NodePort you have more control over how requests are routed into your cluster (at the price of reduced convenience).</p>
<p>I'm interested in getting started with Kubernetes, but my needs are simple and it does not look simple. I have a number of containerized applications that I deploy to container servers. I use nginx as a reverse proxy to expose these applications.</p> <p>As far as I can tell, Kubernetes is meant to simplify management of setups like this. But I'm not sure the setup investment is worth it, given that I only realistically need one instance of each app running.</p> <p>What is the simplest reasonable Kubernetes setup that I can deploy a few containerized applications to?</p> <p>EDIT: If I start using Kubernetes, it will be using only on-site servers. The applications in question are ones I’ve developed for my employer, who requires that everything stays on-site.</p>
<ol> <li>On a developer's machine, you should use minikube.</li> <li>On Azure / Google / Amazon etc., you should use the managed Kubernetes services.</li> <li><p>On-prem, you should deploy Kubernetes with your own setup:</p> <p>3.1. <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way</a></p> <p>3.2. with kubeadm</p> <p>3.3. with Ansible scripts like kubespray</p></li> </ol> <p>If you choose a kubeadm installation, you should also use kubeadm when upgrading the Kubernetes cluster. The best way to deploy on-prem is using kubeadm, kubespray, or automating it with Pivotal's BOSH scripts.</p>
<p>I want to deploy a DaemonSet for monitoring purpose. So these Pods need to be deployed in all Nodes.</p> <blockquote> <p>A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.</p> </blockquote> <p>I am using a DaemonSet so that all nodes get a copy.</p> <pre><code> spec: containers: - name: fluentd image: aerocloud.io/containers/fluentd:0.0.1 volumeMounts: - name: varlog mountPath: /var/log volumes: - name: varlog hostPath: path: /var/log </code></pre> <p>When I'm creating this <code>DaemonSet</code> in my Kubernetes cluster, I don't see Pod running in my master node.</p> <p>Pod for this DaemonSet are running in all nodes except Master node.</p> <p>What am I missing here? How can I enforce scheduler to schedule a Pod in Master node?</p>
<p>Since Kubernetes 1.6, DaemonSets do not schedule on master nodes by default. In order to schedule it on the master, you have to add a toleration to the Pod spec section:</p> <pre><code>tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule </code></pre> <p>For more details, check out the example YAML files in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noreferrer">Kubernetes DaemonSet documentation</a>. It is also mentioned in the chapter <em>How Daemon Pods are Scheduled</em>.</p>
<p>I have a cluster in AWS and I am using Kubernetes. I have an app running on a machine (VM) in the same network as the cluster. In my browser I can type <a href="http://ipaddress:port/status" rel="nofollow noreferrer">http://ipaddress:port/status</a> and I get a response.</p> <p>In my pod I can ping the IP address and get a response, but if I do wget http://ipaddress:port/status it doesn't connect.</p> <p>I have tried some things but have not been able to succeed. How do I get the pod in the cluster to be able to open this URL? What do I need to do?</p>
<p>You can integrate external services within Kubernetes by creating a Service without a selector and a matching Endpoints object that points at the external IP and port. The Endpoints object must have the same name as the Service.</p> <p>endpoint.yaml (a standalone Endpoints object, here for an external MySQL server; it needs a selector-less Service with the same name, <code>external-ip-database</code>):</p> <pre><code> kind: Endpoints apiVersion: v1 metadata: name: external-ip-database subsets: - addresses: - ip: 192.168.0.1 ports: - port: 3306 </code></pre> <p>service.yaml (a complete Service plus Endpoints pair for an external server at 192.168.1.103:1433):</p> <pre><code> apiVersion: v1 kind: Service metadata: name: database spec: ports: - port: 1433 targetPort: 1433 protocol: TCP --- # Because this service has no selector, the corresponding Endpoints # object will not be created. You can manually map the service to # your own specific endpoints: kind: Endpoints apiVersion: v1 metadata: name: database subsets: - addresses: - ip: "192.168.1.103" ports: - port: 1433 </code></pre>
<p>I am trying to pull the image from the ECR repository inside the Kubernetes cluster, but I am not able to do this.</p> <p>I tried creating a secret and updated in the pod file, but I am not able to do this I am getting an error "no basic auth credentials".</p> <p>Please can anyone give me the step by step instructions to pull the image from a ECR repository inside the Kubernetes cluster.</p>
<p>Your problem may be caused by the fact that ECR credentials work only for 12 hours, so maybe you are trying to use expired credentials. </p> <p>I recommend you to have a look at <a href="https://github.com/upmc-enterprises/registry-creds" rel="nofollow noreferrer">upmc-enterprises/registry-creds</a>. This tool can be installed on your cluster and automatically refresh ECR/GCR credentials before they expire.</p>
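<p>If you prefer to handle this yourself, the usual manual approach is to create a docker-registry secret from a fresh ECR token and reference it from the pod via <code>imagePullSecrets</code> — keeping in mind you would need to refresh it every 12 hours. A sketch, assuming a recent AWS CLI (the account ID, region and secret name are placeholders):</p> <pre><code>TOKEN=$(aws ecr get-login-password --region &lt;region&gt;)
kubectl create secret docker-registry ecr-creds \
  --docker-server=&lt;aws_account_id&gt;.dkr.ecr.&lt;region&gt;.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$TOKEN" \
  --docker-email=none@example.com
</code></pre> <p>and in the pod spec:</p> <pre><code>spec:
  imagePullSecrets:
  - name: ecr-creds
</code></pre>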
<p>I have a deployment config for an app, that (among other things) creates a secret for a mysql database:</p> <pre><code>--- apiVersion: v1 kind: Secret metadata: name: mysql-secret type: Opaque data: MYSQL_USER: my_user MYSQL_PASSWORD: my_random_secret MYSQL_DATABASE: my_db MYSQL_ROOT_PASSWORD: my_random_secret --- etc... </code></pre> <p>The deployment file is under source control, so I don't want to place the secrets there.</p> <p>Does anyone know how I can tell Kubernetes to generate random strings for each variable which has <code>my_random_secret</code> as a value in my example? Preferably something that can be configured using the yaml file, without needing to invoke any extra commands.</p>
<p>As far as I understand, you do not want to keep the secret values in your repository, so you need to generate them when you create that Secret.</p> <p>There may be a way to create the Kubernetes resource using a Go template, but I didn't find enough information on that, so I can't help you there.</p> <p>But you can also create the Secret using a script, so your secret values are never exposed.</p> <p>The following script can help you in that case. It generates random passwords and creates the Secret with them.</p> <pre><code>cat &lt;&lt;EOF | kubectl create -f - apiVersion: v1 kind: Secret metadata: name: mysql-secret type: Opaque data: MYSQL_PASSWORD: $(head -c 24 /dev/random | base64) MYSQL_ROOT_PASSWORD: $(head -c 24 /dev/random | base64) stringData: MYSQL_USER: my_user MYSQL_DATABASE: my_db EOF </code></pre> <p>Run this script.</p> <p>Hope it will work for you.</p>
<p>We have success creating the pods, services and replication controllers according to our project requirements. Now we are planning to setup persistence storage in AWS using Kubernetes. I have created the YAML file to create an EBS volume in AWS, it's working fine as expected. I am able to claim volume and successfully mount to my pod (this is for single replica only).</p> <p>I am able to successfully create the file.Volume also creating but my Pods is going to pending state, volume still shows available state in aws. I am not able to see any error logs over there.</p> <p>Storage file:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: mongo-ssd provisioner: kubernetes.io/aws-ebs parameters: type: gp2 </code></pre> <p>Main file:</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: web2 spec: selector: matchLabels: app: mongodb serviceName: "mongodb" replicas: 2 template: metadata: labels: app: mongodb annotations: pod.alpha.kubernetes.io/initialized: "true" spec: containers: - image: mongo name: mongodb ports: - name: web2 containerPort: 27017 hostPort: 27017 volumeMounts: - mountPath: "/opt/couchbase/var" name: mypd1 volumeClaimTemplates: - metadata: name: mypd1 annotations: volume.alpha.kubernetes.io/storage-class: mongo-ssd spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 10Gi </code></pre> <p>Now I am planning to set up a pod Autoscaling. I have seen pod autoscaling for deployment and ReplicationContoller. May I know can we implement pod auto-scaling for Stateful set also?</p>
<p>The Horizontal Pod Autoscaler can scale only a Deployment, ReplicaSet or ReplicationController. You cannot scale StatefulSets with it (see the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">Kubernetes docs for more details</a>).</p> <p>The main reason is that most of the stateful applications running in StatefulSets (such as your MongoDB) are usually not as easy to scale up / down as the stateless applications running as Deployments. Scaling up and down is usually quite a complicated process for stateful apps, one which you do not want to be driven only by the autoscaler. It usually requires some additional support logic in the application itself. And especially with scale down, it can also put your data at risk. Autoscaling is more useful for short-term changes in load, while scaling of StatefulSets requires more long-term thinking. Because of this complexity, you do not want your database to be scaling up and down every minute.</p>
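<p>If you do decide to change the replica count of your StatefulSet, you can still do it manually (or from a script or your own controller) rather than through the HPA, for example — assuming your kubectl and cluster versions support scaling StatefulSets:</p> <pre><code>kubectl scale statefulset web2 --replicas=3
</code></pre>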
<p>I am new to RabbitMQ and I have trouble handling a RabbitMQ cluster.</p> <p>The topology is like:</p> <p><a href="https://i.stack.imgur.com/srHSW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/srHSW.png" alt="enter image description here"></a></p> <p>At first, everything is OK. RabbitMQ node1 and RabbitMQ node2 are in a cluster. They are interconnected by a RabbitMQ plugin called autocluster.</p> <p>Then I delete pod rabbitmq-1 with <code>kubectl delete pod rabbitmq-1</code>, and I found that the RabbitMQ application on node1 had stopped. I don't understand why RabbitMQ will stop the application if it detects another node's failure. It does not make sense. Is this behaviour designed by RabbitMQ or autocluster? Can you enlighten me?</p> <p>My config is like:</p> <pre><code>[ {rabbit, [ {tcp_listen_options, [ {backlog, 128}, {nodelay, true}, {linger, {true,0}}, {exit_on_close, false}, {sndbuf, 12000}, {recbuf, 12000} ]}, {loopback_users, [&lt;&lt;"guest"&gt;&gt;]}, {log_levels,[{autocluster, debug}, {connection, debug}]}, {cluster_partition_handling, pause_minority}, {vm_memory_high_watermark, {absolute, "3276MiB"}} ]}, {rabbitmq_management, [ {load_definitions, "/etc/rabbitmq/rabbitmq-definitions.json"} ]}, {autocluster, [ {dummy_param_without_comma, true}, {autocluster_log_level, debug}, {backend, etcd}, {autocluster_failure, ignore}, {cleanup_interval, 30}, {cluster_cleanup, false}, {cleanup_warn_only, false}, {etcd_ttl, 30}, {etcd_scheme, http}, {etcd_host, "etcd.kube-system.svc.cluster.local"}, {etcd_port, 2379} ]} ] </code></pre> <p>In my case, x-ha-policy is enabled.</p>
<p>You set <code>cluster_partition_handling</code> to <code>pause_minority</code>. One out of two nodes isn't the majority, so the cluster stops as configured. You either have to add an additional node or set <code>cluster_partition_handling</code> to <code>ignore</code>.</p> <p>From the <a href="https://www.rabbitmq.com/partitions.html" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>In pause-minority mode RabbitMQ will automatically pause cluster nodes which determine themselves to be in a minority (i.e. fewer or equal than half the total number of nodes) after seeing other nodes go down. It therefore chooses partition tolerance over availability from the CAP theorem. This ensures that in the event of a network partition, at most the nodes in a single partition will continue to run. The minority nodes will pause as soon as a partition starts, and will start again when the partition ends.</p> </blockquote>
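<p>If you choose to stay with two nodes, the change in the config from the question would just be the one line below (note the trade-off: with <code>ignore</code>, both sides keep running during a network partition, so you may have to resolve conflicting state yourself):</p> <pre><code>{cluster_partition_handling, ignore},
</code></pre>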
<p>We have a Kubernetes cluster of web scraping cron jobs set up. All seems to go well until a cron job starts to fail (e.g., when a site structure changes and our scraper no longer works). It looks like every now and then a few failing cron jobs will continue to retry to the point it brings down our cluster. Running <code>kubectl get cronjobs</code> (prior to a cluster failure) will show too many jobs running for a failing job.</p> <p>I've attempted following the note described <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#pod-backoff-failure-policy" rel="noreferrer">here</a> regarding a known issue with the pod backoff failure policy; however, that does not seem to work.</p> <p>Here is our config for reference:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: scrape-al spec: schedule: '*/15 * * * *' concurrencyPolicy: Allow failedJobsHistoryLimit: 0 successfulJobsHistoryLimit: 0 jobTemplate: metadata: labels: app: scrape scrape: al spec: template: spec: containers: - name: scrape-al image: 'govhawk/openstates:1.3.1-beta' command: - /opt/openstates/openstates/pupa-scrape.sh args: - al bills --scrape restartPolicy: Never backoffLimit: 3 </code></pre> <p>Ideally we would prefer that a cron job would be terminated after N retries (e.g., something like <code>kubectl delete cronjob my-cron-job</code> after <code>my-cron-job</code> has failed 5 times). Any ideas or suggestions would be much appreciated. Thanks!</p>
<p>You can tell your Job to stop retrying using <code>backoffLimit</code>.</p> <blockquote> <p>Specifies the number of retries before marking this job failed.</p> </blockquote> <p>In your case</p> <pre><code>spec: template: spec: containers: - name: scrape-al image: 'govhawk/openstates:1.3.1-beta' command: - /opt/openstates/openstates/pupa-scrape.sh args: - al bills --scrape restartPolicy: Never backoffLimit: 3 </code></pre> <p>You set 3 as the <code>backoffLimit</code> of your Job. That means when a Job is created by the CronJob, it will be retried 3 times if it fails. This controls the Job, not the CronJob.</p> <p>When a Job fails, another Job will still be created at the next scheduled time.</p> <p><strong>You want:</strong> If I am not wrong, <em>you want to stop scheduling new Jobs when your scheduled Jobs have failed 5 times</em>. Right?</p> <p><strong>Answer:</strong> In that case, <em>this is not possible automatically</em>.</p> <p><strong>Possible solution:</strong> You need to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#suspend" rel="noreferrer">suspend</a> the CronJob so that it stops scheduling new Jobs:</p> <pre><code>suspend: true </code></pre> <p>You can do this manually. If you do not want to do it manually, you need to set up a watcher that will watch your CronJob's status and update the CronJob to suspend it if necessary.</p>
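<p>For example, the suspend flag can be flipped (manually or from your watcher) with a standard patch; counting the failures up to 5 is logic you would have to implement yourself:</p> <pre><code>kubectl patch cronjob scrape-al -p '{"spec":{"suspend":true}}'
</code></pre>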
<p>I recently followed the official guide to set up <a href="https://github.com/heketi/heketi/blob/master/doc/admin/install-kubernetes.md" rel="nofollow noreferrer">Heketi</a> on Kubernetes. Everything works fine, except that whenever I create a drive smaller than 2GB it automatically creates a 2GB PV. I couldn't find any docs regarding this behavior. Can anyone please help me understand this behaviour? </p>
<p>The minimum volume size created by Heketi is 1GB.</p> <p>Heketi currently works in units of whole GB, hence, for example, when you create a volume of size 1.8GB it creates a 2GB volume.</p>
<p>Been googling it for a while and can't figure out the answer: suppose I have two containers inside a pod, and one has to send the other some secrets. Should I use https or is it safe to do it over http? If I understand correctly, the traffic inside a pod is firewalled anyway, so you can't eavesdrop on the traffic from outside the pod. So... no need for https? </p>
<p>Containers inside a Pod share the same network namespace and communicate over the loopback network interface, localhost.</p> <p>TCP packets addressed to localhost are routed back at the IP layer itself and never leave the host.</p> <blockquote> <p>It is implemented entirely within the operating system's networking software and passes no packets to any network interface controller. Any traffic that a computer program sends to a loopback IP address is simply and immediately passed back up the network software stack as if it had been received from another device.</p> </blockquote> <p>So communication among containers inside a Pod cannot be eavesdropped on or altered in transit.</p> <p>If you want to understand more, take a look at <a href="https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727" rel="nofollow noreferrer">understanding-kubernetes-networking</a>.</p> <p>Hope it answers your question.</p>
<p>I've been developing an app on my local laptop (Mac) with Minikube. Instead of packaging the code and files into the docker image, I use <code>hostPath</code> and <code>volumeMount</code> that points to the code/file directory on my Mac, so that I can avoid rebuilding the image every time. </p> <p>Now I would like to do the same iterative testing with google cloud. What's the best way to "mount" my local code/file directory and run pods remotely on the cloud? I don't want to package the code into a docker image, push to dockerhub, and then pull from dockerhub on gcloud. My dockerhub is a free account and would expose my code. </p>
<p><strong>You want:</strong> You want to mount your local file system into your remote Kubernetes cluster.</p> <p><strong>Answer:</strong> As far as I know, you can't do this. It's possible in <code>minikube</code> because you can mount your local directory into the <code>minikube</code> VM. </p> <p><strong>Solution:</strong> I can tell you an alternative way. Maybe this is not what you want, but it can help you.</p> <p>Do you use <code>git</code>? If your answer is yes, and you have no problem keeping your files in a git repository, the following process will help you.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /mypath name: git-volume volumes: - name: git-volume gitRepo: repository: "git@somewhere:me/my-git-repository.git" revision: "22f1d8406d464b0c0874075539c1f2e96c253775" </code></pre> <p>When you create this Pod, <code>my-git-repository</code> will be mounted at <code>/mypath</code> inside your Pod's container.</p> <p>Basically, you can tell your Pod to pull this git repository at a specific revision. So every time you change your code, push it, then recreate the Pod. </p> <p>Read <a href="https://kubernetes.io/docs/concepts/storage/volumes/#gitrepo" rel="nofollow noreferrer">volumes/#gitrepo</a></p>
<p>I have installed Grafana in my Kubernetes 1.9 cluster. When I access it with my ingress URL (<a href="http://sample.com/grafana/" rel="noreferrer">http://sample.com/grafana/</a>) I get the first page. After that, the JavaScript and CSS downloads do not have /grafana added to the URL.</p> <p>Here is my ingress rule:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: grafana-ingress-v1 namespace: monitoring annotations: ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.class: nginx spec: tls: - hosts: - sample.com secretName: ngerss-tls rules: - host: sample.com http: paths: - path: /grafana/ backend: serviceName: grafana-grafana servicePort: 80 </code></pre> <p>I found a discussion about the same topic here, but it is not helping with my issue.</p> <p><a href="https://github.com/kubernetes/contrib/issues/860" rel="noreferrer">https://github.com/kubernetes/contrib/issues/860</a> The image below shows that the first request goes to /grafana/ but the second request does not have <code>/grafana/</code> added to the URL.</p> <p><a href="https://i.stack.imgur.com/UL2Xh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UL2Xh.png" alt="enter image description here"></a></p>
<p>Your ingress rule is correct and nginx creates the correct virtual host to forward traffic to grafana's service (only the relevant lines are shown):</p> <pre><code> server { server_name sample.com; listen 80; listen [::]:80; set $proxy_upstream_name "-"; location ~* ^/grafana/(?&lt;baseuri&gt;.*) { set $proxy_upstream_name "default-grafana-grafana-80"; set $namespace "default"; set $ingress_name "grafana-ingress-v1"; rewrite /grafana/(.*) /$1 break; rewrite /grafana/ / break; proxy_pass http://default-grafana-grafana-80; } </code></pre> <p>And yes, when you go to <code>sample.com/grafana/</code> you get the response from the Grafana pod, but it redirects to the <code>sample.com/login</code> page (as you can see from the screenshot you provided):</p> <pre><code>$ curl -v -L http://sample.com/grafana/ * Trying 192.168.99.100... * Connected to sample.com (192.168.99.100) port 80 (#0) &gt; GET /grafana/ HTTP/1.1 &gt; Host: sample.com &gt; User-Agent: curl/7.47.0 &gt; Accept: */* &gt; &lt; HTTP/1.1 302 Found &lt; Server: nginx/1.13.5 &lt; Date: Tue, 30 Jan 2018 21:55:21 GMT &lt; Content-Type: text/html; charset=utf-8 &lt; Content-Length: 29 &lt; Connection: keep-alive &lt; Location: /login &lt; Set-Cookie: grafana_sess=c07ab2399d82fef4; Path=/; HttpOnly &lt; Set-Cookie: redirect_to=%252F; Path=/ &lt; * Ignoring the response-body * Connection #0 to host sample.com left intact * Issue another request to this URL: 'http://sample.com/login' * Found bundle for host sample.com: 0x563ff9bf7f20 [can pipeline] * Re-using existing connection! (#0) with host sample.com * Connected to sample.com (192.168.99.100) port 80 (#0) &gt; GET /login HTTP/1.1 &gt; Host: sample.com &gt; User-Agent: curl/7.47.0 &gt; Accept: */* &gt; &lt; HTTP/1.1 404 Not Found &lt; Server: nginx/1.13.5 &lt; Date: Tue, 30 Jan 2018 21:55:21 GMT &lt; Content-Type: text/plain; charset=utf-8 &lt; Content-Length: 21 &lt; Connection: keep-alive &lt; * Connection #0 to host sample.com left intact default backend 404 </code></pre> <p>This happens because by default Grafana's <code>root_url</code> is just <code>/</code>:</p> <pre><code>root_url = %(protocol)s://%(domain)s:%(http_port)s/ </code></pre> <p>and when the request is redirected to just <code>sample.com</code>, nginx forwards it to the default backend, which returns 404.</p> <p><strong>Solution:</strong></p> <p>You need to change Grafana's <code>root_url</code> server setting to <code>/grafana/</code>:</p> <pre><code>root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/ </code></pre> <p>You can do this by changing the setting in Grafana's <code>configmap</code> object.</p>
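<p>For example, if your Grafana chart keeps its configuration in a ConfigMap with a <code>grafana.ini</code> key, the relevant part would look roughly like this (the ConfigMap name below is an assumption; check the real name with <code>kubectl get configmaps -n monitoring</code>):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-grafana-config   # assumption: verify the actual name in your release
  namespace: monitoring
data:
  grafana.ini: |
    [server]
    root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
</code></pre> <p>After updating it, delete the Grafana pod so the Deployment recreates it with the new configuration.</p>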
<p>I am trying to schedule Jobs in Kubernetes.</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a></p> <p>The job can be created using the below command.</p> <pre><code>$ kubectl create -f ./cronjob.yaml cronjob "hello" created </code></pre> <p>Is there any REST API using which the Job can be created from a Java client?</p> <p>Thanks</p>
<p>The respective REST endpoint is described in <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#cronjob-v1beta1-batch" rel="nofollow noreferrer">the official API reference</a>. You will find the <code>CronJob</code> resource in the <code>batch/v1beta1</code> API group. To create a new <code>CronJob</code> resource, you'll need a POST call to the <code>/apis/batch/v1beta1/namespaces/{namespace}/cronjobs</code> URL.</p> <p>A respective HTTP request might look something like this:</p> <pre><code>POST /apis/batch/v1beta1/namespaces/default/cronjobs HTTP/1.1 Content-Type: application/json Content-Length: ... Authorization: ... [other headers] { "metadata": { "name": "some-cron" }, "spec": { ... } } </code></pre> <p>There are also older versions of the same resource, for example in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#cronjob-v2alpha1-batch" rel="nofollow noreferrer"><code>batch/v2alpha1</code> API group</a>. As a rule of thumb, I'd recommend using the newest API version available to you. Especially, do not rely on alpha APIs in production; they tend to deprecate pretty quickly between releases.</p> <p>To create a <code>batch/v1beta1</code> CronJob using the Java client, have a look at the <a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/BatchV1beta1Api.md#createNamespacedCronJob" rel="nofollow noreferrer"><code>createNamespacedCronJob</code> method</a> of the <code>io.kubernetes.client.openapi.apis.BatchV1beta1Api</code> class.</p>
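<p>For a quick sanity check outside of Java, the same call can be made with <code>curl</code> (the token, CA certificate path and API server address below are placeholders, and <code>cronjob.json</code> is the JSON equivalent of your <code>cronjob.yaml</code>):</p> <pre><code>curl --cacert /path/to/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST \
  -d @cronjob.json \
  https://YOUR_APISERVER:6443/apis/batch/v1beta1/namespaces/default/cronjobs
</code></pre>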
<p>Does anyone know how to connect a local instance of <code>kubectl</code> to a Google Kubernetes Engine (GKE) cluster, without using the <code>gcloud</code> tool locally?</p> <p><strong>For example:</strong></p> <p>If you use the <code>gcloud</code> tool with this command:</p> <pre><code>gcloud container clusters get-credentials NAME [--zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …] </code></pre> <p>You'll find a user like this in <code>~/.kube/config</code>:</p> <pre><code>- name: gke_myproj_myzone user: auth-provider: config: access-token: TOKENSTRING cmd-args: config config-helper --format=json cmd-path: /google/google-cloud-sdk/bin/gcloud expiry: 2018-01-22 18:05:46 expiry-key: '{.credential.token_expiry}' token-key: '{.credential.access_token}' name: gcp </code></pre> <p>As you can see, the default values, the <code>gcloud</code> tool provides require the <code>glcoud</code> tool as an <em>auth-provider</em> to log in to your cluster.</p> <p>Now, what I'm looking for is a way to connect <code>kubectl</code> to a cluster on a machine, that does not have <code>gcloud</code> installed.</p>
<p>The easiest way to achieve this is by copying the <code>~/.kube/config</code> file (from a gcloud authenticated instance) to this directory <code>$HOME/.kube</code> in your local instance (laptop). </p> <p>But first, using the authenticated instance, you would have to enable legacy cluster certificates per this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/iam-integration#using_legacy_cluster_certificate_or_user_credentials" rel="noreferrer">document</a> by running these commands:</p> <pre><code>gcloud config set container/use_client_certificate True export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True </code></pre> <p>Then execute the <code>get-credentials</code> command, and copy the file. </p> <pre><code>gcloud container clusters get-credentials NAME [--zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …] </code></pre> <p>Note that you may have to run the <code>get-credentials</code> command, and copy the config file, every time authentication tokens (saved in the config file) expire. </p>
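<p>Afterwards, the user entry in the copied <code>~/.kube/config</code> should look roughly like this (a sketch; the field names come from the standard kubeconfig format and the actual values will be long base64 blobs):</p> <pre><code>users:
- name: gke_myproj_myzone_mycluster
  user:
    client-certificate-data: BASE64_ENCODED_CERT
    client-key-data: BASE64_ENCODED_KEY
</code></pre> <p>Since it relies on a client certificate instead of a gcloud-provided access token, it works on machines without the gcloud SDK.</p>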
<p>Jenkins ver. 2.77, K8s version: v1.6.6</p> <p>We have installed the Jenkins Kubernetes Plugin and configured it to work with our K8s cluster. We are able to successfully connect to the cluster when we test our connection via “Manage Jenkins” -&gt; “Configure System” -&gt; Cloud, Kubernetes.</p> <p>Our template config can be seen here:</p> <p><a href="https://i.stack.imgur.com/l9i68.png" rel="nofollow noreferrer">Kubernetes Pod Template Config</a></p> <p>We then created a simple job to test the plugin and see if the slaves would be created and then run a few simple bash commands.</p> <p>The bash commands we are testing are:</p> <pre><code>sleep 10 echo &quot;I am a slave&quot; echo &quot;This is a K8s plugin generated slave&quot; </code></pre> <p>When we configured our plugin we assigned the label &quot;autoscale&quot;. In addition, we set up our job to work with the label autoscale.</p> <p>Within the configuration of the job, under Label Expression, we also see the following: &quot;Label autoscale is serviced by no nodes and 1 cloud&quot;.</p> <p>We then start the job in Jenkins with &quot;Build Now&quot; and see the pods created in our K8s cluster:</p> <pre><code>jenkins-pod-slave-d4j3n 1/1 Running 0 21h jenkins-pod-slave-tb2td 1/1 Running 0 21h </code></pre> <p>However, note that under Build History we can see the following message:</p> <p>#1 (pending—All nodes of label ‘autoscale’ are offline)</p> <p>Investigating the logs of the pods outputs nothing:</p> <pre><code>kubectl logs jenkins-pod-slave-d4j3n kubectl logs jenkins-pod-slave-tb2td </code></pre> <p>Investigating the Jenkins logs, we can see the following message appear:</p> <p>Oct 08, 2017 6:18:16 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud addProvisionedSlave INFO: Template instance cap of 2 reached for template Jenkins-Pod-Slave, not provisioning: 2 running in namespace {3} with label {4}</p> <ul> <li>Our concern is that the namespace and label value are not being picked up correctly, and could be the source of the problem.</li> </ul>
<p>Your issue may be the command and arguments.</p> <p>The command should be blank, and the arguments should be set to:</p> <p><code>${computer.jnlpmac} ${computer.name}</code></p> <p>This will allow the JNLP slave to connect to the Jenkins master correctly.</p>
<p>I am trying to access my Kubernetes cluster on Google Cloud with a service account, but I am not able to make this work. I have a running system with some pods and ingress. I want to be able to update the images of deployments. </p> <p>I would like to use something like this (remotely): </p> <pre><code>kubectl config set-cluster cluster --server="&lt;IP&gt;" --insecure-skip-tls-verify=true kubectl config set-credentials foo --token="&lt;TOKEN&gt;" kubectl config set-context my-context --cluster=cluster --user=foo --namespace=default kubectl config use-context cluster kubectl set image deployment/my-deployment boo=eu.gcr.io/project-123456/image:v1 </code></pre> <p>So I created the service account and then got the secret token:</p> <pre><code>kubectl create serviceaccount foo kubectl get secret foo-token-gqvgn -o yaml </code></pre> <p>But, when I try to update the image in any deployment, I receive: </p> <blockquote> <p>error: You must be logged in to the server (Unauthorized)</p> </blockquote> <p>For the API IP address I use the address shown in the GKE administration console as the cluster endpoint IP. Any suggestions? Thanks. </p>
<p>I have tried to recreate your problem.</p> <p>Steps I have followed:</p> <ul> <li><code>kubectl create serviceaccount foo</code></li> <li><code>kubectl get secret foo-token-* -o yaml</code></li> </ul> <p>Then, I tried to do what you have done.</p> <p><strong>What I used</strong> as the token is the <code>base64</code>-decoded token.</p> <p>Then I tried this:</p> <pre><code>$ kubectl get pods </code></pre> <blockquote> <p>Error from server (Forbidden): pods is forbidden: User &quot;system:serviceaccount:default:foo&quot; cannot list pods in the namespace &quot;default&quot;: Unknown user &quot;system:serviceaccount:default:foo&quot;</p> </blockquote> <p>This gave me an error as expected, because I need to grant permissions to this ServiceAccount.</p> <p><strong>How can I grant permission</strong> to this ServiceAccount? I need to create a ClusterRole &amp; ClusterRoleBinding with the necessary permissions.</p> <p>Read more here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="nofollow noreferrer">role-based-access-control</a></p> <h2>I can do another thing</h2> <pre><code>$ kubectl config set-credentials foo --username=&quot;admin&quot; --password=&quot;$PASSWORD&quot; </code></pre> <p>This will grant you admin authorization.</p> <p>You need to provide the cluster credentials.</p> <pre><code>Username: admin Password: ----- </code></pre> <p>You will get this info in GKE -&gt; Kubernetes Engine -&gt; {cluster} -&gt; Show credentials</p>
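<p>As a sketch of those RBAC objects (the permissions below are an assumption; restrict them to what your workflow actually needs, e.g. updating Deployments):</p> <pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: deployment-editor
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: foo-deployment-editor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-editor
subjects:
- kind: ServiceAccount
  name: foo
  namespace: default
</code></pre>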
<p>I have been working with Docker to run my scripts on chrome-node and firefox-node and debug with the selenium-hub image, where it runs smoothly, but when I use the same with K8s the whole system slows down. Why is this happening? Any ideas? I am using minikube for Kubernetes, and Docker Toolbox plus Docker Compose for Docker.</p> <p>Thanks,</p>
<p>There would definitely be additional overhead when you start Kubernetes using minikube locally, compared to just starting a Docker container on the host.</p> <p>In order to have a Kubernetes cluster, minikube creates a VM on the machine where the Kubernetes components run in addition to the Docker containers.</p> <p>Anyway, minikube is not a production way of running Kubernetes. It is mostly meant for local development and testing. Therefore, you shouldn't evaluate Kubernetes performance based on a minikube installation.</p>
<p>I am running minikube v0.24.1. In this minikube, I will create a Pod for my nginx application. And also I want to pass data from my local directory.</p> <p>That means I want to mount my local <code>$HOME/go/src/github.com/nginx</code> into my Pod</p> <p>How can I do this?</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx:0.1 name: nginx volumeMounts: - mountPath: /data name: volume volumes: - name: volume hostPath: path: /data </code></pre>
<p>You can't mount your local directory into your Pod directly.</p> <p>First, you need to mount your directory <code>$HOME/go/src/github.com/nginx</code> into your minikube.</p> <pre><code>$ minikube start --mount-string=&quot;$HOME/go/src/github.com/nginx:/data&quot; --mount </code></pre> <p>Then, if you mount <code>/data</code> into your Pod using hostPath, you will get your local directory's data inside the Pod.</p> <p>There is <strong>another way</strong>.</p> <p>Your host's <code>$HOME</code> directory gets mounted into minikube's <code>/hosthome</code> directory. There you will find your data:</p> <pre><code>$ ls -la /hosthome/go/src/github.com/nginx </code></pre> <p>So to mount this directory, you can change your Pod's hostPath:</p> <pre><code>hostPath: path: /hosthome/go/src/github.com/nginx </code></pre>
<p>I have a Kafka cluster running on Kubernetes (on AWS). Each broker has a corresponding external loadbalancer (ELB) and afaict, Kafka's <code>advertised.listeners</code> have been set appropriately so that the ELB's DNS names get returned when clients query for broker information. Most of the setup is similar to the one mentioned <a href="https://github.com/Yolean/kubernetes-kafka/" rel="nofollow noreferrer">here</a>.</p> <p>I created a kafka consumer without specifying any group-id. With this consumer, reading messages from a topic worked just fine. However, if I set a group-id when creating the kafka consumer, I get back the following error messages:</p> <pre><code>2018-01-30 22:04:16,763.763.313055038:kafka.cluster:140735643595584:INFO:74479:Group coordinator for my-group-id is BrokerMetadata(nodeId=2, host=u'a17ee9a8a032411e8a3c902beb474154-867008169.us-west-2.elb.amazonaws.com', port=32402, rack=None) 2018-01-30 22:04:16,763.763.804912567:kafka.coordinator:140735643595584:INFO:74479:Discovered coordinator 2 for group my-group-id 2018-01-30 22:04:16,764.764.270067215:kafka.coordinator.consumer:140735643595584:INFO:74479:Revoking previously assigned partitions set([]) for group my-group-id 2018-01-30 22:04:16,866.866.26291275:kafka.coordinator:140735643595584:INFO:74479:(Re-)joining group my-group-id 2018-01-30 22:04:16,898.898.787975311:kafka.coordinator:140735643595584:INFO:74479:Joined group 'my-group-id' (generation 1) with member_id kafka-python-1.3.5-e31607c2-45ec-4461-8691-260bb84c76ba 2018-01-30 22:04:16,899.899.425029755:kafka.coordinator:140735643595584:INFO:74479:Elected group leader -- performing partition assignments using range 2018-01-30 22:04:16,936.936.614990234:kafka.coordinator:140735643595584:WARNING:74479:Marking the coordinator dead (node 2) for group my-group-id: [Error 15] GroupCoordinatorNotAvailableError. 2018-01-30 22:04:17,069.69.8890686035:kafka.cluster:140735643595584:INFO:74479:Group coordinator for my-group-id is BrokerMetadata(nodeId=2, host=u'my-elb.us-west-2.elb.amazonaws.com', port=32402, rack=None) </code></pre> <p><code>my-elb.us-west-2.elb.amazonaws.com:32402</code> is accessible from the client. I used <code>kafkacat</code> and set <code>my-elb.us-west-2.elb.amazonaws.com:32402</code> as the broker address, it was able to list topics, consume topics, etc.</p> <p>Any ideas what might be wrong?</p>
<p>Marking the coordinator dead happens when there is a network communication error between the consumer client and the coordinator (this can also happen when the coordinator dies and the group needs to rebalance). There are a variety of situations (offset commit request, fetch offset, etc.) that can cause this issue. To find the root cause, you need to raise the logging level to trace/debug:</p> <blockquote> <p>logging.level.org.apache.kafka=TRACE</p> </blockquote>
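<p>That property is the Spring Boot style setting for the Java client. Judging by the log lines above (<code>kafka.cluster</code>, <code>kafka.coordinator</code>, member id <code>kafka-python-1.3.5-...</code>), the client here appears to be kafka-python, which uses Python's standard <code>logging</code> module; a minimal sketch to raise its verbosity (Python has no TRACE level, so DEBUG is the most verbose option):</p> <pre><code>import logging

# Print log records to stdout and make the kafka-python loggers as verbose as possible
logging.basicConfig(level=logging.INFO)
logging.getLogger("kafka").setLevel(logging.DEBUG)  # covers kafka.cluster, kafka.coordinator, kafka.conn, ...
</code></pre>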
<p>I'm trying to deploy persistent storage for CouchDB and it is failing with the error below.</p> <hr /> <p>kubectl create -f couch_persistant_deploy.yaml</p> <blockquote> <p>error: error validating &quot;couch_persistant_deploy.yaml&quot;: error validating data: couldn't find type: v1.Deployment; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p><strong><code>PersistentVolume</code> YAML</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 2Gi accessModes: - ReadWriteOnce hostPath: path: /mnt/sda1/data/test </code></pre> <p><strong><code>PersistentVolumeClaim</code> YAML</strong></p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pv-claim labels: app: couchdb spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 1Gi </code></pre> <p><strong><code>Deployment</code> YAML</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: couchdb spec: replicas: 1 template: metadata: labels: app: couchdb spec: containers: - name: couchdb image: &quot;couchdb&quot; imagePullPolicy: Always env: - name: COUCHDB_USER value: admin - name: COUCHDB_PASSWORD value: password ports: - name: couchdb containerPort: 5984 - name: epmd containerPort: 4369 containerPort: 9100 volumeMounts: - mountPath: &quot;/opt/couchdb/data&quot; name: task-pv-storage imagePullSecrets: - name: registrypullsecret2 volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim </code></pre> <p>Any leads are really appreciated.</p>
<p>Your error message should be like this:</p> <blockquote> <p>error: error validating &quot;couch_persistant_deploy.yaml&quot;: error validating data: ValidationError(Deployment.spec.template.spec.volumes[0]): unknown field &quot;claimName&quot; in io.k8s.api.core.v1.Volume; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p>See, the error message is specific: <code>unknown field &quot;claimName&quot; in io.k8s.api.core.v1.Volume</code></p> <p>In your <code>Deployment</code> you need to nest the <code>claimName</code> value one level deeper, inside its <code>persistentVolumeClaim</code> key, because nested values must be indented under their keys (only keys that start a list item, such as <code>name</code> here, which begins with a dash, don't have to be indented further):</p> <pre><code> volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim # fix is here </code></pre> <p>But you did this:</p> <pre><code> volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim # invalid </code></pre> <p>which makes your Deployment object invalid.</p>
<p>I have an application on GKE that I wish to be available via HTTPS only, so I have gotten a signed certificate to secure the application using TLS.</p> <p>I have checked out a lot of tutorials on how I can do this, but they all refer to using Ingress and automatically requesting the certificate using LetsEncrypt and KubeLego. However, I wish to continue using the external load balancers (the Compute Engine instances that Google has provided me), and I just want my application to be accessible via HTTPS. </p> <p>How do I apply my server.crt and server.key files to enable HTTPS? Do I <a href="https://cloud.google.com/compute/docs/load-balancing/http/ssl-certificates" rel="noreferrer">apply them to the load balancers</a> or to the Kubernetes cluster?</p>
<p>Ingress is probably your best bet when it comes to exposing your application over HTTPS. The Ingress resource specifies a backend service, so you will need to continue exposing your application as a Kubernetes service, just with type set to <code>ClusterIP</code>. This will produce a service that is "internal" to your cluster, and will be externally accessible through the Ingress once you set it up.</p> <p>Now, specifically in Google Kubernetes Engine (GKE), any ingress resources defined in your cluster will be served by a Google Cloud Load Balancer, so I don't think you have to worry about deploying your own Ingress Controller (e.g. Nginx Ingress Controller).</p> <p>In terms of TLS, you can use your own certificate if you have one. The certificate must be uploaded to the cluster through a Kubernetes Secret. Once that secret is defined, you can reference that secret in your Ingress definition. (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#tls</a>)</p> <p>You can create the secret using the following command:</p> <pre><code>kubectl create secret tls my-app-certs --key /tmp/tls.key --cert /tmp/tls.crt </code></pre> <p>Once you have your secret, you can reference it in your ingress resource:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-app-ingress spec: tls: - secretName: my-app-certs backend: serviceName: s1 servicePort: 80 </code></pre> <p>Once you have created your ingress resource, GKE will configure the load balancer and give you a publicly accessible IP that you can get using:</p> <pre><code>kubectl get ingress my-app-ingress </code></pre> <p>The following is a good tutorial that walks you through Ingress on GKE: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a></p>
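<p>For completeness, a minimal <code>ClusterIP</code> Service that the Ingress above could point to might look like this (the selector and <code>targetPort</code> are assumptions; adjust them to your Deployment's pod labels and container port):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: s1
spec:
  type: ClusterIP
  selector:
    app: my-app        # assumption: must match your pod labels
  ports:
  - port: 80
    targetPort: 8080   # assumption: the port your container listens on
</code></pre>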
<p>I wish I could have found my answer elsewhere, but the lack of documentation has sent me groveling for help :)</p> <p>I have been following <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">this tutorial</a> as a starting point. I can follow the steps through to the end with great success. But when I modify the ingress to do what I am trying to accomplish, nothing happens.</p> <p>The tutorial has you create an ingress with the following .yaml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: basic-ingress spec: backend: serviceName: nginx servicePort: 80 </code></pre> <p>What I am trying to do is modify the ingress so that it can utilize the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md#external-authentication" rel="nofollow noreferrer">auth-url annotation</a> and in the end my ingress.yaml I am failing with looks like</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: basic-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/auth-url: https://someauth.com/path/to/my/auth spec: backend: serviceName: nginx servicePort: 80 </code></pre> <p>In order to use the annotation, I found that I needed to include the <code>kubernetes.io/ingress.class: "nginx"</code> annotation to use the appropriate ingress. Basically though, this does nothing. I can hit the backend nginx cluster without my auth getting touched. Like these annotations are not even there.</p> <p>Does GKE not support the <code>nginx</code> ingress controller? Is something fundamentally wrong with my yaml? Does the gce ingress controller have an annotation that could accomplish the same thing?</p> <p>What I am trying to accomplish is: A client makes a call to my service, the loadbalancer/proxy first authenticates the request with an external endpoint, if auth is successful, the proxy sends the call along to my service (all without a single redirect response sent to the client). Basically what <a href="http://nginx.org/en/docs/http/ngx_http_auth_request_module.html" rel="nofollow noreferrer">nginx auth_request</a> does which is what I assume this <code>auth-url</code> annotation leverages under the covers.</p> <p>Thanks!</p>
<p>GKE has its own Ingress Controller. It is called GKE Ingress Controller. If you want to use Nginx Ingress Controller, you need to manage it yourself.</p> <p>Looks like <code>auth-url annotation</code> works only on Nginx Ingress Controller. So, you have to run Nginx Ingress Controller first.</p> <p>See <a href="http://rahmonov.me/posts/nginx-ingress-controller/" rel="nofollow noreferrer">this post</a> on how to do that on GKE.</p> <p>Hope it helps. </p>
<p>I did this example <a href="https://github.com/jetstack/kube-lego/tree/master/examples/gce" rel="noreferrer">https://github.com/jetstack/kube-lego/tree/master/examples/gce</a> , then failed to create ClusterRole kube-lego.</p> <p>The error is:</p> <pre><code>Error from server (Forbidden): error when creating "k8s/kube-lego/hoge.yaml": clusterroles.rbac.authorization.k8s.io "kube-lego" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["update"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["create"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["patch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["delete"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["update"]}] user=&amp;{[email protected] [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews" "selfsubjectrulesreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[] </code></pre> <p>I tried on 1.8.6-gke.0, 1.8.7-gke.0 and 1.9.2-gke.0.</p> <p>thanks.</p>
<p>As commented in <a href="https://github.com/jetstack/kube-lego/issues/225#issuecomment-313412762" rel="noreferrer"><code>kube-lego</code> issue 225</a>:</p> <blockquote> <p>Turns out the error I was receiving in an known issue with GKE 1.6. I resolved by following this article:</p> <h2>get current google identity</h2> </blockquote> <pre><code>$ gcloud info | grep Account Account: [[email protected]] </code></pre> <blockquote> <h2>grant cluster-admin to your current identity</h2> </blockquote> <pre><code>$ kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin [email protected] Clusterrolebinding &quot;myname-cluster-admin-binding&quot; created </code></pre> <p>For the actual RBAC to define, see <a href="https://github.com/jetstack/kube-lego/issues/99" rel="noreferrer">issue 99</a></p> <p>It refers to <a href="https://github.com/jetstack/kube-lego/commit/22e3451ca286276d0e4746522b16dd984fb07b9a" rel="noreferrer"><strong>Adds official RBAC rules</strong></a>, which applies the right settings:</p> <pre><code># RBAC objects kubectl apply -f lego/service-account.yaml kubectl apply -f lego/cluster-role.yaml kubectl apply -f lego/cluster-role-binding.yaml </code></pre>
<p>I am wondering if egress policies can be set for external domains that are not part of the K8s namespace or K8s cluster. We have a usecase where we set the default policy of a namespace to deny all outgoing traffic and we then write egress and ingress rules for each application.</p> <p>Some of these applications need access to the external domains. Are there policies that can be set to whitelist certain domains that are outside the cluster?</p> <p>I am looking for something on the following lines</p> <pre><code>apiVersion: "" kind: metadata: name: my-app-targets spec: targets: - mydependency1.example.com:443 - mydependency2.example.com:443 - *.example.org:80 </code></pre> <p>Is this possible?</p>
<p>You can limit the egress IPs using <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies" rel="nofollow noreferrer">network policies</a>. But it isn't possible to declare egress DNS names. </p>
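<p>A sketch of what is possible today, whitelisting by IP range instead of by domain (the label selector and the CIDR below are placeholders; the CIDR would be the resolved address range of your external dependency):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-external-api
spec:
  podSelector:
    matchLabels:
      app: my-app            # assumption: label of the pods that need outbound access
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24 # placeholder for the external service's IP range
    ports:
    - protocol: TCP
      port: 443
</code></pre> <p>Note that your network plugin must support NetworkPolicy egress rules for this to have any effect.</p>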
<p>I am not able to create Kubernetes Dashboard.</p> <p>I am following the steps mentioned in Kubernetes official <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">website</a>:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml secret "kubernetes-dashboard-certs" created serviceaccount "kubernetes-dashboard" created service "kubernetes-dashboard" created </code></pre> <blockquote> <p>Error from server (BadRequest): error when creating "<a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml</a>": Role in version "v1" cannot be handled as a Role: no kind "Role" is registered for version "rbac.authorization.k8s.io/v1" Error from server (BadRequest): error when creating "<a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml</a>": RoleBinding in version "v1" cannot be handled as a RoleBinding: no kind "RoleBinding" is registered for version "rbac.authorization.k8s.io/v1" Error from server (BadRequest): error when creating "<a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml</a>":Deployment in version "v1beta2" cannot be handled as a Deployment: no kind "Deployment" is registered for version "apps/v1beta2"</p> </blockquote> <p><strong>Our Kubernetes version is as follows:</strong></p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.12", GitCommit:"3bda299a6414b4866f179921610d6738206a18fe", GitTreeState:"clean", BuildDate:"2017-12-29T08:39:49Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I have not been able to find an appropriate solution for this issue.</p>
<p>You need a cluster with RBAC enabled, otherwise the installation will fail because it can't find the <code>Role</code> kind of object. To enable RBAC, start the apiserver with <code>--authorization-mode=RBAC</code>.</p>
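<p>You can check which RBAC API versions your API server actually serves with:</p> <pre><code>kubectl api-versions | grep rbac.authorization.k8s.io
</code></pre> <p>If <code>rbac.authorization.k8s.io/v1</code> is not in the list (older API servers such as 1.7 do not serve it), manifests that use it, like this dashboard one, will be rejected.</p>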
<p>Following the blog post <a href="http://blog.kubernetes.io/2017/05/managing-microservices-with-istio-service-mesh.html" rel="nofollow noreferrer">here</a>, I am trying to deploy this sample service on my AWS k8s cluster through Istio, but it gives me "error: no objects passed to apply".</p> <p><strong>Setup</strong></p> <ul> <li>aws k8s v1.7.x</li> <li>istio 0.4.0</li> </ul> <p><strong>Config</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: productpage labels: app: productpage spec: type: NodePort ports: - port: 9080 name: http selector: app: productpage --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: productpage-v1 spec: replicas: 1 template: metadata: labels: app: productpage track: stable spec: containers: - name: productpage image: istio/examples-bookinfo-productpage-v1 imagePullPolicy: IfNotPresent ports: - containerPort: 9080 </code></pre> <p><strong>Issue</strong></p> <p><code>kubectl apply -f &lt;(istioctl kube-inject -f book-info-v1.yaml)</code></p> <p><code>error: no objects passed to apply</code></p>
<p>It probably means that <code>istioctl kube-inject</code> produced empty output in the <code>istioctl kube-inject -f book-info-v1.yaml</code> part. Try to run <code>istioctl kube-inject -f book-info-v1.yaml</code> as a separate command and see if it produces any errors.</p>
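<p>To narrow it down, you could run the injection as its own step and then apply the generated file, for example:</p> <pre><code>istioctl kube-inject -f book-info-v1.yaml &gt; book-info-v1-injected.yaml
kubectl apply -f book-info-v1-injected.yaml
</code></pre> <p>If the first command prints an error or produces an empty file, that error is the real problem to fix.</p>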
<p>I have installed a cluster on Google Kubernetes Engine. </p> <p>And then, I created the namespace "staging":</p> <pre><code>$ kubectl get namespaces default Active 26m kube-public Active 26m kube-system Active 26m staging Active 20m </code></pre> <p>Then, I switched to operate in the staging namespace:</p> <pre><code>$ kubectl config use-context staging $ kubectl config current-context staging </code></pre> <p>And then, I installed postgresql using helm in the staging namespace:</p> <pre><code>helm install --name staging stable/postgresql </code></pre> <p>But I got: </p> <blockquote> <p>Error: release staging failed: namespaces "staging" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "staging": Unknown user "system:serviceaccount:kube-system:default"</p> </blockquote> <p>What does it mean? How do I get it working?</p> <p>Thank you.</p>
<p>As your cluster is RBAC enabled, it seems like your <code>tiller</code> Pod does not have enough permissions.</p> <p>You are using the <code>default</code> ServiceAccount, which lacks the RBAC permissions tiller requires.</p> <p>All you need is to create a ClusterRole, a ClusterRoleBinding and a ServiceAccount. With them you can provide the necessary permissions to your Pod.</p> <p>Follow these steps.</p> <p><strong>Step 1:</strong> Create ClusterRole <code>tiller</code></p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: tiller rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] </code></pre> <blockquote> <p>Note: I have used full permissions here.</p> </blockquote> <p><strong>Step 2:</strong> Create ServiceAccount <code>tiller</code> in the <code>kube-system</code> namespace</p> <pre><code>$ kubectl create sa tiller -n kube-system </code></pre> <p><strong>Step 3:</strong> Create ClusterRoleBinding <code>tiller</code></p> <pre><code>kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: tiller subjects: - kind: ServiceAccount name: tiller namespace: kube-system apiGroup: "" roleRef: kind: ClusterRole name: tiller apiGroup: rbac.authorization.k8s.io </code></pre> <p>Now you need to use this ServiceAccount in your tiller Deployment.</p> <p>As you already have one, edit it:</p> <pre><code>$ kubectl edit deployment -n kube-system tiller-deploy </code></pre> <p>Set <code>serviceAccountName</code> to <code>tiller</code> under the Pod spec.</p> <p>Read more about <a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="noreferrer">RBAC</a></p>
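<p>If you prefer not to edit the Deployment interactively, the same change can be applied with a patch, for example:</p> <pre><code>kubectl patch deployment tiller-deploy -n kube-system \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"tiller"}}}}'
</code></pre>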
<p>I see advantages of Kubernetes which include rolling deployments, automatic health-check monitoring, and swinging a new server into action when an existing one fails. I also understand that Kubernetes is not just for Docker. </p> <p>So, that brings a couple of questions! </p> <p>When Azure and Service Fabric can provide all that I said (and beyond), why would I need Kubernetes? </p> <p>Would it make sense for one to use Kubernetes along with Service Fabric for large-scale deployments on Azure?</p>
<p>Let's look first at the similarities between Kubernetes and Service Fabric. </p> <ul> <li>They are both cloud-agnostic clustering, orchestration, and scheduling software.</li> <li>They can both be deployed manually, by you, to any set of VMs, anywhere.</li> <li>There are "managed" offerings for both, meaning a cloud provider like Azure or Google Cloud will host a cluster for you, but generally you still own the VMs.</li> <li>They both deploy and manage containers.</li> <li>They both have rich management operations, such as rolling upgrades, health checks, and self-healing capabilities.</li> </ul> <p>That's a fairly high-level view but should give you an idea of what and where you can run with each.</p> <p>Now let's look where they're different. There are a ton of small differences, but I want to focus on two of the really big conceptual differences:</p> <ul> <li><p><strong>Application model</strong>:</p> <ul> <li>Service Fabric allows you to orchestrate any arbitrary container or EXE (whether that's a small node.js app or a giant legacy application), and in that sense it is similar to Kubernetes. But overall it is more focused on application development specifically, with programming models that are integrated with the platform. In this respect, it is more closely comparable to Cloud Foundry than Kubernetes.</li> <li>Kubernetes is focused more on orchestrating infrastructure for an application. It doesn't really focus on <em>how</em> you write your application. That's up to you to figure out; Kubernetes just wants a container to run, doesn't matter what's in it.</li> </ul></li> <li><p><strong>State management</strong></p> <ul> <li>Kubernetes <em>allows you to deploy</em> stateful software to it, by providing persistent disk storage volumes to containers and assigning unique identifiers to pods. This lets you deploy things like ZooKeeper or MySQL.</li> <li>Service Fabric <em>is</em> stateful software. Service Fabric is designed as a stateful, data-aware platform. It provides HA state and scale-out primitives. So while Kubernetes allows you to <em>deploy</em> stateful things, Service Fabric allows you to <em>build</em> stateful things. This is one of the key differences that's often overlooked. For example: <ul> <li>On Kubernetes, you can deploy ZooKeeper. </li> <li>On Service Fabric, you can actually build ZooKeeper yourself using Service Fabric's replication and leader election primitives.</li> <li>Kubernetes uses etcd for distributed, reliable storage about the state of the cluster.</li> <li>Service Fabric doesn't <em>need</em> etcd, because Service Fabric itself is a distributed, reliable storage platform. The system services in Service Fabric make use of this to reliably store the state of the cluster. This makes Service Fabric entirely self-contained.</li> </ul></li> </ul></li> </ul> <p>The fact that Service Fabric is a stateful platform is key to understanding it and how it differs from other major orchestrators. Everything it does - scheduling, health checking, rolling upgrades, application versioning, failover, self-healing, etc - are all designed around the fact that it is managing replicated and distributed data that needs to be consistent and highly available at all times. </p>
<p>This might be a very basic question but I can't find the answer anywhere, so I apologize.</p> <p>I have installed postgresql using helm on Google Kubernetes Engine:</p> <pre><code>$ helm install --name staging stable/postgresql </code></pre> <p>My deployments:</p> <pre><code>$ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE staging-backend 7 7 7 0 3h staging-postgresql 1 1 1 0 3h </code></pre> <p>And my services:</p> <pre><code>$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE staging-postgresql ClusterIP xx.x.xxx.xx &lt;none&gt; 5432/TCP 3h </code></pre> <p>I tried to connect <code>staging-backend</code> to <code>staging-postgresql</code> using the cluster IP but it doesn't seem to work. Or should I use an external IP? Where can I find the external IP?</p> <p>Thank you.</p>
<p>In Kubernetes you have two options to find the IP of your service.</p> <ol> <li>Environment variables</li> </ol> <p>Kubernetes automatically generates environment variables with all the data you need to access a service. If you run the <code>env</code> command inside a pod, you should see output similar to:</p> <pre><code>KUBERNETES_SERVICE_PORT=443 STAGING_POSTGRESQL_SERVICE_HOST=10.0.162.149 KUBERNETES_SERVICE_HOST=10.0.0.1 STAGING_POSTGRESQL_SERVICE_PORT=5432 KUBERNETES_SERVICE_PORT_HTTPS=443 </code></pre> <p>Use the <code>STAGING_POSTGRESQL_SERVICE_HOST</code> environment variable in your app to get the IP of the service.</p> <ol start="2"> <li>DNS (preferred)</li> </ol> <p>Kubernetes automatically assigns DNS names to all services. Use your service name instead of an IP and it will be resolved to the service IP. </p> <p>For example, run <code>ping staging-postgresql</code> inside any pod.</p> <p>You can read more about using services <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="noreferrer">here</a>.</p>
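<p>To quickly test the DNS name from inside the cluster you could also run a throwaway client pod, for example (the <code>postgres</code> user name and the password prompt are assumptions; use whatever the chart generated for your release):</p> <pre><code>kubectl run psql-client --rm -it --image=postgres --restart=Never -- \
  psql -h staging-postgresql -U postgres
</code></pre>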
<p>This is my application code:</p> <pre><code>from flask import Flask from redis import Redis, RedisError import os import socket # Connect to Redis redis = Redis(host=os.getenv("REDIS", "redis"), db=0, socket_connect_timeout=2, socket_timeout=2) app = Flask(__name__) @app.route("/") def hello(): try: visits = redis.incr("counter") except RedisError: visits = "&lt;i&gt;cannot connect to Redis, counter disabled&lt;/i&gt;" html = "&lt;h3&gt;Hello {name}!&lt;/h3&gt;" \ "&lt;b&gt;Hostname:&lt;/b&gt; {hostname}&lt;br/&gt;" \ "&lt;b&gt;Visits:&lt;/b&gt; {visits}" return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits) if __name__ == "__main__": app.run(host='0.0.0.0', port=80) </code></pre> <p>Here want to connect a Redis host in the Kubernetes cluster. So put a environment variable <code>REDIS</code> to set value in Kubernetes' manifest file.</p> <p>This is the Kubernetes deployment manifest file:</p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: {{ template "fullname" . }} labels: app: {{ template "name" . }} chart: {{ template "chart" . }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app: {{ template "name" . }} release: {{ .Release.Name }} template: metadata: labels: app: {{ template "name" . }} release: {{ .Release.Name }} spec: containers: - name: {{ .Chart.Name }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: REDIS value: {{ template "fullname" . }}-master-svc ports: - name: http containerPort: 80 protocol: TCP resources: {{ toYaml .Values.resources | indent 12 }} {{- with .Values.nodeSelector }} nodeSelector: {{ toYaml . | indent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{ toYaml . | indent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{ toYaml . | indent 8 }} {{- end }} </code></pre> <p>In order to connect Redis host in the same cluster, set environment value as:</p> <pre><code> env: - name: REDIS value: {{ template "fullname" . }}-master-svc </code></pre> <p>About Redis cluster, installed by official <a href="https://github.com/kubernetes/charts/tree/master/stable/redis-ha" rel="nofollow noreferrer">redis-ha</a> chart. Its master service manifest file is:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: {{ template "fullname" . }}-master-svc annotations: {{ toYaml .Values.servers.annotations | indent 4 }} spec: ports: - port: 6379 protocol: TCP targetPort: 6379 selector: app: "redis-ha" redis-node: "true" redis-role: "master" release: "{{ .Release.Name }}" type: "{{ .Values.servers.serviceType }}" </code></pre> <p>But it seems that the application pod didn't connect to the Redis master service name successfully. When I got accessed URL:</p> <pre><code>export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services wishful-rabbit-mychart) export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}") echo http://$NODE_IP:$NODE_PORT </code></pre> <p>from browser, got this result:</p> <pre><code>Hello World! 
Hostname: wishful-rabbit-mychart-85dc7658c6-9blg5 Visits: cannot connect to Redis, counter disabled </code></pre> <p>More information about <code>pods</code> and <code>services</code>:</p> <pre><code>kubectl get po NAME READY STATUS RESTARTS AGE wishful-rabbit-mychart-bdfdf6558-8fjmb 1/1 Running 0 8m wishful-rabbit-mychart-bdfdf6558-9khfs 1/1 Running 0 7m wishful-rabbit-mychart-bdfdf6558-hgqxv 1/1 Running 0 8m wishful-rabbit-mychart-sentinel-8667dd57d4-9njwq 1/1 Running 0 37m wishful-rabbit-mychart-sentinel-8667dd57d4-jsq6d 1/1 Running 0 37m wishful-rabbit-mychart-sentinel-8667dd57d4-ndqss 1/1 Running 0 37m wishful-rabbit-mychart-server-746f47dfdd-2fn4s 1/1 Running 0 37m wishful-rabbit-mychart-server-746f47dfdd-bwgrq 1/1 Running 0 37m wishful-rabbit-mychart-server-746f47dfdd-spkkm 1/1 Running 0 37m kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3h wishful-rabbit-mychart NodePort 10.101.103.224 &lt;none&gt; 80:30033/TCP 37m wishful-rabbit-mychart-master-svc ClusterIP 10.108.128.167 &lt;none&gt; 6379/TCP 37m wishful-rabbit-mychart-sentinel ClusterIP 10.107.63.208 &lt;none&gt; 26379/TCP 37m wishful-rabbit-mychart-slave-svc ClusterIP 10.99.211.111 &lt;none&gt; 6379/TCP 37m </code></pre> <p>What's the right usage?</p> <hr> <h1>Redis Server Pod Env</h1> <pre><code>kubectl exec -it wishful-rabbit-mychart-server-746f47dfdd-2fn4s -- sh / # printenv KUBERNETES_PORT=tcp://10.96.0.1:443 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PORT=6379 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_ADDR=10.108.128.167 KUBERNETES_SERVICE_PORT=443 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PROTO=tcp WISHFUL_RABBIT_MYCHART_SERVICE_HOST=10.101.103.224 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PORT=6379 HOSTNAME=wishful-rabbit-mychart-server-746f47dfdd-2fn4s SHLVL=1 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PROTO=tcp HOME=/root WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_HOST=10.107.63.208 WISHFUL_RABBIT_MYCHART_PORT=tcp://10.101.103.224:80 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP=tcp://10.99.211.111:6379 WISHFUL_RABBIT_MYCHART_SERVICE_PORT=80 WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_PORT=26379 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT=tcp://10.107.63.208:26379 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP=tcp://10.108.128.167:6379 WISHFUL_RABBIT_MYCHART_PORT_80_TCP_ADDR=10.101.103.224 REDIS_CHART_PREFIX=wishful-rabbit-mychart- TERM=xterm WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PORT=80 KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1 WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PROTO=tcp PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_PROTO=tcp WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_ADDR=10.107.63.208 WISHFUL_RABBIT_MYCHART_PORT_80_TCP=tcp://10.101.103.224:80 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_HOST=10.99.211.111 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PORT=26379 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PROTO=tcp KUBERNETES_SERVICE_PORT_HTTPS=443 KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443 WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_HOST=10.108.128.167 PWD=/ KUBERNETES_SERVICE_HOST=10.96.0.1 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_PORT=6379 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT=tcp://10.99.211.111:6379 WISHFUL_RABBIT_MYCHART_SERVICE_PORT_HTTP=80 REDIS_SENTINEL_SERVICE_HOST=redis-sentinel WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_ADDR=10.99.211.111 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP=tcp://10.107.63.208:26379 
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT=tcp://10.108.128.167:6379 WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_PORT=6379 </code></pre> <hr> <h1>Application Pod Env</h1> <pre><code>kubectl exec -it wishful-rabbit-mychart-85dc7658c6-8wlq6 -- sh # printenv KUBERNETES_SERVICE_PORT=443 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PORT=6379 KUBERNETES_PORT=tcp://10.96.0.1:443 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_ADDR=10.108.128.167 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PROTO=tcp WISHFUL_RABBIT_MYCHART_SERVICE_HOST=10.101.103.224 HOSTNAME=wishful-rabbit-mychart-85dc7658c6-8wlq6 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PORT=6379 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PROTO=tcp PYTHON_PIP_VERSION=9.0.1 WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_HOST=10.107.63.208 HOME=/root GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF REDIS=wishful-rabbit-mychart-master-svc WISHFUL_RABBIT_MYCHART_SERVICE_PORT=80 WISHFUL_RABBIT_MYCHART_PORT=tcp://10.101.103.224:80 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP=tcp://10.99.211.111:6379 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP=tcp://10.108.128.167:6379 WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_PORT=26379 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT=tcp://10.107.63.208:26379 WISHFUL_RABBIT_MYCHART_PORT_80_TCP_ADDR=10.101.103.224 NAME=World TERM=xterm KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1 WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PORT=80 WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PROTO=tcp PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_PROTO=tcp LANG=C.UTF-8 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_ADDR=10.107.63.208 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_HOST=10.99.211.111 WISHFUL_RABBIT_MYCHART_PORT_80_TCP=tcp://10.101.103.224:80 PYTHON_VERSION=2.7.14 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PORT=26379 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PROTO=tcp KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443 KUBERNETES_SERVICE_PORT_HTTPS=443 WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_HOST=10.108.128.167 KUBERNETES_SERVICE_HOST=10.96.0.1 PWD=/app WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT=tcp://10.99.211.111:6379 WISHFUL_RABBIT_MYCHART_SERVICE_PORT_HTTP=80 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_PORT=6379 WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_ADDR=10.99.211.111 WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP=tcp://10.107.63.208:26379 WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_PORT=6379 WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT=tcp://10.108.128.167:6379 </code></pre> <h2>All Endpoints</h2> <pre><code>kubectl get ep NAME ENDPOINTS AGE kubernetes 10.0.2.15:8443 4h wishful-rabbit-mychart 172.17.0.5:80,172.17.0.6:80,172.17.0.7:80 1h wishful-rabbit-mychart-master-svc &lt;none&gt; 1h wishful-rabbit-mychart-sentinel 172.17.0.11:26379,172.17.0.12:26379,172.17.0.8:26379 1h wishful-rabbit-mychart-slave-svc &lt;none&gt; </code></pre> <hr> <h1>Describe Redis Master Service</h1> <pre><code>kubectl describe svc wishful-rabbit-mychart-master-svc Name: wishful-rabbit-mychart-master-svc Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: app=redis-ha,redis-node=true,redis-role=master,release=wishful-rabbit Type: ClusterIP IP: 10.108.128.167 Port: &lt;unset&gt; 6379/TCP TargetPort: 6379/TCP Endpoints: &lt;none&gt; Session Affinity: None Events: &lt;none&gt; </code></pre> <hr> <h1>Describe Redis Server Pod</h1> <pre><code>kubectl describe po wishful-rabbit-mychart-server-746f47dfdd-2fn4s Name: wishful-rabbit-mychart-server-746f47dfdd-2fn4s Namespace: 
default Node: minikube/192.168.99.100 Start Time: Fri, 02 Feb 2018 15:28:42 +0900 Labels: app=mychart chart=mychart-0.1.0 heritage=Tiller name=redis-server pod-template-hash=3029038988 podIP=172.17.0.10 redis-node=true redis-role=master release=wishful-rabbit runID=cbf8e0 Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wishful-rabbit-mychart-server-746f47dfdd","uid":"4fcb0dfc-07e2-1... Status: Running IP: 172.17.0.10 Controlled By: ReplicaSet/wishful-rabbit-mychart-server-746f47dfdd Containers: redis: Container ID: docker://2734d60bd44a1446ec6556369359ed15b866a4589abe1e6d394f9252114c6d4d Image: quay.io/smile/redis:4.0.6r2 Image ID: docker-pullable://quay.io/smile/redis@sha256:8948a952920d4495859c984546838d4c9b4c71e0036eef86570922d91cacb3df Port: 6379/TCP State: Running Started: Fri, 02 Feb 2018 15:28:44 +0900 Ready: True Restart Count: 0 Environment: REDIS_SENTINEL_SERVICE_HOST: redis-sentinel REDIS_CHART_PREFIX: wishful-rabbit-mychart- Mounts: /redis-master-data from data (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-wfv2q (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-wfv2q: Type: Secret (a volume populated by a Secret) SecretName: default-token-wfv2q Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: &lt;none&gt; Events: &lt;none&gt; </code></pre>
<p>There is a discrepancy between your <code>wishful-rabbit-mychart-master-svc</code> service's <code>Selector</code> and your master redis pods' <code>labels</code>.</p> <p>Your service is searching for pods with the following labels:</p> <pre><code>app=redis-ha redis-node=true redis-role=master release=wishful-rabbit </code></pre> <p>While your pods have the following labels:</p> <pre><code>app=mychart chart=mychart-0.1.0 heritage=Tiller name=redis-server pod-template-hash=3029038988 podIP=172.17.0.10 redis-node=true redis-role=master release=wishful-rabbit runID=cbf8e0 </code></pre> <p>As you can see, the <code>app</code> label is different (<code>redis-ha</code> in your service, <code>mychart</code> in your pods).</p> <p>This leaves the service without endpoints (it doesn't know where to forward the incoming traffic).</p> <p>While I don't know the actual cause of this configuration error, I can suggest a solution to make it work.</p> <p>You have to edit your redis service and change its selector attribute in order to match the pods' labels. Just run:</p> <pre><code>kubectl edit svc wishful-rabbit-mychart-master-svc </code></pre> <p>and change <code>app: "redis-ha"</code> to <code>app: "mychart"</code>.</p> <p>Your application should then be able to reach your redis instance.</p>
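<p>If you prefer not to edit the service interactively, the same change can be applied with a patch (a strategic merge only updates the <code>app</code> key and leaves the other selector entries untouched):</p> <pre><code>kubectl patch svc wishful-rabbit-mychart-master-svc \
  -p '{"spec":{"selector":{"app":"mychart"}}}'
</code></pre>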
<p>We are using RC to run our workload and want to migrate to Deployment. Is there a way to do that without causing any impact to the running workload? I mean, can we move these running pods under a Deployment? </p>
<p>As @matthew-l-daniel answered, the answer is yes, and I am more than 80% certain about it because I have tested it.</p> <p>Now, what is the process we need to follow?</p> <p>Let's say I have a ReplicationController.</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: nginx spec: replicas: 3 selector: app: nginx template: metadata: name: nginx labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 </code></pre> <p><strong>Question:</strong> can we move these running pods under a Deployment?</p> <p>Let's follow these steps to see if we can.</p> <p><strong>Step 1:</strong> Delete this RC with <code>--cascade=false</code>. This will leave the Pods running.</p> <p><strong>Step 2:</strong> Create a ReplicaSet first, with the same labels as the ReplicationController.</p> <pre><code>apiVersion: apps/v1beta2 kind: ReplicaSet metadata: name: nginx labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: --- </code></pre> <p>So now these Pods are under the ReplicaSet.</p> <p><strong>Step 3:</strong> Now create a Deployment with the same labels.</p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: nginx labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: ---- </code></pre> <p>The Deployment will find that a ReplicaSet already exists, and our job is done.</p> <p>We can now check by increasing <code>replicas</code> to see if it works.</p> <p>And it works.</p> <p><strong>The way it doesn't work</strong></p> <p>After deleting the ReplicationController, do not create the Deployment directly. This will <strong>not work</strong>, because the Deployment will find no ReplicaSet and will create a new one with an additional label that will not match your existing Pods.</p>
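<p>For reference, Step 1 above can be run like this (using the RC name from the example; the flag tells kubectl to delete only the controller and leave its Pods orphaned):</p> <pre><code># delete the ReplicationController but keep its Pods running
kubectl delete rc nginx --cascade=false
</code></pre>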
<p>I have a web app hosted in the Google Cloud platform that sits behind a load balancer, which itself sits behind an ingress. The ingress is set up with an SSL certificate and accepts HTTPS connections as expected, with one problem: I cannot get it to redirect non-HTTPS connections to HTTPS. For example, if I connect to it with the URL <code>http://foo.com</code> or <code>foo.com</code>, it just goes to <code>foo.com</code>, instead of <code>https://foo.com</code> as I would expect. Connecting to <code>https://foo.com</code> explicitly produces the desired HTTPS connection. </p> <p>I have tried every annotation and config imaginable, but it stubbornly refuses, although it shouldn't even be necessary since docs imply that the redirect is automatic if TLS is specified. Am I fundamentally misunderstanding how ingress resources work? </p> <p><strong>Update</strong>: Is it necessary to <strong>manually install</strong> nginx ingress on GCP? Now that I think about it, I've been taking its availability on the platform for granted, but after coming across information on how to install nginx ingress on the Google Container Engine, I realized the answer may be a lot simpler than I thought. Will investigate further.</p> <p>Kubernetes version: <strong>1.8.5-gke.0</strong></p> <p>Ingress YAML file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: https-ingress annotations: kubernetes.io/ingress.class: "nginx" ingress.kubernetes.io/ssl-redirect: "true" ingress.kubernetes.io/secure-backends: "true" ingress.kubernetes.io/force-ssl-redirect: "true" spec: tls: - hosts: - foo.com secretName: tls-secret rules: - host: foo.com http: paths: - path: /* backend: serviceName: foo-prod servicePort: 80 </code></pre> <p><strong>kubectl describe ing https-ingress</strong> output</p> <pre><code>Name: https-ingress Namespace: default Address: Default backend: default-http-backend:80 (10.56.0.3:8080) TLS: tls-secret terminates foo.com Rules: Host Path Backends ---- ---- -------- foo.com /* foo-prod:80 (&lt;none&gt;) Annotations: force-ssl-redirect: true secure-backends: true ssl-redirect: true Events: &lt;none&gt; </code></pre>
<p>The problem was indeed the fact that the Nginx Ingress is not standard on the Google Cloud Platform, and needs to be installed manually - doh!</p> <p>However, I found installing it to be much more difficult than anticipated (especially because my needs pertained specifically to GCP), so I'm going to outline every step I took from start to finish in hopes of helping anyone else who uses that specific cloud and has that specific need, and finds generic guides to not quite fit the bill.</p> <ol> <li>Get Cluster Credentials</li> </ol> <p>This is a GCP specific step that tripped me up for a while - you're dealing with it if you get weird errors like </p> <p><code>kubectl unable to connect to server: x509: certificate signed by unknown authority</code> </p> <p>when trying to run kubectl commands. Run this to set up your console:</p> <p><code>gcloud container clusters get-credentials YOUR-K8s-CLUSTER-NAME --z YOUR-K8S-CLUSTER-ZONE</code></p> <ol start="2"> <li>Install Helm</li> </ol> <p>Helm by itself is not hard to install, and the directions can be found on GCP's own docs; what they neglect to mention, however, is that on new versions of K8s, RBAC configuration is required to allow Tiller to install things. Run the following after <code>helm init</code>:</p> <pre><code>kubectl create serviceaccount --namespace kube-system tiller kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' </code></pre> <ol start="3"> <li>Install Nginx Ingress through Helm</li> </ol> <p>Here's another step that tripped me up - <code>rbac.create=true</code> is necessary for the aforementioned RBAC factor.</p> <p><code>helm install --name nginx-ingress-release stable/nginx-ingress --set rbac.create=true</code></p> <ol start="4"> <li>Create your Ingress resource</li> </ol> <p>This step is the simplest, and there are plenty of sample nginx ingress configs to tweak - see @JahongirRahmonov's example above. What you MUST keep in mind is that this step takes anywhere from half an hour to over an hour to set up - if you change the config and check again immediately, it won't be set up, but don't take that as implication that you messed something up! Wait for a while and see first.</p> <p>It's hard to believe this is how much it takes just to redirect HTTP to HTTPS with Kubernetes right now, but I hope this guide helps anyone else stuck on such a seemingly simple and yet so critical need.</p>
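<p>As a rough sketch, an Ingress for the freshly installed nginx controller that matches the resource in the question could look like this (the <code>force-ssl-redirect</code> annotation is what triggers the HTTP to HTTPS redirect; depending on the controller version the expected annotation prefix is either <code>ingress.kubernetes.io/</code> or <code>nginx.ingress.kubernetes.io/</code>):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: https-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - foo.com
    secretName: tls-secret
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-prod
          servicePort: 80
</code></pre>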
<p>Kubernetes' liveness and readiness probes for pods (deployments) can be configured with an initial delay (<code>initialDelaySeconds</code>), meaning the probe will start that many seconds after the container is up. If it is not specified, what is the default value? I can't seem to find it. The default value for <code>periodSeconds</code> is documented as 10 seconds.</p> <p>Thanks</p>
<p>It seems that the default value of <code>0</code> is missing from the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="noreferrer">documentation</a>.</p> <p>The health or readiness check algorithm works like this:</p> <ol> <li>Wait for <code>initialDelaySeconds</code></li> <li>Perform readiness check and wait <code>timeoutSeconds</code> for a timeout</li> <li>If the number of continued successes is greater than <code>successThreshold</code> return <code>success</code><br /> If the number of continued failures is greater than <code>failureThreshold</code> return <code>failure</code><br /> otherwise wait <code>periodSeconds</code> and start a new readiness check</li> </ol>
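<p>For reference, here is a probe with those defaults written out explicitly (the path and port are just placeholders):</p> <pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 0   # default: no delay before the first probe
  periodSeconds: 10        # default
  timeoutSeconds: 1        # default
  successThreshold: 1      # default
  failureThreshold: 3      # default
</code></pre>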
<p>Use case:</p> <p>I have a NFS directory available and I want to use it to persist data for multiple deployments &amp; pods.</p> <p>I have created a <code>PersistentVolume</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: nfs-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteMany nfs: server: http://mynfs.com path: /server/mount/point </code></pre> <p>I want multiple deployments to be able to use this <code>PersistentVolume</code>, so my understanding of what is needed is that I need to create multiple <code>PersistentVolumeClaims</code> which will all point at this <code>PersistentVolume</code>.</p> <pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolumeClaim apiVersion: v1 metaData: name: nfs-pvc-1 namespace: default spec: accessModes: - ReadWriteMany resources: requests: storage: 50Mi </code></pre> <p>I believe this to create a 50MB claim on the <code>PersistentVolume</code>. When I run <code>kubectl get pvc</code>, I see:</p> <pre><code>NAME STATUS VOLUME CAPACITY ACCESSMODES AGE nfs-pvc-1 Bound nfs-pv 10Gi RWX 35s </code></pre> <p>I don't understand why I see 10Gi capacity, not 50Mi.</p> <p>When I then change the <code>PersistentVolumeClaim</code> deployment yaml to create a PVC named <code>nfs-pvc-2</code> I get this:</p> <pre><code>NAME STATUS VOLUME CAPACITY ACCESSMODES AGE nfs-pvc-1 Bound nfs-pv 10Gi RWX 35s nfs-pvc-2 Pending 10s </code></pre> <p>PVC2 never binds to the PV. Is this expected behaviour? Can I have multiple PVCs pointing at the same PV?</p> <p>When I delete <code>nfs-pvc-1</code>, I see the same thing:</p> <pre><code>NAME STATUS VOLUME CAPACITY ACCESSMODES AGE nfs-pvc-2 Pending 10s </code></pre> <p>Again, is this normal?</p> <p>What is the appropriate way to use/re-use a shared NFS resource between multiple deployments / pods?</p>
<p>Basically you can't do what you want, as the relationship PVC &lt;--> PV is one-to-one. </p> <p>If NFS is the only storage you have available and you would like multiple PVs/PVCs on one NFS export, use Dynamic Provisioning and a default storage class. </p> <p>It's not in official K8s yet, but this one is in the incubator and I've tried it and it works well: <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client" rel="noreferrer">https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client</a></p> <p>This will enormously simplify your volume provisioning, as you only need to take care of the PVC and the PV will be created as a directory on the NFS export/server that you have defined. </p>
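<p>Once the nfs-client provisioner is running, each claim would then look something like this (the storage class name below is an assumption for illustration; use whatever class name your provisioner deployment registers):</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage   # assumed name, check your provisioner's StorageClass
  resources:
    requests:
      storage: 50Mi
</code></pre>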
<p>I'm running Kubernetes 1.2.0 on a number of lab machines. The machines have swap enabled. As the machines are used for other purposes, too, I cannot disable swap globally.</p> <p>I'm observing the following problem: If I start a pod with a memory limit, the container starts swapping after it reached the memory limit. I would expect the container to be killed.</p> <p>According to <a href="https://github.com/kubernetes/kubernetes/issues/7294" rel="nofollow noreferrer">this issue</a> this was a problem that has been fixed, but it still occurs with Kubernetes 1.2.0. If I check the running container with <code>docker inspect</code>, then I can see that <code>MemorySwap = -1</code> and <code>MemorySwappiness = -1</code>. If I start a pod with low memory limits, it starts swapping almost immediately.</p> <p>I had some ideas, but I couldn't figure out how to do any of these:</p> <ul> <li>Change the default setting in Docker so no container is allowed to swap</li> <li>Add a parameter to the Kubernetes container config so it passes <code>--memory-swappiness=0</code></li> <li>Fiddle with docker's cgroup and disallow swapping for the group</li> </ul> <p>How can I prevent the containers to start swapping?</p>
<p>Kubernetes, specifically the <code>kubelet</code>, <a href="https://github.com/kubernetes/kubernetes/commit/f4edaf2b8c32463d6485e2c12b7fd776aef948bc" rel="nofollow noreferrer">fails if swap is enabled on Linux since version <code>1.8</code> (flag <code>--fail-swap-on=true</code>)</a>, as <a href="https://github.com/kubernetes/kubernetes/issues/7294" rel="nofollow noreferrer">Kubernetes can't handle swap</a>. That means you can be sure that swap is disabled by default on Kubernetes.</p> <p>To test it in local Docker container, <a href="https://github.com/moby/moby/issues/18894#issuecomment-167177866" rel="nofollow noreferrer">set <code>memory-swap == memory</code></a>, e.g.:</p> <p><code>docker run --memory="10m" --memory-swap="10m" dominikk/swap-test</code></p> <p>My test image is based on <a href="https://unix.stackexchange.com/questions/1367/how-to-test-swap-partition/1368#1368">this small program</a> with the addition to flush output in Docker:</p> <pre class="lang-c prettyprint-override"><code>setvbuf(stdout, NULL, _IONBF, 0); // flush stdout buffer every time </code></pre> <p>You can also test it with <code>docker-compose up</code> (<a href="https://docs.docker.com/compose/compose-file/compose-versioning/#version-2x-to-3x" rel="nofollow noreferrer">only works for <code>version &lt;= 2.x</code></a>):</p> <pre class="lang-yaml prettyprint-override"><code>version: '2' services: swap-test: image: dominikk/swap-test mem_limit: 10m # memswap_limit: # -1: unlimited swap # 0: field unset # &gt;0: mem_limit + swap # == mem_limit: swap disabled memswap_limit: 10m </code></pre>
<p>I have a docker image called docker-hello-world - all it does is print Hello World to the log using the JRE. When tested it works fine.</p> <p>Then, I import an image into Kubernetes Docker and run – still no issues.</p> <pre><code>docker images -a REPOSITORY TAG IMAGE ID CREATED SIZE docker-hello-world latest 9a161d166742 20 hours ago 83.17 MB </code></pre> <ol start="3"> <li>When I try and deploy into Kubernetes with <code>kubectl run docker-hello-world --image=docker-hello-world:latest</code> something goes wrong here – I tried the image id as well but I can’t understand why it can’t find the image.</li> </ol> <p>It says deployment created.</p> <pre><code>kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE docker-hello-world 1 1 1 0 24s kubectl get pods NAME READY STATUS RESTARTS AGE docker-hello-world-67c745cff4-sv77d 0/1 ErrImagePull 0 43s </code></pre> <p>Logs:</p> <pre><code>kubectl logs docker-hello-world-67c745cff4-sv77d Error from server (BadRequest): container "docker-hello-world" in pod "docker-hello-world-67c745cff4-sv77d" is waiting to start: trying and failing to pull image </code></pre> <p>Im not sure why it can’t find the image.</p> <p>But if I do from within Minikube:</p> <pre><code>docker build -t dummy:v1 ~/eclipse-workspace/HelloWorld/bin/ </code></pre> <p>(I don’t really want do generate the image again)</p> <pre><code>docker images REPOSITORY TAG IMAGE ID CREATED SIZE dummy v1 beae3bfd2327 32 seconds ago 83.17 MB kubectl run --image=dummy:v1 dummy deployment "dummy” created kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE dummy 1 1 1 0 11s kubectl get pods NAME READY STATUS RESTARTS AGE dummy-8496dd7d84-t4h66 0/1 Completed 4 1m kubectl logs dummy-8496dd7d84-t4h66 Hello, World </code></pre> <p>It seems to work ok</p>
<p>The version tag of the image <code>docker-hello-world:latest</code> is <code>latest</code>, which means the default <code>ImagePullPolicy</code> is <code>Always</code> (see <code>pkg/apis/core/v1/defaults.go</code> for v1.9.x and after). The kubelet will therefore try to pull the image from a remote registry instead of using the image already present locally.</p> <p>One option is to set a specific tag rather than <code>latest</code>.</p>
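<p>For example, something along these lines should avoid the remote pull: re-tag the image (inside Minikube's Docker daemon) with a non-<code>latest</code> tag and run it with a pull policy that prefers the local image:</p> <pre><code>docker tag docker-hello-world:latest docker-hello-world:v1
kubectl run docker-hello-world --image=docker-hello-world:v1 --image-pull-policy=IfNotPresent
</code></pre>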
<p>Using:</p> <pre><code>kubectl expose deployment &lt;Name-Of-Servce&gt; --name=loadbalancer --port=8080 --target-port=8080 --type=LoadBalancer </code></pre> <p>The <code>kubectl get services</code> is showing pending:</p> <pre><code>loadbalancer LoadBalancer &lt;x.x.x.x&gt; &lt;pending&gt; 8080:32670/TCP 2m </code></pre> <p>Before Docker surported Kubernetes, I could use MiniKube and Helm:</p> <pre><code>helm install stable/jenkins kubectl get services // To get the service name minikube service original-llama-jenkins // &lt;&lt; The service name </code></pre> <p>Now that we have Docker for Mac(Edge) supporting Kubernetes, how do you add an <code>EXTERNAL-IP</code>?</p>
<p>Both type LoadBalancer and NodePort work on Docker for Mac Kubernetes. It's a lovely bit of magic, actually. Just hit localhost:[port]. For NodePort, a port is automatically assigned unless specified in the service definition. For type LoadBalancer, it is also specified in the service definition. Note that in using LoadBalancer, the status from <code>kubectl</code> will be shown as <code>&lt;pending&gt;</code> for EXTERNAL-IP but it does work.</p> <p>This guy notes that it's exposed through vpnkit though I think another source would be helpful:</p> <p><a href="https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes" rel="noreferrer">https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes</a></p> <p>Edit: updated because Kubernetes is now available in stable versions of DfM.</p>
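<p>A minimal LoadBalancer service to try this out could look like the following (names and ports are placeholders); after applying it, the app should answer on <code>http://localhost:8080</code> even while the EXTERNAL-IP column still shows <code>&lt;pending&gt;</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: my-app
</code></pre>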
<p>Deploying my service to production:</p> <pre><code>envsubst &lt; ./kubernetes/pre-production/aks.yaml | kubectl apply -f - </code></pre> <p>I'm getting the following error:</p> <blockquote> <p>The Deployment "moverick-mule-pre" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"commit":"750a26deebc3582bec4bfbb2426b3f22ee042eaa", "app":"moverick-mule-pre"}: <code>selector</code> does not match template <code>labels</code></p> </blockquote> <p>My yaml file is:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: moverick-mule-pre spec: replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 template: metadata: labels: app: moverick-mule-pre commit: $CI_COMMIT_SHA spec: containers: - name: moverick-mule-pre image: $REGISTRY_SERVER_PRE/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME imagePullPolicy: Always ports: - containerPort: 80 envFrom: - secretRef: name: moverick-pre livenessProbe: httpGet: path: /console port: 80 initialDelaySeconds: 5 periodSeconds: 5 volumeMounts: - name: logs mountPath: /opt/mule/logs/ - name: asc mountPath: /opt/mule/asc/ imagePullSecrets: - name: registry-pre volumes: - name: logs azureFile: secretName: azure-files-pre shareName: logs-pre readOnly: false - name: asc azureFile: secretName: azure-asc-pre shareName: asc-pre readOnly: false --- apiVersion: v1 kind: Service metadata: name: moverick-mule-pre spec: ports: - port: 80 selector: app: moverick-mule-pre </code></pre>
<p>You need to add <code>selector</code> in <code>spec</code> of Deployment.</p> <p>And also, these <code>selector</code> should match with <code>labels</code> in PodTemplate.</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: moverick-mule-pre spec: replicas: 2 selector: matchLabels: app: moverick-mule-pre commit: $CI_COMMIT_SHA strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 1 template: metadata: labels: app: moverick-mule-pre commit: $CI_COMMIT_SHA </code></pre> <p>Otherwise, you will get error like below</p> <blockquote> <p>The Deployment &quot;moverick-mule-pre&quot; is invalid:</p> <ul> <li>spec.selector: Required value</li> <li>spec.template.metadata.labels: Invalid value: map[string]string{...} <code>selector</code> does not match template <code>labels</code></li> </ul> </blockquote>
<p>I am currently deploying my applications in a Kubernetes cluster using Helm. Now I also need to be able to modify some parameter in the values.yaml file for different environments.</p> <p>For simple charts with only one level this is easy by having different values-local.yaml and values-prod.yaml and add this to the <code>helm install</code> flag, e.g. <code>helm install --values values-local.yaml</code>. </p> <p>But if I have a second layer of subcharts, which also need to distinguish the values between multiple environments, I cannot set a custom values.yaml.</p> <p>Assuming following structure:</p> <pre><code>| chart | Chart.yaml | values-local.yaml | values-prod.yaml | charts | foo-app | Chart.yaml | values-local.yaml | values-prod.yaml | templates | deployments.yaml | services.yaml </code></pre> <p>This will not work since Helm is expecting a <code>values.yaml</code> in subcharts.</p> <p>My workaround right now is to have an if-else-construct in the subchart/values.yaml and set this in as a global variable in the parent values.yaml.</p> <pre><code>*foo-app/values.yaml* {{ - if .Values.global.env.local }} foo-app: replicas: 1 {{ else if .Values.global.env.dev}} foo-app: replicas: 2 {{ end }} </code></pre> <hr> <pre><code>parent/values-local.yaml global: env: local: true parent/values-prod.yaml global: env: prod: true </code></pre> <p>But I hope there is a better approach around so I do not need to rely on these custom flags.</p> <p>I hope you can help me out on this.</p>
<p>Here is how I would do it (for reference <a href="https://docs.helm.sh/chart_template_guide/#overriding-values-from-a-parent-chart" rel="noreferrer" title="overriding values">overriding values</a>):</p> <ol> <li>In your child charts (foochart) define the number of replicas as a variable: <ul> <li>foochart/values.yaml</li> </ul></li> </ol> <pre><code> ... replicas: 1 ... </code></pre> <ul> <li>foochart/templates/deployment.yaml</li> </ul> <pre><code> ... spec: replicas: {{ .Values.replicas }} ... </code></pre> <ol start="2"> <li><p>Then, in your main chart's values files:</p> <ul> <li>values-local.yaml</li> </ul></li> </ol> <pre><code> foochart: replicas: 1 </code></pre> <ul> <li>values-prod.yaml</li> </ul> <pre><code> foochart: replicas: 2 </code></pre>
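<p>The environment is then selected at install time with the <code>-f</code>/<code>--values</code> flag, for example:</p> <pre><code>helm install ./chart -f ./chart/values-local.yaml   # local
helm install ./chart -f ./chart/values-prod.yaml    # production
</code></pre>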
<p>Today I hit a strange issue: my Windows kubectl client suddenly started raising an authorization error when connecting to ICP.</p> <p>I was using ICP with a kubectl.exe configured on Windows. After a while, because my laptop went to sleep automatically, my VPN connection dropped and I lost the connection to the remote ICP. Later I came back and reconnected to ICP. When I ran a kubectl command again I got:</p> <p>error: You must be logged in to the server (Unauthorized)</p> <p>On the ICP master node, everything works fine if I use:</p> <p><strong>kubectl -s 127.0.0.1:8888 -n kube-system get pods -o wide</strong></p> <p>I went back to re-configure the client (pasted the commands copied from admin -> configure kubectl); the commands executed successfully, but when I issue</p> <p><strong>kubectl get pods</strong></p> <p>I still get the error.</p> <p>I checked these articles:</p> <p><a href="https://stackoverflow.com/questions/45540139/kubectl-error-you-must-be-logged-in-to-the-server">kubectl - error: You must be logged in to the server</a></p> <p><a href="https://stackoverflow.com/questions/43111366/kubectl-error-you-must-be-logged-in-to-the-server-the-server-has-asked-for-th">kubectl error: &quot;You must be logged in to the server (the server has asked for the client to provide credentials)&quot;</a></p> <p><a href="https://stackoverflow.com/questions/44457609/error-you-must-be-logged-in-to-the-server-the-server-has-asked-for-the-client">error: You must be logged in to the server (the server has asked for the client to provide credentials)</a></p> <p>They didn't seem very helpful.</p>
<p>It turns out that the token was invalid (not sure if that was because of the 12-hour expiration). If you simply refresh (F5) the browser page you are not re-authenticated but can still see the console page; the token is only refreshed by logging in to the ICP portal again.</p> <p>The issue was fixed by re-accessing the ICP portal:</p> <pre><code>https://&lt;master host&gt;:8443/console/ </code></pre> <p>This forces you to authenticate again. After that, go to admin -> configure client and paste the latest commands; you will find that the token has been updated. Executing the new commands solved the issue.</p> <p>Two questions still remain: </p> <p>a) If the page has been open for a long time and the token has expired, the ICP portal page may not auto-refresh to force you to re-login, which means the token in the set-credentials command is still the old one. </p> <p>b) Even old tokens are accepted by the config commands, and the commands never complain with an error or even a warning. This can mislead us when tokens change on the server, e.g. if I save the commands to a local txt file and re-execute them later (even after the token has expired), the commands still finish successfully, but I still don't get authenticated correctly when I try to connect.</p>
<p><strong>Is it possible to rename a PVC?</strong> I can't seem to find any evidence that it is possible.</p> <hr> <p>I'm trying to mitigate a "No space left on device" issue I just stumbled upon. Essentially my plan requires me to resize the volume on which my service persists its data.</p> <p>Unfortunately I'm still on Kubernetes 1.8.6 on GKE. It does not have the <a href="https://kubernetes.io/docs/admin/admission-controllers/#persistentvolumeclaimresize" rel="noreferrer"><code>PersistentVolumeClaimResize</code></a> admission plugin enabled:</p> <ul> <li>1.9.1: <a href="https://github.com/kubernetes/kubernetes/blob/v1.9.1/cluster/gce/config-default.sh#L300" rel="noreferrer">config-default.sh#L300 (v1.9.1)</a></li> <li>1.8.6: <a href="https://github.com/kubernetes/kubernetes/blob/v1.8.6/cluster/gce/config-default.sh#L254" rel="noreferrer">config-default.sh#L254 (v1.8.6)</a></li> </ul> <p>Therefore I have to try and save the data manually. I made the following plan:</p> <ol> <li>create a new, bigger PVC,</li> <li>create a temp container with both the "victim" PVC and the new bigger PVC attached,</li> <li>copy the data,</li> <li>drop the "victim" PVC,</li> <li><strong>rename</strong> the new bigger PVC to take the place of the "victim".</li> </ol> <p>The PVC in question is attached to a StatefulSet, so the old and new names must match (as the StatefulSet follows the volume naming convention). </p> <p>But I don't understand how to rename persistent volume claims.</p>
<p>The answer to your question is <strong>NO</strong>. There is no way to change any metadata name in Kubernetes.</p> <p>But there is a way to fulfill your requirement.</p> <p>You want your old PersistentVolumeClaim to claim the new, <em>bigger</em> PersistentVolume.</p> <p>Let's say the old PVC is named <code>victim</code> and the new PVC is named <code>bigger</code>. You want the PV created for <code>bigger</code> to be claimed by the <code>victim</code> PVC, because your application is already using the <code>victim</code> PVC.</p> <p>Follow these steps to do the hack.</p> <p><strong>Step 1:</strong> Delete your old PVC <code>victim</code>.</p> <p><strong>Step 2:</strong> Make the PV of <code>bigger</code> Available.</p> <pre><code>$ kubectl get pvc bigger NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE bigger Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 30s </code></pre> <p>Edit PV <code>pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6</code> to set persistentVolumeReclaimPolicy to <code>Retain</code>, so that deleting the PVC will not delete the PV.</p> <p>Now delete PVC <code>bigger</code>. </p> <pre><code>$ kubectl delete pvc bigger $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Released default/bigger standard 3m </code></pre> <p>Note the status: the PV is Released.</p> <p>Now make this PV available to be claimed by another PVC, our <code>victim</code>.</p> <p>Edit the PV again to remove the claimRef:</p> <pre><code>$ kubectl edit pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Available standard 6m </code></pre> <p>Now the status of the PV is Available. </p> <p><strong>Step 3:</strong> Claim the <code>bigger</code> PV with the <code>victim</code> PVC</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: victim spec: accessModes: - ReadWriteOnce volumeName: pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 resources: requests: storage: 10Gi </code></pre> <p>Use volumeName <code>pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6</code>.</p> <pre><code>kubectl get pvc,pv NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc/victim Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 9s NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pv/pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Bound default/victim standard 9m </code></pre> <p><strong>Finally:</strong> Set persistentVolumeReclaimPolicy back to <code>Delete</code>.</p> <p>This is how your PVC <code>victim</code> ends up with the bigger PV.</p>
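<p>If you prefer to script the edits instead of using <code>kubectl edit</code>, patches like these should work for the two manual changes above:</p> <pre><code># Step 2a: keep the PV when its PVC is deleted
kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# Step 2b: drop the claimRef so the PV becomes Available again
kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 \
  --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
</code></pre>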
<p>I have a single instance Redis Deployment/Service on my cluster:</p> <p><strong>Redis.yaml</strong></p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: myapp-redis labels: name: myapp-redis spec: ports: - port: 6379 targetPort: 6379 selector: name: myapp-redis --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myapp-redis spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: myapp-redis labels: name: myapp-redis spec: selector: matchLabels: name: myapp-redis strategy: type: Recreate replicas: 1 template: metadata: labels: name: myapp-redis spec: containers: - name: myapp-redis image: registry/myapp-redis:0.0.0-alpha.13 imagePullPolicy: Always ports: - containerPort: 6379 volumeMounts: - name: myapp-redis mountPath: /etc/redis/ imagePullSecrets: - name: regsecret volumes: - name: myapp-redis persistentVolumeClaim: claimName: myapp-redis --- </code></pre> <p><strong>Redis Service Description</strong></p> <p>I get this from <code>kubectl describe svc myapp-redis -n mw-dev</code>:</p> <pre><code>Name: myapp-redis Namespace: mw-dev Labels: name=myapp-redis Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"myapp-redis"},"name":"myapp-redis","namespace":"mw-dev"},"sp... Selector: name=myapp-redis Type: ClusterIP IP: 10.3.0.137 Port: &lt;unset&gt; 6379/TCP TargetPort: 6379/TCP Endpoints: 10.2.2.173:6379 Session Affinity: None Events: &lt;none&gt; </code></pre> <p><strong>Check if redis is up and running</strong></p> <p>Making sure the database is up and running, I can open a shell inside the pod with <code>kubectl exec -it myapp-redis-[..] sh -n mw-dev</code> and ping the database with <code>redis-cli -a test ping</code>. If I do that, I receive a <code>PONG</code>, so it seems that the password (<code>test</code>) resolves and the db is up.</p> <p><strong>Problem connecting python app to redis service</strong></p> <p>However, if I try to connect a pod running a Python app to the redis db, I get a connection refused error from the Python app.</p> <p><code>kubectl logs myapp-backend-596... -n mw-dev</code></p> <pre><code>[...] File "/usr/local/lib/python3.6/site-packages/aioredis/stream.py", line 19, in open_connection lambda: protocol, host, port, **kwds) File "uvloop/loop.pyx", line 1733, in create_connection File "uvloop/loop.pyx", line 1712, in uvloop.loop.Loop.create_connection ConnectionRefusedError: [Errno 111] Connection refused </code></pre> <p>This is the configuration for the Python app:</p> <p><strong>Backend.yaml</strong></p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: myapp-backend labels: name: myapp-backend spec: ports: - port: 8000 targetPort: 8000 selector: name: myapp-backend --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: myapp-backend labels: name: myapp-backend spec: replicas: 1 strategy: type: Recreate template: metadata: labels: name: myapp-backend spec: containers: - name: myapp-backend image: registry/myapp-backend:0.0.0-alpha.13 imagePullPolicy: Always ports: - containerPort: 8000 env: - name: REDIS_HOST value: 'myapp-redis' - name: REDIS_PASSWORD value: 'test' imagePullSecrets: - name: regsecret --- </code></pre> <p><strong>Python Backend pod describtion</strong></p> <p>This is what I get from <code>kubectl describe po myapp-backend-58... -n mw-dev</code>:</p> <pre><code>Name: myapp-backend-585d... 
Namespace: mw-dev Node: worker-2/ip... Start Time: Sat, 03 Feb 2018 13:08:01 +0100 Labels: name=myapp-backend pod-template-hash=myhash Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"mw-dev","name":"myapp-backend-58...","uid":"e13... Status: Running IP: 10.2.2.180 Controlled By: ReplicaSet/myapp-backend-58... Containers: myapp-backend: Container ID: docker://78cfc218d... Image: registry/myapp-backend:0.0.0-alpha.13 Image ID: docker-pullable://registry/mw-dev/myapp-backend@sha256:785a... Port: 8000/TCP State: registryg Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Sat, 03 Feb 2018 13:55:07 +0100 Finished: Sat, 03 Feb 2018 13:55:08 +0100 Ready: False Restart Count: 14 Environment: REDIS_HOST: myapp-redis REDIS_PASSWORD: test Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-7... (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: default-token-7cm7c: Type: Secret (a volume populated by a Secret) SecretName: default-token-7... Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s node.alpha.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50m default-scheduler Successfully assigned myapp-backend-58... to worker-2 Normal SuccessfulMountVolume 50m kubelet, worker-2 MountVolume.SetUp succeeded for volume "default-token-7..." Warning BackOff 50m (x4 over 50m) kubelet, worker-2 Back-off restarting failed container Normal Pulling 50m (x4 over 50m) kubelet, worker-2 pulling image "registry/mw-dev/myapp-backend:0.0.0-alpha.13" Normal Pulled 50m (x4 over 50m) kubelet, worker-2 Successfully pulled image "registry/mw-dev/myapp-backend:0.0.0-alpha.13" Normal Created 50m (x4 over 50m) kubelet, worker-2 Created container Normal Started 50m (x4 over 50m) kubelet, worker-2 Started container Warning FailedSync 52s (x229 over 50m) kubelet, worker-2 Error syncing pod </code></pre> <p><strong>Running pods</strong></p> <p><code>kubectl get pods --all-namespaces</code>:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-system cert-manager-cert-manager-59fff59c7b-vdnd7 2/2 Running 4 3d kube-system digitalocean-cloud-controller-manager-6d6b675bfd-nxqq2 1/1 Running 0 3d kube-system digitalocean-provisioner-d4c79dfb4-mhb5d 1/1 Running 0 3d kube-system heapster-56bf7c7896-9rv4z 1/1 Running 0 3d kube-system kube-apiserver-wp7b4 1/1 Running 5 10d kube-system kube-controller-manager-586c9b745b-gkqk4 1/1 Running 2 10d kube-system kube-controller-manager-586c9b745b-pdhw7 1/1 Running 1 10d kube-system kube-dns-7d74988c8b-z9zs2 3/3 Running 0 10d kube-system kube-flannel-5wlk6 2/2 Running 0 10d kube-system kube-flannel-khsvq 2/2 Running 0 10d kube-system kube-flannel-skt2m 2/2 Running 4 10d kube-system kube-proxy-cwqv8 1/1 Running 2 10d kube-system kube-proxy-mg8jx 1/1 Running 0 10d kube-system kube-proxy-vmw8g 1/1 Running 0 10d kube-system kube-scheduler-7686847675-5kkhn 1/1 Running 1 10d kube-system kube-scheduler-7686847675-lkm98 1/1 Running 2 10d kube-system kubernetes-dashboard-7658f8d76-svtzh 1/1 Running 0 3d kube-system loadbalancer-nginx-ingress-controller-8649c7986b-jndzz 1/1 Running 3 3d kube-system loadbalancer-nginx-ingress-default-backend-6fb9444c64-bpz4g 1/1 Running 0 3d kube-system pod-checkpointer-kfcpp 1/1 Running 0 10d kube-system 
pod-checkpointer-kfcpp-spc1aitu1i-master-1 1/1 Running 0 10d kube-system tiller-deploy-fb8d7b69c-6xrpn 1/1 Running 2 3d mw-dev myapp-backend-6c4b56d9b7-2mfbs 1/1 Running 0 21m mw-dev myapp-frontend-7478fd456b-5ztvq 1/1 Running 0 1d mw-dev myapp-redis-67d45d97d7-7wxtj 1/1 Running 0 1d </code></pre> <p><strong>Making sure Python app received correct values for env vars</strong></p> <p>The Python app prints out the values that it uses to connect to the database. Looking at the pod logs, I can see that the values are identical to the ones given in <code>Backend.yaml</code> (<code>REDIS_HOST=myapp-redis</code>, <code>REDIS_PASSWORD=test</code>).</p> <p><strong>It works locally in docker</strong></p> <p>If I run the redis container and the python app container locally with docker on my laptop, they connect fine.</p> <p><strong>Cluster Info</strong></p> <p>The cluster uses a nginx-ingress controller to expose services to the internet. I'm not sure if this matters, since I need to connect the Python pod to the redis service internally.</p> <p>The cluster consists of 1 Master node, two Worker nodes and a LoadBlancer for the nginx-ingress controller, all running on DigitalOcean.</p> <p><strong>What now?</strong></p> <p>At this point I have no idea how to further debug the issue. I have searched the web for hours to find a solution, w/o luck. Any suggestion would be appreciated!</p>
<p>The <em>connection refused</em> error was caused by the Redis configuration.</p> <p>I had to change the address Redis binds to from <em>localhost</em> (127.0.0.1) to <code>0.0.0.0</code> in order to allow external connections.</p> <p>In <code>redis.conf</code> I changed this line:</p> <pre><code>bind 127.0.0.1 </code></pre> <p>to this:</p> <pre><code>bind 0.0.0.0 </code></pre>
<p>I have two services, web-service is running on port 80 and admin-service is running on port 8000. I'd like all http /admin requests to be proxied to admin-service:8000 and all other requests to go to web-service:80. I've tried the configuration below but it doesn't seem to work. I'm also using Google Kubernetes Engine.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-rules annotations: kubernetes.io/ingress.class: "nginx" nginx.org/redirect-to-https: "True" spec: tls: - secretName: tls-secret rules: - http: paths: - path: /admin backend: serviceName: admin-service servicePort: 8000 - path: / backend: serviceName: web-service servicePort: 80 </code></pre> <p>Any idea what I might be doing wrong?</p>
<p>Looks good, but... does your admin service actually respond to <code>GET /admin</code>? Because unless you use the <code>ingress.kubernetes.io/rewrite-target: /</code> annotation on your ingress, the backing service needs to handle the full URI (including the <code>/admin</code> prefix).</p>
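<p>If the admin service only serves paths relative to its root, adding the rewrite annotation to the ingress metadata from the question would look roughly like this (note that some controller versions expect the <code>nginx.ingress.kubernetes.io/</code> prefix instead):</p> <pre><code>metadata:
  name: ingress-rules
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/redirect-to-https: "True"
    ingress.kubernetes.io/rewrite-target: /
</code></pre>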
<p>it’s been driving me crazy for about a week now. Searched over the “whole” internet with no luck. used rexray, standard config, digital ocean setup etc.</p> <p>Cannot make it working on digital ocean with block storage. Would be great if anyone could point me to some tutorial (preferable official kubernetes storage driver).</p> <p>Here is my config for postgres:</p> <pre><code> ... volumeMounts: - name: postgres-storage mountPath: /var/lib/postgresql/data volumes: - name: postgres-storage persistentVolumeClaim: claimName: postgres-pv-claim --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: postgres-pv-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 3Gi --- apiVersion: v1 kind: PersistentVolume metadata: name: postgres-volume spec: capacity: storage: 3Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain </code></pre> <p>Any help is highly appreciated.</p>
<p>Kubernetes does not ship an in-tree volume plugin for DigitalOcean.</p> <p>But you can achieve this using the FlexVolume plugin mechanism with external storage components:</p> <ul> <li>DigitalOcean Flex Plugin</li> <li>DigitalOcean External Provisioner</li> </ul> <p>See this <a href="https://github.com/kubernetes/kubernetes/pull/50044#issuecomment-327158936" rel="nofollow noreferrer">comment</a> in a PR for DigitalOcean volume support.</p> <p>If you want to use DigitalOcean block storage, see the <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/digitalocean" rel="nofollow noreferrer">Kubernetes DigitalOcean Provisioner</a> to set up the digitalocean-flexplugin.</p>
<p>I'm looking to redirect all traffic from </p> <p><a href="http://example.com" rel="noreferrer">http://example.com</a> -> <a href="https://example.com" rel="noreferrer">https://example.com</a> like how nearly all websites do.</p> <p>I've looked at this link with no success: <a href="https://stackoverflow.com/questions/40763718/kubernetes-https-ingress-in-google-container-engine">Kubernetes HTTPS Ingress in Google Container Engine</a></p> <p>And have tried the following annotations in my ingress.yaml file.</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | if ($http_x_forwarded_proto != 'https') { return 301 https://$host$request_uri; } nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/ssl-redirect: "true" kubernetes.io/ingress.allow-http: "false" </code></pre> <p>All without any success. To be clear, I can access <a href="https://example.com" rel="noreferrer">https://example.com</a> and <a href="http://example.com" rel="noreferrer">http://example.com</a> without any errors, I need the http call to redirect to https. </p> <p>Thanks</p>
<p>GKE uses GCE L7. The rules that you referenced in the example are not supported and the HTTP to HTTPS redirect should be controlled at the application level.</p> <p>L7 inserts the <code>x-forwarded-proto</code> header that you can use to understand if the frontend traffic came using HTTP or HTTPS. Take a look here: <a href="https://github.com/kubernetes/ingress-gce/blob/master/README.md#redirecting-http-to-https" rel="noreferrer">Redirecting HTTP to HTTPS</a></p> <p>There is also an example in that link for Nginx (copied for convenience):</p> <pre><code># Replace '_' with your hostname. server_name _; if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; } </code></pre>
<p>I am trying to create windows nodes in an already existing kubernetes cluster in Azure. The kubernetes cluster has two Linux nodes running on them. </p> <p>I am trying to use az aks cli to create windows nodes but I don't see any option.</p> <p>So can we create Linux and Windows nodes in the same kubernetes cluster? If yes, How?</p>
<p>Yes, this is possible, but not using the CLI/portal (at this stage). You need to use <a href="https://github.com/Azure/acs-engine" rel="nofollow noreferrer">ACS engine</a>.</p> <p>You need to use this cluster definition (adjust it to your needs):<br> <a href="https://github.com/Azure/acs-engine/blob/master/examples/windows/kubernetes-hybrid.json" rel="nofollow noreferrer">https://github.com/Azure/acs-engine/blob/master/examples/windows/kubernetes-hybrid.json</a></p> <p>There is a bit of a learning curve, but it is not that hard:<br> <a href="https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/deploy.md" rel="nofollow noreferrer">https://github.com/Azure/acs-engine/blob/master/docs/kubernetes/deploy.md</a><br> <a href="https://github.com/Azure/acs-engine/blob/master/docs/clusterdefinition.md" rel="nofollow noreferrer">https://github.com/Azure/acs-engine/blob/master/docs/clusterdefinition.md</a></p>
<p>In IBM Cloud Private EE, I need to go to the Web UI <code>User &gt; Configure client</code>, copy the <code>kubectl</code> config commands and then run these 5 commands on my client machine. </p> <p>I deployed the IBM Cloud private EE on 5 VMs and have access to the master node. I am wondering if there is a way to capture these <code>kubectl config</code> commands directly from the docker containers without having a need to go to the Web UI. </p> <p>For example: I did not want to download the <code>kubectl</code> client from google (as I just want to use same <code>kubectl</code> version which is in the ICP containers) and I used the following command to get it from the container itself.</p> <pre><code>docker run --rm -v $(pwd):/data -e LICENSE=accept \ ibmcom/icp-inception:2.1.0.1-ee \ cp -r /usr/local/bin/kubectl /data </code></pre> <p>Then, I copied this to all VM guests so that I could access <code>kubectl</code> from any guest.</p> <pre><code>chmod +x kubectl for host in $(awk '/192.168.142/ {print $3}' /etc/hosts) do scp kubectl $host:/bin done </code></pre> <p>Where - <code>192.168.142</code> is the subnet of my VM guests.</p> <p>But, I could not figure out how to get <code>Configure Client</code> commands without having to go to the Web UI. I need this to automate client <code>kubectl</code> command so that my environment is ready for <code>kubectl</code> commands through simple scripts.</p>
<p>You should use <strong><a href="https://www.vagrantup.com/" rel="nofollow noreferrer">Vagrant</a></strong> to automate those steps.</p> <p>For instance, <a href="https://github.com/IBM/deploy-ibm-cloud-private/blob/ba600e9c08d5bd9fd784e1b209a07ef597052a7f/Vagrantfile#L659-L666" rel="nofollow noreferrer"><code>IBM/deploy-ibm-cloud-private/Vagrantfile</code></a> has this section:</p> <pre><code>install_kubectl = &lt;&lt;SCRIPT echo "Pulling #{image_repo}/kubernetes:v#{k8s_version}..." sudo docker run -e LICENSE=#{license} --net=host -v /usr/local/bin:/data #{image_repo}/kubernetes:v#{k8s_version} cp /kubectl /data &amp;&gt; /dev/null kubectl config set-credentials icpadmin --username=admin --password=admin &amp;&gt; /dev/null kubectl config set-cluster icp --server=http://127.0.0.1:8888 --insecure-skip-tls-verify=true &amp;&gt; /dev/null kubectl config set-context icp --cluster=icp --user=admin --namespace=default &amp;&gt; /dev/null kubectl config use-context icp &amp;&gt; /dev/null SCRIPT </code></pre> <p>See more at "<a href="https://medium.com/ibm-cloud/kubernetes-ibm-cloud-private-and-vagrant-oh-my-cedc0758036f" rel="nofollow noreferrer">Kubernetes, IBM Cloud Private, and Vagrant, oh my!</a>", from <strong><a href="https://twitter.com/tpouyer" rel="nofollow noreferrer">Tim Pouyer</a></strong>.</p>
<p>I am trying to install Kubernetes 1.9.0 on a cluster of CentOS 7.3 systems running in VMware Workstation on Windows 7, following the "kubernetes-the-hard-way tutorial". When I get to the verification stage in the tutorial and try to start the busybox deployment (<a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/12-dns-addon.md" rel="nofollow noreferrer">https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/12-dns-addon.md</a>), the pod status remains stuck at "ContainerCreating". </p> <p>The kubelet log for the node that the pod supposed to run on shows these error messages:</p> <pre><code>failed to get sandbox image \"gcr.io/google_containers/pause:3.0\": failed to pull image \"gcr.io/google_containers/pause:3.0\": failed to pull image \"gcr.io/google_containers/pause:3.0\": httpReaderSeeker: failed open: failed to do request: Get https://storage.googleapis.com/artifacts.google-containers.appspot.com/containers/images/sha256:f112334343777b75be77ec1f835e3bbbe7d7bd46e27b6a2ae35c6b3cfea0987c: x509: certificate signed by unknown authority </code></pre> <p>I added both of those domains to the list of insecure registries in /etc/docker/daemon.json:</p> <pre><code>{ "insecure-registries" : ["gcr.io"], "insecure-registries" : ["googleapis.com"] } </code></pre> <p>Docker is able to pull the image from the command line:</p> <pre><code>docker pull gcr.io/google_containers/pause:3.0 Trying to pull repository gcr.io/google_containers/pause ... 3.0: Pulling from gcr.io/google_containers/pause a3ed95caeb02: Pull complete f11233434377: Pull complete Digest: sha256:0d093c962a6c2dd8bb8727b661e2b5f13e9df884af9945b4cc7088d9350cd3ee </code></pre> <p>Any ideas why the kubelet is unable to pull the image?</p> <p>Thanks, TI</p>
<p>The syntax for this in <code>daemon.json</code> is </p> <pre><code>"insecure-registries" : ["gcr.io", "googleapis.com"] </code></pre> <p>Also, depending on the registries you are accessing, you may have to perform a <code>kubectl create secret docker-registry ...</code> action as explained <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">here</a>.</p> <p>Finally, you may have to make the certificate known to <code>docker</code> by creating a new directory in <code>/etc/docker/certs.d</code> containing the certificates, as explained <a href="https://docs.docker.com/engine/security/certificates/" rel="nofollow noreferrer">here</a>.</p>
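<p>The complete file would then look like this (restart the Docker daemon afterwards, e.g. <code>sudo systemctl restart docker</code>, so the change takes effect):</p> <pre><code>{
  "insecure-registries" : ["gcr.io", "googleapis.com"]
}
</code></pre>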
<p>I'm following <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool" rel="nofollow noreferrer">this guide</a> in an attempt to upgrade a kubernetes cluster on GKE with no downtime. I've gotten all the old nodes cordoned and most of the pods have been evicted, but for a couple of the nodes, <code>kubectl drain</code> just keeps running and not evicting any more pods.</p> <p><code>kubectl get pods --all-namespaces -o=wide</code> shows a handful of pods still running on the old pool, and when I run <code>kubectl drain --ignore-daemonsets --force</code> it prints a warning explaining why it's ignoring most of them; the only ones it doesn't mention are the pods I have running memcached, which were created via helm using <a href="https://github.com/kubernetes/charts/tree/master/stable/memcached" rel="nofollow noreferrer">this chart</a>.</p> <p>We don't rely too heavily on memcached, so I could just go ahead and delete the old node pool at this point and accept the brief downtime for that one service. But I'd prefer to have a script to do this whole thing the right way, and I wouldn't know what to do at this point if these pods were doing something more important.</p> <p>So, is this expected behavior somehow? Is there something about that helm chart that's making these pods refuse to be evicted? Is there another force/ignore sort of flag I need to pass to <code>kubectl drain</code>?</p>
<p>The helm chart you linked contains a PodDisruptionBudget (PDB). <code>kubectl drain</code> will not remove pods if it would violate a PDB (reference: <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/disruptions/</a>, "How Disruption Budgets Work" section mentions this). </p> <p>If <code>minAvailable</code> on your PDB equals to number of replicas of your pod you will not be able to drain the node. Given that <a href="https://github.com/kubernetes/charts/blob/master/stable/memcached/values.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/charts/blob/master/stable/memcached/values.yaml</a> has both set to 3, I would guess that's most likely the source of your problem. Just set your PDB <code>minAvailable</code> to one less than the desired number of replicas and it will be able to move your pods one-by-one.</p>
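<p>To see which budget is blocking the drain, and to work around it if you are willing to accept the brief memcached downtime, something like the following should help (the PDB name is a placeholder; note that in older Kubernetes versions a PDB spec cannot be edited in place, so temporarily deleting and later recreating it is the simplest option):</p> <pre><code># find the PDB created by the chart and check ALLOWED DISRUPTIONS
kubectl get pdb
kubectl describe pdb &lt;memcached-pdb-name&gt;

# temporarily remove it so the drain can evict the pods, then recreate it afterwards
kubectl delete pdb &lt;memcached-pdb-name&gt;
</code></pre>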
<p>I have installed kubernetes with minikube in ubuntu 16.04. I want to know how i can integrate openid-connect based authentication with it. I am new to kubernetes. So any suggestion on how to configure would help. I am currently accessing the dashboard with "minikube dashboard" command. But i dont seem to find any role specific login. The K8S guide has the below config section,</p> <pre><code> kubectl config set-credentials USER_NAME \ --auth-provider=oidc \ --auth-provider-arg=idp-issuer-url=( issuer url ) \ --auth-provider-arg=client-id=( your client id ) \ --auth-provider-arg=client-secret=( your client secret ) \ --auth-provider-arg=refresh-token=( your refresh token ) \ --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \ --auth-provider-arg=id-token=( your id_token ) \ --auth-provider-arg=extra-scopes=( comma separated list of scopes to add to "openid email profile", optional ) </code></pre> <p>Can someone tell me how i can get values for</p> <p><strong>1. Issuer URL 2. Refresh token 3. Id-token 4. Extra-scope</strong></p> <p>I assume the <strong>client id</strong> and <strong>client secret</strong> are the ones we get when <strong>google credentials</strong> are created. Please correct me if I'm wrong.</p>
<p>The <a href="https://kubernetes.io/docs/admin/authentication/" rel="nofollow noreferrer">Kubernetes Authentication</a> docs try to explain the different "<code>authn</code>" plugins. One of these is "OpenID Connect", which requires that you start up an "Identity Provider".</p> <p>So when you tell <code>kubectl</code> to use <code>--auth-provider=oidc</code>, that's what you're using. The <code>idp-issuer-url</code> will point at your Identity Provider's HTTPS URL. They give different examples of implementations of this. CoreOS has one called <a href="https://github.com/coreos/dex" rel="nofollow noreferrer">Dex</a>.</p> <p>Their repo has some examples under: <code>./examples</code></p> <p>An example of using <a href="https://github.com/SEJeff/k8s-ldap" rel="nofollow noreferrer">LDAP connector plugin for dex is here</a></p> <p>For more information about how Authentication is done in Kubernetes (e.g.: "What is authn?" "What is authz", etc...), there is a <a href="https://youtu.be/i75ysFcvCkk" rel="nofollow noreferrer">great presentation by Eric Chiang here</a>.</p> <p>So to answer your question:</p> <h3>Q: how i can get values for:</h3> <ol> <li>Issuer URL</li> <li>Refresh token</li> <li>Id-token</li> <li>Extra-scope</li> </ol> <h3>A: <a href="https://github.com/SEJeff/k8s-ldap" rel="nofollow noreferrer">Set up Dex</a>, then authenticate to it using the "Login" app (with some backend such as LDAP in example). Then it redirects you to a page with a <code>~/.kube/config</code> file with a <code>user</code> which has all of these items.</h3>
<p>Is there a simple way to get the current attached volume state (think "space left on disk", or the opposite) ? Using stackdriver, this info is not provided. Not to be found neither within the gcloud console. I was wondering if this was accessible besides connecting to the instance and check it manually</p>
<p>Check <a href="https://github.com/prometheus/prometheus" rel="nofollow noreferrer">Prometheus</a>.</p> <p>At the moment this is available through Kubernetes, but only from version 1.8 on. As you can check <a href="https://github.com/coreos/prometheus-operator/issues/485" rel="nofollow noreferrer">here</a>, there is a whole topic regarding this feature request and <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/stats/volume_stat_calculator.go#L97" rel="nofollow noreferrer">the code</a> that implements it.</p> <p>Kubernetes 1.8 exposes the following metrics for Prometheus:</p> <ul> <li>kubelet_volume_stats_available_bytes</li> <li>kubelet_volume_stats_capacity_bytes </li> <li>kubelet_volume_stats_inodes</li> <li>kubelet_volume_stats_inodes_free </li> <li>kubelet_volume_stats_inodes_used</li> <li>kubelet_volume_stats_used_bytes</li> </ul> <p>Source <a href="https://stackoverflow.com/questions/44718268/how-to-monitor-disk-usage-of-kubernetes-persistent-volumes">link</a> for the metrics.</p>
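<p>Once those metrics are scraped, a PromQL query along these lines gives the fraction of space still free per claim (and can be turned into an alert when it drops below a threshold):</p> <pre><code># fraction of each PersistentVolumeClaim still available
kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes

# alert-style expression: less than 10% free
(kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes) &lt; 0.1
</code></pre>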
<p>I have read the docs, but seem not able to understand differences between Mixer and Pilot. Is there any overlap? I mean I would like to draw a definite boundary between them to understand their responsibilities and with respect to their communication with the envoy proxies in the mesh. Please add examples of different use-cases if possible.</p>
<p>The Istio Service Mesh provides the following functionalities:</p> <ol> <li>Routing. For example 90% of the traffic goes to the version 1 of a microservice and the remaining 10% goes to the version 2. Or some specific requests go to the version 1 and all the others to the version 2, according to some condition. And also: a) rewrite b) redirect </li> <li>Support for microservices development, deployment and testing: a) timeouts b) retries c) circuit breakers d) load balancing e) fault injection for testing </li> <li>Reporting: Logging, Distributed Tracing, Telemetry</li> <li>Policy enforcement</li> <li>Secure communication between micro services and strong identity.</li> </ol> <p>Pilot is responsible for the items 1 and 2. Mixer is responsible for the items 3 and 4. Citadel (previously CA, previously Auth) is responsible for the item 5. </p> <p>Envoy, the sidecar proxy, gets its routing and configuration tables from Pilot to implement the items 1 and 2. Envoy reports to Mixer about each request, to implement the item 3. Envoy asks Mixer to allow or forbid requests, to implement the item 4. Envoy gets certificates from Citadel to implement the item 5. </p>
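<p>As an illustration of item 1 (Pilot's job), a weighted routing rule in the current <code>VirtualService</code> API looks roughly like this (older Istio releases expressed the same idea with <code>RouteRule</code>; the service name and subsets are placeholders):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
</code></pre>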
<p>I have a Django app and use Celery for background tasks. </p> <p>For deploying, I use Docker with Kubernetes.</p> <p>Deployment is automatized with Jenkins.</p> <p>All is good, however I feel like this can be drastically optimized.</p> <p>The problem is that Jenkins builds almost the same images for a django app and 5 celery workers. The only difference they have is the entry point. Django app image starts gunicorn, celery container starts, well, celery.</p> <p>What is the best practice to build almost the same images?</p> <p>I would optimally like to build the same image several times and indicate the entrypoint during the build process.</p> <p>Thank you for any help.</p>
<p>An option that comes to mind is to have the same entrypoint for all the images and then, using environment variables for example, have logic in the entrypoint code that launches one program or the other. Here's an extremely simple example.</p> <pre><code>if [ "$ROLE" == "worker" ]; then program_1; else program_2; fi </code></pre> <p>Another option could be using the same entrypoint and then selecting the exact program with the <code>command</code> argument. Find an example here: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a>. An example Dockerfile and app-entrypoint.sh here:</p> <p><a href="https://github.com/bitnami/bitnami-docker-wordpress/blob/master/4/Dockerfile" rel="nofollow noreferrer">https://github.com/bitnami/bitnami-docker-wordpress/blob/master/4/Dockerfile</a> <a href="https://github.com/bitnami/bitnami-docker-wordpress/blob/master/4/rootfs/app-entrypoint.sh" rel="nofollow noreferrer">https://github.com/bitnami/bitnami-docker-wordpress/blob/master/4/rootfs/app-entrypoint.sh</a></p>
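<p>If you go with the <code>command</code> approach, the relevant part of the two pod specs could look something like the sketch below: one image, two commands. The image name and the gunicorn/celery invocations are assumptions, adjust them to your project.</p> <pre><code># web deployment - pod template fragment (image name is hypothetical)
containers:
- name: web
  image: registry.example.com/myapp:latest
  command: ["gunicorn", "myproject.wsgi", "--bind", "0.0.0.0:8000"]
---
# worker deployment - same image, different command
containers:
- name: worker
  image: registry.example.com/myapp:latest
  command: ["celery", "worker", "-A", "myproject"]
</code></pre> <p>That way Jenkins only has to build and push a single image, and each Deployment just overrides the command.</p>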
<p>In Kubernetes, it is possible to make a service running in the cluster externally accessible by running <code>kubectl expose deployment</code>. Why a deployment, as opposed to a service, is beyond my simpleton's comprehension. That aside, I would also like to be able to undo this operation afterwards. Think of a scenario where I need to get access to a service that is normally only accessible inside the cluster, for debugging purposes, and then to restore the original situation.</p> <p>Is there any way of doing this short of deleting the deployment and creating it afresh?</p> <hr> <p>PS. Actually, deleting the service and deployment doesn't help. Re-creating the service and deployment with the same name will result in the service being exposed.</p>
<p>Assuming you have a deployment called hello-world, you can do a kubectl expose as follows:</p> <p><code>kubectl expose deployment hello-world --type=ClusterIP --name=my-service</code></p> <p>This will create a service called my-service, which makes your deployment accessible for debugging, as you described.</p> <p>To display information about the service:</p> <p><code>kubectl get services my-service</code></p> <p>To delete this service when you are done debugging:</p> <p><code>kubectl delete service my-service</code></p> <p>Now your deployment is no longer exposed.</p>
<p>We're running <code>gitlab-runner</code> instances via kubernetes executors inside a kubernetes cluster (let's call it KUBE01). These instances build and deploy to the kubernetes cluster, and runners are given an environment variable <code>KUBECONFIG</code> (pointing to a config file) as follows:</p> <pre><code>$ cat $KUBECONFIG --- apiVersion: v1 clusters: - name: gitlab-deploy cluster: server: https://KUBE01:6443 certificate-authority-data: &lt;CA_B64&gt; contexts: - name: gitlab-deploy context: cluster: gitlab-deploy namespace: dev user: gitlab-deploy current-context: gitlab-deploy kind: Config users: - name: gitlab-deploy user: token: gitlab-deploy-token-&lt;secret&gt; </code></pre> <p>We can verify that <code>kubectl</code> is actually using the above <code>gitlab-deploy</code> context:</p> <pre><code>$ kubectl config current-context gitlab-deploy </code></pre> <p>However, attempting to actually affect KUBE01 fails:</p> <pre><code>$ kubectl get pods error: You must be logged in to the server (Unauthorized) </code></pre> <p>On my machine we can verify that the namespace and service account tokens are correct:</p> <pre><code>$ kubectl get sa/gitlab-deploy -o yaml --namespace dev apiVersion: v1 kind: ServiceAccount metadata: &lt;snip metadata&gt; name: gitlab-deploy namespace: dev secrets: - name: gitlab-deploy-token-&lt;secret&gt; </code></pre> <p>I cannot find any documentation about this other than <em>it should just work</em>, and all forum/stack exchange issues I've found about this error message are bad user/pass combinations; but as far as I can tell, my token, namespace, and cluster are all correct.</p>
<p>After a more detailed reading of the <a href="https://kubernetes.io/docs/admin/authentication/#service-account-tokens" rel="noreferrer">kubernetes authentication documentation</a> and some trial and error, I have found the issue.</p> <p>The "secret tokens" listed inside service account objects are not the <em>actual</em> secret tokens that we use for service account authentication; rather, they are pointers to secret objects which, in turn, hold the real (bearer) tokens. The secret can be found as follows:</p> <pre><code>$ kubectl get secret gitlab-deploy-token-&lt;secret&gt; -o yaml apiVersion: v1 data: ca.crt: &lt;CA_B64&gt; namespace: ZGV2 token: &lt;BEARER_TOKEN_B64&gt; kind: Secret metadata: &lt;snip metadata&gt; name: gitlab-deploy-token-&lt;secret&gt; namespace: dev type: kubernetes.io/service-account-token </code></pre> <p>The bearer token is, of course, base64 encoded since it lives in a secret; but oddly enough, we don't base64 encode it in our <code>KUBECONFIG</code> file (like we do with e.g. the CA). Thus, what we had to do was find the above bearer token, decode it from base64, and add it as our token under the <code>gitlab-deploy</code> user. Then authentication works.</p>
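<p>For anyone wanting a one-liner, something along these lines should print the decoded bearer token directly (the secret name is the placeholder from above, and <code>base64 --decode</code> may be <code>base64 -d</code> on your system):</p> <pre><code>kubectl get secret gitlab-deploy-token-&lt;secret&gt; -n dev -o jsonpath='{.data.token}' | base64 --decode </code></pre>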
<p>Is there a way to limit the number of deployments a Kubernetes cluster will roll out at once? With rolling deployments and 100% uptime, it's possible that updating all deployments at once could overload the nodes.</p> <p>I know it is possible to limit the number of pods deployed per namespace, but I was wondering if it is also possible to limit simultaneous deployments in a similar way: say, a maximum of 10 deployments at once.</p> <p>I could probably script a limit to the number of deployments I send to the k8s API at once, but it would be nice if there was a setting I could use instead.</p>
<p>The first thing coming to my mind is to use resource <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-types" rel="nofollow noreferrer">limits and requests</a> to make sure you're not overloading the cluster. This way, even if you update all the deployments, some pods will be in "pending" state until other deployments are successfully updated.</p>
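<p>For instance, a container spec fragment with requests and limits might look like this (the numbers are placeholders, size them for your workload):</p> <pre><code>containers:
- name: app
  image: myapp:latest   # hypothetical image
  resources:
    requests:
      cpu: "250m"
      memory: "256Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"
</code></pre> <p>With requests in place, the scheduler only puts the new pods created by a rolling update onto nodes that actually have room for them, so updating many deployments at once results in pending pods rather than overloaded nodes.</p>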
<p>I created a cluster using kops. It worked fine and the cluster is healthy. I can see my nodes using kubectl and have created some deployments and services. I tried adding a node using "kops edit ig nodes" and got an error "cluster not found". Now I get that error for all kops commands:</p> <pre><code>kops validate cluster Using cluster from kubectl context: &lt;clustername&gt; cluster "&lt;clustername&gt;" not found </code></pre> <p>So my question is: where does kops look for clusters, and how do I configure it to see my cluster?</p>
<p>My KOPS_STATE_STORE environment variable got messed up. I corrected it to be the correct s3 bucket and everything is fine.</p> <pre><code>export KOPS_STATE_STORE=s3://correctbucketname </code></pre>
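<p>In case it helps someone else, this is roughly how I verified it afterwards (the bucket name is a placeholder):</p> <pre><code>export KOPS_STATE_STORE=s3://correctbucketname
kops get clusters
kops validate cluster
</code></pre>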
<p>In Kubernetes, the master node provides the kube-apiserver process to accept REST API requests. What about ICp? Can we use the curl command to quickly test k8s REST APIs on the ICp master node as well?</p>
<p>The answer is yes. But first you may need to pay attention to the default values of --insecure-port and --secure-port. By default, if you didn't change them in the config.yaml file, ICp uses the ports below to accept REST requests:</p> <p>--insecure-port=8888</p> <p>--secure-port=8001</p> <pre><code>netstat -anp|grep 8888 </code></pre> <p>or</p> <pre><code>netstat -anp|grep 8001 </code></pre> <p>Or you can</p> <pre><code>ps -ef|grep apiserver </code></pre> <p>The result is something like:</p> <pre><code>root 5462 5442 9 Jan29 ? 22:48:09 /hyperkube apiserver --secure-port=8001 --bind-address=0.0.0.0 --advertise-address=10.0.14.94 --insecure-port=8888 --insecure-bind-address=127.0.0.1 ...... </code></pre> <p>Once you find the port, on the master node you can issue curl quickly, first via the insecure port:</p> <pre><code>curl http://localhost:8888/api </code></pre> <p>The result is something like:</p> <pre><code>{ "kind": "APIVersions", "versions": [ "v1" ], "serverAddressByClientCIDRs": [ { "clientCIDR": "0.0.0.0/0", "serverAddress": "10.0.14.94:8001" } ] } </code></pre> <p>Further calls to /api/v1, /api/v1/pods, /api/v1/services and so on work the same way.</p> <p>But you cannot do the same from other nodes. On another node you may have to use the secure port, with the <strong>-k</strong> parameter to ignore the certificate. On a client or another node:</p> <pre><code>curl -k https://10.0.14.94:8001/api </code></pre> <p>The result should be the same unless you specify the CA certificate.</p>
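<p>For example, listing the pods in the default namespace through the insecure port would look like this (the path follows the standard Kubernetes API layout):</p> <pre><code>curl http://localhost:8888/api/v1/namespaces/default/pods </code></pre>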
<p>My goal is to filter access by IP address of an angular app deployed on Kubernetes Engine served by nginx through a GCE ingress.</p> <p>But on my nginx the remote_addr is not right.</p> <p><strong>$LB_IP is the ip defined here : kubernetes.io/ingress.global-static-ip-name: app-angular</strong></p> <p>I'm using set_real_ip_from on nginx to set the ip from X-Forwarded-For </p> <pre><code>set_real_ip_from $LB_IP; real_ip_header X-Forwarded-For; </code></pre> <p>The original ip comes in the X-Forwarded-For header as expected from the google doc : <a href="https://cloud.google.com/compute/docs/load-balancing/http/#components" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/load-balancing/http/#components</a></p> <p>I can see the X-Forwarded-For contains the $CLIENT_IP but the remote_addr is not correct, and by the way my filter on IP is not working. Any idea ?</p> <p>My nginx logs :</p> <blockquote> <p>10.40.40.40 - - [07/Feb/2018:11:29:48 +0000] "GET /styles.bundle.css HTTP/1.1" 200 35908 "<a href="http://MY_URL/home" rel="nofollow noreferrer">http://MY_URL/home</a>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36" "$CLIENT_IP, $LB_IP"</p> </blockquote> <p>But i want : </p> <blockquote> <p>$CLIENT_IP - - [07/Feb/2018:11:29:48 +0000] "GET /styles.bundle.css HTTP/1.1" 200 35908 "<a href="http://MY_URL/home" rel="nofollow noreferrer">http://MY_URL/home</a>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36" "$CLIENT_IP, $LB_IP"</p> </blockquote> <p><strong>My Ingress configuration :</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.global-static-ip-name: app-angular spec: rules: - host: MY_URL http: paths: - backend: serviceName: app-backend servicePort: 80 </code></pre> <p><strong>My backend configuration :</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: app-backend spec: type: NodePort selector: app: app-angular ports: - port: 80 targetPort: 80 </code></pre> <p><strong>My deployment configuration :</strong></p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: app-angular spec: replicas: 2 revisionHistoryLimit: 2 template: metadata: labels: app: app-angular tier: frontend spec: containers: - name: app-angular image: gcr.io/MY_PROJECT/app-angular:MY_TAG imagePullPolicy: Always ports: - containerPort: 80 </code></pre> <p><strong>My nginx configuration :</strong></p> <pre><code>server { listen 80; root /usr/share/nginx/html; location / { set_real_ip_from $LB_IP; real_ip_header X-Forwarded-For; allow 130.211.0.0/22; #Google IP allow $CLIENT_IP; # FILTERED IP allow 10.0.0.0/8; #internal network deny all; index index.html index.htm; try_files $uri $uri/ /index.html =404; } } </code></pre> <p>Any idea ?</p>
<p>There are two solutions (thanks to gcbirzan for helping me on the GCP Slack):</p> <p><strong>1) Update my nginx configuration with the correct IPs from the IP address ranges:</strong></p> <p>The IP address ranges can be found here: <a href="https://console.cloud.google.com/networking/networks/list" rel="nofollow noreferrer">https://console.cloud.google.com/networking/networks/list</a>. You can just add set_real_ip_from for one region or for all needed regions. Don't forget real_ip_recursive on;</p> <pre><code>server { listen 80; root /usr/share/nginx/html; location / { set_real_ip_from 10.128.0.0/20; ... real_ip_header X-Forwarded-For; real_ip_recursive on; allow 130.211.0.0/22; #Google IP allow $CLIENT_IP; # FILTERED IP allow 10.0.0.0/8; #internal network deny all; index index.html index.htm; try_files $uri $uri/ /index.html =404; } } </code></pre> <p><strong>2) Update my backend configuration with:</strong></p> <p>externalTrafficPolicy: Local</p> <pre><code>apiVersion: v1 kind: Service metadata: name: app-backend spec: type: NodePort selector: app: app-angular ports: - port: 80 targetPort: 80 externalTrafficPolicy: Local </code></pre> <p>Then update the nginx configuration:</p> <p>Now the IP shown in $remote_addr will be the load balancer IP for your client requests, or one of the Google infra IPs: 130.211.0.0/22, 35.191.0.0/16</p> <p>Don't forget real_ip_recursive on;</p> <pre><code>server { listen 80; root /usr/share/nginx/html; location / { set_real_ip_from $LB_IP; set_real_ip_from 130.211.0.0/22; set_real_ip_from 35.191.0.0/16; real_ip_header X-Forwarded-For; real_ip_recursive on; allow 130.211.0.0/22; #Google IP allow 35.191.0.0/16; #Google IP allow $CLIENT_IP; # FILTERED IP allow 10.0.0.0/8; #internal network deny all; index index.html index.htm; try_files $uri $uri/ /index.html =404; } } </code></pre>
<p>I have a kubernetes cluster with a gitlab-runner 10.3.0 and kubernetes executor. There is no <code>cache_dir</code> defined in the runner's config.toml-file. Note that this is different that a docker executor, so the volume-solutions do not apply.</p> <p>In a <code>.gitlab-ci.yml</code>, I configured a job to use the cache:</p> <pre><code>build: cache: key: "${PROJECT_NAME}" paths: - "node_modules/" script: - ls node_modules/ || echo "cache not there" - npm i - npm build - ... </code></pre> <p>When I run this, I see the cache being pulled and created:</p> <pre><code>Cloning repository for some-branch with git depth set to 1... Cloning into '/group/projectname'... Checking out d03baa31 as some-branch... Skipping Git submodules setup Checking cache for projectname... Successfully extracted cache $ docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY // // ...work being done here... // Creating cache projectname... node_modules/: found 24278 matching files Created cache Job succeeded </code></pre> <p>However, when I push another commit to this branch, the <code>ls node_modules/</code> still does not find the cache.</p> <p>I searched the documentation and did not find any information on how to activate the cache. The gitlab-runner-pod does not have any of the supposedly cached files there as well and <a href="https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section" rel="noreferrer">according to the documentation</a>, a <code>cache_dir</code> in the config is not used by the kubernetes executor.</p> <p>But according to <a href="https://docs.gitlab.com/runner/executors/#compatibility-chart" rel="noreferrer">this feature page</a>, the kubernetes executor <em>does</em> support cache.</p> <p>So how to do this?</p>
<p>Due to the distributed nature of Kubernetes, you will need to configure a central cache location (typically, in the form of a S3-compatible object storage like <a href="https://aws.amazon.com/s3" rel="nofollow noreferrer">AWS S3</a> or <a href="https://www.minio.io/" rel="nofollow noreferrer">Minio</a>). The reason behind this is explained in the <a href="https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching" rel="nofollow noreferrer">Gitlab runner documentation</a> (emphasis mine):</p> <blockquote> <p>To speed up your builds, GitLab Runner provides a cache mechanism where selected directories and/or files are saved and shared between subsequent builds.</p> <p>This is working fine when builds are run on the same host, <strong>but when you start using the Runners autoscale feature, most of your builds will be running on a new (or almost new) host, which will execute each build in a new Docker container. In that case, you will not be able to take advantage of the cache feature.</strong></p> <p>To overcome this issue, together with the autoscale feature, the distributed Runners cache feature was introduced.</p> <p>It uses any S3-compatible server to share the cache between used Docker hosts. When restoring and archiving the cache, GitLab Runner will query the S3 server and will download or upload the archive.</p> </blockquote> <p>For this, you can use the <a href="https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnerscache-section" rel="nofollow noreferrer"><code>[runners.cache] section</code></a> in the runner configuration:</p> <pre><code>[runners.cache] Type = &quot;s3&quot; ServerAddress = &quot;s3.amazonaws.com&quot; AccessKey = &quot;AMAZON_S3_ACCESS_KEY&quot; SecretKey = &quot;AMAZON_S3_SECRET_KEY&quot; BucketName = &quot;runners&quot; BucketLocation = &quot;eu-west-1&quot; Insecure = false Path = &quot;path/to/prefix&quot; Shared = false </code></pre> <p><em>Edit by OP:</em> <a href="https://docs.gitlab.com/runner/install/autoscaling.html#install-the-cache-server" rel="nofollow noreferrer">Installation instructions for Minio for gitlab-ci</a></p>
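<p>If you go with Minio instead of AWS S3, the cache section would look roughly like this (the server address, credentials and bucket name are placeholders for your own Minio deployment):</p> <pre><code>[runners.cache]
  Type = "s3"
  ServerAddress = "minio.example.com:9000"
  AccessKey = "MINIO_ACCESS_KEY"
  SecretKey = "MINIO_SECRET_KEY"
  BucketName = "runner-cache"
  Insecure = true   # set to false if Minio is served over TLS
  Shared = true
</code></pre>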
<p>I am trying to deploy a Tomcat docker container in Kubernetes and start the application deployed in it.</p> <p>I have installed Tomcat inside the docker container and copied my application war file into the Tomcat webapps folder. When I ssh into the docker container and try to start the Tomcat server by running the catalina.sh file, the Tomcat server starts deploying my app.war file and stays there; it never finishes starting up. I tried looking at the logs, but they did not help much in tracking down this issue. I tried deploying the same war file in Docker, and it worked perfectly there.</p> <p>Note: When I restart the minikube, I can see that Tomcat tries to deploy the war and also tries to start the server. But when I restart Tomcat by navigating to the /tomcat/bin folder and running catalina.sh start, the Tomcat server just deploys the war file and does not start up the server.</p> <p>Here is what the catalina.out looks like:</p> <pre><code>Jan 30, 2018 4:06:09 AM org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-apr-8080"] Jan 30, 2018 4:06:09 AM org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-apr-8009"] Jan 30, 2018 4:06:09 AM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 854 ms Jan 30, 2018 4:06:09 AM org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina Jan 30, 2018 4:06:09 AM org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.82 Jan 30, 2018 4:06:09 AM org.apache.catalina.startup.HostConfig deployWAR INFO: Deploying web application archive /usr/local/tomcat/webapps/app.war </code></pre> <p>I appreciate any help on this.</p>
<p>This issue was happening because of a lack of memory on the node. In my case it is Minikube which is acting as the node. After I allotted more memory to minikube, I was able to deploy and start my application from the Tomcat docker container.</p>
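<p>If anyone hits the same thing: the minikube VM's memory is fixed at creation time, so one way to raise it is to recreate the VM with a larger allocation (the 4096 MB figure is just an example, and note that deleting recreates the cluster from scratch):</p> <pre><code>minikube stop
minikube delete
minikube start --memory 4096
</code></pre>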