<p>I have installed kubeadm. Heapster shows me metrics, but the HPA does not.</p> <p><strong>kubectl get hpa</strong></p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE httpd Deployment/httpd &lt;unknown&gt; / 2% 2 5 2 19m </code></pre> <p><strong>kubeadm version</strong></p> <pre><code>kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p><strong>docker version</strong></p> <p>Client:</p> <pre><code>Version: 1.11.2 API version: 1.23 Go version: go1.5.4 Git commit: b9f10c9 Built: Wed Jun 1 22:00:43 2016 OS/Arch: linux/amd64 </code></pre>
<p>You may have to enable a metrics-server. Heapster is now deprecated. Also make sure you have a Kubernetes version greater than 1.7. You can check this by typing <code>kubectl get nodes</code>.</p> <p>You can enable the metrics server by looking at the minikube addons.</p> <p><code>minikube addons list</code> gives you the list of addons.</p> <p><code>minikube addons enable metrics-server</code> enables metrics-server.</p> <p>Wait a few minutes, then if you type <code>kubectl get hpa</code> the percentage for the <code>TARGETS &lt;unknown&gt;</code> should appear.</p>
<p>I've followed the <a href="http://rahmonov.me/posts/zero-downtime-deployment-with-kubernetes/" rel="nofollow noreferrer">http://rahmonov.me/posts/zero-downtime-deployment-with-kubernetes/</a> blog and created two Docker images with an index.html returning 'Version 1 of an app' and 'Version 2 of an app'. What I want to achieve is a zero-downtime release. I'm using</p> <pre><code>kubectl apply -f mydeployment.yaml </code></pre> <p>with <code>image: mynamespace/nodowntime-test:v1</code> inside, to deploy the v1 version to k8s, and then run:</p> <pre><code>while true do printf "\n---------------------------------------------\n" curl "http://myhosthere" sleep 1s done </code></pre> <p>So far everything works. After a short time curl returns 'Version 1 of an app'. Then I apply the same k8s deployment file with <code>image: mynamespace/nodowntime-test:v2</code>. And well, it works, but there is one ( always one ) <strong>Gateway Timeout</strong> response between v1 and v2. So it's not really a no-downtime release ;) It is much better than without RollingUpdate, but not perfect.</p> <p>I'm using the <code>RollingUpdate</code> strategy and a <code>readinessProbe:</code></p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: nodowntime-deployment spec: replicas: 1 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 0 maxSurge: 1 selector: matchLabels: app: nodowntime-test template: metadata: labels: app: nodowntime-test spec: containers: ... readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 5 </code></pre> <p>Can I do it better? Is it some issue with syncing all of that with the ingress controller? I know I can tweak it by using <code>minReadySeconds</code> so the old and new pods overlap for some time, but is that the only solution?</p>
<p>I've recreated the mentioned experiment and changed the number of requests to something close to 30 per second by starting three simultaneous processes of the following:</p> <pre><code> while true do curl -s https://&lt;NodeIP&gt;:&lt;NodePort&gt;/ -m 0.1 --connect-timeout 0.1 | grep Version || echo "fail" done </code></pre> <p>After editing the deployment and changing the image version several times, there was no packet loss at all during the transition process. I even caught a short moment of requests being served by both images at the same time.</p> <pre><code> Version 1 of my awesome app! Money is pouring in! Version 1 of my awesome app! Money is pouring in! Version 1 of my awesome app! Money is pouring in! Version 2 of my awesome app! More Money is pouring in! Version 1 of my awesome app! Money is pouring in! Version 1 of my awesome app! Money is pouring in! Version 2 of my awesome app! More Money is pouring in! Version 2 of my awesome app! More Money is pouring in! Version 2 of my awesome app! More Money is pouring in! </code></pre> <p>Therefore, if you send the request to the service directly, it will work as expected.</p> <p>The <a href="https://www.bountysource.com/issues/44399804-gateway-timeout-when-rolling-update-a-scaled-service-in-docker-swarm-mode" rel="nofollow noreferrer">“Gateway Timeout”</a> error is a reply from the Traefik proxy. It opens a TCP connection to the backend through a set of iptables rules.<br> When you do the RollingUpdate, the iptables rules change, but Traefik doesn't know that, so the connection is still considered open from Traefik's point of view. After the first unsuccessful attempt to go through the now-nonexistent iptables rule, Traefik reports "Gateway Timeout" and closes the TCP connection. On the next try, it opens a new connection to the backend through the new iptables rule, and everything goes well again.</p> <p>It could be fixed by <a href="https://docs.traefik.io/configuration/commons/#retry-configuration" rel="nofollow noreferrer">enabling retries</a> in Traefik.</p> <pre><code># Enable retry sending request if network error [retry] # Number of attempts # # Optional # Default: (number servers in backend) -1 # # attempts = 3 </code></pre> <p><strong>Update:</strong></p> <p>We finally worked around it without using the 'retry' feature of Traefik, which could potentially require idempotent processing on all services (which is good to have anyway, but we could not afford to force all projects to do that). What you need is the Kubernetes RollingUpdate strategy + a configured readinessProbe + graceful shutdown implemented in your app.</p>
<p>I have installed kube v1.11. Since heapster is deprecated, I am using metrics-server. The <code>kubectl top node</code> command works.</p> <p>The Kubernetes dashboard is looking for the heapster service. What are the steps to configure the dashboard to use the metrics-server service?</p> <pre><code>2018/08/09 21:13:43 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds. </code></pre> <p>Thanks SR</p>
<p>This must be the week for asking that question; it seems that whatever is deploying heapster is omitting the <code>Service</code>, which one can fix <a href="https://stackoverflow.com/questions/51641057/how-to-get-the-resource-usage-of-a-pod-in-kubernetes/51645461#comment90363934_51645461">as described here</a> -- or the tl;dr is just: create the <code>Service</code> named <code>heapster</code> and point it at your heapster pods.</p>
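<p>For reference, a minimal sketch of such a <code>Service</code> might look like the following; the namespace, selector and target port are assumptions (the stock heapster deployment labels its pods <code>k8s-app: heapster</code> and listens on 8082), so match them to whatever your heapster pods actually use:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: heapster
  namespace: kube-system   # assumption: wherever your heapster pods run
spec:
  selector:
    k8s-app: heapster      # assumption: must match your heapster pod labels
  ports:
  - port: 80
    targetPort: 8082       # assumption: heapster's default listen port
</code></pre>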
<p>This is the helm and tiller version:</p> <pre><code>&gt; helm version --tiller-namespace data-devops Client: &amp;version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"} </code></pre> <p>The previous helm installation failed:</p> <pre><code>helm ls --tiller-namespace data-devops NAME REVISION UPDATED STATUS CHART NAMESPACE java-maven-app 1 Thu Aug 9 13:51:44 2018 FAILED java-maven-app-1.0.0 data-devops </code></pre> <p>When I tried to install it again using this command, it failed:</p> <pre><code>helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install \ --namespace data-devops \ --values helm-chart/values/stg-stable.yaml Error: UPGRADE FAILED: "java-maven-app" has no deployed releases </code></pre> <p>Is the <code>helm upgrade --install</code> command going to fail, if the previous installation failed? I am expecting it to force install. Any idea?</p>
<p>This is a long-standing Helm issue. It only affects the situation where the first install of a chart fails; up to Helm 2.7 it required a manual delete of the failed release before correcting the issue and installing again. However, there is now a <code>--force</code> flag available to address this case - <a href="https://github.com/helm/helm/issues/4004" rel="noreferrer">https://github.com/helm/helm/issues/4004</a></p>
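<p>As a hedged illustration only (the release and namespace names are taken from the question, and whether <code>--force</code> covers this case depends on your exact Helm 2 release), the two usual ways out look like this:</p> <pre><code># retry the upgrade, forcing the failed release to be replaced
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install --force \
  --namespace data-devops \
  --values helm-chart/values/stg-stable.yaml

# or, on Helm versions where --force does not handle a failed first install,
# purge the failed release and install again
helm delete --purge java-maven-app --tiller-namespace data-devops
</code></pre>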
<p>With a new release of our product, we want to move to new technologies (Kubernetes) so that we can take advantage of its services. We have a local Kubernetes installation running in our infra. We have dockerized our applications and now we want to use the images with Kubernetes to make a cluster of pods, but we are stuck with the Docker registry, as our customer does not want to have any public/private Docker repository (registry) where we can upload these images. We have tried with <code>docker save</code> and <code>docker load</code>, but no luck (error: <code>portal-66d9f557bb-dc6kq 0/1 ImagePullBackOff</code>). Is it at all possible to have some filesystem from which we can access these images? Any other alternative is welcome if it solves our problem (no private/public repository/registry).</p>
<p>A Docker registry of some sort is all but a requirement to run Kubernetes. Paid Docker Hub supports private images; Google and Amazon both have hosted registry products (GCR and ECR respectively); there are third-party registries; or you can deploy the <a href="https://hub.docker.com/_/registry/" rel="nofollow noreferrer">official registry image</a> locally.</p> <p>There's an alternative path where you <a href="https://docs.docker.com/engine/reference/commandline/save/" rel="nofollow noreferrer"><code>docker save</code></a> every private image you reference in any Kubernetes pod spec, then <a href="https://docs.docker.com/engine/reference/commandline/load/" rel="nofollow noreferrer"><code>docker load</code></a> it on every single node. This has obvious scale and maintenance problems (whenever <em>anybody</em> updates <em>any</em> image you need to redeploy it by hand to <em>every</em> node). But if you really need to try this, make sure your pod specs specify <code>imagePullPolicy: Never</code> to avoid trying to fetch things from registries. (If something isn't present on a node the pod will just fail to start up.)</p> <p>The <a href="https://github.com/docker/distribution" rel="nofollow noreferrer">registry</a> is "just" an open-source (Go) HTTP REST service that implements the Docker registry API, if that helps your deployment story.</p>
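<p>For what it's worth, a sketch of the manual distribution path described above (the image name is illustrative):</p> <pre><code># on the machine where the image was built
docker save -o portal.tar mynamespace/portal:1.0

# copy portal.tar to every node, then on each node:
docker load -i portal.tar
</code></pre> <p>With the image pre-loaded on every node and <code>imagePullPolicy: Never</code> in the pod spec, the kubelet uses the local copy and never contacts a registry.</p>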
<p>There are 6 kinds of namespaces in linux: <code>Network, UTS, Users, Mount, IPC, Pid</code>. I know that all the containers share the same network namespace with the pause container in a Kubernetes pod. And by default, different containers have different PID namespaces because they have different init process. However, how about other namespaces and why?</p>
<p>According to <a href="https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/" rel="noreferrer">this article</a>:</p> <blockquote> <p>Containers in a Pod run on a “logical host”; they use the same network namespace (in other words, the same IP address and port space), and the same IPC namespace.</p> <p>Containers in a Pod share the same IPC namespace, which means they can also communicate with each other using standard inter-process communications such as SystemV semaphores or POSIX shared memory.</p> <p>Containers in a Pod are accessible via “localhost”; they use the same network namespace. Also, for containers, the observable host name is a Pod’s name. Because containers share the same IP address and port space, you should use different ports in containers for incoming connections. In other words, applications in a Pod must coordinate their usage of ports.</p> </blockquote> <p>You can also <a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/" rel="noreferrer">enable sharing Process namespace</a> between containers in a Pod by specifying <code>v1.PodSpec.shareProcessNamespace: true</code>.</p>
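<p>A minimal pod sketch enabling a shared PID namespace between its containers (the image names are placeholders, and the field requires a cluster version in which this feature is available):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true   # containers below can see each other's processes
  containers:
  - name: app
    image: nginx                # placeholder image
  - name: sidecar
    image: busybox              # placeholder image
    command: ["sleep", "3600"]
</code></pre>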
<p>I have set up a name-based ingress controller, but it doesn't seem to work for anything other than <code>/</code>.</p> <p>So <code>http://metabase.domain.com</code> works but <code>http://metabase.domain.com/style/app.css</code> does not.</p> <p>This is my config:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: namespace: domain annotations: kubernetes.io/ingress.global-static-ip-name: "domain" name: domain-ingress spec: rules: - host: metabase.domain.com http: paths: - path: / backend: serviceName: metabase servicePort: 80 - host: jenkins.domain.com http: paths: - path: / backend: serviceName: jenkins servicePort: 80 </code></pre> <p>From the nginx.conf in the controller, everything looks normal too. For some reason the nginx access and error logs are also empty, so I can't find anything there either.</p>
<p>As you mentioned, there is no error in the log files, and everything looks normal from your perspective. I may suggest to tune up ingress using annotations tags. I've checked <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#rewrite" rel="noreferrer">documentation of ingress-nginx</a> and found that below annotations may help a bit.</p> <p>In some scenarios, the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite, any request will return 404. Set the annotation </p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target </code></pre> <p>to the path expected by the service.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: / name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something </code></pre> <p>If the Application Root is exposed in a different path and needs to be redirected, set the annotation</p> <pre><code>nginx.ingress.kubernetes.io/app-root </code></pre> <p>to redirect requests for /.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /app1 name: approot namespace: default spec: rules: - host: approot.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / </code></pre> <p>If the application contains relative links, it is possible to add an additional annotation </p> <pre><code>nginx.ingress.kubernetes.io/add-base-url </code></pre> <p>that will prepend a base tag in the header of the returned HTML from the backend.</p>
<p>How do I assign a variable within a <strong>helmfile</strong>?</p> <pre><code>context: example.com # kube-context (--kube-context) releases: # Published chart example - name: controller-pod-nginx # Name of this release namespace: ingress-nginx # Target namespace chart: stable/nginx-ingress # The chart being installed to create this release, referenced by `repository/chart` syntax set: # Values (--set) - name: rbac.create value: true - name: controller.service.annotations value: 'service.beta.kubernetes.io/aws-load-balancer-ssl-ports:https' </code></pre> <h3>Error message</h3> <pre><code> helmfile -f deploy_cp_ns_ingress-nginx.yaml sync exec: helm repo add roboll http://roboll.io/charts --kube-context example.com "roboll" has been added to your repositories exec: helm repo update --kube-context example.com Hang tight while we grab the latest from your chart repositories... ...Skip local chart repository ...Successfully got an update from the "roboll" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ⎈ Happy Helming!⎈ exec: helm upgrade --install controller-pod-nginx stable/nginx-ingress --namespace ingress-nginx --set rbac.create=true,controller.service.annotations=service.beta.kubernetes.io/aws-load-balancer-ssl-ports:https --kube-context example.com Error: UPGRADE FAILED: YAML parse error on nginx-ingress/templates/controller-service.yaml: error unmarshaling JSON: json: cannot unmarshal string into Go struct field .annotations of type map[string]string err: exit status 1 </code></pre> <p>If I use a pure <strong>helm</strong> installation that works with no problem:</p> <pre><code>helm install stable/nginx-ingress --set rbac.create=true --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol"=http --namespace=ingress-nginx </code></pre> <p>That works with no problem. I will need to add numerous <strong>annotations</strong>.</p>
<p>Typically, if using the <code>set</code> parameter in <code>helmfile.yaml</code>, you would specify it this way:</p> <pre><code>set: - name: 'controller.service.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports' value: 'https' </code></pre> <p><strong>NOTE</strong> the backslashes are used to escape the dots in the key <code>service.beta.kubernetes.io/aws-load-balancer-ssl-ports</code>. Dots in YAML selectors have a special meaning, so we need to escape them.</p> <p>However, since that's very unintuitive, I recommend using inline values. Then it looks something like this:</p> <pre><code>values: - rbac: create: true controller: service: annotations: "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": true </code></pre> <p><strong>NOTE</strong> In the end, it's always tricky with helm since there's no universal specification for how to consume <code>values.yaml</code> - that is, the structure can be arbitrary. I am assuming that it's a <code>map</code>, since most charts we use define <code>annotations</code> as a simple map (as opposed to a list).</p> <p>Here's an example where we define an annotation for <code>replica.annotations.iam\.amazonaws\.com/role</code></p> <p><a href="https://github.com/cloudposse/geodesic/blob/0.12.0/rootfs/conf/kops/helmfile.yaml#L104-L105" rel="nofollow noreferrer">https://github.com/cloudposse/geodesic/blob/0.12.0/rootfs/conf/kops/helmfile.yaml#L104-L105</a></p> <p>And here's how we do it for inline values: (we switched to using this everywhere) <a href="https://github.com/cloudposse/helmfiles/blob/0.2.4/helmfile.d/0300.chartmuseum.yaml#L52-L55" rel="nofollow noreferrer">https://github.com/cloudposse/helmfiles/blob/0.2.4/helmfile.d/0300.chartmuseum.yaml#L52-L55</a></p>
<p>We're requesting persistence on Kubernetes using a persistent volume claim.</p> <p>Currently, we're setting the <code>storage class</code> using the <code>volume.beta.kubernetes.io/storage-class</code> annotation:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vault-file-backend-volumeclaim annotations: volume.beta.kubernetes.io/storage-class: heketi spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi </code></pre> <p>However, we've realized that there is a <code>spec.storageClassName</code> field, and we don't know what it's for...</p> <p>Could we remove the <code>volume.beta.kubernetes.io/storage-class</code> annotation by setting <code>spec.storageClassName</code> instead?</p>
<p>Yes, you should opt for <code>storageClassName</code>, because the annotation will be deprecated in the future, as stated in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class" rel="nofollow noreferrer">kubernetes.io</a>:</p> <blockquote> <p>In the past, the annotation <code>volume.beta.kubernetes.io/storage-class</code> was used instead of the storageClassName attribute. This annotation is still working, however it will become fully deprecated in a future Kubernetes release.</p> </blockquote> <p>So your YML file will be the following:</p> <p><code>storageclass.yml</code>:</p> <pre><code>apiVersion: storage.k8s.io/v1beta1 kind: StorageClass metadata: name: heketi provisioner: kubernetes.io/glusterfs parameters: resturl: [...] restuser: [...] restuserkey: [...] </code></pre> <p><code>pv.yml</code>:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vault-file-backend-volumeclaim spec: storageClassName: heketi accessModes: - ReadWriteOnce resources: requests: storage: 1Gi </code></pre>
<p>I am trying to understand the benefits and drawbacks of the following architectures when it comes to deploying my application and database containers using kubernetes.</p> <p>A little background: The application sits behind an Nginx proxy. All requests flow from the proxy to the web server. The web server is the only thing that has access to the (read only) database.</p> <p><strong>Architecture 1:</strong></p> <p>Pod#1 - Database container only</p> <p>Pod#2 - Application container only</p> <p><strong>Architecture 2:</strong></p> <p>Pod#1 - Database container &amp; Application container</p> <p>From my research so far, I have found comments recommending Architecture 1 for scaling reasons. <a href="https://linchpiner.github.io/k8s-multi-container-pods.html" rel="nofollow noreferrer">https://linchpiner.github.io/k8s-multi-container-pods.html</a></p> <p>Does anyone have insight onto which of these approaches would be better suited for my situation?</p>
<p>Being able to scale the application and database independently would be the key reason for having them separated. Scaling with high load (or highly variable load) requires a robust architecture and what counts as 'high load' will depend on your app. For example, if the database and application are in different pods then you could in theory run multiple replicas of the application (i.e. multiple Pods) and (if you wanted) just one replica of the database that all of the instances of the application point to. And you could have an nginx ingress controller routing to the application instances and load-balancing between them.</p> <p>Running multiple replicas can give you the ability to scale up and down in response to load (see the HorizontalPodAutoscaler for example, but you could also scale manually). It provides a level of fault-tolerance, as one instance can become overwhelmed and unresponsive (or simply fail) without affecting the others (and the failing pod can also be automatically restarted by Kubernetes). </p> <p>A potential snag to watch out for when running multiple replicas of your app, at least if it's an existing app that you're porting to kubernetes, is that your application does need to be written in a stateless way to support this. Your db being read-only presumably means this isn't a problem at the data layer. Perhaps you could run multiple db replicas too and use a Service so that your app instances could talk to them. But you'd also need to think about statefulness in the app e.g. Is authentication token-based and could different instances validate the token without requiring a new login?</p> <p>It's not necessarily wrong to put the two containers in the same pod. You might still get some scaling benefits in your case as if your db is read-only then presumably the instances can't get out of sync. But then you can only scale them together and likewise each pair would fail together. </p>
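<p>As a rough sketch of that kind of autoscaling (the deployment name and thresholds are placeholders):</p> <pre><code>kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
</code></pre>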
<p>I would like to know, which are the endpoints correctly configured for a certain ingress.</p> <p>For instance I have the following ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: dictionary spec: tls: - hosts: - dictionary.juan.com secretName: microsyn-secret backend: serviceName: microsyn servicePort: 8080 rules: - host: dictionary.juan.com http: paths: - path: /synonyms/* backend: serviceName: microsyn servicePort: 8080 </code></pre> <p>that is working the following service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: microsyn spec: ports: - port: 8080 targetPort: 8080 protocol: TCP name: http selector: app: microsyn </code></pre> <p>This service is exposing this deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: microsyn spec: replicas: 1 selector: matchLabels: app: microsyn template: metadata: labels: app: microsyn spec: containers: - name: microsyn image: microsynonyms imagePullPolicy: Never ports: - containerPort: 8080 </code></pre> <p>The application is a nodejs app listening on port 8080, that says hello for path '/', in a docker image I just created locally, that's the reason it has a imagePullPolicy: Never, it is just for testing purposes and learn how to create an ingress.</p> <p>So I created the ingress from nginx-ingress and it is up and running with a nodeport for local environment test, later I will take it to a deployment with load balancer:</p> <p><a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example</a></p> <p>So I do the following:</p> <p><code>minikube service list</code></p> <p><code> |---------------|----------------------|--------------------------------| | NAMESPACE | NAME | URL | |---------------|----------------------|--------------------------------| | default | kubernetes | No node port | | default | microsyn | No node port | | kube-system | default-http-backend | http://192.168.99.100:30001 | | kube-system | kube-dns | No node port | | kube-system | kubernetes-dashboard | http://192.168.99.100:30000 | | nginx-ingress | nginx-ingress | http://192.168.99.100:31253 | | | | http://192.168.99.100:31229 | |---------------|----------------------|--------------------------------| </code></p> <p>Brilliant I can do:</p> <p><code>curl http://192.168.99.100:31253/synonyms</code></p> <p>But then I get a:</p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt; &lt;body bgcolor="white"&gt; &lt;center&gt;&lt;h1&gt;404 Not Found&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.15.2&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>So the only nginx I have is the one from this minikube, and it is working fine. 
But I cannot see which are the endpoints configured for this ingress...</p> <p>I see the logs that says:</p> <pre><code>2018/08/11 16:07:05 [notice] 117#117: signal process started I0811 16:07:05.313037 1 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"dictionary", UID:"9728e826-9d80-11e8-9caa-0800270091d8", APIVersion:"extensions", ResourceVersion:"57014", FieldPath:""}): type: 'Normal' reason: 'AddedOrUpdated' Configuration for default/dictionary was added or updated W0811 16:15:05.826537 1 reflector.go:341] github.com/nginxinc/kubernetes-ingress/nginx-controller/controller/controller.go:413: watch of *v1.ConfigMap ended with: too old resource version: 56348 (56655) </code></pre> <p>So that means that the dictionary ingress has being processed without errors.</p> <p>But then Why I get a 404?</p> <p>Where can I see the endpoints that are configured for this ingress??</p> <p>I would like to run a command that says nginx --> which endpoints are you listening to?</p>
<blockquote> <p>But then Why I get a 404?</p> </blockquote> <p>Ingress resources -- or certainly the one you showed in your question -- use virtual-host routing, so one must set the <code>host:</code> header when interacting with that URL:</p> <pre><code>curl -H 'host: dictionary.juan.com' http://192.168.99.100:31253/synonyms/something </code></pre> <p>I would have to look up the <code>path:</code> syntax to know if <code>/synonyms/*</code> matches <code>/synonyms</code>, which is why I included the slash and extra content in <code>curl</code>.</p> <p>For interacting with it in a non-<code>curl</code> way, you can either change the <code>host:</code> in the <code>Ingress</code> to temporarily be 192.168.99.100 or use <a href="http://www.thekelleys.org.uk/dnsmasq/doc.html" rel="nofollow noreferrer">dnsmasq</a> to create a local nameserver where you can override <code>dictionary.juan.com</code> to always return <code>192.168.99.100</code>, and then Chrome will send the correct <code>host:</code> header by itself.</p> <blockquote> <p>Where can I see the endpoints that are configured for this ingress??</p> </blockquote> <p>The question is slightly inaccurate in that <code>Endpoint</code> is a formal resource, and not related to an <code>Ingress</code>, but the answer is:</p> <pre><code>kubectl get endpoints microsyn </code></pre> <blockquote> <p>I would like to run a command that says nginx --> which endpoints are you listening to?</p> </blockquote> <p>First, look up the name of the nginx-ingress Pod (any one of them should do, if you have multiple), and then look at the <code>nginx.conf</code> from the Pod to know exactly what rules it has converted the <code>Ingress</code> resource into:</p> <pre><code>kubectl exec $ingress_pod cat /etc/nginx/nginx.conf </code></pre>
<p>I have written the <code>nifi.properties</code> into a <code>Kubernetes ConfigMap</code>. When I deploy NiFi (as a <code>StatefulSet</code>) I want this <code>nifi.properties</code> file to be used by the NiFi instance I just deployed. To do so I added a volume for the <code>ConfigMap</code> and mounted it in the container. The associated <code>statefulset.yaml</code> looks like this:</p> <pre><code>... containers: - name: 'myName' image: 'apache/nifi:latest' ports: - name: http containerPort: 8080 protocol: TCP - name: http-2 containerPort: 1337 protocol: TCP volumeMounts: - name: 'nifi-config' mountPath: /opt/nifi/nifi-1.6.0/conf/nifi.properties volumes: - name: 'nifi-config' configMap: name: 'nifi-config' ... </code></pre> <p>This doesn't work; I think it is because NiFi is already running and the <code>nifi.properties</code> file is locked by the service. The pod cannot be created, and I get an error: <code>...Device or resource is busy</code>. I also tried this with the <code>bootstrap.conf</code> file, which works, but I don't think changes in there are recognized by the NiFi service because it would have to be restarted. </p> <p>I already had the same issue with NiFi deployed on pure Docker, which I worked around by stopping the container, copying the files and starting the container; not very pretty, but working. </p> <p>Using environment variables to change values in NiFi as stated <a href="https://github.com/apache/nifi/tree/master/nifi-docker/dockerhub" rel="nofollow noreferrer">here</a> is also not an option, because the possibilities for changing parameters there are very limited.</p> <p>This problem doesn't occur for NiFi only. I think there are many situations where someone wants to change the configuration of a system running within <code>Kubernetes</code>, so I hope there is a solution to handle this issue.</p>
<p>There are two problems with the above setup:</p> <ul> <li>You have to specify the subpath to tell which item you mount from the configmap as a single file, see: <a href="https://github.com/kubernetes/kubernetes/issues/44815#issuecomment-297077509" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/44815#issuecomment-297077509</a></li> <li>You cannot mount a configmap item as a readwrite volume by default on 1.9.6 and above, so the start script won't be able to replace properties in it, see: <a href="https://github.com/kubernetes/kubernetes/issues/62099#issuecomment-378809922" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/62099#issuecomment-378809922</a> </li> </ul> <p>To work around the second issue you can simply mount the configmap item as a separate file (nifi.properties.tmp) and copy it to the destination by wrapping the container entry point with a custom command.</p> <pre><code>... containers: - name: 'myName' image: 'apache/nifi:latest' ports: - name: http containerPort: 8080 protocol: TCP - name: http-2 containerPort: 1337 protocol: TCP volumeMounts: - name: 'nifi-config' mountPath: /opt/nifi/nifi-1.6.0/conf/nifi.properties.tmp subPath: nifi.properties command: - bash - -c - | cat "${NIFI_HOME}/conf/nifi.properties.tmp" &gt; "${NIFI_HOME}/conf/nifi.properties" exec "${NIFI_BASE_DIR}/scripts/start.sh" # or you can do the property edits yourself and skip the helper script: # exec bin/nifi.sh run volumes: - name: 'nifi-config' configMap: name: 'nifi-config' ... </code></pre>
<p>I need to connect to Google Cloud SQL from a Kubernetes pod using Go.</p> <p>I have been following the following guides religiously: <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine</a> <a href="https://cloud.google.com/sql/docs/mysql/connect-external-app#go" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/mysql/connect-external-app#go</a></p> <p>Here is my <strong>Kubernetes deployment yaml</strong> file:</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: my-service labels: app: my-service spec: strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 10% maxSurge: 10% replicas: 1 template: metadata: labels: app: my-service spec: imagePullSecrets: - name: regsecret containers: - name: my-service image: my-image imagePullPolicy: IfNotPresent env: - name: DB_INSTANCE value: "my-project-id:europe-west1:uk" - name: DB_USERNAME valueFrom: secretKeyRef: name: cloudsql-db-credentials key: username - name: DB_PASSWORD valueFrom: secretKeyRef: name: cloudsql-db-credentials key: password - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=my-project-id:europe-west1:uk=tcp:3306", "-credential_file=/secrets/cloudsql/credentials.json"] volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true volumes: - name: cloudsql-instance-credentials secret: secretName: cloudsql-instance-credentials </code></pre> <p>And here is my attempt to connect using <strong>Go</strong>:</p> <pre><code>import ( "github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql" "github.com/spf13/viper" "log" ) func Connect() { cfg := mysql.Cfg( viper.GetString("DB_INSTANCE"), viper.GetString("DB_USERNAME"), viper.GetString("DB_PASSWORD")) cfg.DBName = "my-db-name" db, err := mysql.DialCfg(cfg) if err != nil { log.Fatalf("Failed to create the Cloud SQL client: %v", err) } } </code></pre> <p>Unfortunately, running this code results in:</p> <pre><code>2018/08/08 15:19:45 Failed to create the Cloud SQL client: ensure that the Cloud SQL API is enabled for your project (https://console.cloud.google.com/flows/enableapi?apiid=sqladmin). Error during createEphemeral for my-project-id:europe-west1:uk: googleapi: Error 403: Insufficient Permission, insufficientPermissions </code></pre> <p>I can confirm that the database obviously exists in Cloud SQL, the <strong>Cloud SQL API</strong> is indeed enabled, and that the service account that I used to generate the credentials (as described in the links above) does have the <strong>Cloud SQL Client</strong> role attached to it (I have also tried with Cloud SQL Admin and even project Owner with no success).</p> <p>What am I missing?</p> <p><strong>EDIT:</strong></p> <p>Fixed it by updating my Go code to:</p> <pre><code>dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s", viper.GetString("DB_USERNAME"), viper.GetString("DB_PASSWORD"), "127.0.0.1:3306", "my-db-name") db, err := sql.Open("mysql", dsn) if err != nil { log.Fatalf("Failed to create the Cloud SQL client: %v", err) } </code></pre> <p>Thanks @DazWilkin for the tip.</p>
<p>I know you've solved your issue already, but I wanted to provide another alternative for anyone else reading this thread.</p> <p>There are 2 ways you can connect to your instance:</p> <p>1: Through a TCP connection: you have that solution already, and <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="nofollow noreferrer">this page</a> shows you the details. Since the Cloud SQL proxy container runs in the same pod as your app container, your host should point to <code>localhost</code>.</p> <p>2: Through a Unix socket: you should also be able to connect with the following code (<a href="https://cloud.google.com/sql/docs/mysql/connect-external-app#go" rel="nofollow noreferrer">details here</a>):</p> <pre><code>import ( "database/sql" _ "github.com/go-sql-driver/mysql" ) dsn := fmt.Sprintf("%s:%s@unix(/cloudsql/%s)/%s", dbUser, dbPassword, INSTANCE_CONNECTION_NAME, dbName) db, err := sql.Open("mysql", dsn) </code></pre> <p>You may have to remove the <code>=tcp:3306</code> portion from your deployment.yaml file though.</p>
<p><a href="https://i.stack.imgur.com/EKOoX.png" rel="nofollow noreferrer">Startup Logs of Pod</a> I am not able to access a spring boot service on my minikube cluster. On my local machine,I configured minikube cluster and built the docker image of my service. My service contains some simple REST endpoints.</p> <p>I configured minikube to take my local docker image or should I say pull my docker image. But now when I do </p> <pre><code>kubectl get services -n istio-system </code></pre> <p>I get the below services <a href="https://i.stack.imgur.com/BiBia.png" rel="nofollow noreferrer">kubectl get services|Services list in minkube cluster</a> | <a href="https://i.stack.imgur.com/yxHwn.png" rel="nofollow noreferrer">Kubectl get pods all namespaces</a> | <a href="https://i.stack.imgur.com/96QUs.png" rel="nofollow noreferrer">Kubectl describe service</a></p> <p>I am trying to access my service through below command</p> <p><code>minikube service producer-service --url </code> which gives <a href="http://192.168.99.100:30696" rel="nofollow noreferrer">http://192.168.99.100:30696</a></p> <p>I have a ping URL in my service so ideally I should be getting response by hitting <a href="http://192.168.99.100:30696/ping" rel="nofollow noreferrer">http://192.168.99.100:30696/ping</a></p> <p>I am not getting any response here. Can you guys please let me know what I am missing here?</p>
<p>The behaviour you describe would suggest a port mapping problem. Is your Spring boot service on the default port of 8080? Does the internal port of your Service match the port the Spring boot app is running on (it'll be in your app startup logs). The port in your screenshot seems to be 8899. It's also possible your pod is in a different namespace from your service. It would be useful to include your app startup logs and the output of 'kubectl get pods --all-namespaces', and 'kubectl describe service producer-service'.</p>
<p>What specific changes need to be made to the <code>yaml</code> below in order to get the <code>PersistentVolumeClaim</code> to bind to the <code>PersistentVolume</code>?</p> <p>An EC2 instance in the same VPC subnet as the Kubernetes worker nodes has an ip of 10.0.0.112 and has been configured to act as an NFS server in the /nfsfileshare path.</p> <h2>Creating the PersistentVolume</h2> <p>We created a PersistentVolume pv01 with <code>pv-volume-network.yaml</code>:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv01 spec: capacity: storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Recycle storageClassName: slow mountOptions: - hard - nfsvers=4.1 nfs: path: "/nfsfileshare" server: "10.0.0.112" </code></pre> <p>and by typing:</p> <pre><code>kubectl create -f pv-volume-network.yaml </code></pre> <p>Then when we type <code>kubectl get pv pv01</code>, the <code>pv01</code> PersistentVolume shows a STATUS of "Available".</p> <h2>Creating the PersistentVolumeClaim</h2> <p>Then we created a PersistentVolumeClaim named <code>my-pv-claim</code> with <code>pv-claim.yaml</code>: </p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre> <p>And by typing: </p> <pre><code>kubectl create -f pv-claim.yaml </code></pre> <h2>STATUS is Pending</h2> <p>But then when we type <code>kubectl get pvc my-pv-claim</code>, we see that the STATUS is Pending. The STATUS remained Pending for as long as we continued to check back.</p> <p>Note this OP is different from <a href="https://stackoverflow.com/questions/44556363/kubernetes-nfs-persistentvolumeclaim-has-status-pending">this other question</a>, because this problem persists even with quotes around the NFS IP and the path.</p> <p>Why is this PVC not binding to the PV? What specific changes need to be made to resolve this?</p>
<p>I diagnosed the problem by typing <code>kubectl describe pvc my-pv-claim</code> and looking in the Events section of the results.</p> <p>Then, based on the reported Events, I was able to fix this by changing <code>storageClassName: manual</code> to <code>storageClassName: slow</code>. </p> <p>The problem was that the PVC's StorageClassName did not meet the requirement that it match the class specified in the PV.</p>
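<p>For reference, the corrected claim from the question would look roughly like this; only <code>storageClassName</code> changes:</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: slow   # must match the class named in the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
</code></pre>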
<p>I have a Kubernetes environment running multiple applications (services). Now I'm a little bit confused about how to set up the MySQL database instance(s).</p> <p>According to different sources, each microservice should have its own database. Should I create a single MySQL StatefulSet in HA mode running multiple databases, OR should I deploy a separate MySQL instance for each application (service), running one database each?</p> <p>My first thought would be the first option, since what else would HA otherwise be useful for? I would like to hear some different views on this.</p>
<p>Slightly subjective question, but here's what we have setup. Hopefully, that will help you build a case. I'm sure someone would have a different opinion, and that might be equally valid too:</p> <p>We deploy about 70 microservices, each with it's own database ("schema"), and it's own JDBC URL (defined via a service). Each microservice has it's own endpoint and credentials that we do not share between microservices. So in effect, we have kept the design to be completely independent across the microservices as far as the schema is concerned. </p> <p>Deployment-wise, however, we have opted to go with a single database instance for hosting all databases (or "schemas"). While technically, we could deploy each database on its own database instance, we chose not to do it for few main reasons:</p> <ol> <li><strong>Cost overhead</strong>: Running separate database instances for each microservice would add a lot of "fixed" costs. This may not be directly relevant to you if you are simply starting the database as a MySQL Docker container (we use a separate database service, such as RDS or Google Cloud SQL). But even in the case of MySQL as a Docker container, you might end up having a non-trivial cost if you run, for example, 70 separate containers one per microservice.</li> <li><strong>Administration overhead</strong>: Given that databases are usually quite involved (disk space, IIOPs, backup/archiving, purge, upgrades and other administration activities), having separate database instances -- or Docker container instances -- may put a significant toll on your admin or operations teams, especially if you have a large number of microservices</li> <li><strong>Security</strong>: Databases are usually also critical when it comes to security as the "truth" usually goes in the DB. Keeping encryption, TLS configuration and strengths of credentials aside (as they should be of utmost importance regardless of your deployment model), security considerations, reviews, audits and logging will bring in significant challenges if your databases instances are too many.</li> <li><strong>Ease of development</strong>: Relatively less critical in the grand scheme of things, but significant, nonetheless. Unless you are thinking of coming up with a different model for development (and thus breaking the "dev-prod parity"), your developers may have a hard time figuring out the database endpoints for debugging even if they only need that information once-in-a-while.</li> </ol> <p>So, my recommendation would be to go with a single database instance (Docker or otherwise), but keep the databases/schemas completely independent and inaccessible by the any microservice but the "owner" microservice. </p> <p>If you are deploying MySQL as Docker container(s), go with a <code>StatefulSet</code> for persistence. Define an external <code>pvc</code> so that you can always preserve the data, no matter what happens to your pods or even your cluster. Of course, if you run 'active-active', you will need to ensure clustering between your nodes, but we do run it in 'active-passive' mode, so we keep the <code>replica</code> count to 1 given we only use MySQL Docker container alternative for our test environments to save costs of external DBaaS service where it's not required.</p>
<p>I have a Docker image that, when a container is created from it, should check whether the volume is empty and, if so, initialize it with some data. This saved data must remain available to other pods with the same or a different image.</p> <p><a href="https://i.stack.imgur.com/2sJos.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2sJos.png" alt="enter image description here"></a></p> <p>What do you recommend I do?</p>
<p>You have 2 options:</p> <ol> <li>The first option is to mount a volume from the node into the pod and save the data on the node, so when a new pod is created on the same node it will have access to the same volume (persistent storage location).</li> </ol> <p>Potential problem: 2 pods on the same node can create a deadlock on the same resource (so you have to manage the resource).</p> <ol start="2"> <li>Shared storage, meaning you create one storage location and every pod claims storage from that same storage.</li> </ol> <p>I strongly suggest that you take the next 55 minutes and watch the webinar below: <a href="https://www.youtube.com/watch?v=n06kKYS6LZE" rel="nofollow noreferrer">https://www.youtube.com/watch?v=n06kKYS6LZE</a></p>
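<p>As a sketch of the second option, here is a single claim that several pods can mount; the access mode and size are assumptions, and <code>ReadWriteMany</code> in particular is only available on backends that support it (NFS, GlusterFS, etc.):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # illustrative name
spec:
  accessModes:
    - ReadWriteMany          # assumption: the storage backend must support this
  resources:
    requests:
      storage: 1Gi
</code></pre> <p>Each pod then mounts the claim as a regular volume, so whichever container initialized the data leaves it in place for the others.</p>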
<p>I've managed to deploy a .netcore api to azure kubernetes managed service (ACS) and it's working as expected. The image is hosted in an azure container registry.</p> <p>I'm now trying to get the service to be accessible via https. I'd like a very simple setup. </p> <ul> <li><p>Firstly, do I have to create an OpenSSL cert or register with Let's Encrypt? I'd ideally like to avoid having to manage ssl certs separately, but from the documentation it's not clear if this is required.</p></li> <li><p>Secondly, I've got a manifest file below. I can still access port 80 using this manifest. However, I am not able to access port 443. I don't see any errors, so it's not clear what the problem is. Any ideas?</p></li> </ul> <p>Thanks</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: someappservice-deployment annotations: service.beta.kubernetes.io/openstack-internal-load-balancer: "false" loadbalancer.openstack.org/floating-network-id: "9be23551-38e2-4d27-b5ea-ea2ea1321bd6" spec: replicas: 3 template: metadata: labels: app: someappservices spec: containers: - name: someappservices image: myimage.azurecr.io/someappservices ports: - containerPort: 80 - containerPort: 443 --- kind: Service apiVersion: v1 metadata: name: external-http-someappservice spec: selector: app: someappservices type: LoadBalancer ports: - name: http port: 80 protocol: TCP targetPort: 80 - name: https port: 443 protocol: TCP targetPort: 443</code></pre>
<p>From what I understand, you will need something like an NGINX ingress controller to handle the SSL termination and will also need to manage certificates. Kubernetes cert-manager is a nice package that can help with the certs.</p> <p>Here is a write up on how to do both in an AKS cluster:</p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress/?WT.mc_id=stackoverflow-stackoverflow-nepeters" rel="nofollow noreferrer">Deploy an HTTPS enabled ingress controller on AKS</a></p>
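<p>To give a rough idea of where the linked guide ends up (the host, issuer and secret names are placeholders, and the exact annotations depend on the cert-manager version you install), the service sits behind an ingress that carries a TLS section:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: someappservices-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod   # placeholder issuer
spec:
  tls:
  - hosts:
    - myapp.example.com            # placeholder host
    secretName: myapp-tls          # cert-manager writes the certificate here
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: external-http-someappservice
          servicePort: 80
</code></pre> <p>With an ingress controller terminating TLS in front of it, the backend service normally no longer needs to be of type <code>LoadBalancer</code> or to expose 443 itself.</p>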
<p>I have a Kubernetes cluster hosted in Google Cloud. I created a deployment and defined an hpa rule for it:</p> <pre><code>kubectl autoscale deployment my_deployment --min 6 --max 30 --cpu-percent 80 </code></pre> <p>I want to run a command that edits the <code>--min</code> value, <strong>without removing and re-creating the hpa rule</strong>. Something like: </p> <pre><code>$ kubectl autoscale deployment my_deployment --min 1 --max 30 Error from server (AlreadyExists): horizontalpodautoscalers.autoscaling "my_deployment" already exists </code></pre> <p>Is it possible to edit the hpa (min, max, cpu-percent, ...) on the command line?</p>
<blockquote> <p>Is it possible to edit the hpa (min, max, cpu-percent, ...) on the command line?</p> </blockquote> <p>They are editable just as any other resource is, through either <code>kubectl edit hpa $the_hpa_name</code> for an interactive edit, or <code>kubectl patch hpa $the_hpa_name -p '{"spec":{"minReplicas": 1}}'</code> for doing so in a "batch" setting.</p> <p>If you don't know the <code>$the_hpa_name</code>, you can get a list of them like any other resource: <code>kubectl get hpa</code>, and similarly you can view the current settings and status with <code>kubectl get -o yaml hpa $the_hpa_name</code> (or even omit <code>$the_hpa_name</code> to see them all, but that might be a lot of text, depending on your cluster setup).</p>
<p>I have a Docker image that, when a container is created from it, should check whether the volume is empty and, if so, initialize it with some data. This saved data must remain available to other pods with the same or a different image.</p> <p><a href="https://i.stack.imgur.com/2sJos.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2sJos.png" alt="enter image description here"></a></p> <p>What do you recommend I do?</p>
<p>I assume you create your pods using a Deployment object in Kubernetes. What you want to look into is a StatefulSet, which, in contrast to Deployments, retains some identity aspects for recreated pods, including, to some extent, network and storage.</p> <p>It was introduced specifically as a means to run services that need to keep their state in the kube cluster (i.e. databases, queues, etc.).</p>
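<p>A skeletal example of the storage part of a <code>StatefulSet</code> (the names, image and size are placeholders); each replica gets its own claim created from the template and keeps it across restarts:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app                 # placeholder
spec:
  serviceName: my-app
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-image        # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data     # where the init-if-empty check would run
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
</code></pre>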
<p>I am using helm 2.6.1 to deploy a package to my kubernetes cluster. I have created a nexus raw repository to host the packaged helm charts. I also have the index file uploaded to the same.</p> <pre><code>--charts - wordpress-0.1.0.tgz - index.yaml </code></pre> <p>However, when I try to do a helm install, it never finds the package. I have tried all the below ways. Any other options to try?</p> <pre><code>+ helm repo list NAME URL stable https://kubernetes-charts.storage.googleapis.com local http://127.0.0.1:8879/charts helm-repo https://admin:[email protected]/repository/kubecharts/charts + helm repo update Hang tight while we grab the latest from your chart repositories... ...Skip local chart repository ...Successfully got an update from the "helm-repo" chart repository ...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com): Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes-charts.storage.googleapis.com on 10.00.0.00:53: no such host Update Complete. ⎈ Happy Helming!⎈ + helm search wordpress-0.1.0.tgz WARNING: Repo "stable" is corrupt or missing. Try 'helm repo update'.No results found + helm search helm-repo/wordpress-0.1.0.tgz WARNING: Repo "stable" is corrupt or missing. Try 'helm repo update'.No results found + helm install helm-repo/wordpress-0.1.0.tgz Error: file "helm-repo/wordpress-0.1.0.tgz" not found </code></pre>
<p>You are referencing the chart the wrong way. Try <code>helm search helm-repo/wordpress</code> and then <code>helm install helm-repo/wordpress</code>.</p> <p>If you need to install a particular version:</p> <p><code>helm install helm-repo/wordpress --version=0.1.0</code></p>
<p>I'm trying to deploy my web service to Google Container Engine:</p> <p>Here's my <strong>deployment.yaml:</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: handfree labels: app: handfree spec: replicas: 3 template: metadata: labels: app: handfree spec: containers: - name: handfree image: arycloud/mysecretrepo:latest imagePullPolicy: Always #Ports to expose ports: - name: api_port containerPort: 8000 </code></pre> <p>Here's my <strong>service.yaml:</strong></p> <pre><code>kind: Service apiVersion: v1 metadata: #Service name name: judge spec: selector: app: handfree ports: - protocol: TCP port: 8000 targetPort: 8000 type: LoadBalancer </code></pre> <p>I have created a cluster on Google Container Engine with cluster size 4 and 8 vCPUs, I have successfully get credentials by using the command from connecting link of this cluster.</p> <p>When I try to run the deployment.yml it returns an error as:</p> <blockquote> <p>Error from server (Forbidden): error when retrieving current configuration of: default handfree deployment.yaml</p> <p>from server for: &quot;deployment.yaml&quot; deployments.extensions &quot;handfree&quot; is forbidden: User &quot;client&quot; cannot get deployments.extensions in the namespace &quot;default&quot;: Unknown user &quot;client&quot;.</p> </blockquote> <p>I'm new to kubernetes world, help me, please!</p> <p>Thanks in advance!</p>
<blockquote> <p>Unknown user "client".</p> </blockquote> <p>Means there is no <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#rolebinding-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer"><code>RoleBinding</code></a> or <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#clusterrolebinding-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer"><code>ClusterRoleBinding</code></a> with a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#subject-v1beta1-rbac-authorization-k8s-io" rel="nofollow noreferrer"><code>subjects:</code></a> of <code>type: User</code> with a <code>name:</code> of <code>client</code>.</p> <p>The fix is to create a <code>ClusterRoleBinding</code> or <code>RoleBinding</code> -- depending on whether you want <code>client</code> to have access to <strong>every</strong> <code>Namespace</code> or just <code>default</code> -- and point it at an existing (or created) <code>Role</code> or <code>ClusterRole</code>. The bad news is that since your current credential is invalid, you will need to track down the <code>cluster-admin</code> credential to be able to make that kind of change. Since I haven't used GKE, I can't specify the exact steps.</p> <p>I know those paragraphs are filled with jargon, and for that I'm sorry - it's a complex topic. There are several RBAC summaries, including a <a href="https://about.gitlab.com/2018/08/07/understanding-kubernestes-rbac/" rel="nofollow noreferrer">recent one from GitLab</a>, a <a href="https://www.youtube.com/watch?v=CnHTCTP8d48" rel="nofollow noreferrer">CNCF webinar</a>, and <a href="https://sysdig.com/blog/kubernetes-security-rbac-tls/" rel="nofollow noreferrer">one from Sysdig</a>, and (of course) <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">the kubernetes docs</a></p>
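<p>Once you do have an admin credential, a hedged sketch of such a binding for the user in the error message might look like this (<code>cluster-admin</code> is shown only as an illustration; prefer a narrower role in practice):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: client-binding             # illustrative name
subjects:
- kind: User
  name: client                     # the user from the error message
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin              # illustration only
  apiGroup: rbac.authorization.k8s.io
</code></pre>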
<p>I have a Kubernetes cluster set up on AWS. When I make a call to elasticsearch-client.default.svc.cluster.local from a pod, I occasionally get an unknown host exception. It must have something to do with name resolution, because hitting the service IP directly works fine.</p> <p>Note: I already have the kube-dns autoscaler enabled. I manually tried with up to 6 kube-dns pods, so I don't think it is because of DNS pod scaling.</p> <p>When I set the kube-dns ConfigMap with the upstreamserver values pointing to the Google nameservers (8.8.8.8 and 8.8.4.4) I do not get the issue. I assume it is because of API rate limiting done by AWS on Route 53. But I don't know why the name resolution request would go to the AWS nameservers.</p>
<p>Here's a good <a href="https://blog.quentin-machu.fr/2018/06/24/5-15s-dns-lookups-on-kubernetes/" rel="nofollow noreferrer">write-up</a> that may be related to your problems, also check <a href="https://www.weave.works/blog/racy-conntrack-and-dns-lookup-timeouts" rel="nofollow noreferrer">this</a> one out by Weaveworks. </p> <p>Basically there have been a number of issues during the last year created at the GitHub <a href="https://github.com/kubernetes/kubernetes/issues" rel="nofollow noreferrer">Kubernetes issue tracker</a> that has to do with various DNS latencies/problems from within a cluster.</p> <p>Worth mentioning, although not a fix to every DNS related problem, is that CoreDNS are generally available since version <code>1.11</code> and are or will be default thus replacing <code>kube-dns</code> as the default DNS add-on for clusters.</p> <p>Here's a couple of issues that might be related to the problem you're experiencing:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/47142" rel="nofollow noreferrer">#47142</a></p> <p><a href="https://github.com/kubernetes/kubernetes/issues/45976" rel="nofollow noreferrer">#45976</a></p> <p><a href="https://github.com/kubernetes/kubernetes/issues/56903" rel="nofollow noreferrer">#56903</a></p> <p>Hopefully this may help you moving forward.</p>
<p>I have three services running in my backend and the Ingress routing is defined like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: myapp-ingress annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/rewrite-target: / spec: tls: - hosts: - myapp.westeurope.cloudapp.azure.com secretName: acme-crt-secret rules: - host: myapp.westeurope.cloudapp.azure.com http: paths: - path: / backend: serviceName: myapp-mvc servicePort: 80 - path: /api backend: serviceName: myapp-api servicePort: 80 - path: /identity backend: serviceName: myapp-identity servicePort: 80 </code></pre> <p>The problem is that <em>myapp-api</em> is already listening for requests to <code>/api/v1/myresource</code>. With the current configuration, the <em>myapp-api</em> service only serves requests to <code>myapp.westeurope.cloudapp.azure.com/api/api/v1/myresource</code> (please note the .../api/api/...). </p> <p>Is it possible to serve requests to <code>/api</code> by the myapp-api service but rewriting these requests to <code>/</code> for the service without creating another Ingress? So, myapp-api should serve requests to <code>myapp.westeurope.cloudapp.azure.com/api/v1/myresource</code>.</p>
<p>You have two options:</p> <p>a) Change the port of the API and have it serve / on that port.</p> <p>b) Change your app so it will serve the API on "/v1/myresource" and give it the "api" part of the URL through the Ingress.</p> <p>Either way, you'll have your resources at "myapp.westeurope.cloudapp.azure.com/api/v1/myresource".</p>
<p>I need to set up a RabbitMQ cluster with queue mirroring enabled on all queues in Kubernetes. The RabbitMQ plugin for Kubernetes peer discovery only provides a clustering mechanism based on peer discovery, as the plugin name indicates. But how do I enable queue mirroring and achieve HA, so that if pods are restarted for any reason, or if I need to scale the RabbitMQ nodes, I can do it without any loss of messages?</p>
<p>Add a definitions.json file to your ConfigMap and ensure that your pods mount the file (in /etc/rabbitmq). In that file, specify all exchanges/queues, and define a mirroring policy that is applied to those exchanges/queues.</p> <p>It may be easier to set this up manually once and then export the definitions file from a running RabbitMQ node.</p> <p>This way, your cluster is fully configured when it starts.</p>
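<p>A minimal sketch of what that can look like, assuming the default vhost and a mirror-everything policy (a real export from a running node will also contain users, vhosts, queues, exchanges, and so on; the ConfigMap name is an arbitrary example):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-definitions        # example name; mount it into /etc/rabbitmq
data:
  definitions.json: |
    {
      "policies": [
        {
          "vhost": "/",
          "name": "ha-all",
          "pattern": ".*",
          "apply-to": "all",
          "definition": {
            "ha-mode": "all",
            "ha-sync-mode": "automatic"
          }
        }
      ]
    }
</code></pre> <p>You also need to tell RabbitMQ to load the file at boot (for example via <code>management.load_definitions = /etc/rabbitmq/definitions.json</code> in <code>rabbitmq.conf</code>; the exact setting depends on your RabbitMQ version).</p>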
<p>I am very new to development on Azure ASE and I am working on an existing Azure cloud solution that has Service Bus receiving messages from a UI and the message events start Azure (on-demand) Web Jobs at various points in the solution.</p> <p>Similar to this: <a href="https://i.stack.imgur.com/l7BtS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l7BtS.png" alt="enter image description here" /></a></p> <p>We are hosting most parts of the solution on ASE and the plan is to move off ASE and onto Kubernetes (AKS) instead (at the moment, I've setup AKS with windows as the OS just to start playing with it).</p> <p>What options are there for moving Web Jobs off ASE and onto AKS? Does the OS have a bearing on the options? Can the WebJobs SDK be installed in the AKS cluster to run WebJobs (and are they executable from Service Bus for example)? I know you can setup scheduled Jobs, but what would be the equivalent for on-demand web jobs (long running processes).</p> <p>Any advice much appreciated. We have a similar migration of Azure Functions, but I think if I can understand how to shift Web Jobs, the Functions will naturally follow the same path.</p>
<p><em>Having spent more time on this, here's some information which has helped me (you can see from the question above that there are gaps in my understanding; I'm building my knowledge up slowly and publishing it in case it's useful for others).</em></p> <p>Moving WebJobs off ASE - this is not as straightforward as my question implies. Essentially, the WebJobs themselves output an exe and the required libs, similar to the output of a command-line app. The WebJobs (their code) run a "JobHost" in the Main method, which is a host waiting for triggers to run the jobs within. It looks very similar to how WCF services would have been hosted in a Windows Service a few years back.</p> <p>With that in mind, first, the WebJob .exe can be run on the local host OS. As we are using the .NET Framework version of WebJobs, we can only deploy to Windows (maybe with Mono it is possible to run on Linux, but for now I'm saying Windows only to keep things simple). Had we built the WebJobs using .NET Core, then arguably they could have been deployed to either a Windows or a Linux host OS.</p> <p>Secondly, we want to "containerise" the compiled output of a WebJob - so a docker image needs to be built containing the WebJob and its dependencies, so it can be deployed into a cluster (this is the point I'm currently at, trying to define the docker file). Read more about <a href="https://docs.docker.com/get-started/" rel="nofollow noreferrer">Docker Containers here</a>.</p> <p>Thirdly, the cluster itself. I'd mentioned AKS. There are other options, such as Service Fabric, but it's a Microsoft proprietary SDK, so maybe best to steer clear for now. You can deploy your docker image containing your WebJob .exe (and libs) to your cluster as you need. The cluster can manage scaling of your containers as required. NOTE: you can run <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">Minikube</a> locally, which helps you get to grips with the concepts.</p> <p>This is a high-level description, but it clarifies my question above and gives some information which I found useful. Hopefully it helps others who have DevOps thrust upon them! :)</p>
<p>I'm trying to setup Kubernetes with Nvidia GPU nodes/slaves. I followed the guide at <a href="https://docs.nvidia.com/datacenter/kubernetes-install-guide/index.html" rel="nofollow noreferrer">https://docs.nvidia.com/datacenter/kubernetes-install-guide/index.html</a> and I was able to get the node join the cluster. I tried the below kubeadm example pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: gpu-pod spec: containers: - name: cuda-container image: nvidia/cuda:9.0-base command: ["sleep"] args: ["100000"] extendedResourceRequests: ["nvidia-gpu"] extendedResources: - name: "nvidia-gpu" resources: limits: nvidia.com/gpu: 1 affinity: required: - key: "nvidia.com/gpu-memory" operator: "Gt" values: ["8000"] </code></pre> <p>The pod fails scheduling &amp; the kubectl events shows:</p> <pre><code>4s 2m 14 gpu-pod.15487ec0ea0a1882 Pod Warning FailedScheduling default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 PodToleratesNodeTaints. </code></pre> <p>I'm using AWS EC2 instances. m5.large for the master node &amp; g2.8xlarge for the slave node. Describing the node also gives "<strong>nvidia.com/gpu: 4</strong>". Can anybody help me out if I'm missing any steps/configurations?</p>
<p>According to the AWS G2 <a href="https://aws.amazon.com/blogs/aws/new-g2-instance-type-with-4x-more-gpu-power/" rel="nofollow noreferrer">documentation</a>, <code>g2.8xlarge</code> servers have the following resources:</p> <ul> <li>Four NVIDIA GRID GPUs, each with 1,536 CUDA cores and 4 GB of video memory and the ability to encode either four real-time HD video streams at 1080p or eight real-time HD video streams at 720P. </li> <li>32 vCPUs. </li> <li>60 GiB of memory.</li> <li>240 GB (2 x 120) of SSD storage.</li> </ul> <p>Note that the 60 GiB is standard RAM used for regular computation; the GPU memory is separate: <code>g2.8xlarge</code> servers have 4 GPUs with 4 GB of GPU memory each, and this is the memory used for calculations in <code>nvidia/cuda</code> containers. </p> <p>In your case, the pod requests 8 GB of GPU memory per GPU, but each GPU on the server has only 4 GB. Therefore, the cluster lacks the resources needed to schedule the Pod. So, try to reduce the GPU memory requirement in the Pod settings, or use an instance type with a larger amount of GPU memory.</p>
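<p>For example, keeping the rest of the spec from the question as-is, the GPU-memory affinity would need a threshold that the 4 GB cards can satisfy; the <code>2000</code> below is just an arbitrary example value under that limit:</p> <pre><code>  affinity:
    required:
    - key: "nvidia.com/gpu-memory"
      operator: "Gt"
      values: ["2000"]   # below the 4 GB of memory each g2.8xlarge GPU provides
</code></pre>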
<p>How to refer to the local image that exists? </p> <pre><code>kubectl run u --rm -i --tty --image my_local_image -- bash </code></pre> <p>Results in <code>ImagePullBackOff</code> and <code>kubectl</code> is obviously trying to pull from a remote repository instead of local register.</p> <p>This <a href="https://stackoverflow.com/questions/43462384/kubectl-run-local-docker-image-imagepullbackoff-status">answer</a> is unhelpful, and the follow up refers to minikube and kubernetes. </p> <p>Some event logs</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 1m default-scheduler Successfully assigned u-6988b9c446-zcp99 to docker-for-desktop Normal SuccessfulMountVolume 1m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-q2qm7" Normal SandboxChanged 1m kubelet, docker-for-desktop Pod sandbox changed, it will be killed and re-created. Normal Pulling 23s (x3 over 1m) kubelet, docker-for-desktop pulling image "centos_postgres" Warning Failed 22s (x3 over 1m) kubelet, docker-for-desktop Failed to pull image "centos_postgres": rpc error: code = Unknown desc = Error response from daemon: pull access denied for centos_postgres, repository does not exist or may require 'docker login' Warning Failed 22s (x3 over 1m) kubelet, docker-for-desktop Error: ErrImagePull Normal BackOff 9s (x5 over 1m) kubelet, docker-for-desktop Back-off pulling image "centos_postgres" Warning Failed 9s (x5 over 1m) kubelet, docker-for-desktop Error: ImagePullBackOff </code></pre>
<p>Kubernetes Pods have an <code>imagePullPolicy</code> field. If you set that to <code>Never</code>, it will never try to pull an image, and it's up to you to ensure that the docker daemon which the kubelet is using contains that image. The default policy is <code>IfNotPresent</code>, which should work the same as <code>Never</code> if an image is already present in the docker daemon. Double check that your docker daemon actually contains what you think it contains, and make sure your <code>imagePullPolicy</code> is set to one of the two that I mentioned.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-pod spec: containers: - name: my-image image: local-image-name imagePullPolicy: Never </code></pre>
<p>I'm kind of new to the Kubernetes world. In my project we are planning to use windows containers(.net full framework) in short term and linux containers(.net core) for the long run.</p> <p>We have a K8 cluster provided by infrastructure team and the cluster has mix of Linux and Windows nodes. I just wanted to know how my windows containers will only be deployed to windows nodes in the K8 cluster. Is it handled by K8 or Do I need anything else ?</p>
<p>Below are the details from the <a href="https://kubernetes.io/docs/getting-started-guides/windows/" rel="nofollow noreferrer">Kubernetes Windows Documentation</a>.</p> <blockquote> <p>Because your cluster has both Linux and Windows nodes, you must explicitly set the nodeSelector constraint to be able to schedule pods to Windows nodes. You must set nodeSelector with the label beta.kubernetes.io/os to the value windows; see the following example:</p> </blockquote> <p>(On newer clusters the label is <code>kubernetes.io/os</code>, as used in the example below; older versions use <code>beta.kubernetes.io/os</code>.)</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: iis labels: name: iis spec: containers: - name: iis image: microsoft/iis:windowsservercore-1709 ports: - containerPort: 80 nodeSelector: &quot;kubernetes.io/os&quot;: windows </code></pre>
<p>I am having trouble creating a Kubernetes cluster using kops inside an existing AWS VPC and subnets. I have an existing VPC with the following CIDR block:</p> <p><strong>IPv4 CIDR:</strong> 10.10.16.0/20</p> <p>And in that VPC I have my subnets with their assigned CIDR blocks:</p> <p><strong>SubnetDatabaseA:</strong> 10.10.23.0/24</p> <p><strong>SubnetDatabaseB:</strong> 10.10.24.0/24</p> <p><strong>SubnetDatabaseC:</strong> 10.10.20.0/24</p> <p>And so on...</p> <p>When trying to create the cluster using kops I get this error:</p> <pre><code> error running task "Subnet/ap-southeast-2a.clusters.dev1.k8s.local" (9m58s remaining to succeed): error creating subnet: InvalidSubnet.Conflict: The CIDR '10.10.18.0/23' conflicts with another subnet status code: 400, request id: 252367d1-d693-47b9-a6c5-a44908a0f6f7 </code></pre> <p>This means that one of my subnets is already using that IP range.</p> <p>How can I tell kops to use a specific CIDR of my choice? I can see that every time I try to create the cluster it assigns a different CIDR (for example, 10.10.18.0/23).</p>
<pre><code>kops create cluster --help --subnets stringSlice Set to use shared subnets --vpc string Set to use a shared VPC </code></pre> <p>See the following example:</p> <pre><code> kops create cluster --name=${CLUSTER_NAME} --vpc=vpc-1010af11 --subnets=subnet-000e123a,subnet-123b456,subnet-888c9991 --master-zones=${ZONES} --zones=${ZONES} --networking=weave </code></pre> <p>So, if you pass subnet ids, kops doesn't create new CIDRs; instead, it uses the provided subnet ids and their corresponding CIDRs, as reflected in the resulting cluster spec:</p> <pre><code> subnets: - cidr: 92.145.123.0/26 id: subnet-000e123a name: us-east-1a type: Public zone: us-east-1a - cidr: 92.145.123.64/26 id: subnet-123b456 name: us-east-1b type: Public zone: us-east-1b - cidr: 92.145.123.128/26 id: subnet-888c9991 name: us-east-1c type: Public zone: us-east-1c </code></pre> <p>Or you can edit the cluster with <code>kops edit cluster $CLUSTER_NAME</code> after running <code>kops create cluster</code> without the <code>--subnets</code> flag and update the subnets section as seen above. </p> <p>Reference: <a href="https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md</a></p>
<p>I have MiniKube running on my Windows 10 machine. I would like to add an additional node to the cluster.</p> <ol> <li>I have a CentOS VM running on a different host that has k8s installed. How do I get the <code>kubectl join</code> command to run on the VM from the master node running on my Windows machine?</li> <li>Do I need to install an overlay network on the MiniKube VM? Or is one already installed?</li> </ol>
<p>Minikube is officially single-node at the moment. There's a discussion about this limitation at <a href="https://github.com/kubernetes/minikube/issues/94" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/94</a> But it seems people have <a href="https://stackoverflow.com/a/51706547/9705485">found ways to do it with VirtualBox</a> and there are other ways to run a <a href="https://medium.com/devopslinks/deploying-multi-node-kubernetes-environment-in-your-local-machine-a66a1eb82e36" rel="nofollow noreferrer">multi-node cluster locally</a>. Otherwise I'd suggest creating a cluster with one of the cloud providers (e.g. <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">GKE</a>).</p>
<p>I have Kuberenetes cluster hosted in Google Cloud. </p> <p>I deployed my deployment and added an <code>hpa</code> rule for scaling. </p> <pre><code>kubectl autoscale deployment MY_DEP --max 10 --min 6 --cpu-percent 60 </code></pre> <p>waiting a minute and run <code>kubectl get hpa</code> command to verify my scale rule - As expected, I have 6 pods running (according to min parameter).</p> <pre><code>$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE MY_DEP Deployment/MY_DEP &lt;unknown&gt;/60% 6 10 6 1m </code></pre> <p>Now, I want to change the min parameter:</p> <pre><code>kubectl patch hpa MY_DEP -p '{"spec":{"minReplicas": 1}}' </code></pre> <p>Wait for 30 minutes and run the command:</p> <pre><code>$ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE MY_DEP Deployment/MY_DEP &lt;unknown&gt;/60% 1 10 6 30m </code></pre> <p><strong>expected replicas: 1, actual replicas: 6</strong></p> <p>More information:</p> <ol> <li>You can assume that the system has no computing anything (0% CPU utilization). </li> <li>I waited for more than an hour. Nothing changed. </li> <li>The same behavior is seen when i deleted the scaling rule and deployed it again. The <code>replicas</code> parameter has not changed.</li> </ol> <h2>Question:</h2> <p>If I changed the <code>MINPODS</code> parameter to "1" - why I still have 6 pods? How to make Kubernetes to actually change the <code>min</code> pods in my deployment?</p>
<blockquote> <p>If I changed the MINPODS parameter to "1" - why I still have 6 pods?</p> </blockquote> <p>I believe the answer is because of the <code>&lt;unknown&gt;/60%</code> present in the output. <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">The fine manual</a> states:</p> <blockquote> <p>Please note that if some of the pod's containers do not have the relevant resource request set, CPU utilization for the pod will not be defined and the autoscaler will not take any action for that metric</p> </blockquote> <p>and one can see an example of <code>0% / 50%</code> in <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">the walkthrough page</a>. Thus, I would believe that since kubernetes cannot prove <em>what</em> percentage of CPU is being consumed -- neither above nor below the target -- it takes no action for fear of making <em>whatever</em> the situation is worse.</p> <p>As for why there is a <code>&lt;unknown&gt;</code>, I would hazard a guess it's the dreaded heapster-to-metrics-server cutover that might be obfuscating that information from the kubernetes API. Regrettably, I don't have first-hand experience testing that theory, in order to offer you concrete steps beyond "see if your cluster is collecting metrics in a place that kubernetes can see them."</p>
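<p>If that is what is happening here, the usual fix is to make sure every container in the target Deployment declares a CPU request, for example (names and values are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: MY_DEP
spec:
  template:
    spec:
      containers:
      - name: my-container        # placeholder
        image: my-image           # placeholder
        resources:
          requests:
            cpu: 100m             # gives the HPA a baseline to compute utilization against
</code></pre> <p>Once requests are set and metrics are flowing, the TARGETS column should show a real percentage instead of <code>&lt;unknown&gt;</code>, and the HPA should start acting on the new <code>minReplicas</code>.</p>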
<p>I would like to run a kubernetes cluster with 1 master and 2 worker nodes all 3 in different separate private subnets within our on-premise data center. What would be the best strategy to implement the kubernetes cluster while exposing front end application to public network implementing a good kubernetes security? </p>
<p>In the cloud my answer would be different! From what I understand you are not going to scale your nodes, so my answer is based on that.</p> <ol> <li><p>Create all your services inside the K8S cluster (do not expose any of them directly).</p></li> <li><p>Create Nginx or any load balancer you prefer as a VM in front of the cluster (if you can create 2 VMs for HA, even better).</p></li> <li><p>Route Nginx to the frontend through an ingress controller that is not itself publicly exposed.</p></li> </ol> <p>Now regarding security:</p> <ol> <li><p>Add a WAF to your load balancer.</p></li> <li><p>Control the allowed processes in every container type (use Falco for that).</p></li> <li><p>Create network policies that define which service is allowed to talk to which service (a minimal sketch follows below), or, even better, consider using Istio.</p></li> <li><p>Create a certificate for the DB so that only pods holding that certificate are able to talk to it.</p></li> </ol> <p>Good luck. </p>
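<p>A minimal sketch of point 3 of the security list, allowing only frontend pods to reach a backend (all labels and the namespace are made-up examples):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: my-namespace          # example
spec:
  podSelector:
    matchLabels:
      app: backend                 # the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend            # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080
</code></pre> <p>Keep in mind this only takes effect if the cluster's network plugin (Calico, Weave Net, etc.) enforces NetworkPolicy.</p>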
<p><code>kubectl logs -f pod</code> shows all logs from the beginning, which becomes a problem when the log is huge and we have to wait a few minutes to get to the last log line. It becomes even worse when connecting remotely. Is there a way to tail only the last 100 lines of the logs and follow them?</p>
<p>In a cluster, the best practice is to gather all logs in a single place through an aggregator and analyze them with a dedicated tool. For that reason, the log command in K8S is quite basic.</p> <p>Anyway, <code>kubectl logs -h</code> shows some options useful for you:</p> <pre><code># Display only the most recent 20 lines of output in pod nginx kubectl logs --tail=20 nginx # Show all logs from pod nginx written in the last hour kubectl logs --since=1h nginx </code></pre> <p>Some tools covering your requirements (and more) are available on GitHub, for example:</p> <ul> <li><p><a href="https://github.com/boz/kail" rel="noreferrer">https://github.com/boz/kail</a></p> </li> <li><p><a href="https://github.com/stern/stern" rel="noreferrer">https://github.com/stern/stern</a></p> </li> </ul>
<p>We have a Java Spring Boot project with Swagger and docker. We deploy it on kubernetes behind an ingress controller.</p> <p>It works properly in localhost (using postman and swagger-ui try button). Problem comes when we deploy it.</p> <p>Rest Controller:</p> <pre><code>@ApiOperation(value = "Operation", notes = "it does something&lt;br /&gt;") @RequestMapping(value="/operation", method=RequestMethod.POST) @ApiResponses({ @ApiResponse(code = 200, message = "OK") }) @ResponseBody public ResponseEntity&lt;String&gt; operation(@RequestBody BodyThing thing) { return new ResponseEntity&lt;&gt;("OK", HttpStatus.OK); } //operation </code></pre> <p>Now the ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: myapp-ingress namespace: __NAMESPACE__ annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / spec: tls: - hosts: - test.host.com secretName: key-pair rules: - host: test.host.com http: paths: - path: /myapp backend: serviceName: myapp-service servicePort: 8080 </code></pre> <p>Then, with the app online deployed on K8S, usign an app like postman we must call: <a href="https://test.host.com/myapp/operation" rel="noreferrer">https://test.host.com/myapp/operation</a> in order to call the API. It works OK.</p> <p>The problem comes if we enter in Swagger UI portal: <a href="https://test.host.com/myapp/swagger-ui.html" rel="noreferrer">https://test.host.com/myapp/swagger-ui.html</a></p> <p>If we try the API call inside the swagger UI, it tries to call to <a href="https://test.host.com/operation" rel="noreferrer">https://test.host.com/operation</a> and it fails with 404 code.</p> <p>Swagger-UI makes the Endpoint URL with: host + basepath + operation_path and that is: test.host.com + / + operation</p> <p>It doesnt aggregate ingress path.</p> <p>How can we deal with it? Of course is something that only happends if we deploy it with the ingress controller because, we add the /myapp path.</p> <p>Thanks!</p>
<p>The problem is how to get swagger's base path to match to the one that is being used behind the proxy. There is more than one solution as per <a href="https://github.com/springfox/springfox/issues/1443" rel="nofollow noreferrer">https://github.com/springfox/springfox/issues/1443</a> </p> <p>Since you have a specific host, I'd suggest going with the one to change the base path that swagger knows about based upon the host the request goes to. This way you can set it differently for localhost vs in the remote host. You'd need to set a RelativePathProvider for your host in your custom docket section of your Swagger @Configuration class like in <a href="https://github.com/springfox/springfox/issues/1443#issuecomment-274540681" rel="nofollow noreferrer">https://github.com/springfox/springfox/issues/1443#issuecomment-274540681</a></p>
<p>I have a spring boot application with the below docker file.</p> <pre><code>FROM docker.com/base/jdk1.8:latest MAINTAINER Application Engineering [ https://docker.com/ ] RUN mkdir -p /opt/docker/svc COPY application/weather-service.war /opt/docker/svc/ CMD java -jar /opt/docker/svc/weather-service.war --spring.config.location=file:/conf/application.properties -Dlogging.config=/conf/logback.xml </code></pre> <p>I can use kubernetes configMap or secrets for application.properties and use the volume mount option as below.</p> <pre><code>"spec": { "volumes": [ { "name": "svc-prop", "configMap": { "name": "svc-app-config", "items": [ { "key": "application.properties", "path": "application.properties" } ] } } ], "containers": [ "volumeMounts": [ { "name": "svc-prop", "mountPath": "/conf" } ] </code></pre> <p>How can i achieve the samething for logback.xml. Do i need to use secrets as a file in this case?</p> <p>I dont want to bundle logback.xml file with the image as we might be changing the log level at runtime.</p> <p>Is there any other better approach for keeping logback.xml for spring boot app in Kubernetes?</p>
<p>Usually you do not want to provide the whole <code>logback.xml</code> file but rather it's <code>logger</code> list which requires updating at runtime most frequently. In order to achieve this you can use <a href="https://logback.qos.ch/manual/configuration.html#fileInclusion" rel="noreferrer">Logback's file inclusion</a> feature:</p> <ol> <li>Write your <code>logback.xml</code> file as usual except <code>logger</code> list. Use <code>include</code> element instead:</li> </ol> <pre class="lang-xml prettyprint-override"><code> &lt;configuration scan="true" scanPeriod="10 seconds" debug="true"&gt; &lt;appender ...&gt;&lt;/appender&gt; &lt;root level="INFO"&gt;...&lt;/root&gt; &lt;!-- Import loggers configuration from external file --&gt; &lt;include file="config/mount/loggers-include.xml"/&gt; &lt;/configuration&gt; </code></pre> <p>Note those <code>scan*</code> attributes. They are essential for log config reloading at runtime.</p> <ol start="2"> <li><p>Define all the loggers in Kubernetes ConfigMap with <code>loggers-include.xml</code> data section: </p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: microservice-loggers # the name to refer to from deployment (see below) namespace: upc data: loggers-include.xml: |+ &lt;included&gt; &lt;logger name="org.springframework.cloud.netflix.zuul" level="INFO"/&gt; &lt;logger name="com.netflix.zuul" level="INFO"/&gt; &lt;logger name="com.netflix.hystrix" level="INFO"/&gt; &lt;logger name="com.netflix.ribbon" level="DEBUG"/&gt; &lt;logger name="com.netflix.loadbalancer" level="INFO"/&gt; &lt;/included&gt; </code></pre> <p>Note that all the included content must be enclosed in <code>included</code> tag in order to be correctly parsed by Logback.</p></li> <li><p>Mount your ConfigMap's data into container as <code>config/mount/loggers-include.xml</code> file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment ... spec: ... template: ... spec: # declare the volume created from ConfigMap volumes: - name: config-volume # used with this name below configMap: name: microservice-loggers # declared in previous step containers: - name: microservice ... ports: ... # mount the volume declared above to container's file system volumeMounts: - mountPath: /microservice/config/mount name: config-volume # declared above </code></pre> <p>Note that <code>mount</code> directory must not be created neither by container itself nor its image. Moreover, if such a directory exists, all its content <strong>will be removed</strong> during mounting.</p></li> <li><p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="noreferrer">Apply the ConfigMap</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer">run the declared deployment</a>. To check if loggers file is correctly mounted execute the following command:</p> <pre><code>$ kubectl exec restorun-7d757b7c6-wcslx -- ls -l /microservice/config/mount total 0 lrwxrwxrwx 1 root root 26 Aug 14 05:52 loggers-include.xml -&gt; ..data/loggers-include.xml </code></pre> <p>Also if you've set <code>debug=true</code> attribute in Logback's <code>configuration</code> element (see step 1) then you should see the following record in STDOUT during application startup:</p> <pre><code>05:52:17,031 |-INFO in ch.qos.logback.core.joran.util.ConfigurationWatchListUtil@6e06451e - Adding [file:/microservice/config/mount/loggers-include.xml] to configuration watch list. 
</code></pre></li> <li><p>Now you can edit your ConfigMap (e.g. set <code>com.netflix.hystrix</code> to level <code>WARN</code>), save its file and tell Kubernetes to apply the changes to the application:</p> <pre><code>$ kubectl apply -f microservice-log-configmap.yaml configmap "microservice-loggers" configured </code></pre> <p>Again, Logback should reflect the changes by logging with the following message to standard output:</p> <pre><code> 05:59:16,974 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.netflix.hystrix] to WARN </code></pre> <p>You can also check the effective logging level by directly asking it from the Spring Boot Actuator (if you have an access to this endpoint):</p> <pre><code>$ curl http://&lt;k8s-external-ip&gt;/actuator/loggers/com.netflix.hystrix { "configuredLevel" : "WARN", "effectiveLevel" : "WARN" } </code></pre></li> </ol> <p>If the level stays the same, wait for a minute and check again: it takes some time for changes to propagate through Kubernetes and Logback. More info on this topic:</p> <ul> <li><a href="https://logback.qos.ch/manual/configuration.html#autoScan" rel="noreferrer">Logback auto reloading</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="noreferrer">Kubernetes ConfigMap mounting</a><br> (see <em>Mounted ConfigMaps are updated automatically</em> section)</li> </ul>
<p>I have a number of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">Jobs</a> running on k8s. </p> <p>These jobs run a custom agent that copies some files and sets up the environment for a user (trusted) provided container to run. This agent runs on the side of the user container, captures the logs, waits for the container to exit and process the generated results. </p> <p>To achieve this, we mount Docker's socket <code>/var/run/docker.sock</code> and run as a privileged container, and from within the agent, we use <a href="https://github.com/docker/docker-py/tree/master/docker" rel="noreferrer">docker-py</a> to interact with the user container (setup, run, capture logs, terminate).</p> <p>This works almost fine, but I'd consider it a hack. Since the user container was created by calling docker directly on a node, k8s is not aware of it's existence. This has been causing troubles since our monitoring tools interact with K8s, and don't get visibility to these stand-alone user containers. It also makes pod scheduling harder to manage, since the limits (cpu/memory) for the user container are not accounted as the requests for the pod. </p> <p>I'm aware of <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init containers</a> but these don't quite fit this use case, since we want to keep the agent running and monitoring the user container until it completes. </p> <p><em>Is it possible for a container running on a pod, to request Kubernetes to add additional containers to the same pod the agent is running? And if so, can the agent also request Kubernetes to remove the user container at will (e.g. certain custom condition was met)?</em></p>
<p>From <a href="https://github.com/kubernetes/kubernetes/issues/37838#issuecomment-328853094" rel="nofollow noreferrer">this GitHub issue</a>, it seems that the answer is that adding or removing containers to a pod is not possible, since the container list in the pod spec is immutable. </p>
<p>Our GKE cluster is shared to multiple teams in company. Each team can have different public domain (and hence want to have different CA cert setup and also different ingress gateway controller). How to do that in Istio? All the tutorial/introduction articles in Istio's website are using a shared ingress gateway. See the example shared ingress gateway that comes installed by istio-1.0.0: <a href="https://istio.io/docs/tasks/traffic-management/secure-ingress/" rel="noreferrer">https://istio.io/docs/tasks/traffic-management/secure-ingress/</a> </p> <pre><code>spec: selector: istio: ingressgateway # use istio default ingress gateway </code></pre>
<p>Okay, I found the answer after looking at the code of Istio installation via helm. So, basically the istio have an official way (but not really documented in their readme.md file) to add additional gateway (ingress and egress gateway). I know that because I found this <a href="https://github.com/istio/istio/blob/3a0daf1db0bd8a98a414abf7c21e9506a0839848/install/kubernetes/helm/istio/values-istio-gateways.yaml" rel="nofollow noreferrer">yaml file</a> in their github repo and read the comment (also looking at the <code>gateway</code> chart template code for the spec and its logic).</p> <p>So, I solved this by, for example, defining this values-custom-gateway.yaml file:</p> <pre><code># Gateways Configuration # By default (if enabled) a pair of Ingress and Egress Gateways will be created for the mesh. # You can add more gateways in addition to the defaults but make sure those are uniquely named # and that NodePorts are not conflicting. # Disable specifc gateway by setting the `enabled` to false. # gateways: enabled: true agung-ingressgateway: namespace: agung-ns enabled: true labels: app: agung-istio-ingressgateway istio: agung-ingressgateway replicaCount: 1 autoscaleMin: 1 autoscaleMax: 2 resources: {} # limits: # cpu: 100m # memory: 128Mi #requests: # cpu: 1800m # memory: 256Mi loadBalancerIP: "" serviceAnnotations: {} type: LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be ports: ## You can add custom gateway ports - port: 80 targetPort: 80 name: http2 # nodePort: 31380 - port: 443 name: https # nodePort: 31390 - port: 31400 name: tcp secretVolumes: - name: ingressgateway-certs secretName: istio-ingressgateway-certs mountPath: /etc/istio/ingressgateway-certs - name: ingressgateway-ca-certs secretName: istio-ingressgateway-ca-certs mountPath: /etc/istio/ingressgateway-ca-certs </code></pre> <p>If you take a look at yaml file above, I specified the <code>namespace</code> other than <code>istio-system</code> ns. In this case, we can have a way to customize the TLS and ca cert being used by our custom gateway. Also the <code>agung-ingressgateway</code> as the holder of the custom gateway controller spec is used as the gateway controller's name.</p> <p>Then, i just install the istio via <code>helm upgrade --install</code> so that helm can intelligently upgrade the istio with additional gateway.</p> <pre><code>helm upgrade my-istio-release-name &lt;istio-chart-folder&gt; --install </code></pre> <p>Once it upgrades successfully, I can specify custom selector to my <code>Gateway</code>:</p> <pre><code>--- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: agung-gateway namespace: agung-ns spec: selector: app: agung-istio-ingressgateway # use custom gateway # istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE serverCertificate: /etc/istio/ingressgateway-certs/tls.crt privateKey: /etc/istio/ingressgateway-certs/tls.key hosts: - "*" </code></pre>
<p>I try to run an mongodb inside an kubernetes cluster which is hosted on azure aks</p> <p>I was not able to get it running, following this tutorial: <a href="https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/" rel="nofollow noreferrer">https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/</a></p> <p>here is the yaml I use:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: default-view roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: default namespace: default --- apiVersion: v1 kind: Service metadata: name: mongo labels: name: mongo spec: ports: - port: 27017 targetPort: 27017 clusterIP: None selector: role: mongo --- apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: mongo spec: serviceName: "mongo" replicas: 2 template: metadata: labels: role: mongo environment: test spec: terminationGracePeriodSeconds: 10 containers: - name: mongo image: mongo command: - mongod - "--replSet" - rs0 - "--bind_ip" - 0.0.0.0 - "--smallfiles" - "--noprealloc" ports: - containerPort: 27017 volumeMounts: - name: mongo-persistent-storage mountPath: /data/db - name: mongo-sidecar image: cvallance/mongo-k8s-sidecar env: - name: MONGO_SIDECAR_POD_LABELS value: "role=mongo,environment=test" volumeClaimTemplates: - metadata: name: mongo-persistent-storage annotations: volume.beta.kubernetes.io/storage-class: "managed-premium" spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 32Gi </code></pre> <p>The connection string I use is:</p> <p>"mongodb://mongo-0.mongo,mongo-1.mongo:27017/databasename\_?"</p> <p>from my JS application I get: </p> <blockquote> <p>database names cannot contain the character '\'</p> </blockquote> <p>How I can connect from the JS application to the mongodb?</p>
<blockquote> <p>mongodb://mongo-0.mongo,mongo-1.mongo:27017</p> </blockquote> <p>You are indicating that you have a replicaset with 2 members and both use the port 27017. Your mongodb library will handle that url to connect to the cluster.</p> <p>In order to connect locally you have to do a port forwarding:</p> <blockquote> <p>kubectl port-forward mongo-0 27017 # or mongo-1</p> </blockquote> <p>Then you can connect to the chosen mongodb with <strong>Robo 3T</strong> using your localhost (127.0.0.1) and the port 27017.</p>
<p>After manually adding some iptables rules and rebooting the machine, all of the rules are gone (no matter the type of rule ). </p> <p>ex.</p> <pre><code>$ iptables -A FUGA-INPUT -p tcp --dport 23 -j DROP $ iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */ KUBE-FIREWALL all -- anywhere anywhere DROP tcp -- anywhere anywhere tcp dpt:telnet </code></pre> <p>After the reboot:</p> <pre><code>$ iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */ KUBE-FIREWALL all -- anywhere anywhere </code></pre> <p>If I am not mistaken, <code>kube-proxy</code> running on every node is dynamically modifying the <code>iptables</code>. If that is correct how can I add rules that are permanent but still enable kubernetes/kube-proxy to do it's magic and not delete all the <code>INPUT</code>, <code>FORWARD</code> and <code>OUTPUT</code> rules that both Kubernetes and Weave plugin network dynamically generate?</p>
<p>Running <code>iptables</code> on <strong>any</strong> system is not a persistent action and will be forgotten on reboot; a k8s node is not an exception. I doubt that k8s will erase the iptables rules when it starts, so you could try this:</p> <ul> <li>create your rules (do this starting with empty iptables, with <code>iptables -A</code> commands, as you need them)</li> <li>run <code>iptables-save &gt;/etc/my-iptables-rules</code> (NOTE: you could create a rules file manually, too).</li> <li>create a system service script that runs on boot (or use <code>/etc/rc.local</code>) and add <code>iptables-restore -n &lt;/etc/my-iptables-rules</code> to it. This will load your rules on reboot. Note that if you use <code>rc.local</code>, your 'iptables-restore' command may well run after k8s starts, so check that your <code>iptables -A</code> commands are not sensitive to being loaded after those of k8s; if needed, replace the -A commands in the file with -I (to place your commands first in the tables).</li> </ul> <p>(Be aware that some OS installations might include a boot-time service that loads iptables as well; there are firewall packages that install such a service - if you have one on your server, the best approach is to add your rules to that firewall's config rather than write and load your own custom config.)</p>
<p>I have a program that executes some code, sleeps for 10 minutes, then repeats. This continues in an infinite loop. I'm wondering if theres a way to let Kubernetes/GKE handle this scheduling.</p> <p>I see that GKE offers cron scheduling. I could schedule a pod to run every 10 minutes. The problem is that in some scenarios the program could take more than 10 minutes to complete.</p> <p>Ideally, I could let the pod run to completion, schedule it to run in 10 minutes, repeat. Is this possible?</p> <p>Is this possible on Kubernetes?</p>
<p>In K8S there's a specific resource for that goal: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer">CronJob</a>.</p> <p>In the following example you see a <code>schedule</code> section with the typical cron notation:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: your-cron spec: schedule: "*/20 8-19 * * 1-5" concurrencyPolicy: Forbid jobTemplate: spec: template: metadata: labels: app: your-periodic-batch-job spec: containers: - name: redmine-cron image: your_image imagePullPolicy: IfNotPresent restartPolicy: OnFailure </code></pre>
<p>I'm new to Helm. I'm building a Splunk helm chart with numerous conf files. I currently use something like this in a ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: splunk-master-configmap data: indexes.conf: | # global settings # Inheritable by all indexes: no hot/warm bucket can exceed 1 TB. # Individual indexes can override this setting. homePath.maxDataSizeMB = 1000000 </code></pre> <p>but I would prefer to have the conf files in a separate folder, e.g. configs/helloworld.conf, and have come across &quot;tpl&quot; but am struggling to understand how to implement it. Can anyone advise on best practices? On a side note, Splunk has an order of precedence, so there may be many indexes.conf files used in various locations. Does anyone have any thoughts on how best to implement this?</p>
<p>If the content of the files is static then you could create a files directory in your chart at the same level as the templates directory (<a href="https://github.com/Activiti/activiti-cloud-charts/tree/master/activiti-keycloak" rel="noreferrer">not inside it</a>) and reference them like:</p> <pre><code>kind: ConfigMap metadata: name: splunk-master-configmap data: {{ (.Files.Glob "files/indexes.conf").AsConfig | indent 2 }} {{ (.Files.Glob "files/otherfile.conf").AsConfig | indent 2 }} # ... and so on </code></pre> <p>Where this would break down is if you want to be able to reference the values of variables inside the files so that the content is controlled from the values.yaml. If you want to expose each value individually then there's an <a href="https://github.com/technosophos/k8s-helm/blob/master/docs/chart_template_guide/variables.md" rel="noreferrer">example in the helm documentation using range</a>. But I think a good fit or your case is <a href="https://github.com/helm/charts/blob/31279766845d297d12bc4309177f46548b15f82b/stable/mysql/templates/configurationFiles-configmap.yaml" rel="noreferrer">what the stable/mysql chart does</a>. It has a ConfigMap that takes values as strings:</p> <pre><code>{{- if .Values.configurationFiles }} apiVersion: v1 kind: ConfigMap metadata: name: {{ template "mysql.fullname" . }}-configuration data: {{- range $key, $val := .Values.configurationFiles }} {{ $key }}: |- {{ $val | indent 4}} {{- end }} {{- end -}} </code></pre> <p>And the values.yaml allows both the files and their content to be set and overridden by the user of the chart:</p> <pre><code># Custom mysql configuration files used to override default mysql settings configurationFiles: # mysql.cnf: |- # [mysqld] # skip-name-resolve # ssl-ca=/ssl/ca.pem # ssl-cert=/ssl/server-cert.pem # ssl-key=/ssl/server-key.pem </code></pre> <p>It comments out that content and leaves it to the user of the chart to set but you could have defaults in the values.yaml.</p> <p>You would only need <code>tpl</code> if you needed further flexibility. The <a href="https://github.com/helm/charts/tree/master/stable/keycloak" rel="noreferrer">stable/keycloak chart</a> lets the user of the chart create their own configmap and <a href="https://github.com/helm/charts/pull/6090#issuecomment-396941423" rel="noreferrer">point it into the keycloak deployment via tpl</a>. But I think your case is probably closest to the mysql one. </p> <p>Edit: the tpl function can also be used to take the content of files loaded with Files.Get and effectively make that content part of the template - see <a href="https://stackoverflow.com/questions/47595295/how-do-i-load-multiple-templated-config-files-into-a-helm-chart/52009992#52009992">How do I load multiple templated config files into a helm chart?</a> if you're interested in this</p>
<p>I am trying to parse the logs from kubernetes like this for example</p> <pre><code>2018-08-14 13:21:20.013 [INFO][67] health.go 150: Overall health summary=&amp;health.HealthReport{Live:true, Ready:true} </code></pre> <p>And this is the configuration</p> <pre><code>&lt;source&gt; @id calico-node.log @type tail format /^(?&lt;time&gt;[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?&lt;severity&gt;[^ \]]*) *\] (?&lt;message&gt;.*)$/ time_format %Y-%m-%d %H:%M:%S path /var/log/containers/calico-node**.log pos_file /var/log/es-calico.pos tag calico-node &lt;/source&gt; </code></pre> <p>According to regex101.com, this pattern should match this string. However, I get an error from fluentd while trying to parse this</p> <pre><code>2018-08-14 13:21:20 +0000 [warn]: [calico-node.log] "{\"log\":\"2018-08-14 13:21:20.013 [INFO][67] health.go 150: Overall health summary=\\u0026health.HealthReport{Live:true, Ready:true}\\n\",\"stream\":\"stdout\",\"time\":\"2018-08-14T13:21:20.013908223Z\"}" error="invalid time format: value = {\"log\":\"2018-08-14 13:21:20.013, error_class = ArgumentError, error = string doesn't match"``` </code></pre> <p>What could be wrong? I have had similar errors with the built-in parser for apache logs as well?</p>
<p>From what I can see, you are missing something in the <code>fluentd config</code>.</p> <p>Your <code>time_format %Y-%m-%d %H:%M:%S</code> will not work with the timestamp <code>2018-08-14 13:21:20.013</code>, as it's missing <code>.%3N</code>.</p> <p>It should be as follows: <code>time_format %Y-%m-%d %H:%M:%S.%3N</code> or <code>time_format %Y-%m-%d %H:%M:%S.%L</code></p>
<p>I would like to use consul (or would you recommend another technology?) to store environment variables that are used in a kubernetes pod/container. I have a software which uses those environment variables to setup its application state.</p> <p>I heard that I could use consul for that, but I have to use something like consul because of business reasons and the pipeline for setting and distributing that configuration to other systems.</p>
<p>Give <a href="https://github.com/hashicorp/envconsul" rel="nofollow noreferrer">envconsul</a> a try.</p> <p>Install it in your pods to turn configuration stored in Consul into environment variables for your application.</p>
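<p>A rough sketch of how that can look in a pod spec — the image, KV prefix, and Consul address are assumptions, and envconsul's flag names can differ between versions — where the application's start command is wrapped with <code>envconsul</code> so keys under a Consul KV prefix become environment variables:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app                             # placeholder
spec:
  containers:
  - name: my-app
    image: my-registry/my-app:latest       # image must include the envconsul binary
    command: ["envconsul"]
    args: ["-prefix", "my-app/config", "/app/start.sh"]   # keys under my-app/config become env vars for start.sh
    env:
    - name: CONSUL_HTTP_ADDR               # where envconsul should find Consul
      value: "consul.my-namespace.svc.cluster.local:8500"
</code></pre>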
<p>I have a Cloud MySQL instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?</p>
<p>The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container into the pod with your application that allows for traffic to be passed to Cloud SQL.</p> <p>You can find instructions for setting it up <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="nofollow noreferrer">here</a>. (It says it's for GKE, but the principles are the same) </p> <p>If you prefer something a little more hands on, <a href="https://github.com/GoogleCloudPlatform/gmemegen" rel="nofollow noreferrer">this</a> codelab will walk you through taking an app from local to on a Kubernetes Cluster.</p>
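<p>In outline — treating the image tag, instance connection name, and secret name as placeholders you would replace from the linked instructions — the sidecar looks roughly like this inside your Deployment's pod spec:</p> <pre><code>      containers:
      - name: my-app                       # your application container
        image: my-app:latest
      - name: cloudsql-proxy               # the Cloud SQL Proxy sidecar
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=my-project:my-region:my-instance=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials   # a service-account key stored as a Secret
</code></pre> <p>Your application then connects to <code>127.0.0.1:3306</code>, and authentication is handled by the proxy's service account instead of IP whitelisting.</p>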
<p>I understand that <code>{{.Release.namespace}}</code> will render the namespace where the application being installed by <code>helm</code>. In that case, <code>helm template</code> command will render it as empty string (since it doesn't know yet the release namespace). </p> <p>However, what makes me surprise is <code>helm upgrade --install</code> command (i haven't tried other command such as <code>helm install</code>) also renders it empty on some cases. </p> <p>Here is the example of my helm chart template:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: {{.Values.app.name}}-{{.Values.app.track}}-internal namespace: {{.Release.namespace}} annotations: testAnnotate: "{{.Release.namespace}}" spec: ports: - protocol: TCP port: 80 targetPort: 8080 selector: app: {{.Values.app.name}} environment: {{.Values.app.env}} track: {{.Values.app.track}} type: ClusterIP </code></pre> <p>After invoke <code>helm upgrade --install</code> on that chart template (and installed it successfully), I then try to see the output of my resource </p> <pre><code>&gt; kubectl get -o yaml svc java-maven-app-stable-internal -n data-devops apiVersion: v1 kind: Service metadata: annotations: testAnnotate: "" creationTimestamp: 2018-08-09T06:56:41Z name: java-maven-app-stable-internal namespace: data-devops resourceVersion: "62906341" selfLink: /api/v1/namespaces/data-devops/services/java-maven-app-stable-internal uid: 5e888e6a-9ba1-11e8-912b-42010a9400fa spec: clusterIP: 10.32.76.208 ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: java-maven-app environment: stg track: stable sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <p>As you can see, I put <code>{{.Release.namespace}}</code> on 2 places:</p> <ul> <li>in <code>metadata.namespace</code> field</li> <li>in <code>metadata.annotations.testAnnotate</code> field.</li> </ul> <p>But it only renders the correct namespace on <code>metadata.namespace</code> field. Any idea why?</p>
<p>The generated value <code>.Release.Namespace</code> is case-sensitive. The letter N in "namespace" should be capitalized.</p>
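<p>In other words, the template from the question should read:</p> <pre><code>metadata:
  name: {{.Values.app.name}}-{{.Values.app.track}}-internal
  namespace: {{.Release.Namespace}}
  annotations:
    testAnnotate: "{{.Release.Namespace}}"
</code></pre>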
<p>Given a Node with 10 CPU, if I request a Pod with <code>request 4 cpu and limit 6 cpu</code>, does that mean the Node has only 4 CPU left for usage?</p> <p>The real question here is: if I then need a Pod with a 2 CPU request and a 5 CPU limit, and the Node doesn't have 5 CPU left, will the pod not be provisioned?</p> <p>It's not clear in the docs.</p>
<p>The requests are used for scheduling, the limits are that, limits. So, in your example, the node will still have 6 CPU remaining after scheduling your 4 CPU request. It will let your pod use up to 6 CPU, but it will start to limit its performance if it tries to use more than 6, so that it never exceeds 6.</p> <p>The 2 CPU request with 5 CPU limit can be scheduled on a 2 CPU node, providing that nothing else is running there which requested any CPU.</p>
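<p>To put the example from the question in manifest form, the scheduler only looks at the <code>requests</code> block below when placing the pod; the <code>limits</code> block is enforced at runtime:</p> <pre><code>    resources:
      requests:
        cpu: "4"     # the scheduler subtracts this from the node's 10 CPU, leaving 6 allocatable
      limits:
        cpu: "6"     # the container may use up to 6 CPU, but is throttled beyond that
</code></pre>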
<p>Kubernetes allows to specify the cpu <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">limit and/or request</a> for a POD.</p> <p>Limits and requests for CPU resources are measured in <em>cpu units</em>. One cpu, in Kubernetes, is equivalent to:</p> <pre><code>1 AWS vCPU 1 GCP Core 1 Azure vCore 1 IBM vCPU 1 Hyperthread on a bare-metal Intel processor with Hyperthreading </code></pre> <p>Unfortunately, when using an heterogeneous cluster (for instance with different processors), the <em>cpu limit/request</em> depends on the node on which the POD is assigned; especially for real time applications.</p> <p>If we assume that:</p> <ul> <li>we can compute a fined tuned cpu limit for the POD for each kind of hardware of the cluster</li> <li>we want to let the Kubernetes scheduler choose a matching node in the whole cluster</li> </ul> <p>Is there a way to launch the POD so that the cpu limit/request depends on the node chosen by the Kubernetes scheduler (or a Node label)?</p> <p>The obtained behavior should be (or equivalent to):</p> <ul> <li>Before assigning the POD, the scheduler chooses the node by checking different cpu requests depending on the Node (or a Node Label)</li> <li>At runtime, Kublet checks a specific cpu limit depending on the Node (or a Node Label)</li> </ul>
<p>No, you can't have different requests per node type. What you can do is create a pod manifest with a node affinity for a specific kind of node, and requests which makes sense for that node type. For a second kind of node, you will need a second pod manifest which makes sense for that node type. These pod manifests will differ only in their affinity spec and resource requests - so it would be handy to parameterize them. You could do this with Helm, or write a simple script to do it.</p> <p>This approach would let you launch a pod within a subset of your nodes with resource requests which make sense on those nodes, but there's no way to globally adjust its requests/limits based on where it ends up.</p>
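<p>A sketch of one such manifest, assuming you label your nodes with something like <code>hardware-type</code> (the label name, values, and numbers are all examples you would parameterize per node type, e.g. with Helm):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-workload-highcpu          # one variant per node type
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: hardware-type       # your own node label
            operator: In
            values: ["highcpu"]
  containers:
  - name: app
    image: my-image:latest           # placeholder
    resources:
      requests:
        cpu: "2"                     # tuned for this hardware type
      limits:
        cpu: "2"
</code></pre>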
<p>I am running Kubernetes 1.9.6 with Weave Net 2.4.0. I am trying to lock down access to the Kubernetes internal DNS server and a specific port on another host. I cannot seem to find the proper format for the egress.</p> <p>I know the following is not a valid policy but is a representation of what I want to do. How do I write the network policy to support this?</p> <pre><code>--- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: dev spec: podSelector: matchLabels: app: plem-network-policy policyTypes: - Egress egress: - to: - ipBlock: cidr: 10.3.0.10/32 ports: - protocol: TCP port: 53 - protocol: UDP port: 53 - ipBlock: cidr: 10.49.100.37/32 ports: - protocol: TCP port: 8200 </code></pre>
<p>I was not paying enough attention to multiple blocks for the cidr and ports. This is what I was looking for.</p> <pre><code>--- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: dev spec: podSelector: matchLabels: app: plem-network-policy policyTypes: - Egress egress: - to: - ipBlock: cidr: 10.2.0.0/16 - ipBlock: cidr: 10.3.0.10/32 ports: - protocol: UDP port: 53 - protocol: TCP port: 53 - to: - ipBlock: cidr: 10.49.100.37/32 - ipBlock: cidr: 10.49.100.137/32 - ipBlock: cidr: 10.49.100.85/32 ports: - protocol: TCP port: 8200 - to: - ipBlock: cidr: 10.29.30.56/32 ports: - protocol: TCP port: 5439 </code></pre>
<p>I've followed this doc from microsoft <a href="https://learn.microsoft.com/en-us/azure/aks/ingress" rel="noreferrer">Deploy an HTTPS ingress controller on Azure Kubernetes Service (AKS)</a> and have successfully deployed a managed Kubernetes cluster (AKS) with nginx ingress controller. it works with https as expected.</p> <p>However, the domain that responds of the format <strong>subdomain.eastus2.cloudapp.azure.com</strong>. However I would like to use my own custom domain <strong>www.somedomain.com</strong>. I then add a CNAME entry to my custom domain, pointing to the public ip address configured by the kubernetes cluster.</p> <p>However, when I do this, I get a response on the browser of </p> <p><strong>default backend - 404</strong></p> <p>It looks like I need to change the public ip address in Azure (or somewhere) so that it understands that it will be used by a custom domain as well as by an azure subdomain.</p> <p>I've had a look at the command:</p> <p>az network </p> <p>command. However, it's not very clear is this is the right command to use or not. Does anyone know how I can make the changes required so that my custom FQDN can be routed properly to my kubernetes cluster?</p> <p>thanks</p>
<p>Here's the yaml that worked for me. </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: webapp-ingress annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-staging nginx.ingress.kubernetes.io/rewrite-target: / spec: tls: - hosts: - subdomain.eastus2.cloudapp.azure.com - subdomain.domain.com secretName: tls-secret rules: - host: subdomain.eastus2.cloudapp.azure.com http: paths: - path: / backend: serviceName: aks-helloworld servicePort: 80 - host: subdomain.domain.com http: paths: - path: / backend: serviceName: aks-helloworld servicePort: 80 </code></pre> <p>See here for worked through example: <a href="https://learn.microsoft.com/en-us/azure/aks/ingress" rel="noreferrer">Deploy an HTTPS ingress controller on Azure Kubernetes Service (AKS)</a></p>
<p>I've been struggling for a while trying to get HTTPS access to my Elasticsearch cluster in Kubernetes.</p> <p>I <em>think</em> the problem is that Kubernetes doesn't like the TLS certificate I'm trying to use, which is why it's not passing it all the way through to the browser.</p> <p>Everything else seems to work, since when I accept the Kubernetes Ingress Controller Fake Certificate, the requests go through as expected.</p> <p>In my attempt to do this I've set up:</p> <ul> <li>The cluster itself</li> <li>An nginx-ingress controller</li> <li>An ingress resource</li> </ul> <p>Here's the related yaml:</p> <ul> <li><p>Cluster:</p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: 2018-08-03T03:20:47Z labels: run: my-es name: my-es namespace: default resourceVersion: "3159488" selfLink: /api/v1/namespaces/default/services/my-es uid: 373047e0-96cc-11e8-932b-42010a800043 spec: clusterIP: 10.63.241.39 ports: - name: http port: 8080 protocol: TCP targetPort: 9200 selector: run: my-es sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre></li> <li><p>The ingress resource</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations:kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS nginx.ingress.kubernetes.io/cors-origins: http://localhost:3425 https://mydomain.ca https://myOtherDomain.ca nginx.ingress.kubernetes.io/enable-cors: "true" nginx.ingress.kubernetes.io/rewrite-target: / creationTimestamp: 2018-08-12T08:44:29Z generation: 16 name: es-ingress namespace: default resourceVersion: "3159625" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/es-ingress uid: ece0071d-9e0b-11e8-8a45-42001a8000fc spec: rules: - http: paths: - backend: serviceName: my-es servicePort: 8080 path: / tls: - hosts: - mydomain.ca secretName: my-tls-secret status: loadBalancer: ingress: - ip: 130.211.179.225 </code></pre></li> <li><p>The nginx-ingress controller:</p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: 2018-08-12T00:41:32Z labels: app: nginx-ingress chart: nginx-ingress-0.23.0 component: controller heritage: Tiller release: nginx-ingress name: nginx-ingress-controller namespace: default resourceVersion: "2781955" selfLink: /api/v1/namespaces/default/services/nginx-ingress-controller uid: 755ee4b8-9dc8-11e8-85a4-4201a08000fc spec: clusterIP: 10.63.250.256 externalTrafficPolicy: Cluster ports: - name: http nodePort: 32084 port: 80 protocol: TCP targetPort: http - name: https nodePort: 31182 port: 443 protocol: TCP targetPort: https selector: app: nginx-ingress component: controller release: nginx-ingress sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 35.212.6.131 </code></pre></li> </ul> <p>I feel like I'm missing something basic, because it doesn't seem like it should be this hard to expose something this simple...</p> <p>To get my certificate, I just requested one for mydomain.ca from godaddy.</p> <p>Do I need to somehow get a certificate using my ingress resource's cluster IP as the common name? 
</p> <p>It doesn't seem possible to verify ownership of an IP.</p> <p>I've seen people mention ways for Kubernetes to automatically create certificates for ingress resources, but those seem to be self signed.</p> <p>Here are some logs from the nginx-controller:</p> <p>This one is talking about a PEM with the tls-secret, but it's only a warning.</p> <pre><code>{ insertId: "1kvvhm7g1q7e0ej" labels: { compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n" container.googleapis.com/namespace_name: "default" container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s" container.googleapis.com/stream: "stderr" } logName: "projects/project-7d320/logs/nginx-ingress-controller" receiveTimestamp: "2018-08-14T02:58:42.135388365Z" resource: { labels: { cluster_name: "my-elasticsearch-cluster" container_name: "nginx-ingress-controller" instance_id: "2341889542400230234" namespace_id: "default" pod_id: "nginx-ingress-controller-58f57fc597-zl25s" project_id: "project-7d320" zone: "us-central1-a" } type: "container" } severity: "WARNING" textPayload: "error obtaining PEM from secret default/my-tls-cert: error retrieving secret default/my-tls-cert: secret default/my-tls-cert was not found" timestamp: "2018-08-14T02:58:37Z" } </code></pre> <p>I have a few occurences of this handshake error, which may be a result of the last warning...</p> <pre><code>{ insertId: "148t6rfg1xmz978" labels: { compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n" container.googleapis.com/namespace_name: "default" container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s" container.googleapis.com/stream: "stderr" } logName: "projects/project-7d320/logs/nginx-ingress-controller" receiveTimestamp: "2018-08-14T15:55:52.438035706Z" resource: { labels: { cluster_name: "my-elasticsearch-cluster" container_name: "nginx-ingress-controller" instance_id: "2341889542400230234" namespace_id: "default" pod_id: "nginx-ingress-controller-58f57fc597-zl25s" project_id: "project-7d320" zone: "us-central1-a" } type: "container" } severity: "ERROR" textPayload: "2018/08/14 15:55:50 [crit] 1548#1548: *860 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442" timestamp: "2018-08-14T15:55:50Z" } </code></pre> <p>The above logs make it seem like my tls secret isnt working, but when I run kubectl describe ingress, it says my secret terminates.</p> <pre><code>aaronmw@project-7d320:~$ kubectl describe ing Name: es-ingress Namespace: default Address: 130.221.179.212 Default backend: default-http-backend:80 (10.61.3.7:8080) TLS: my-tls-secret terminates mydomain.ca Rules: Host Path Backends ---- ---- -------- * / my-es:8080 (&lt;none&gt;) Annotations: Events: &lt;none&gt; </code></pre>
<p>I figured it out!</p> <p>What I ended up doing was adding a default ssl certificate to my nginx-ingress controller on creation using the following command:</p> <pre><code>helm install --name nginx-ingress --set controller.extraArgs.default-ssl-certificate=default/search-tls-secret stable/nginx-ingress </code></pre> <p>Once I had that, it was passing the cert as expected, but I still had the wrong cert as the CN didn't match my load balancer IP.</p> <p>So what I did was:</p> <ul> <li>Make my load balancer IP static</li> <li>Add an A record to my domain, to map a subdomain to that IP</li> <li>Re-key my cert to match that new subdomain</li> </ul> <p>And I'm in business!</p> <p>Thanks to @Crou, whose comment reminded me to look at the logs and got me on the right track.</p>
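<p>For anyone following along: the <code>default/search-tls-secret</code> referenced above has to exist as a TLS secret before the controller can serve it. A minimal sketch of creating it from the re-keyed certificate files (the file names here are placeholders, not from the original post):</p> <pre><code>kubectl create secret tls search-tls-secret \
  --namespace default \
  --cert=subdomain.domain.crt \
  --key=subdomain.domain.key
</code></pre> <p>The ingress resource's <code>tls.secretName</code> (or the controller's default certificate, as in the helm command above) then points at that secret.</p>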
<p>I want to run a CronJob on my GKE in order to perform a batch operation on a daily basis. The ideal scenario would be for my cluster to scale to 0 nodes when the job is not running and to dynamically scale to 1 node and run the job on it every time the schedule is met.</p> <p>I am first trying to achieve this by using a simple CronJob found in the <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">kubernetes</a> doc that only prints the current time and terminates. </p> <p>I first created a cluster with the following command:</p> <pre><code>gcloud container clusters create $CLUSTER_NAME \ --enable-autoscaling \ --min-nodes 0 --max-nodes 1 --num-nodes 1 \ --zone $CLUSTER_ZONE </code></pre> <p>Then, I created a CronJob with the following description:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello spec: schedule: "1 * * * *" jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster restartPolicy: Never </code></pre> <p>The job is scheduled to run every hour and to print the current time before terminating.</p> <p>First thing, I wanted to create the cluster with 0 nodes but setting <code>--num-nodes 0</code> results in an error. Why is it so? Note that I can manually scale down the cluster to 0 nodes after it has been created.</p> <p>Second, if my cluster has 0 nodes, the job won't be scheduled because the cluster does not scale to 1 node automatically but instead gives the following error: </p> <blockquote> <p>Cannot schedule pods: no nodes available to schedule pods.</p> </blockquote> <p>Third, if my cluster has 1 node, the job runs normally but after that, the cluster won't scale down to 0 nodes but stay with 1 node instead. I let my cluster run for two successive jobs and it did not scale down in between. I assume one hour should be long enough for the cluster to do so.</p> <p>What am I missing?</p> <p>EDIT: I've got it to work and detailed my solution <a href="https://stackoverflow.com/questions/49903951/node-pool-does-not-reduce-his-node-size-to-zero-although-autoscaling-is-enabled/51891485#51891485">here</a>.</p>
<p><strong>Update:</strong></p> <blockquote> <p>Note: Beginning with Kubernetes version 1.7, you can specify a minimum size of zero for your node pool. This allows your node pool to scale down completely if the instances within aren't required to run your workloads.</p> </blockquote> <p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler</a></p> <hr> <p><strong>Old answer:</strong></p> <p>Scaling the entire cluster to 0 is not supported, because you always need at least one node for system pods:</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#minimum_and_maximum_node_pool_size" rel="nofollow noreferrer">See docs</a></p> <p>You could create one node pool with a small machine for system pods, and an additional node pool with a big machine where you would run your workload. This way the second node pool can scale down to 0 and you still have space to run the system pods. </p> <p>After attempting, @xEc mentions: <em>Also note that there are scenarios in which my node pool wouldn't scale, like if I created the pool with an initial size of 0 instead of 1.</em></p> <p>Initial suggestion:</p> <p>Perhaps you could run a micro VM, with cron to scale the cluster up, submit a Job (instead of CronJob), wait for it to finish and then scale it back down to 0? </p>
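<p>To make the two-node-pool suggestion concrete, here is a sketch of adding a separate autoscaling pool for the batch workload (the pool name is a placeholder; the flags mirror the ones used in the question). Per the note above, create it with an initial size of 1 rather than 0 so the autoscaler behaves:</p> <pre><code>gcloud container node-pools create batch-pool \
  --cluster $CLUSTER_NAME \
  --zone $CLUSTER_ZONE \
  --enable-autoscaling --min-nodes 0 --max-nodes 1 \
  --num-nodes 1
</code></pre> <p>The default pool then only needs to be big enough for the system pods, while <code>batch-pool</code> can scale down to 0 between runs.</p>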
<p>I'm having an extremely hard time setting up EKS on AWS. I've followed this tutorial: <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-launch-workers" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-launch-workers</a></p> <p>I got up to the <code>~/.kube/config</code> file and when I try to run <code>kubectl get svc</code> I'm prompted with the below.</p> <pre><code>▶ kubectl get svc Please enter Username: Alex Please enter Password: ******** Error from server (Forbidden): services is forbidden: User "system:anonymous" cannot list services in the namespace "default" </code></pre> <p>I'm unsure where to find the username and password for this entry. Please point me to the exact place where I can find this information.</p> <p>I think this also has to do with EKS RBAC. I'm not sure how to get around this without having access to the server.</p>
<p>This issue occurs if the <code>user</code> configuration in your <code>kubeconfig</code> isn't working, so <code>kubectl</code> falls back to anonymous access and the API server sees you as <code>system:anonymous</code>, or if you are on a version of <code>kubectl</code> lower than v1.10 (EKS needs v1.10+ for exec-based credential plugins).</p>
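<p>As a quick check, run <code>kubectl version --client</code> to confirm you are on v1.10 or newer, and make sure the <code>users</code> section of <code>~/.kube/config</code> uses the exec-based authenticator rather than a username/password. A sketch of what the EKS getting-started guide generates (the cluster name is a placeholder, and depending on the guide version the command is <code>heptio-authenticator-aws</code> or <code>aws-iam-authenticator</code>):</p> <pre><code>users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "&lt;cluster-name&gt;"
</code></pre> <p>If <code>kubectl</code> prompts for a username and password, this exec block isn't being picked up (wrong context, wrong config path, or an old client).</p>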
<p>I am a newbie to kubernetes clustering and have a simple question. I am working with multiple kubernetes clusters, and it seems kubernetes supports having multiple clusters. So does "multiple clusters" in kubernetes mean:</p> <blockquote> <ul> <li>Physically separate clusters, each with its own master and nodes?</li> <li>Or a kind of "logical" division into multiple clusters within one physical cluster (with one master and its nodes)?</li> </ul> </blockquote> <p>Could someone explain? Thanks.</p>
<p>It seems this has been answered by @<a href="https://stackoverflow.com/users/6716781/jonah-benton">Jonah Benton</a> and @<a href="https://stackoverflow.com/users/129570/oliver-charlesworth">Oliver Charlesworth</a> in the comments. Using <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="noreferrer">kubernetes cluster federation</a> means using multiple kubernetes clusters as though they were one. It is a set of distinct clusters acting as one, not a logical division within a single cluster. (Something like that idea of a logical division might resemble a <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="noreferrer">namespace</a>, which allows for working with a restricted 'part' of a cluster in terms of resource-allowance but a namespace isn't restricted to particular nodes.) In addition to the links provided, the kubernetes documentation also has resources for <a href="https://kubernetes.io/docs/tasks/federation/" rel="noreferrer">deploying to a cluster federation</a> and that page includes links for setting up a federation. </p> <p>I guess you know this now as the question is a little old but I thought it would help to fill in an answer.</p>
<p>I want to uninstall Kubernetes from CentOS 7, along with all of its dependencies and files, such as: </p> <pre><code>kube-apiserver
kube-controller-manager
kubectl
kubelet
kube-proxy
kube-scheduler
</code></pre>
<p>Check this <a href="https://stackoverflow.com/questions/44698283/how-to-completely-uninstall-kubernetes">thread</a>; otherwise the following steps should help.</p> <p>First, remove the nodes (and with them the pods scheduled on them) from your k8s cluster using,</p> <pre><code>$ kubectl delete node --all </code></pre> <p>then remove the data volumes and backups (if they are no longer needed) from your host system. Finally, you can stop all the k8s services using the script,</p> <pre><code>$ for service in kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler; do
    systemctl stop $service
done
$ yum -y remove kubernetes    # if it's installed as a package
</code></pre>
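<p>If the cluster was set up with kubeadm and the official Kubernetes yum repo, a fuller clean-up sketch looks roughly like this (package names and paths are assumptions, adjust them to how you installed; back up anything under <code>/var/lib/etcd</code> you still need first):</p> <pre><code>$ kubeadm reset
$ yum -y remove kubeadm kubectl kubelet kubernetes-cni
$ rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd ~/.kube
</code></pre>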
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
  labels:
    run: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: my-app
        ports:
        - containerPort: 8080
</code></pre> <p>This is a sample yaml from the kubernetes site. There are so many <code>my-app</code> occurrences; do they all have to be the same? What is their purpose?</p>
<blockquote> <p>This is a sample yaml from the kubernetes site. There are so many my-app occurrences; do they all have to be the same? What is their purpose?</p> </blockquote> <p>No, they don't have to be the same as far as the <code>name</code> field goes, that can be different. The <code>my-app</code> references seen in the <code>metadata</code> and <code>selector</code> sections are <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">labels</a> that can be used to glue the different Kubernetes objects together or simply select a subset of objects when querying Kubernetes. They will sometimes be the same.</p> <p>Depending on how you've created the Deployment you may have <code>run: my-app</code> throughout the Deployment and in the objects derived from it. Using <code>kubectl run my-app --image=gcr.io/google-samples/hello-app:1.0 --replicas=3</code> would create a Deployment identical to the one you're referring to.</p> <p>Here's a picture showing how the different <code>run: my-app</code> labels are used, using the Deployment above as an inspiration:</p> <p><a href="https://i.stack.imgur.com/HMMTa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HMMTa.png" alt="enter image description here"></a></p> <p>The picture above shows you the Deployment and how the <code>template</code> box (blue) is used to create the number of specified replicas (Pods). Each Pod will get a <code>run: my-app</code> label in its <code>metadata</code> section; from the Deployment's point of view this will be used as a way of selecting the Pods it's responsible for.</p> <p>A similar selection of a subset of Pods using <code>kubectl</code> would be:</p> <p><code>kubectl get pods -l run=my-app</code></p> <p>This will give you all Pods labeled <code>run: my-app</code>.</p> <p>To sum up a bit, labels can be used to select a subset of resources when querying using e.g. <code>kubectl</code>, or by other Kubernetes resources to do selections. You can create your own labels and they don't necessarily have to be the same throughout your specific Deployment, but if they are it is pretty easy to query for any resource with a specific label.</p>
<p>We are trying to get <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">this example</a> to work on a kubernetes setup on a bunch of virtual servers as a proof of concept. However, we are running in to a problem with the <code>kubectl port-forward</code> command.</p> <p>The error we get is <code>error: error upgrading connection: unable to upgrade connection: pod does not exist</code>, and tacking on a <code>-v=10</code> reveals a 404 on an API call to /portforward. The entire output of the command:</p> <pre><code>oschusler@shepherd:~$ kubectl port-forward pod/redis-master-6b464554c8-vwns2 6379:6379 -v=10 I0815 11:28:18.281223 17549 loader.go:359] Config loaded from file /home/oschusler/.kube/config I0815 11:28:18.282045 17549 loader.go:359] Config loaded from file /home/oschusler/.kube/config I0815 11:28:18.284388 17549 loader.go:359] Config loaded from file /home/oschusler/.kube/config I0815 11:28:18.285945 17549 cached_discovery.go:104] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/servergroups.json I0815 11:28:18.286747 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apps/v1beta2/serverresources.json I0815 11:28:18.287309 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/v1/serverresources.json I0815 11:28:18.287655 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apiregistration.k8s.io/v1/serverresources.json I0815 11:28:18.287985 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apiregistration.k8s.io/v1beta1/serverresources.json I0815 11:28:18.288132 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/scheduling.k8s.io/v1beta1/serverresources.json I0815 11:28:18.288536 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apps/v1beta1/serverresources.json I0815 11:28:18.288611 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/events.k8s.io/v1beta1/serverresources.json I0815 11:28:18.288761 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/authentication.k8s.io/v1/serverresources.json I0815 11:28:18.288981 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/authentication.k8s.io/v1beta1/serverresources.json I0815 11:28:18.289049 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/authorization.k8s.io/v1/serverresources.json I0815 11:28:18.289137 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/extensions/v1beta1/serverresources.json I0815 11:28:18.289247 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/autoscaling/v1/serverresources.json I0815 11:28:18.289138 17549 cached_discovery.go:70] returning cached discovery info from 
/home/oschusler/.kube/cache/discovery/192.168.99.100_6443/authorization.k8s.io/v1beta1/serverresources.json I0815 11:28:18.289309 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/autoscaling/v2beta1/serverresources.json I0815 11:28:18.289364 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/batch/v1/serverresources.json I0815 11:28:18.289412 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/batch/v1beta1/serverresources.json I0815 11:28:18.289463 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/certificates.k8s.io/v1beta1/serverresources.json I0815 11:28:18.289503 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/networking.k8s.io/v1/serverresources.json I0815 11:28:18.289560 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/policy/v1beta1/serverresources.json I0815 11:28:18.289704 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/rbac.authorization.k8s.io/v1/serverresources.json I0815 11:28:18.289853 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/rbac.authorization.k8s.io/v1beta1/serverresources.json I0815 11:28:18.289854 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apps/v1/serverresources.json I0815 11:28:18.289914 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/storage.k8s.io/v1/serverresources.json I0815 11:28:18.289967 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/storage.k8s.io/v1beta1/serverresources.json I0815 11:28:18.290016 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apiextensions.k8s.io/v1beta1/serverresources.json I0815 11:28:18.290165 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/admissionregistration.k8s.io/v1beta1/serverresources.json I0815 11:28:18.290626 17549 cached_discovery.go:104] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/servergroups.json I0815 11:28:18.291271 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/v1/serverresources.json I0815 11:28:18.291664 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apiregistration.k8s.io/v1/serverresources.json I0815 11:28:18.291893 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apiregistration.k8s.io/v1beta1/serverresources.json I0815 11:28:18.292185 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/extensions/v1beta1/serverresources.json I0815 11:28:18.292520 17549 cached_discovery.go:70] 
returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apps/v1/serverresources.json I0815 11:28:18.293000 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apps/v1beta2/serverresources.json I0815 11:28:18.293328 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apps/v1beta1/serverresources.json I0815 11:28:18.293618 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/events.k8s.io/v1beta1/serverresources.json I0815 11:28:18.293839 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/authentication.k8s.io/v1/serverresources.json I0815 11:28:18.294052 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/authentication.k8s.io/v1beta1/serverresources.json I0815 11:28:18.294293 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/authorization.k8s.io/v1/serverresources.json I0815 11:28:18.294445 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/authorization.k8s.io/v1beta1/serverresources.json I0815 11:28:18.294646 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/autoscaling/v1/serverresources.json I0815 11:28:18.294800 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/autoscaling/v2beta1/serverresources.json I0815 11:28:18.294946 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/batch/v1/serverresources.json I0815 11:28:18.294989 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/batch/v1beta1/serverresources.json I0815 11:28:18.295137 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/certificates.k8s.io/v1beta1/serverresources.json I0815 11:28:18.295283 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/networking.k8s.io/v1/serverresources.json I0815 11:28:18.295433 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/policy/v1beta1/serverresources.json I0815 11:28:18.295648 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/rbac.authorization.k8s.io/v1/serverresources.json I0815 11:28:18.295702 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/rbac.authorization.k8s.io/v1beta1/serverresources.json I0815 11:28:18.295855 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/storage.k8s.io/v1/serverresources.json I0815 11:28:18.296005 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/storage.k8s.io/v1beta1/serverresources.json I0815 11:28:18.296320 
17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/admissionregistration.k8s.io/v1beta1/serverresources.json I0815 11:28:18.296602 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/apiextensions.k8s.io/v1beta1/serverresources.json I0815 11:28:18.296678 17549 cached_discovery.go:70] returning cached discovery info from /home/oschusler/.kube/cache/discovery/192.168.99.100_6443/scheduling.k8s.io/v1beta1/serverresources.json I0815 11:28:18.297839 17549 loader.go:359] Config loaded from file /home/oschusler/.kube/config I0815 11:28:18.298363 17549 round_trippers.go:386] curl -k -v -XGET -H "User-Agent: kubectl/v1.11.2 (linux/amd64) kubernetes/bb9ffb1" -H "Accept: application/json, */*" 'https://192.168.99.100:6443/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2' I0815 11:28:18.308319 17549 round_trippers.go:405] GET https://192.168.99.100:6443/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2 200 OK in 9 milliseconds I0815 11:28:18.308336 17549 round_trippers.go:411] Response Headers: I0815 11:28:18.308341 17549 round_trippers.go:414] Content-Type: application/json I0815 11:28:18.308345 17549 round_trippers.go:414] Content-Length: 2525 I0815 11:28:18.308348 17549 round_trippers.go:414] Date: Wed, 15 Aug 2018 11:28:18 GMT I0815 11:28:18.308379 17549 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"redis-master-6b464554c8-vwns2","generateName":"redis-master-6b464554c8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2","uid":"0730fdd1-a07c-11e8-8302-02a6922ff6a3","resourceVersion":"569","creationTimestamp":"2018-08-15T11:11:59Z","labels":{"app":"redis","pod-template-hash":"2602011074","role":"master","tier":"backend"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"redis-master-6b464554c8","uid":"07250bc2-a07c-11e8-8302-02a6922ff6a3","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-xzf8z","secret":{"secretName":"default-token-xzf8z","defaultMode":420}}],"containers":[{"name":"master","image":"k8s.gcr.io/redis:e2e","ports":[{"containerPort":6379,"protocol":"TCP"}],"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"volumeMounts":[{"name":"default-token-xzf8z","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"sheep01","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-08-15T11:11:59Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-08-15T11:12:01Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":null},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-08-15T11:11:59Z"}],"hostIP":"10.0.2.15","podIP":"10.244.1.3","sta
rtTime":"2018-08-15T11:11:59Z","containerStatuses":[{"name":"master","state":{"running":{"startedAt":"2018-08-15T11:12:00Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/redis:e2e","imageID":"docker-pullable://k8s.gcr.io/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25","containerID":"docker://b2a009cc40d81a188bb9d8c96f2c7c60bc043260b1bed707408439a6b7f8ad80"}],"qosClass":"Burstable"}} I0815 11:28:18.314658 17549 loader.go:359] Config loaded from file /home/oschusler/.kube/config I0815 11:28:18.315684 17549 loader.go:359] Config loaded from file /home/oschusler/.kube/config I0815 11:28:18.316465 17549 loader.go:359] Config loaded from file /home/oschusler/.kube/config I0815 11:28:18.316846 17549 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.2 (linux/amd64) kubernetes/bb9ffb1" 'https://192.168.99.100:6443/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2' I0815 11:28:18.319240 17549 round_trippers.go:405] GET https://192.168.99.100:6443/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2 200 OK in 2 milliseconds I0815 11:28:18.319258 17549 round_trippers.go:411] Response Headers: I0815 11:28:18.319262 17549 round_trippers.go:414] Content-Type: application/json I0815 11:28:18.319268 17549 round_trippers.go:414] Content-Length: 2525 I0815 11:28:18.319272 17549 round_trippers.go:414] Date: Wed, 15 Aug 2018 11:28:18 GMT I0815 11:28:18.319301 17549 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"redis-master-6b464554c8-vwns2","generateName":"redis-master-6b464554c8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2","uid":"0730fdd1-a07c-11e8-8302-02a6922ff6a3","resourceVersion":"569","creationTimestamp":"2018-08-15T11:11:59Z","labels":{"app":"redis","pod-template-hash":"2602011074","role":"master","tier":"backend"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"redis-master-6b464554c8","uid":"07250bc2-a07c-11e8-8302-02a6922ff6a3","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-xzf8z","secret":{"secretName":"default-token-xzf8z","defaultMode":420}}],"containers":[{"name":"master","image":"k8s.gcr.io/redis:e2e","ports":[{"containerPort":6379,"protocol":"TCP"}],"resources":{"requests":{"cpu":"100m","memory":"100Mi"}},"volumeMounts":[{"name":"default-token-xzf8z","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"sheep01","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-08-15T11:11:59Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-08-15T11:12:01Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":null},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-08-15T11:
11:59Z"}],"hostIP":"10.0.2.15","podIP":"10.244.1.3","startTime":"2018-08-15T11:11:59Z","containerStatuses":[{"name":"master","state":{"running":{"startedAt":"2018-08-15T11:12:00Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/redis:e2e","imageID":"docker-pullable://k8s.gcr.io/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25","containerID":"docker://b2a009cc40d81a188bb9d8c96f2c7c60bc043260b1bed707408439a6b7f8ad80"}],"qosClass":"Burstable"}} I0815 11:28:18.320165 17549 round_trippers.go:386] curl -k -v -XPOST -H "X-Stream-Protocol-Version: portforward.k8s.io" -H "User-Agent: kubectl/v1.11.2 (linux/amd64) kubernetes/bb9ffb1" 'https://192.168.99.100:6443/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2/portforward' I0815 11:28:18.339785 17549 round_trippers.go:405] POST https://192.168.99.100:6443/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2/portforward 404 Not Found in 19 milliseconds I0815 11:28:18.340171 17549 round_trippers.go:411] Response Headers: I0815 11:28:18.340348 17549 round_trippers.go:414] Date: Wed, 15 Aug 2018 11:28:18 GMT I0815 11:28:18.340515 17549 round_trippers.go:414] Content-Length: 18 I0815 11:28:18.340674 17549 round_trippers.go:414] Content-Type: text/plain; charset=utf-8 F0815 11:28:18.340939 17549 helpers.go:119] error: error upgrading connection: unable to upgrade connection: pod does not exist oschusler@shepherd:~$ </code></pre> <p>It looks like the issue is with <code>POST https://192.168.99.100:6443/api/v1/namespaces/default/pods/redis-master-6b464554c8-vwns2/portforward 404 Not Found in 19 milliseconds</code>.</p> <p>We've been at this issue for days straight, but haven't been able to figure it out so far. People suggested RBAC might be the culprit, but we are unsure how to check this. We also tried starting the kubelets with the <code>AlwaysAllow</code> authorization mode, but that gave all kinds of different issues.</p> <p>Not sure how to proceed on this one, we're at our wits end.</p>
<p>After diving into the apiserver logs (<code>--v=9</code> as argument in the apiserver yaml config, which we found a hint of on <a href="https://github.com/kubernetes/kubeadm/issues/132" rel="noreferrer">this</a> and <a href="https://github.com/kubernetes/kubernetes/issues/35054" rel="noreferrer">this</a> github issue) we found a line that suggested the actual problem:</p> <pre><code>I0815 12:18:23.953732 1 handler.go:143] kube-apiserver: POST "/api/v1/namespaces/default/pods/redis-master-6b464554c8-jsxrp/portforward" satisfied by gorestful with webservice /api/v1
I0815 12:18:23.975756 1 upgradeaware.go:259] Connecting to backend proxy (intercepting redirects) https://10.0.2.15:10250/portForward/default/redis-master-6b464554c8-jsxrp Headers: map[Connection:[Upgrade] Upgrade:[SPDY/3.1] X-Stream-Protocol-Version:[portforward.k8s.io] User-Agent:[kubectl/v1.11.2 (linux/amd64) kubernetes/bb9ffb1] Content-Length:[0] X-Forwarded-For:[192.168.99.100]]
</code></pre> <p>The request is forwarded to 10.0.2.15, which happens to be a local NAT-ed IP that is assigned to all VMs. So every VM has the internal IP 10.0.2.15, as was also seen by listing the nodes:</p> <pre><code>NAME       STATUS    ROLES     AGE       VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
sheep01    Ready     &lt;none&gt;    49m       v1.11.2   10.0.2.15     &lt;none&gt;        Ubuntu 18.04.1 LTS   4.15.0-32-generic   docker://17.12.1-ce
shepherd   Ready     master    1h        v1.11.2   10.0.2.15     &lt;none&gt;        Ubuntu 18.04.1 LTS   4.15.0-32-generic   docker://17.12.1-ce
</code></pre> <p>As specified in <a href="https://devops.stackexchange.com/questions/3514/why-does-helm-time-out-on-almost-every-action#3612">this answer</a>, we added the <code>--node-ip</code> option to override what the kubelet thinks its IP is, and that fixed the issue.</p>
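<p>For reference, on a kubeadm-provisioned Ubuntu node the flag usually goes into the kubelet's extra-args file, after which the kubelet is restarted. A sketch (file locations vary between kubeadm versions, and the IP is each node's own host-only address):</p> <pre><code># /etc/default/kubelet   (older setups: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf)
KUBELET_EXTRA_ARGS=--node-ip=192.168.56.4

$ sudo systemctl daemon-reload &amp;&amp; sudo systemctl restart kubelet
</code></pre>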
<p>I have a container that performs some actions over some data. This container is heavy in memory and CPU resources, and I want it to start only on an on-demand basis.</p> <p>As an example, with docker-compose, outside of Kubernetes, I use it this way:</p> <pre><code>docker-compose run heavycontainer perform.sh some-action </code></pre> <p>The container performs the action and ends.</p> <p>In Kubernetes I want this container to perform the actions it provides, but in response to some messages (AMQP messages, created by other containers). I have a container that listens for messages. My first thought was a pod with two containers: listener and performer. But I don't know whether it is possible to start one container from another.</p> <p>Init or sidecar containers don't seem to be a solution. And I prefer to avoid creating a custom image to inject the listener into the performer.</p> <p>Is there any way to achieve this?</p>
<p><strike> I hope this helps you.</p> <ul> <li><p>If the pod needs to <code>run regularly</code>, use a <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#creating-a-cron-job" rel="nofollow noreferrer">CronJob</a></p></li> <li><p>If the pod needs to <code>run on demand</code>, use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a></strike></p></li> </ul> <p>Firstly, I apologize for my wrong answer. </p> <p>I understand what you want now, and I think it is possible to run multiple containers in the same pod. <a href="https://blog.openshift.com/patterns-application-augmentation-openshift/" rel="nofollow noreferrer">Patterns for Application Augmentation on OpenShift</a> should be helpful for you. </p> <p>PS. OpenShift is Enterprise Kubernetes, so you can think of OpenShift as essentially the same as Kubernetes.</p>
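<p>If you do go the on-demand route, the listener container can simply create a Job through the Kubernetes API each time a message arrives, instead of trying to start a sibling container directly. A minimal sketch of such a Job (the image name and resource figures are assumptions; <code>perform.sh some-action</code> is taken from the docker-compose example in the question):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  generateName: heavy-task-
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: performer
        image: myregistry/heavycontainer:latest
        command: ["perform.sh", "some-action"]
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
</code></pre> <p>The listener only needs a ServiceAccount with permission to create Jobs; the heavy pod then runs to completion and its resources are freed.</p>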
<p>I am looking for help to troubleshoot this basic scenario that isn't working OK:</p> <p>Three nodes installed with <strong>kubeadm</strong> on <strong>VirtualBox VMs</strong> running on a MacBook:</p> <pre><code>sudo kubectl get nodes NAME STATUS ROLES AGE VERSION kubernetes-master Ready master 4h v1.10.2 kubernetes-node1 Ready &lt;none&gt; 4h v1.10.2 kubernetes-node2 Ready &lt;none&gt; 34m v1.10.2 </code></pre> <p>The Virtualbox VMs have 2 adapters: 1) Host-only 2) NAT. The node IP's from the guest computer are:</p> <pre><code>kubernetes-master (192.168.56.3) kubernetes-node1 (192.168.56.4) kubernetes-node2 (192.168.56.5) </code></pre> <p>I am using flannel pod network (I also tried Calico previously with the same result).</p> <p>When installing the master node I used this command:</p> <pre><code>sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.3 </code></pre> <p>I deployed an nginx application whose pods are up, one pod per node:</p> <pre><code>nginx-deployment-64ff85b579-sk5zs 1/1 Running 0 14m 10.244.2.2 kubernetes-node2 nginx-deployment-64ff85b579-sqjgb 1/1 Running 0 14m 10.244.1.2 kubernetes-node1 </code></pre> <p>I exposed them as a ClusterIP service:</p> <pre><code>sudo kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 22m nginx-deployment ClusterIP 10.98.206.211 &lt;none&gt; 80/TCP 14m </code></pre> <p>Now the problem:</p> <p>I ssh into kubernetes-node1 and curl the service using the cluster IP:</p> <pre><code>ssh 192.168.56.4 --- curl 10.98.206.211 </code></pre> <p>Sometimes the request goes fine, returning the nginx welcome page. I can see in the logs that this requests are always answered by the pod in the same node (kubernetes-node1). Some other requests are stuck until they time out. I guess that this ones were sent to the pod in the other node (kubernetes-node2).</p> <p>The same happens the other way around, when ssh'd into kubernetes-node2 the pod from this node logs the successful requests and the others time out.</p> <p>I seems there is some kind of networking problem and nodes can't access pods from the other nodes. How can I fix this?</p> <p><strong>UPDATE:</strong></p> <p>I downscaled the number of replicas to 1, so now there is only one pod on kubernetes-node2</p> <p>If I ssh into kubernetes-node2 all curls go fine. 
When in kubernetes-node1 all requests time out.</p> <p><strong>UPDATE 2:</strong> </p> <p><strong>kubernetes-master ifconfig</strong></p> <pre><code>cni0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet 10.244.0.1 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 fe80::20a0:c7ff:fe6f:8271 prefixlen 64 scopeid 0x20&lt;link&gt; ether 0a:58:0a:f4:00:01 txqueuelen 1000 (Ethernet) RX packets 10478 bytes 2415081 (2.4 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 11523 bytes 2630866 (2.6 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 docker0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt; mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 ether 02:42:cd:ce:84:a9 txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 enp0s3: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 inet 192.168.56.3 netmask 255.255.255.0 broadcast 192.168.56.255 inet6 fe80::a00:27ff:fe2d:298f prefixlen 64 scopeid 0x20&lt;link&gt; ether 08:00:27:2d:29:8f txqueuelen 1000 (Ethernet) RX packets 20784 bytes 2149991 (2.1 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 26567 bytes 26397855 (26.3 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 enp0s8: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255 inet6 fe80::a00:27ff:fe09:f08a prefixlen 64 scopeid 0x20&lt;link&gt; ether 08:00:27:09:f0:8a txqueuelen 1000 (Ethernet) RX packets 12662 bytes 12491693 (12.4 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 4507 bytes 297572 (297.5 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 flannel.1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0 inet6 fe80::c078:65ff:feb9:e4ed prefixlen 64 scopeid 0x20&lt;link&gt; ether c2:78:65:b9:e4:ed txqueuelen 0 (Ethernet) RX packets 6 bytes 444 (444.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 6 bytes 444 (444.0 B) TX errors 0 dropped 15 overruns 0 carrier 0 collisions 0 lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt; mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10&lt;host&gt; loop txqueuelen 1000 (Local Loopback) RX packets 464615 bytes 130013389 (130.0 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 464615 bytes 130013389 (130.0 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 tunl0: flags=193&lt;UP,RUNNING,NOARP&gt; mtu 1440 tunnel txqueuelen 1000 (IPIP Tunnel) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 vethb1098eb3: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet6 fe80::d8a3:a2ff:fedf:4d1d prefixlen 64 scopeid 0x20&lt;link&gt; ether da:a3:a2:df:4d:1d txqueuelen 0 (Ethernet) RX packets 10478 bytes 2561773 (2.5 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 11538 bytes 2631964 (2.6 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 </code></pre> <p><strong>kubernetes-node1 ifconfig</strong></p> <pre><code>cni0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet 10.244.1.1 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 fe80::5cab:32ff:fe04:5b89 prefixlen 64 scopeid 0x20&lt;link&gt; ether 0a:58:0a:f4:01:01 txqueuelen 1000 (Ethernet) RX packets 199 bytes 41004 (41.0 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 331 bytes 56438 (56.4 KB) TX 
errors 0 dropped 0 overruns 0 carrier 0 collisions 0 docker0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt; mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 ether 02:42:0f:02:bb:ff txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 enp0s3: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 inet 192.168.56.4 netmask 255.255.255.0 broadcast 192.168.56.255 inet6 fe80::a00:27ff:fe36:741a prefixlen 64 scopeid 0x20&lt;link&gt; ether 08:00:27:36:74:1a txqueuelen 1000 (Ethernet) RX packets 12834 bytes 9685221 (9.6 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9114 bytes 1014758 (1.0 MB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 enp0s8: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255 inet6 fe80::a00:27ff:feb2:23a3 prefixlen 64 scopeid 0x20&lt;link&gt; ether 08:00:27:b2:23:a3 txqueuelen 1000 (Ethernet) RX packets 13263 bytes 12557808 (12.5 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 5065 bytes 341321 (341.3 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 flannel.1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0 inet6 fe80::7815:efff:fed6:1423 prefixlen 64 scopeid 0x20&lt;link&gt; ether 7a:15:ef:d6:14:23 txqueuelen 0 (Ethernet) RX packets 483 bytes 37506 (37.5 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 483 bytes 37506 (37.5 KB) TX errors 0 dropped 15 overruns 0 carrier 0 collisions 0 lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt; mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10&lt;host&gt; loop txqueuelen 1000 (Local Loopback) RX packets 3072 bytes 269588 (269.5 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 3072 bytes 269588 (269.5 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 veth153293ec: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet6 fe80::70b6:beff:fe94:9942 prefixlen 64 scopeid 0x20&lt;link&gt; ether 72:b6:be:94:99:42 txqueuelen 0 (Ethernet) RX packets 81 bytes 19066 (19.0 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 129 bytes 10066 (10.0 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 </code></pre> <p><strong>kubernetes-node2 ifconfig</strong></p> <pre><code>cni0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt; mtu 1500 inet 10.244.2.1 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 fe80::4428:f5ff:fe8b:a76b prefixlen 64 scopeid 0x20&lt;link&gt; ether 0a:58:0a:f4:02:01 txqueuelen 1000 (Ethernet) RX packets 184 bytes 36782 (36.7 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 284 bytes 36940 (36.9 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 docker0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt; mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 ether 02:42:7f:e9:79:cd txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 enp0s3: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 inet 192.168.56.5 netmask 255.255.255.0 broadcast 192.168.56.255 inet6 fe80::a00:27ff:feb7:ff54 prefixlen 64 scopeid 0x20&lt;link&gt; ether 08:00:27:b7:ff:54 txqueuelen 1000 (Ethernet) RX packets 12634 bytes 9466460 (9.4 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 8961 bytes 979807 (979.8 KB) TX errors 0 dropped 0 overruns 
0 carrier 0 collisions 0 enp0s8: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 inet 10.0.3.15 netmask 255.255.255.0 broadcast 10.0.3.255 inet6 fe80::a00:27ff:fed8:9210 prefixlen 64 scopeid 0x20&lt;link&gt; ether 08:00:27:d8:92:10 txqueuelen 1000 (Ethernet) RX packets 12658 bytes 12491919 (12.4 MB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 4544 bytes 297215 (297.2 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 flannel.1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450 inet 10.244.2.0 netmask 255.255.255.255 broadcast 0.0.0.0 inet6 fe80::c832:e4ff:fe3e:f616 prefixlen 64 scopeid 0x20&lt;link&gt; ether ca:32:e4:3e:f6:16 txqueuelen 0 (Ethernet) RX packets 111 bytes 8466 (8.4 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 111 bytes 8466 (8.4 KB) TX errors 0 dropped 15 overruns 0 carrier 0 collisions 0 lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt; mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10&lt;host&gt; loop txqueuelen 1000 (Local Loopback) RX packets 2940 bytes 258968 (258.9 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2940 bytes 258968 (258.9 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 </code></pre> <p><strong>UPDATE 3:</strong> </p> <p><strong>Kubelet logs:</strong></p> <p><a href="https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-master-kubelet-log" rel="noreferrer">kubernetes-master kubelet logs</a></p> <p><a href="https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-node1-kubelet-log" rel="noreferrer">kubernetes-node1 kubelet logs</a></p> <p><a href="https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-node2-kubelet-log" rel="noreferrer">kubernetes-node2 kubelet logs</a></p> <p><strong>IP Routes</strong></p> <p><strong>Master</strong></p> <pre><code>kubernetes-master:~$ ip route default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.3 </code></pre> <p><strong>Node1</strong></p> <pre><code>kubernetes-node1:~$ ip route default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.4 </code></pre> <p><strong>Node2</strong></p> <pre><code>kubernetes-node2:~$ ip route default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.5 </code></pre> 
<p><strong>iptables-save:</strong></p> <p><a href="https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-master-iptables-save" rel="noreferrer">kubernetes-master iptables-save</a></p> <p><a href="https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-node1-iptables-save" rel="noreferrer">kubernetes-node1 iptables-save</a></p> <p><a href="https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-node2-iptables-save" rel="noreferrer">kubernetes-node2 iptables-save</a></p>
<p>I was running into a similar problem with my K8s cluster with Flannel. I had set up the vms with a NAT nic for internet connectivity and a Host-Only nic for node to node communication. Flannel was choosing the NAT nic by default for node to node communication which obviously won't work in this scenario. </p> <p>I modified the flannel manifest before deploying to set the <strong><em>--iface=enp0s8</em></strong> argument to the Host-Only nic that should have been chosen (<strong><em>enp0s8</em></strong> in my case). In your case it looks like <strong><em>enp0s3</em></strong> would be the correct NIC. Node to node communication worked fine after that.</p> <p>I failed to note that I also modified the <strong><em>kube-proxy</em></strong> manifest to include the <strong><em>--cluster-cidr=10.244.0.0/16</em></strong> and <strong><em>--proxy-mode=iptables</em></strong> which appears to be required as well.</p>
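<p>For reference, the change goes into the <code>kube-flannel</code> container's arguments in the flannel DaemonSet manifest (kube-flannel.yml). A sketch of just that fragment, using the interface name from the question's setup (the elided fields and the exact command/args split depend on the flannel manifest version; everything else stays as shipped):</p> <pre><code>containers:
- name: kube-flannel
  ...
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=enp0s3
</code></pre> <p>If flannel is already deployed, edit the DaemonSet (or re-apply the modified manifest) and delete the existing flannel pods so they restart with the new argument.</p>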
<p>I'm developing a test hello app in Go, which will have access to a Postgres DB. This will use a statefulset to release in kubernetes and has one pod with two container images (one for pgsql &amp; one for goapp).</p> <pre><code>├── hello-app | ├── templates | ├── file1.gohtml | ├── file2.gohtml | └── file3.gohtml | ├── Dockerfile | └── hello-app.go ├── psql | ├── Dockerfile | ├── createUser.sh | └── createDB.sql ├── yaml | └── statefulset.yaml </code></pre> <p>I am stuck getting the Dockerfile and Go app to tie up. In my first bit of Go code I use the 'template.Must' function to reference the 'templates' directory. Obviously when I run this up as a container the directory structure is different. </p> <p>I've not quite worked out how to do this in the Dockerfile yet and was looking for some guidance.</p> <p><strong>/app/hello-app.go</strong></p> <pre><code>package main import ( "database/sql" "fmt" "os" _ "github.com/lib/pq" "html/template" "net/http" "strconv" ) var db *sql.DB var tpl *template.Template func init() { host := os.Getenv("VARIABLE") var err error db, err = sql.Open("postgres", "postgres://user:password@"+host+"/dbname?sslmode=disable") if err != nil { panic(err) } if err = db.Ping(); err != nil { panic(err) } fmt.Println("You connected to your database.") tpl = template.Must(template.ParseGlob("templates/*.gohtml")) </code></pre> <p><strong>/app/Dockerfile</strong></p> <pre><code>FROM golang:1.8-alpine RUN apk add --update go git RUN go get github.com/lib/pq/... ADD . /go/src/hello-app RUN go install hello-app Add templates templates/ ENV USER=username \ PASSWORD=password \ DB=dbname \ HOST=hostname \ PORT=5432 FROM alpine:latest COPY --from=0 /go/bin/hello-app/ . ENV PORT 4040 CMD ["./hello-app"] </code></pre> <p>When I run this up as is in kubernetes (GCP) I get the following log entry on the hello-app container. </p> <blockquote> <p>panic: html/template: pattern matches no files: <code>templates/*.gohtml</code> goroutine 1 [running]: html/template.Must</p> </blockquote>
<p>In the second stage of your Dockerfile, you are only copying your Go binary from the previous stage. You must also copy your <code>templates</code> directory to the second stage as well so the Go binary can reference your HTML templates:</p> <pre><code>FROM golang:1.8-alpine
RUN apk add --update go git
RUN go get github.com/lib/pq/...
ADD . /go/src/hello-app
RUN go install hello-app
ENV USER=username \
    PASSWORD=password \
    DB=dbname \
    HOST=hostname \
    PORT=5432

FROM alpine:latest
COPY --from=0 /go/bin/hello-app/ .
COPY --from=0 /go/src/hello-app/templates ./templates
ENV PORT 4040
CMD ["./hello-app"]
</code></pre> <p>I'm not sure if this is common practice but when I'm confused about what contents are in what folder within the build process, I simply <code>ls</code> the directory in question to get a better understanding of what might be happening during the build process:</p> <pre><code>RUN ls </code></pre> <p>Obviously you can remove these lines once you've finalized your Dockerfile.</p>
<p>I set up 1 master 2 nodes k8s cluster in according to documentation. A pod can ping the other pod on the same node but can't ping the pod on the other node.</p> <p>To demonstrate the problem I deployed below deployments which has 3 replica. While two of them sits on the same node, the other pod sits on the other node.</p> <pre> $ cat nginx.yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 --- kind: Service apiVersion: v1 metadata: name: nginx-svc spec: selector: app: nginx ports: - protocol: TCP port: 80 $ kubectl get nodes NAME STATUS ROLES AGE VERSION ip-172-31-21-115.us-west-2.compute.internal Ready master 20m v1.11.2 ip-172-31-26-62.us-west-2.compute.internal Ready 19m v1.11.2 ip-172-31-29-204.us-west-2.compute.internal Ready 14m v1.11.2 $ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE nginx-deployment-966857787-22qq7 1/1 Running 0 11m 10.244.2.3 ip-172-31-29-204.us-west-2.compute.internal nginx-deployment-966857787-lv7dd 1/1 Running 0 11m 10.244.1.2 ip-172-31-26-62.us-west-2.compute.internal nginx-deployment-966857787-zkzg6 1/1 Running 0 11m 10.244.2.2 ip-172-31-29-204.us-west-2.compute.internal $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 443/TCP 21m nginx-svc ClusterIP 10.105.205.10 80/TCP 11m </pre> <p>Everything looks fine.</p> <p>Let me show you containers.</p> <pre> # docker exec -it 489b180f512b /bin/bash root@nginx-deployment-966857787-zkzg6:/# ifconfig eth0: flags=4163 mtu 8951 inet 10.244.2.2 netmask 255.255.255.0 broadcast 0.0.0.0 inet6 fe80::cc4d:61ff:fe8a:5aeb prefixlen 64 scopeid 0x20 root@nginx-deployment-966857787-zkzg6:/# ping 10.244.2.3 PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data. 64 bytes from 10.244.2.3: icmp_seq=1 ttl=64 time=0.066 ms 64 bytes from 10.244.2.3: icmp_seq=2 ttl=64 time=0.055 ms ^C </pre> <p>So it pings its neighbor pod on the same node.</p> <pre> root@nginx-deployment-966857787-zkzg6:/# ping 10.244.1.2 PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data. 
^C --- 10.244.1.2 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1059ms </pre> <p>And can't ping its replica on the other node.</p> <p>Here is host interfaces:</p> <pre> # ifconfig cni0: flags=4163 mtu 8951 inet 10.244.2.1 netmask 255.255.255.0 broadcast 0.0.0.0 docker0: flags=4099 mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 eth0: flags=4163 mtu 9001 inet 172.31.29.204 netmask 255.255.240.0 broadcast 172.31.31.255 flannel.1: flags=4163 mtu 8951 inet 10.244.2.0 netmask 255.255.255.255 broadcast 0.0.0.0 lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 veth09fb984a: flags=4163 mtu 8951 inet6 fe80::d819:14ff:fe06:174c prefixlen 64 scopeid 0x20 veth87b3563e: flags=4163 mtu 8951 inet6 fe80::d09c:d2ff:fe7b:7dd7 prefixlen 64 scopeid 0x20 # ifconfig cni0: flags=4163 mtu 8951 inet 10.244.1.1 netmask 255.255.255.0 broadcast 0.0.0.0 docker0: flags=4099 mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 eth0: flags=4163 mtu 9001 inet 172.31.26.62 netmask 255.255.240.0 broadcast 172.31.31.255 flannel.1: flags=4163 mtu 8951 inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0 lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 veth9733e2e6: flags=4163 mtu 8951 inet6 fe80::8003:46ff:fee2:abc2 prefixlen 64 scopeid 0x20 </pre> <p>Processes on the nodes:</p> <pre> # ps auxww|grep kube root 4059 0.1 2.8 43568 28316 ? Ssl 00:31 0:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf root 4260 0.0 3.4 358984 34288 ? Ssl 00:31 0:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr root 4455 1.1 9.6 760868 97260 ? Ssl 00:31 0:14 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni </pre> <p>Because of this network problem clusterIP is also unreachable:</p> <p>$ curl 10.105.205.10:80</p> <p>Any suggestion?</p> <p>Thanks.</p>
<p>I found the problem.</p> <p>Flannel uses UDP ports 8285 and 8472, which were being blocked by my AWS security groups; I had only opened TCP ports.</p> <p>I enabled UDP ports 8285 and 8472, as well as TCP 6443, 10250 and 10256.</p>
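<p>If you manage the security groups with the AWS CLI, rules along the following lines should open the flannel UDP ports between the nodes. This is only a sketch: the security-group ID is a placeholder and it assumes all cluster nodes share the same group.</p> <pre><code># sg-0123456789 is a hypothetical security group attached to every node
aws ec2 authorize-security-group-ingress --group-id sg-0123456789 \
    --protocol udp --port 8472 --source-group sg-0123456789
aws ec2 authorize-security-group-ingress --group-id sg-0123456789 \
    --protocol udp --port 8285 --source-group sg-0123456789
</code></pre>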
<p>In Istio, why are VirtualService and DestinationRule separated into two separate config files? AFAICT, the DestinationRule defines the subsets and the VirtualService routes to them. Obviously, they both do more than just that, but my question is: what was the design rationale behind separating the two? Why couldn't the subsets have been defined in the VirtualService yaml itself?</p> <p>A follow-on question: why are the circuit-breaker rules configured in the DestinationRule, while the timeouts and retries are configured in the VirtualService? Again, I am unable to comprehend the reasoning behind this design decision. If that is clear, both questions may be answered simultaneously. Any help in understanding this conceptually would be much appreciated.</p>
<p>They were designed to provide a clear separation of routing from post-routing behaviors.</p> <p>A VirtualService is used to describe the mapping (route rules) from one or more user-addressable destinations (hosts) to the actual destination workloads (services) inside the mesh.</p> <p>A DestinationRule then defines the set of policies to be applied to a request after VirtualService routing has occurred. DestinationRules are intended to be authored by service owners, describing the circuit breakers, load balancer settings, TLS settings, etc.</p> <p>An overview of how they work is <a href="https://istio.io/docs/concepts/traffic-management/#rule-configuration" rel="nofollow noreferrer">here</a>.</p>
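<p>To make the split concrete, here is a minimal, hypothetical pair for a service named <code>reviews</code>. The VirtualService only decides where a request goes (which subset, with what timeout/retries), while the DestinationRule describes how traffic to that destination is treated (subset definitions, circuit breaking). Names and values are illustrative only:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100      # circuit-breaker style limits live here
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 30s
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1               # routing decision lives here
    timeout: 5s
    retries:
      attempts: 3
      perTryTimeout: 2s
</code></pre>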
<p>I am trying to switch from an <code>nginx</code> Ingress to using <code>Istio</code> to take advantage of route weights for canary deployments, and integrated monitoring, among other things.</p> <p>My regular routing was defined as:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: solar-demo
  annotations:
    nginx.org/server-snippet: "proxy_ssl_verify off;"
spec:
  rules:
  - host: shmukler.example.com
    http:
      paths:
      - path: /city/*
        backend:
          serviceName: solar-demo
          servicePort: 3000
      - path: /solar/*
        backend:
          serviceName: solar-demo
          servicePort: 3001
---
kind: Service
apiVersion: v1
metadata:
  name: solar-demo
spec:
  ports:
  - name: city
    protocol: TCP
    port: 3000
    targetPort: 3000
  - name: solar
    protocol: TCP
    port: 3001
    targetPort: 3001
  selector:
    app: solar-demo
</code></pre> <p>I don't even need <code>auth</code> right now. When I applied <code>install/kubernetes/istio-demo.yaml</code>, it created a bunch of pods and services in the <code>istio-system</code> namespace.</p> <p>I figured, possibly incorrectly, that I need to have a <code>VirtualService</code> and maybe route rules defined, so I wrote:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: solar-demo
spec:
  hosts:
  - shmukler.example.com
  http:
  - route:
    - destination:
        host: shmukler.example.com
        subset: blue
      weight: 90
    - destination:
        host: shmukler.example.com
        subset: green
      weight: 10
</code></pre> <p>Are ports defined in the regular Service, while weights and paths go in a <code>VirtualService</code>? Do I need to stick anything into the <code>istio-system</code> namespace? Is it possible, and what would I need to add on top of <code>istio-demo.yaml</code> to do the routing I need, just to get things rolling?</p> <p>Any pointers are appreciated.</p>
<p>You need a Gateway and a VirtualService.</p> <p>Check out <a href="https://istio.io/docs/tasks/traffic-management/ingress/" rel="nofollow noreferrer">this task</a> for an example.</p>
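<p>As a rough, untested sketch for the setup in the question: the Gateway exposes the host on the Istio ingress gateway, and the VirtualService routes the two path prefixes to the ports of the existing <code>solar-demo</code> Service. The subset/weight routing only becomes meaningful once a DestinationRule defining <code>blue</code>/<code>green</code> subsets is also added; everything below assumes the default <code>istio: ingressgateway</code> deployment from the demo install:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: solar-demo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - shmukler.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: solar-demo
spec:
  hosts:
  - shmukler.example.com
  gateways:
  - solar-demo-gateway
  http:
  - match:
    - uri:
        prefix: /city
    route:
    - destination:
        host: solar-demo        # the Kubernetes Service name
        port:
          number: 3000
  - match:
    - uri:
        prefix: /solar
    route:
    - destination:
        host: solar-demo
        port:
          number: 3001
</code></pre>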
<p>I have an image of size of 6.5GB in the Google Container Registry. When I try to pull the image on a Kubernetes cluster node(worker node) via a deployment, an error occurs: ErrImagePull(or sometimes ImagePullBackOff). I used the describe command to see the error in detail. The error is described as <strong>Failed to pull image "gcr.io/.../.. ": rpc error: code = Canceled desc = context canceled</strong> What may be the issue and how to mitigate it?</p>
<p>It seems that the kubelet expects updates on progress during the pull of a large image, but this currently isn't available by default with most container registries. It's not ideal behaviour, but judging by the responses on <a href="https://github.com/kubernetes/kubernetes/issues/59376" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/59376</a> and <a href="https://stackoverflow.com/questions/43342014/kubernetes-set-a-timeout-limit-on-image-pulls">Kubernetes set a timeout limit on image pulls</a>, people have been able to work around it by adjusting the timeout.</p>
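<p>If your nodes were set up with kubeadm and use the Docker runtime, the knob discussed in those threads is the kubelet's <code>--image-pull-progress-deadline</code> flag. The drop-in file path and the 30-minute value below are assumptions; it also assumes a kubeadm-style systemd unit that passes <code>$KUBELET_EXTRA_ARGS</code> to the kubelet:</p> <pre><code># /etc/systemd/system/kubelet.service.d/90-image-pull-deadline.conf (hypothetical drop-in)
[Service]
Environment="KUBELET_EXTRA_ARGS=--image-pull-progress-deadline=30m"

# then reload and restart the kubelet on that node
systemctl daemon-reload
systemctl restart kubelet
</code></pre>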
<p>I am using <strong>container Probes</strong> to check the health of the application running inside the container within a kubernetes pod. For now my example pod config looks like this:</p> <pre><code>"spec": {
  "containers": [
    {
      "image": "tomcat",
      "name": "tomcat",
      "livenessProbe": {
        "httpGet": {
          "port": 80
        },
        "initialDelaySeconds": 15,
        "periodSeconds": 10
      }
    }
  ]
}
</code></pre> <p>In my case, I need to monitor two ports for the same container: <strong>80</strong> and <strong>443</strong>. But I am unable to find a way to provide both ports for the same container in the config file. Is there an alternate way of doing this?</p>
<p>If you have curl / wget on the container you could just run a container exec healthcheck, and do something like <code>curl localhost:80 &amp;&amp; curl localhost:443</code>.</p>
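<p>For illustration, an exec-style liveness probe along those lines could look like the snippet below. It assumes curl is actually present in the tomcat image and that the HTTPS endpoint on 443 may use a certificate curl has to be told to ignore:</p> <pre><code>"livenessProbe": {
  "exec": {
    "command": [
      "sh",
      "-c",
      "curl -fs http://localhost:80/ &amp;&amp; curl -fsk https://localhost:443/"
    ]
  },
  "initialDelaySeconds": 15,
  "periodSeconds": 10
}
</code></pre>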
<p>To be specific, here is the permalink to the relevant line of code: <a href="https://github.com/istio/istio/blob/e3a376610c2f28aef40296aac722c587629123c1/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml#L84" rel="nofollow noreferrer">https://github.com/istio/istio/blob/e3a376610c2f28aef40296aac722c587629123c1/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml#L84</a></p> <blockquote> <p>{{ "[[ .ProxyConfig.ZipkinAddress ]]" }}</p> </blockquote> <p>The <code>[[</code> and <code>]]</code> seems alien to me, and in the helm chart developer guide doc <a href="https://docs.helm.sh/chart_template_guide/#the-chart-template-developer-s-guide" rel="nofollow noreferrer">here</a>, it doesn't show any example or documentation about <code>[[</code> and <code>]]</code> syntax. </p> <p>Also, when I tried to render my istio installation (using <code>helm template</code> command), the <code>{{ "[[ .ProxyConfig.ZipkinAddress ]]" }}</code> part only rendered as <code>[[ .ProxyConfig.ZipkinAddress ]]</code>. So I guess that <code>[[</code> and <code>]]</code> is not part of helm template syntax. My guess it would be internal istio's related syntax, which I don't know what exactly it is.</p> <p>Any idea?</p>
<p>I got the answer after posting the same question on the Istio Google group <a href="https://groups.google.com/forum/#!topic/istio-users/0dfU_y06n1Q" rel="nofollow noreferrer">here</a>. Without taking credit from the author who answered me there: yes, it is a template of a template. The <code>[[ ... ]]</code> syntax is the sidecar-injection template syntax described here: <a href="https://istio.io/docs/setup/kubernetes/sidecar-injection/#template" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/sidecar-injection/#template</a></p>
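<p>In other words, Helm renders the quoted string as a literal, which leaves the <code>[[ ... ]]</code> markers intact for the sidecar injector to process later. A tiny illustration (the surrounding key name is made up):</p> <pre><code># in the chart template:
zipkinAddress: {{ "[[ .ProxyConfig.ZipkinAddress ]]" }}

# after `helm template` renders it:
zipkinAddress: [[ .ProxyConfig.ZipkinAddress ]]
</code></pre>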
<p>I created a Kubernetes cluster with the following CLI command: <code> gcloud container clusters create some-cluster --tags=some-tag --network=some-network </code></p> <p>I would now like to:</p> <ol> <li>Disable the <code>--tags</code> option, so that new nodes/VMs are created <em>without</em> the tag <code>some-tag</code>. (Optional: Remove the tag from existing machines, which should be possible through <code>gcloud compute instances remove-tags</code>.)</li> <li>Disable the <code>--network</code> flag, returning the cluster to the default GCP network.</li> </ol> <p>Are either of these operations possible, or will I have to re-create the cluster?</p> <p>For context, I was using the node tags and networking rules to route outgoing network traffic through a single GCE instance serving as a NAT gateway. I now want to turn this routing off.</p>
<p>Currently, it's not possible to update the cluster network or remove node tags for an existing cluster using the gcloud command. I have verified this against the gcloud container clusters update command <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/update" rel="noreferrer">documentation</a>. Additionally, the <a href="https://cloud.google.com/sdk/gcloud/reference/alpha/container/clusters/update" rel="noreferrer">alpha</a> and <a href="https://cloud.google.com/sdk/gcloud/reference/beta/container/clusters/update" rel="noreferrer">beta</a> commands don't provide this feature yet. The API docs also list which configuration can be changed.</p> <p>As a workaround, I was able to remove one of the tags using the rolling update feature within the instance group settings:</p> <blockquote> <ol> <li>Go to the instance template of some-cluster -&gt; select the template</li> <li>Click Copy (at the top of the instance template) -&gt; remove the tag -&gt; this creates a new template</li> <li>Select the some-cluster instance group -&gt; click Rolling update -&gt; change the instance template to the one you created -&gt; update</li> </ol> </blockquote> <p>If you change the network in step 2, you won't be able to select the instance template with the new network in step 3. Changing the tag alone won't solve your purpose, so it's better to create a new cluster.</p> <p>If you are interested in updating tags and network using the gcloud command, I suggest creating a feature request (FR) in the <a href="https://issuetracker.google.com/" rel="noreferrer">Public Issue Tracker</a>.</p>
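<p>For the existing machines, the command mentioned in the question does work; just note that it only affects those VMs, and any new node created from the instance template will still get the tag. A hedged example (instance name and zone are placeholders):</p> <pre><code>gcloud compute instances remove-tags INSTANCE_NAME --zone=ZONE_NAME --tags=some-tag
</code></pre>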
<p>I was wondering what the correct approach to deploying a containerized Django app using gunicorn &amp; celery is.</p> <p>Specifically, each of these processes has a built-in way of scaling vertically, using <code>workers</code> for gunicorn and <code>concurrency</code> for celery. And then there is the Kubernetes approach to scaling using <code>replicas</code>.</p> <p>There is also this notion of setting workers equal to some function of the CPUs. Gunicorn recommends</p> <blockquote> <p>2-4 workers per core</p> </blockquote> <p>However, I am confused about what this translates to on K8s, where CPU is a divisible shared resource - unless I use resourceQuotas.</p> <p>I want to understand what the best practice is. There are three options I can think of:</p> <ul> <li>Have single workers for gunicorn and a concurrency of 1 for celery, and scale them using the replicas? (horizontal scaling)</li> <li>Have gunicorn &amp; celery run in a single replica deployment with internal scaling (vertical scaling). This would mean setting fairly high values of workers &amp; concurrency respectively.</li> <li>A mixed approach between 1 and 2, where we run gunicorn and celery with a small value for workers &amp; concurrency (say 2), and then use K8s Deployment replicas to scale horizontally.</li> </ul> <p>There are some questions on SO around this, but none offer an in-depth/thoughtful answer. I would appreciate it if someone could share their experience.</p> <p>Note: We use the default worker_class <code>sync</code> for Gunicorn</p>
<p>These technologies aren't as similar as they initially seem. They address different portions of the application stack and are actually complementary.</p> <p>Gunicorn is for scaling web request concurrency, while celery should be thought of as a worker queue. We'll get to kubernetes soon.</p> <hr /> <h3>Gunicorn</h3> <p>Web request handling is primarily limited by network I/O; it is &quot;I/O bound&quot;. These types of tasks can be scaled using the cooperative scheduling provided by threads. If you find request concurrency is limiting your application, increasing gunicorn worker threads may well be the place to start.</p> <hr /> <h3>Celery</h3> <p>Heavy-lifting tasks, e.g. compressing an image or running some ML algorithm, are &quot;CPU bound&quot;. They benefit far more from additional CPUs than from threading. These tasks should be offloaded and parallelized by celery workers.</p> <hr /> <h3>Kubernetes</h3> <p>Where Kubernetes comes in handy is by providing out-of-the-box horizontal scalability and fault tolerance.</p> <p>Architecturally, I'd use two separate k8s deployments to represent the different scalability concerns of your application: one deployment for the Django app and another for the celery workers. This allows you to independently scale request throughput vs. processing power.</p> <p>I run celery workers pinned to a single core per container (<code>-c 1</code>); this vastly simplifies debugging and adheres to Docker's &quot;one process per container&quot; mantra. It also gives you the added benefit of predictability, as you can scale the processing power on a per-core basis by incrementing the replica count.</p> <p>Scaling the Django app deployment is where you'll need to DYOR to find the best settings for your particular application. Again stick to using <code>--workers 1</code> so there is a single process per container, but you should experiment with <code>--threads</code> to find the best solution. Again leave horizontal scaling to Kubernetes by simply changing the replica count.</p> <p>HTH. It's definitely something I had to wrap my head around when working on similar projects.</p>
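<p>A rough sketch of that two-Deployment layout is below. The image name, Django/Celery module paths and replica counts are placeholders, and setting <code>--threads</code> above 1 assumes you are happy for gunicorn to switch to its threaded worker class:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-web
spec:
  replicas: 3                      # scale request throughput here
  selector:
    matchLabels:
      app: django-web
  template:
    metadata:
      labels:
        app: django-web
    spec:
      containers:
      - name: gunicorn
        image: registry.example.com/myapp:latest   # hypothetical image
        command: ["gunicorn", "myproject.wsgi", "--workers", "1", "--threads", "4", "--bind", "0.0.0.0:8000"]
        ports:
        - containerPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker
spec:
  replicas: 2                      # scale processing power here
  selector:
    matchLabels:
      app: celery-worker
  template:
    metadata:
      labels:
        app: celery-worker
    spec:
      containers:
      - name: celery
        image: registry.example.com/myapp:latest   # hypothetical image
        command: ["celery", "-A", "myproject", "worker", "-c", "1"]
</code></pre>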
<p>I have the small python script below:</p> <pre><code>import os
os.system('kubectl get pods --context students-cmn')
</code></pre> <p>When I run this command manually from the terminal it works with no issue, so I configured it to run as a cron job. But when the cron job triggers, I get the error below:</p> <pre><code>sh: kubectl: command not found
</code></pre> <p>Why is kubectl not found when the cron job triggers?</p> <p>Can anyone please help?</p>
<p>First of all, I imagine you are planning on adding code to your python script, and that is why you use python. I assume you used the crontab of the user that can run the command.</p> <p>When you execute a command in <code>cron</code> you must specify the full path to the command. To find the full path to <code>kubectl</code>, issue the following in a terminal:</p> <pre><code>which kubectl
</code></pre> <p>It will print the full path.</p> <p>Then, you edit your script (assuming the full path is "/opt/Kubernetes/bin"):</p> <pre><code>import os
os.system('/opt/Kubernetes/bin/kubectl get pods --context students-cmn')
</code></pre>
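<p>The underlying reason is that cron runs jobs with a very minimal <code>PATH</code>, which is why the bare <code>kubectl</code> isn't found. As an alternative to hard-coding the path in the script, you can also set <code>PATH</code> at the top of the crontab; the directories, schedule and script path below are just an example:</p> <pre><code>PATH=/usr/local/bin:/usr/bin:/bin:/opt/Kubernetes/bin
*/5 * * * * /usr/bin/python /home/someuser/check_pods.py
</code></pre>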
<p>We are using a Apache-Kafka deployment on Kubernetes which is based on the ability to label pods after they have been created (see <a href="https://github.com/Yolean/kubernetes-kafka" rel="noreferrer">https://github.com/Yolean/kubernetes-kafka</a>). The init container of the broker pods takes advantage of this feature to set a label on itself with its own numeric index (e.g. "0", "1", etc) as value. The label is used in the service descriptors to select exactly one pod.</p> <p>This approach works fine on our DIND-Kubernetes environment. However, when tried to port the deployment onto a Docker-EE Kubernetes environment we ran into trouble because the command <code>kubectl label pod</code> generates a run time error which is completely misleading (also see <a href="https://github.com/fabric8io/kubernetes-client/issues/853" rel="noreferrer">https://github.com/fabric8io/kubernetes-client/issues/853</a>).</p> <p>In order to verify the run time error in a minimal setup we created the following deployment scripts.</p> <h1>First step: Successfully label pod using the Docker-EE-Host</h1> <pre><code># create a simple pod as a test target for labeling &gt; kubectl run -ti -n default --image alpine sh # get the pod name for all further steps &gt; kubectl -n default get pods NAME READY STATUS RESTARTS AGE nfs-provisioner-7d49cdcb4f-8qx95 1/1 Running 1 7d nginx-deployment-76dcc8c697-ng4kb 1/1 Running 1 7d nginx-deployment-76dcc8c697-vs24j 1/1 Running 0 20d sh-777f6db646-hrm65 1/1 Running 0 3m &lt;--- This is the test pod test-76bbdb4654-9wd9t 1/1 Running 2 6d test4-76dbf847d5-9qck2 1/1 Running 0 5d # get client and server versions &gt; kubectl version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11- docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} # set label kubectl -n default label pod sh-777f6db646-hrm65 "mylabel=hallo" pod "sh-777f6db646-hrm65" labeled &lt;---- successful execution </code></pre> <p>Everything works fine as expected.</p> <h1>Second step: Reproduce run-time error from within pod</h1> <h2>Create Docker image containing <code>kubectl</code> 1.10.5</h2> <pre><code>FROM debian:stretch- slim@sha256:ea42520331a55094b90f6f6663211d4f5a62c5781673935fe17a4dfced777029 ENV KUBERNETES_VERSION=1.10.5 RUN set -ex; \ export DEBIAN_FRONTEND=noninteractive; \ runDeps='curl ca-certificates procps netcat'; \ buildDeps=''; \ apt-get update &amp;&amp; apt-get install -y $runDeps $buildDeps --no-install- recommends; \ rm -rf /var/lib/apt/lists/*; \ \ curl -sLS -o k.tar.gz -k https://dl.k8s.io/v${KUBERNETES_VERSION}/kubernetes-client-linux-amd64.tar.gz; \ tar -xvzf k.tar.gz -C /usr/local/bin/ --strip-components=3 kubernetes/client/bin/kubectl; \ rm k.tar.gz; \ \ apt-get purge -y --auto-remove $buildDeps; \ rm /var/log/dpkg.log /var/log/apt/*.log </code></pre> <p>This image is deployed as <code>10.100.180.74:5000/test/kubectl-client-1.10.5</code> in a site local registry and will be referred to below.</p> <h2>Create a pod using the container above</h2> <pre><code>apiVersion: apps/v1beta2 kind: StatefulSet metadata: name: pod-labeler namespace: default spec: selector: matchLabels: app: pod-labeler replicas: 1 
serviceName: pod-labeler updateStrategy: type: OnDelete template: metadata: labels: app: pod-labeler annotations: spec: terminationGracePeriodSeconds: 10 containers: - name: check-version image: 10.100.180.74:5000/test/kubectl-client-1.10.5 env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: POD_NAME value: sh-777f6db646-hrm65 command: ["/usr/local/bin/kubectl", "version" ] - name: label-pod image: 10.100.180.74:5000/test/kubectl-client-1.10.5 env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: POD_NAME value: sh-777f6db646-hrm65 command: ["/bin/bash", "-c", "/usr/local/bin/kubectl -n default label pod $POD_NAME 'mylabel2=hallo'" ] </code></pre> <h2>Logging output</h2> <p>We get the following logging output</p> <pre><code># Log of the container "check-version" 2018-07-18T11:11:10.791011157Z Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-\ 06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} 2018-07-18T11:11:10.791058997Z Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>and the run time error</p> <pre><code>2018-07-18T11:24:15.448695813Z The Pod "sh-777f6db646-hrm65" is invalid: spec.tolerations: Forbidden: existing toleration can not be modified except its tolerationSeconds </code></pre> <h2>Notes</h2> <ul> <li>This is not an authorization problem since we've given the default user of the default namespace full administrative rights. In case we don't, we get an error message referring to missing permissions.</li> <li>Both client and servers versions "outside" (e.g on the docker host) and "inside" (e.g. the pod) are identical down to the GIT commit tag</li> <li>We are using version 3.0.2 of the Universal Control Plane</li> </ul> <p>Any ideas?</p>
<p>It was pointed out in one of the comments that the issue may be caused by a missing permission even though the error message does not insinuate so. We officially filed a ticket with Docker and actually got exactly this result: In order to be able to set/modify a label from within a pod the default user of the namespace must be given the "Scheduler" role on the swarm resource (which later shows up as <code>\</code> in the GUI). Granting this permission fixes the problem. See added grant in Docker-EE-GUI below.</p> <p><a href="https://i.stack.imgur.com/63xsH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/63xsH.png" alt="New grant on Docker-EE-GUI"></a></p> <p>From my point of view, this is far from obvious. The Docker support representative offered to investigate if this is actually expected behavior or results from a bug. As soon as we learn more on this question I will include it into our answer.</p> <p>As for using more debugging output: Unfortunately, adding <code>--v=9</code> to the calls of <code>kubectl</code> does not return any useful information. There's too much output to be displayed here but the overall logging is very similar in both cases: It consists of a lot GET API requests which are all successful followed by a final PATCH API request which succeeds in the one case but fails in the other as described above.</p>
<p>I would like to implement my own iptables rules before Kubernetes (kube-proxy) start doing it's magic and dynamically create rules based on services/pods running on the node. The kube-proxy is running in <code>--proxy-mode=iptables</code>.</p> <p>Whenever I tried to load rules when booting up the node, for example in the <code>INPUT</code> chain, the Kubernetes rules (<code>KUBE-EXTERNAL-SERVICES</code> and <code>KUBE-FIREWALL</code>) are inserted on top of the chain even though my rules were also with <code>-I</code> flag. </p> <p>What am I missing or doing wrong?</p> <p>If it is somehow related, I am using weave-net plugin for the pod network.</p>
<p>The most common practice is to put all custom firewall rules on the gateway(<a href="https://en.wikipedia.org/wiki/Application_delivery_controller" rel="noreferrer">ADC</a>) or into cloud <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/" rel="noreferrer">security groups</a>. The rest of the cluster security is implemented by other features, like <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noreferrer">Network Policy</a> (It depends on the network <a href="https://docs.projectcalico.org/v3.0/reference/calicoctl/resources/networkpolicy" rel="noreferrer">providers</a>), <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a>, <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">RBAC</a> and others.</p> <p>Check out the articles about <a href="https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/" rel="noreferrer">Securing a Cluster</a> and <a href="https://github.com/freach/kubernetes-security-best-practice" rel="noreferrer">Kubernetes Security - Best Practice Guide</a>.</p> <p>These articles can also be helpful to secure your cluster:</p> <ul> <li><p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster" rel="noreferrer">Hardening your cluster's security</a></p></li> <li><p><a href="https://neuvector.com/container-security/kubernetes-security-guide/" rel="noreferrer">The Ultimate Guide to Kubernetes Security</a></p></li> </ul>
<p>I am attempting to enforce a limit on the number of pods per deployment within a namespace / cluster from a single configuration point. I have two non-production environments, and I want to deploy fewer replicas on these as they do not need the redundancy.</p> <p>I've read <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/" rel="nofollow noreferrer">this</a> but I'm unable to find out how to do it on the deployment level. Is it possible to enforce this globally without going into each one of them?</p>
<p>Since <code>replicas</code> is a fixed value in a deployment YAML file, you may be better off using <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> to template it, which adds some flexibility to your application.</p>
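<p>As a small, hypothetical sketch of that approach, the replica count becomes a chart value that each environment overrides (chart and release names are placeholders):</p> <pre><code># templates/deployment.yaml (excerpt)
spec:
  replicas: {{ .Values.replicaCount }}

# values.yaml (production default)
replicaCount: 3

# install/upgrade a non-production environment with fewer replicas
helm upgrade --install my-app ./my-chart --set replicaCount=1
</code></pre>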
<p>Currently, I want to introduce istio as our service-mesh framework for our microservices. I have played it sometime (&lt; 1 week), and my understanding is that Istio really provides an easy way to secure service to service communication. Much (or all?) of Istio docs/article provides an example how client and server who have istio-proxy (envoy) installed as a sidecar container, can establish secure communication using <code>mtls</code> method. </p> <p>However, since our existing client (which I don't have any control) who consume our service (which will be migrated to use istio) doesn't have istio, I still don't understand it well how we should do it better.</p> <ul> <li>Is there any tutorial or example that provides my use case better? </li> <li>How can the non-istio-based client use <code>mtls</code> for consuming our istio-based service? Think about using basic <code>curl</code> command to simulate such thing.</li> <li>Also, I am thinking of distributing a specific service account (kubernetes, gcp iam service account, etc) to the client to limit the client's privilege when calling our service. I have many questions on how these things: gcp iam service account, istio, rbac, mtls, jwt token, etc contributes to securing our service API?</li> </ul> <p>Any advice?</p>
<p>Do you want to add a third party to your Istio mesh from outside of your network, via SSL over the public internet?</p> <p>I don't think Istio is really meant for federating external services, but you could just have an Istio ingress gateway proxy sitting at the edge of your network for routing into and back out of your application.</p> <p><a href="https://istio.io/docs/tasks/traffic-management/ingress/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/ingress/</a></p> <p>If you're building microservices then surely you already have an endpoint or API gateway; that seems more sensible to me. Try Apigee or something similar.</p>
<p>Pretty basic question. We have an existing swarm and I want to start migrating to Kubernetes. Can I run both using the same docker hosts?</p>
<p>See the official documentation for <em>Docker for Mac</em> at <a href="https://docs.docker.com/docker-for-mac/kubernetes/" rel="nofollow noreferrer">https://docs.docker.com/docker-for-mac/kubernetes/</a> stating:</p> <blockquote> <p>When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.</p> </blockquote> <p>So: yes, both should be able to run in parallel.</p> <p>If you're using Docker on Linux you won't have the convenient tools available like in Docker for Mac/Windows, but both orchestrators should still be able to run in parallel without further issues. On system level, details like e.g. ports on a network interface are still shared resources, so they cannot be bound by different orchestrators.</p>
<p>I am using YAML to create a pod and have specified the resource requests and limits. Now I am not aware of how to modify the resource limits of a running pod. For example:</p> <p>memory-demo.yml</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
</code></pre> <p>Then I run the command <code>oc create -f memory-demo.yml</code> and a pod named memory-demo gets created.</p> <p>My question is: what should I do to modify the memory limit from <code>200Mi --&gt; 600Mi</code>? Do I need to delete the existing pod and recreate it using the modified YML file?</p> <p>I am a total newbie. Need help.</p>
<p>First and foremost, it is very unlikely that you really want to be (re)creating Pods directly. Dig into what a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> is and how it works. Then you can simply apply the change to the pod spec template in the deployment, and kubernetes will upgrade all the pods in that deployment to match the new spec in a hands-free rolling update.</p> <p>A live change of the memory limit for a running container is certainly possible, but not by means of kubernetes (and it will not be reflected in kube state if you do it). Look at <a href="https://docs.docker.com/engine/reference/commandline/update/" rel="noreferrer">docker update</a>.</p>
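<p>As a hedged example, assuming you first move the pod template into a Deployment named <code>memory-demo</code>, one way to change the limit (which triggers a rolling update) is <code>kubectl set resources</code> (or the equivalent <code>oc set resources</code> on OpenShift); the names match the example from the question:</p> <pre><code>kubectl -n mem-example set resources deployment memory-demo \
    -c=memory-demo-ctr --limits=memory=600Mi
</code></pre>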
<p>Following the interactive tutorial on <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">kubernetes.io</a> where a <code>NodePort</code> type service is created via <code>kubectl expose deploy kubernetes-bootcamp --type=&quot;NodePort&quot; --port 8080</code> I am confused about the results.</p> <p>First the tutorial states</p> <blockquote> <p>Let's run (...) <code>kubectl get services</code> (...) we see that the service received (...) an external-IP (the IP of the Node).</p> </blockquote> <p>This is not true according to the output, no external IP:</p> <pre><code>NAME                  TYPE       CLUSTER-IP   EXTERNAL-IP   PORTS
kubernetes-bootcamp   NodePort   ...          &lt;none&gt;        8080:31479
</code></pre> <p>Now with <code>kubectl describe service kubernetes-bootcamp</code> one gets:</p> <pre><code>Type       NodePort
IP         ...
Port       &lt;unset&gt;  8080/TCP
NodePort   &lt;unset&gt;  31479/TCP
</code></pre> <p>Now the tutorial suggests to <code>curl $(minikube ip):$NODE_PORT</code> which equals <code>curl &lt;internal node IP&gt;:&lt;NodePort&gt;</code> - which works, but leaves many questions:</p> <ul> <li>According to <code>kubectl expose -h</code> the <code>--port</code> command specifies the port on which to serve, thus I expected <code>--port</code> to set the exposed <code>&lt;NodePort&gt;</code> for <code>NodePort</code> type services, but that is not the case. Why and how to set it?</li> <li>The <code>&lt;internal node IP&gt;</code> under which the container app can indeed be reached is not shown anywhere in the service metadata, I can only get it from <code>kubectl get nodes</code>. Why?</li> <li>The <code>IP</code> shown through <code>kubectl describe service</code> (which equals the cluster IP) combined with the port also shown there (which equals the one set via <code>--port</code>) also routes to the container app, thus <code>curl &lt;clusterIP&gt;:8080</code> works and fulfills the promise that you can set the exposed port via <code>--port</code>. But in an unexpected way: why does a <code>NodePort</code> service have such a <code>clusterIP</code> type interface?</li> <li>What is the meaning of the <code>Endpoints</code> shown through <code>kubectl describe service</code> if it is neither the <code>clusterIP</code> endpoint nor the <code>NodePort</code> endpoint?</li> </ul>
<p>It will help to break this down but the general theme is that NodePort isn't the most common way to expose services outside of a cluster so some of the features around it aren't all that intuitive. LoadBalancer (or an ingress controller using it) is the preferred way to expose services to the world outside the cluster.</p> <blockquote> <p>The IP shown through <code>kubectl describe service</code> (which equals the cluster IP) combined with the port also shown there (which equals the one set via <code>--port</code>) also routes to the container app, thus <code>curl &lt;clusterIP&gt;:8080</code> works and fulfills the promise that you can set the exposed port via <code>--port</code>. But in an unexpected way: why does a NodePort service have such a clusterIP type interface?</p> </blockquote> <p>You also see ClusterIP because that is the way to get to the service <em>within</em> the cluster. Other services can reach that service within the cluster using that service name and port (8080 in this case) but ClusterIP in itself doesn't make the service available outside of the cluster. </p> <blockquote> <p>What is the meaning of the Endpoints shown through <code>kubectl describe service</code> if it is neither the clusterIP endpoint nor the NodePort endpoint?</p> </blockquote> <p>The endpoints value of <code>172.18.0.2:8080</code> or similar will be the internal IP and port for the service. Within the cluster you can also reach the service via its name using the cluster's internal dns (which is normally the more convenient way).</p> <blockquote> <p>The <code>&lt;internal node IP&gt;</code> under which the container app can indeed be reached is not shown anywhere in the service metadata, I can only get it from <code>kubectl get nodes</code>. Why?</p> </blockquote> <p>This links to the point that the output you see from <code>kubectl get services</code> and <code>kubectl describe service &lt;service&gt;</code> when exposing via NodePort is a bit misleading - one would expect to see something in or analogous to <code>external-ip</code> for the entry. NodePort dedicates a port on that node just for that service. This is a bit limiting as if you kept doing that for each service you add then you'll run out of ports. So the <code>external-ip</code> if one were to be shown would have to be the IP of the node, which might look a bit odd when shown alongside the IPs of LoadBalancer-type services.</p> <blockquote> <p>According to <code>kubectl expose -h</code> the <code>--port</code> command specifies the port on which to serve, thus I expected <code>--port</code> to set the exposed <code>NodePort</code> for NodePort type services, but that is not the case. Why and how to set it?</p> </blockquote> <p>The expose command-line option is especially limited as you currently <a href="https://github.com/kubernetes/kubernetes/issues/25478" rel="nofollow noreferrer">can't even use it to choose which port in the NodePort range is used</a>. In your case you've been given the external port 31479 but you didn't get to choose that. It might not be advisable to let users choose that too easily as there are a limited number of ports available and you could easily find yourself allocating the same port twice. But you can choose it the external port when deploying via descriptor files.</p>
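<p>For completeness, this is roughly what choosing the NodePort in a descriptor file looks like. The selector label is assumed to match the tutorial deployment's pods, and the port value is just an example that must fall inside the cluster's node-port range (30000-32767 by default):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  type: NodePort
  selector:
    app: kubernetes-bootcamp   # assumed label on the deployment's pods
  ports:
  - port: 8080        # ClusterIP port, reachable inside the cluster
    targetPort: 8080  # container port
    nodePort: 30080   # fixed external port on every node
</code></pre>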
<p>I've a simple kubernetes ingress network.</p> <p>I need to deny access to some critical paths like /admin.</p> <p>My ingress definition is shown below.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-test
spec:
  rules:
  - host: host.host.com
    http:
      paths:
      - path: /service-mapping
        backend:
          serviceName: /service-mapping
          servicePort: 9042
</code></pre> <p>How can I deny a custom path with a kubernetes ingress, with nginx annotations or another method?</p> <hr> <p>I handled this issue with the annotations shown below.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-configuration-snippet
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      server_tokens off;
      location DANGER-PATH {
        deny all;
        return 403;
      }
spec:
  rules:
  - host: api.myhost.com
    http:
      paths:
      - backend:
          serviceName: bookapi-2
          servicePort: 8080
        path: PATH
</code></pre>
<p>You can use <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="noreferrer">server-snippet</a> annotation. <a href="https://github.com/nginxinc/kubernetes-ingress/issues/161#issuecomment-322224255" rel="noreferrer">This</a> seems like exactly what you want to achieve.</p>
<p>All kubernetes instruction ask to work with minikube that gives you a single kubernetes host. What options are available to work with true multinode clusters. For example I have VM servers available to me. What tools I need to employ to spin up a few VMWare fusion machines on my Mac and create a Kubernetes cluster. Can I start a few machines in VirtualBox and use them as a Kubernetes Cluster. </p>
<p>I think that we need to clear up some definitions here.</p> <p>First, what is Kubernetes:</p> <blockquote> <p>Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.</p> <p>Kubernetes has a number of features. It can be thought of as: a container platform - a microservices platform - a portable cloud platform and a lot more.</p> </blockquote> <p>You can read more about what is and what is not Kubernetes <a href="https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/" rel="noreferrer">here.</a></p> <p>Second, what is a minikube? Minikube is an all-in-one install of Kubernetes, as it takes all the <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="noreferrer">components</a> of Kubernetes (Master Components, Node Components, potential Add-ons) and runs them in a single virtual machine. You can install Kubernetes in many different ways as listed in this <a href="https://kubernetes.io/docs/setup/" rel="noreferrer">documentation section</a>, and if there is no minikube as you have asked, there still are many different ways to install minikube, for example, kubeadm as <a href="https://stackoverflow.com/users/1416534/murli">Murli</a> said, <a href="https://github.com/kubernetes-incubator/kubespray" rel="noreferrer">kubespray</a>, <a href="https://cloud.google.com/kubernetes-engine/" rel="noreferrer">Google Kubernetes Engine</a>, <a href="https://github.com/kubernetes/kops" rel="noreferrer">kops</a>, <a href="https://azure.microsoft.com/en-us/services/kubernetes-service/" rel="noreferrer">Azure Kubernetes Service</a>, etc. </p> <p>I highly recommend you to try <a href="https://github.com/kelseyhightower" rel="noreferrer">kelseyhightower</a>/<a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="noreferrer">kubernetes-the-hard-way</a> as it will walk you through setting up the cluster manually which will give you a good understanding on how things work inside Kubernetes - you can do that in <a href="https://cloud.google.com/" rel="noreferrer">Google Cloud Platform</a> (there is 300$ free trial) or other platforms. There is also this <strong><a href="https://kubernetes.io/docs/setup/scratch/" rel="noreferrer">guide</a></strong> in Kubernetes documentation where you can run Kubernetes on physical device step by step:</p> <blockquote> <p>Note that it requires considerably more effort than using one of the pre-defined guides.</p> <p>This guide is also useful for those wanting to understand at a high level some of the steps that existing cluster setup scripts are making.</p> </blockquote>
<p>I have stacked in this phase:</p> <ol> <li>Have local docker insecure registry and some images in it, e.g. 192.168.1.161:5000/kafka:latest</li> <li>Have kubernetes cloud cluster, for which I can access only via ~/.kube/config file, e,g. token.</li> </ol> <p>Need to deploy below deployment, but kubernetes cannot pull images, error message: </p> <blockquote> <p>Failed to pull image "192.168.1.161:5000/kafka:latest": rpc error: code = Unknown desc = Error response from daemon: Get <a href="https://192.168.1.161:5000/v2/" rel="noreferrer">https://192.168.1.161:5000/v2/</a>: http: server gave HTTP response to HTTPS client</p> </blockquote> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka labels: app: kafka spec: type: NodePort ports: - name: port9094 port: 9094 targetPort: 9094 selector: app: kafka --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kafka spec: replicas: 1 template: metadata: labels: app: kafka spec: hostname: kafka containers: - name: redis image: 192.168.1.161:5000/kafka:latest imagePullPolicy: Always ports: - name: port9094 containerPort: 9094 - envFrom: - configMapRef: name: env imagePullSecrets: - name: regsec </code></pre> <p>ON Kubernetes cluster I have created secret file "regsec" with this command:</p> <pre><code>kubectl create secret docker-registry regsec --docker-server=192.168.1.161 --docker-username=&lt;name from config file&gt; --docker-password=&lt;token value from config file&gt; cat ~/.docker/config.json { "auths": {}, "HttpHeaders": { "User-Agent": "Docker-Client/18.06.0-ce (linux)" } cat /etc/docker/daemon.json { "insecure-registries":["192.168.1.161:5000"] } kubectl version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} docker version Client: Version: 18.06.0-ce API version: 1.38 Go version: go1.10.3 Git commit: 0ffa825 Built: Wed Jul 18 19:09:54 2018 OS/Arch: linux/amd64 Experimental: false Server: Engine: Version: 18.06.0-ce API version: 1.38 (minimum version 1.12) Go version: go1.10.3 Git commit: 0ffa825 Built: Wed Jul 18 19:07:56 2018 OS/Arch: linux/amd64 Experimental: false </code></pre>
<p>You need to go to each of your nodes (not just the machine you run the client on), edit the Docker daemon config file <code>/etc/docker/daemon.json</code>, add the following to it, and then restart the Docker daemon:</p> <pre><code>{
  &quot;insecure-registries&quot;: [&quot;192.168.1.161:5000&quot;]
}
</code></pre>
<p>I have a problem with headers not being forwarded to my services. I am not sure how support for this was added to the Ingress, but I have the following Ingress resource:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    "nginx.org/proxy-pass-headers": "custom_header"
spec:
  rules:
  - host: myingress.westus.cloudapp.azure.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: service1
          servicePort: 8080
</code></pre> <p>However, my custom_header is not forwarded. In plain nginx I would set underscores_in_headers:</p> <pre><code>underscores_in_headers on;
</code></pre> <p>How can I add this configuration to my nginx ingress?</p> <p>Thanks.</p>
<p>For the nginx ingress controller I just used &quot;true&quot; instead of &quot;on&quot;, and it worked for me, as mentioned here: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/</a></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  enable-underscores-in-headers: &quot;true&quot;
</code></pre> <p>Then apply it: <code>kubectl apply -f configmap.yml</code></p> <p><a href="https://i.stack.imgur.com/1RBPi.png" rel="nofollow noreferrer">Screenshot of the result</a></p>
<p>I solved a permission issue when mounting <code>/var/lib/postgresql/data</code> by following <a href="https://stackoverflow.com/questions/51200115/chown-changing-ownership-of-data-db-operation-not-permitted/51203031#51203031">this answer</a> with <code>initContainers</code>.</p> <p>Now I'm trying to mount <code>postgresql.conf</code> as a volume, and I'm running into a similar permissioning issue that throws <code>chown: /var/lib/postgresql/data/postgresql.conf: Read-only file system</code>.</p> <p>What could I be missing? I've tried a bunch of different variations with little luck.</p> <pre><code>apiVersion: apps/v1beta1 kind: StatefulSet metadata: name: postgres labels: app: postgres spec: serviceName: postgres replicas: 1 updateStrategy: type: RollingUpdate template: metadata: labels: app: postgres spec: terminationGracePeriodSeconds: 10 initContainers: - name: chmod-er image: busybox:latest command: - /bin/chown - -R - '0' - /var/lib/postgresql/data volumeMounts: - name: postgredb mountPath: /var/lib/postgresql/data - name: pg-config mountPath: /var/lib/postgresql/data/postgresql.conf subPath: postgresql.conf containers: - name: postgres image: mdillon/postgis:10-alpine ports: - containerPort: 5432 volumeMounts: - name: postgredb mountPath: /var/lib/postgresql/data subPath: data - name: pg-config mountPath: /var/lib/postgresql/data/postgresql.conf subPath: postgresql.conf volumes: - name: postgredb persistentVolumeClaim: claimName: postgres-pvc - name: pg-config configMap: name: pg-config items: - key: postgresql.conf path: postgresql.conf </code></pre>
<p>From kubernetes 1.8 on, configmap is mounted readonly, excerpt from the CHANGELOG-1.8.md:</p> <blockquote> <p>Changes secret, configMap, downwardAPI and projected volumes to mount read-only, instead of allowing applications to write data and then reverting it automatically. Until version 1.11, setting the feature gate ReadOnlyAPIDataVolumes=false will preserve the old behavior. (#58720, @joelsmith)</p> </blockquote> <p>If you want to change the file that mounted from the configmap, you can copy it to another directory, then update it.</p>
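<p>One common way around this, sketched below, is to mount the ConfigMap somewhere outside the data directory and point postgres at it with <code>-c config_file=...</code>, so nothing read-only ever needs to live under <code>/var/lib/postgresql/data</code>. This assumes the image's entrypoint passes extra args through to postgres, which the official postgres-based images do; the mount path is arbitrary:</p> <pre><code>containers:
- name: postgres
  image: mdillon/postgis:10-alpine
  args: ["-c", "config_file=/etc/postgresql/postgresql.conf"]
  volumeMounts:
  - name: postgredb
    mountPath: /var/lib/postgresql/data
    subPath: data
  - name: pg-config
    mountPath: /etc/postgresql        # read-only ConfigMap mount outside the data dir
volumes:
- name: postgredb
  persistentVolumeClaim:
    claimName: postgres-pvc
- name: pg-config
  configMap:
    name: pg-config
</code></pre>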
<p>I'm trying to update an image in Kubernetes by using the following command:</p> <pre><code>kubectl set image deployment/ms-userservice ms-userservice=$DOCKER_REGISTRY_NAME/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT --insecure-skip-tls-verify </code></pre> <p>But when I receive the following error:</p> <pre><code>error: the server doesn't have a resource type "deployment" </code></pre> <p>I have checked that i am in the right namespace, and that the pod with the name exists and is running.</p> <p>I can't find any meaningful resources in regards to this error.</p> <p>Sidenote: I'm doing this through Bitbucket and a pipeline, which also builds the image i want to use. </p> <p>Any suggestions?</p>
<blockquote> <p>I have a suspicion that it has something to do with user - not much help from the error message.</p> </blockquote> <p>@TietjeDK is correct that it is just a misleading error message. It means one of two things is happening (or maybe both): the <code>kubectl</code> binary is newer than the supported version range of the cluster (so: using a v1.11 binary against a v1.8 cluster, for example) or the provided JWT is incorrectly signed.</p> <p>You should be very very careful with <code>--insecure-skip-tls-verify</code> not only because it's bad security hygiene but also because if a kubeconfig is incorrect -- as is very likely the case here -- then seeing the x509 error is a much clearer indication than trying to troubleshoot an invalid JWT.</p> <p>The indicator that makes me believe it is actually the signature of the token, and not its contents, is that if it were the contents you would seen an RBAC message <code>User "[email protected]" cannot list deployments in $namespace namespace</code>, meaning the apiserver did unpack the JWT and found its assertions were insufficient for the operation. But if you sign a JWT using a random key, the JWT will not unpack since it will fail public key validation and be rejected outright.</p> <p>So, the tl;dr is two-fold:</p> <ol> <li>fix the kubeconfig to actually contain the correct certificate authority (CA) for the cluster, so <code>--insecure-skip-tls-verify</code> isn't required</li> <li>while fixing kubeconfig, issue a new token for the (<code>User</code> | <code>ServiceAccount</code>) that comes from the cluster it is designed to interact with</li> </ol>
<p>I am looking to access a Postgres service outside of GKE, but on a compute engine VM in the same zone using the VMs internal IP.</p> <p>So far I've managed to access it via the external IP from inside the pod, but I'm looking to access it without leaving google infrastructure.</p> <p>I have done some testing and I can SSH into the kubernetes VM and connect to the other VM using the internal IP, but when I attempt to do this from the pod running on the VM it cannot connect. I even tried using the compute engine internal DNS name with the same result, success from the VM but unable to connect from the pod.</p> <p>I am sure this isn't the first time this problem has come up but I cannot find out a way to communicate from the kubernetes pod to a compute engine instance in the same zone.</p>
<p>It turns out with the compute engine VM I did not have the kubernetes <strong>pod address range</strong> whitelisted for port 5432. I simply added that to the network configuration for the VM and it started working...</p>
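<p>For anyone doing the same thing from the command line, this is roughly what that looks like; the cluster name, zone and pod CIDR are placeholders, and you can read the real pod range from the cluster description:</p> <pre><code># find the cluster's pod address range
gcloud container clusters describe some-cluster --zone us-central1-a \
    --format='value(clusterIpv4Cidr)'

# allow that range to reach PostgreSQL on the VM's network (values are examples)
gcloud compute firewall-rules create allow-gke-pods-to-postgres \
    --network=default --direction=INGRESS --action=ALLOW \
    --rules=tcp:5432 --source-ranges=10.40.0.0/14
</code></pre>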
<p>I have a simple ingress network, I want to access services at different namespaces, from this ingress network.</p> <p>How I can do this? My ingress network yaml file: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress spec: rules: - host: api.myhost.com http: paths: - backend: serviceName: bookapi-2 servicePort: 8080 path: /booking-service/ </code></pre> <p>I've set the ExternalNames service type to the yaml file: </p> <pre><code> apiVersion: v1 kind: Service metadata: name: bookapi-2 namespace: booking-namespace spec: type: ExternalName externalName: bookapi-2 ports: - name: app protocol: TCP port: 8080 targetPort: 8080 selector: app: bookapi-2 tier: backend-2 </code></pre>
<blockquote> <p>An ExternalName service is a special case of service that does not have selectors and uses DNS names instead.</p> </blockquote> <p>You can find out more about the ExternalName service in the official <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="noreferrer">Kubernetes documentation</a>.</p> <p>When you want to access a service from a different namespace, your yaml could, for <a href="https://akomljen.com/kubernetes-tips-part-1/" rel="noreferrer">example</a>, look like this:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: test-service-1
  namespace: namespace-a
spec:
  type: ExternalName
  externalName: test-service-2.namespace-b.svc.cluster.local
  ports:
  - port: 80
</code></pre> <p>As to your Ingress yaml file, please recheck it and make sure it is compliant with the official examples (for <a href="https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e" rel="noreferrer">example</a> this one), as it contains some inconsistencies:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: www.mysite.com
    http:
      paths:
      - backend:
          serviceName: website
          servicePort: 80
  - host: forums.mysite.com
    http:
      paths:
      - path:
        backend:
          serviceName: forums
          servicePort: 80
</code></pre> <p>Please also recheck your ExternalName yaml, as it has a targetPort and selectors which are not used in this type of <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Service</a>, and make sure that:</p> <blockquote> <p>ExternalName Services are available only with <code>kube-dns</code> version 1.7 and later.</p> </blockquote> <p>In case you do not succeed, please share the kind of problem you have met.</p>
<p>I am working on a design where I will have my git repository located at one of the google cloud instance. I am trying to run my tests in parallel using kubernetes cluster. However I am not able to share the Standard persistent disk(which is having the github repository) that is configured to my instance with the kubernetes cluster. I need my containers to work on the repository that is located in the instance and don't want to create copies of it in cluster instances. </p> <p>I have achieved the same thing using docker on a vm but now I would like to put everything on the cloud and run it in a containerised environment. </p>
<p>You need to create a persistent volume and then mount that PV into the kubernetes pod.</p> <p>You should note that this will be read-only, as you can't attach multiple containers that write to the same persistent disk - if you want read-write access then you will need to use NFS/Gluster or, more sensibly, GCR.</p>
<p>How do I set the <code>client_max_body_size</code> parameter for a single subdomain? I have a server that will accept file uploads up to 5TB. All the examples I've looked at show you how to set it globally. I have multiple rules in my ingress.yaml, I don't want every single rule to inherit the <code>client_max_body_size</code> parameter, only the file upload server should.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-nginx annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: example.com http: paths: - backend: serviceName: homepage servicePort: 80 - host: storage.example.com http: paths: - backend: serviceName: storage servicePort: 80 </code></pre> <p>In the above ingress.yaml, I want to set <code>client_max_body_size</code> for the <code>storage</code> service only, which is located at the host <code>storage.example.com</code>.</p>
<p>Because I don't see <code>client-max-body-size</code> on the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">list of annotations</a>, that leads me to believe you'll have to use the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="nofollow noreferrer">custom-config-snippet</a> to include the <code>client_max_body_size 5tb;</code> for that Ingress:</p> <pre><code>metadata: annotations: nginx.ingress.kubernetes.io/configuration-snippet: | client_max_body_size 5tb; </code></pre> <p>However, given that you said that you only want it for <code>storage.example.com</code>, you'll need to split the Ingress config for <code>storage.example.com</code> out into its own Ingress resource, since (AFAIK) the annotations are applied to every <code>host:</code> record in the Ingress resource.</p>
<p>I have a kubernetes namespace that I want to leverage for Gitlab runners. I installed the runners following the <a href="https://docs.gitlab.com/ee/install/kubernetes/gitlab_runner_chart.html" rel="nofollow noreferrer">Helm Chart</a> instructions. The problem I am running into is that when the job container spins up, I get the following ERROR: </p> <p>Job failed: image pull failed: rpc error: code = Unknown desc = Get <a href="https://registry-1.docker.io/v2/" rel="nofollow noreferrer">https://registry-1.docker.io/v2/</a>: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)</p> <p>It's trying to connect to the public docker repo but my organizations firewall is blocking it. How would I go about having the instace go to our private repo? </p> <p>Any help would be greatly appreciated as I've been stuck on this issue for some time now :(</p>
<p>I would presume you'll need to specify a <code>values.yaml</code> to the <code>helm install</code> that points to your mirrored copy of the images it needs. So:</p> <ul> <li><a href="https://gitlab.com/charts/gitlab-runner/blob/1e329394f69b80003a43c503e7d11eb6463d008a/values.yaml#L4" rel="nofollow noreferrer">gitlab/gitlab-runner</a></li> <li><a href="https://gitlab.com/charts/gitlab-runner/blob/1e329394f69b80003a43c503e7d11eb6463d008a/values.yaml#L14" rel="nofollow noreferrer">busybox</a></li> <li><a href="https://gitlab.com/charts/gitlab-runner/blob/1e329394f69b80003a43c503e7d11eb6463d008a/values.yaml#L87" rel="nofollow noreferrer">ubuntu</a></li> </ul> <p>or whatever ones you wish to use for the <code>init</code> and <code>runner: image:</code></p> <p>Since you already have the chart deployed, I am fairly certain you can just do a <a href="https://github.com/helm/helm/blob/v2.10.0/docs/using_helm.md#helm-upgrade-and-helm-rollback-upgrading-a-release-and-recovering-on-failure" rel="nofollow noreferrer">"helm upgrade"</a> that changes only those values:</p> <pre><code>helm upgrade --set "image=repo.example.com/gitlab/gitlab-runner" \
             --set "init.image=repo.example.com/etc-etc" \
             [and so forth] \
             $release_name $chart_name
</code></pre> <p><em>(substituting the release name and the name of the chart as your helm knows it, of course)</em></p>
<p>Basically we have a set of microservices we have deployed to a kubernetes cluster hosted in AWS. We would like to run these through a gateway configuration in nginx. </p> <p>Our current configuration which doesn't work looks something like this-</p> <pre><code>upstream some-api1 { server some-api1:80; } upstream some-api2 { server some-api2:80; } upstream some-api3 { server some-api3:80; } server { listen 80; server_name gateway.something.com; location /api1 { proxy_pass http://some-api1; } location /api2 { proxy_pass http://some-api2; } location /api3 { proxy_pass http://some-api3; } } </code></pre> <p>Our services have been built with dotnet core, so the underlying urls would be something like <a href="http://some-api1/" rel="nofollow noreferrer">http://some-api1/</a>{api/controllername} . I'm always getting a 404 when I try hitting these endpoints through postman, which tells me it can't resolve these mappings.</p> <p>However I'm able to access an api within the cluster using an explicit config for an api like so(which is what I don't want to do)-</p> <pre><code>server { listen 80; server_name someapi1.something.com; location /{ proxy_pass http://some-api1; } }.. </code></pre> <p>If someone could shed some light on what's wrong with the configuration or recommend the best approach for this it would be greatly appreciated.</p>
<p>As @Luminance suggests, you're seeing traffic to /api1 go to some-api1/api1 instead of just some-api1 on the base path for that target service (which is what your app would respond to). Following <a href="https://gist.github.com/soheilhy/8b94347ff8336d971ad0" rel="nofollow noreferrer">https://gist.github.com/soheilhy/8b94347ff8336d971ad0</a> you could re-write that target like</p>

<pre><code>location /api1 {
    rewrite ^/api1(.*) /$1 break;
    proxy_pass http://some-api1;
}
</code></pre>
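<p>A variant that may read more cleanly (assuming the upstream names above) is to let <code>proxy_pass</code> do the prefix replacement itself by putting a trailing slash on both the location and the target; note that with this form requests need to include the trailing slash after <code>/api1</code>:</p>

<pre><code>location /api1/ {
    proxy_pass http://some-api1/;
}
</code></pre>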
<p>Is it possible to use Kubespray with a bastion host, but on a custom port and with agent forwarding? If it is not supported, what changes does one need to make?</p>
<p>Yes, always, since you can configure that at three separate levels: via the host user's <code>~/.ssh/config</code>, via the entire playbook with <code>group_vars</code>, or as inline config (that is, on the command line or in the inventory file).</p>

<p>The ssh config is hopefully straightforward:</p>

<pre><code># or whatever pattern matches the target instances
Host 1.2.* *.example.com
    ProxyJump someuser@some-bastion:1234
    # and then the Agent should happen automatically, unless you mean
    # ForwardAgent yes
</code></pre>

<p>I'll speak to the inline config next, since it's a little simpler:</p>

<pre><code>ansible-playbook -i whatever \
    -e '{"ansible_ssh_common_args": "-o ProxyJump=\"someuser@jump-host:1234\""}' \
    cluster.yaml
</code></pre>

<p>or via the inventory in the same way:</p>

<pre><code>master-host-0 ansible_host=1.2.3.4 ansible_ssh_common_args="-o ProxyJump='someuser@jump-host:1234'"
</code></pre>

<p>or via <code>group_vars</code>, which you can either add to an existing <code>group_vars/all.yml</code>, or if it doesn't exist then create that <code>group_vars</code> directory containing the <code>all.yml</code> file as a child of the directory containing your inventory file.</p>

<p>If you have more complex ssh config than you wish to encode in the inventory/command-line/group_vars, you can also instruct the ansible-invoked ssh to use a dedicated config file via the <a href="https://docs.ansible.com/ansible/2.6/user_guide/intro_inventory.html?highlight=ansible_ssh_extra_args#list-of-behavioral-inventory-parameters" rel="nofollow noreferrer"><code>ansible_ssh_extra_args</code></a> variable:</p>

<pre><code>ansible-playbook -e '{"ansible_ssh_extra_args": "-F /path/to/special/ssh_config"}' ...
</code></pre>
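<p>To make the <code>group_vars</code> option concrete, a minimal sketch of such a file might be (the bastion user, host and port are placeholders for your own):</p>

<pre><code># group_vars/all.yml, next to your inventory file
ansible_ssh_common_args: "-o ProxyJump='someuser@jump-host:1234' -o ForwardAgent=yes"
</code></pre>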
<p>When adding an origin to Cloud CDN, only the HTTP(S) Load Balancer appears in the options, while the TCP/UDP Load Balancer doesn't.</p>

<p><a href="https://i.stack.imgur.com/fwPw1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fwPw1.png" alt="enter image description here"></a></p>

<p>Is it possible to set a TCP/UDP Load Balancer as the origin for Google Cloud CDN? How?</p>

<p>In my case, I really need a TCP/UDP origin since the LB is managed by nginx (K8S Nginx Ingress, not K8S GCE Ingress). The same issue also needs to be tackled when using the Istio Gateway.</p>
<p>According to <a href="https://cloud.google.com/cdn/docs/using-cdn" rel="nofollow noreferrer">this</a> doc:</p>

<blockquote>
<p>Cloud CDN uses HTTP(S) load balancing as the origin for cacheable content. You must use HTTP(S) load balancing as the origin of content cached by Cloud CDN.</p>
</blockquote>

<p>At the moment, it is not possible to use a TCP load balancer.</p>

<p>You can file a feature request <a href="https://issuetracker.google.com" rel="nofollow noreferrer">here</a> to have this implemented. You should check to see if a similar feature request has been filed first, &amp; upvote/star it, as the feature requests which impact the most people will have a better chance of being implemented.</p>
<p>Can anyone tell me how to get the pods behind a service with client-go, the Kubernetes client library?</p>

<p>Thanks</p>
<p>I found this accepted answer a little lacking, as far as clarity. This code works under 1.10. In this example, my svc deployments all have a controlled name based on the artifact that they front, and the pods leverage that as well. Please note that I am a Java programmer learning Go, so there may be a little too much OO for some Go enthusiasts.</p>

<pre class="lang-golang prettyprint-override"><code>package main

import (
	&quot;os&quot;
	&quot;log&quot;
	&quot;path/filepath&quot;
	&quot;k8s.io/client-go/tools/clientcmd&quot;
	&quot;k8s.io/client-go/kubernetes&quot;
	typev1 &quot;k8s.io/client-go/kubernetes/typed/core/v1&quot;
	metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot;
	corev1 &quot;k8s.io/api/core/v1&quot;
	&quot;fmt&quot;
	&quot;strings&quot;
	&quot;errors&quot;
	&quot;k8s.io/apimachinery/pkg/labels&quot;
)

func main() {
	kubeconfig := filepath.Join(
		os.Getenv(&quot;HOME&quot;), &quot;.kube&quot;, &quot;config&quot;,
	)
	namespace := &quot;FOO&quot;
	k8sClient, err := getClient(kubeconfig)
	if err != nil {
		fmt.Fprintf(os.Stderr, &quot;error: %v\n&quot;, err)
		os.Exit(1)
	}
	svc, err := getServiceForDeployment(&quot;APP_NAME&quot;, namespace, k8sClient)
	if err != nil {
		fmt.Fprintf(os.Stderr, &quot;error: %v\n&quot;, err)
		os.Exit(2)
	}
	// getPodsForSvc prints the pod names; the returned list is not used here
	_, err = getPodsForSvc(svc, namespace, k8sClient)
	if err != nil {
		fmt.Fprintf(os.Stderr, &quot;error: %v\n&quot;, err)
		os.Exit(2)
	}
}

// getClient builds a CoreV1 client from the given kubeconfig path
func getClient(configLocation string) (typev1.CoreV1Interface, error) {
	kubeconfig := filepath.Clean(configLocation)
	config, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, err
	}
	return clientset.CoreV1(), nil
}

// getServiceForDeployment finds the service whose name contains the deployment name
func getServiceForDeployment(deployment string, namespace string, k8sClient typev1.CoreV1Interface) (*corev1.Service, error) {
	listOptions := metav1.ListOptions{}
	svcs, err := k8sClient.Services(namespace).List(listOptions)
	if err != nil {
		log.Fatal(err)
	}
	for i := range svcs.Items {
		if strings.Contains(svcs.Items[i].Name, deployment) {
			fmt.Fprintf(os.Stdout, &quot;service name: %v\n&quot;, svcs.Items[i].Name)
			return &amp;svcs.Items[i], nil
		}
	}
	return nil, errors.New(&quot;cannot find service for deployment&quot;)
}

// getPodsForSvc lists the pods matched by the service's label selector
func getPodsForSvc(svc *corev1.Service, namespace string, k8sClient typev1.CoreV1Interface) (*corev1.PodList, error) {
	set := labels.Set(svc.Spec.Selector)
	listOptions := metav1.ListOptions{LabelSelector: set.AsSelector().String()}
	pods, err := k8sClient.Pods(namespace).List(listOptions)
	for _, pod := range pods.Items {
		fmt.Fprintf(os.Stdout, &quot;pod name: %v\n&quot;, pod.Name)
	}
	return pods, err
}
</code></pre>
<p>I am looking for a way to rollback a helm release to its previous release without specifying the target release version as a number.</p> <p>Something like <code>helm rollback &lt;RELEASE&gt; ~1</code> (like <code>git reset HEAD~1</code>) would be nice.</p>
<p>As it turns out, there is an undocumented option to rollback to the previous release by defining the target release version as 0. like: <code>helm rollback &lt;RELEASE&gt; 0</code></p> <p>Source: <a href="https://github.com/helm/helm/issues/1796" rel="nofollow noreferrer">https://github.com/helm/helm/issues/1796</a></p>
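<p>As a quick usage sketch, assuming a release named <code>myapp</code> (the name is illustrative):</p>

<pre><code>helm history myapp      # inspect the existing revisions
helm rollback myapp 0   # 0 rolls back to the release before the current one
</code></pre>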
<p>I am using an external nginx load balancer and trying to configure the K8s master, but it's failing with the error below:</p>

<blockquote>
<p>error uploading configuration: unable to create configmap: configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"</p>
</blockquote>

<p>To me it looks more like a cert issue, but I am having a hard time finding what I am missing; any help is appreciated. In our infrastructure we use an F5 load balancer in front of the apiserver and I am seeing the same issue there. This is the environment I have created for troubleshooting.</p>

<p><strong>kubeadm-config:</strong></p>

<pre><code>apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
apiServerCertSANs:
- "ec2-23-23-244-63.compute-1.amazonaws.com"
api:
    controlPlaneEndpoint: "ec2-23-23-244-63.compute-1.amazonaws.com:6443"
etcd:
    external:
        endpoints:
        - https://172.31.32.160:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
    # This CIDR is a calico default. Substitute or remove for your CNI provider.
    podSubnet: "10.244.0.0/16"
</code></pre>

<p><strong>Env:</strong> kubelet 1.11.1, kubeadm 1.11.1, kubectl 1.11.1</p>

<p><strong>Output</strong></p>

<pre><code>[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 41.036802 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
error uploading configuration: unable to create configmap: configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"
</code></pre>

<p><strong>logs:</strong></p>

<pre><code>Unable to register node "ip-172-31-40-157" with API server: nodes is forbidden: User "system:anonymous" cannot create nodes at the cluster scope
tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: nodes "ip-172-31-40-157" is forbidden: User "system:anonymous" cannot list nodes at t
tor.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list pods at the cluster sco
tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list services at the cluster
on_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-40-157" not found
tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: nodes "ip-172-31-40-157" is forbidden: User "system:anonymous" cannot list nodes at t
tor.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list pods at the cluster sco
tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list services at the cluster
tor.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: nodes "ip-172-31-40-157" is forbidden: User "system:anonymous" cannot list nodes at t
tor.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list pods at the cluster sco
:172] Unable to update cni config: No networks found in /etc/cni/net.d
</code></pre>

<p>Nginx:</p>

<pre><code>upstream mywebapp1 {
    server 172.31.40.157:6443;
}

server {
    listen 6443 ssl;
    server_name ec2-23-23-244-63.compute-1.amazonaws.com;
    ssl on;
    ssl_certificate /opt/certificates/server.crt;
    ssl_certificate_key /opt/certificates/server.key;
    ssl_trusted_certificate /opt/certificates/ca.crt;

    location / {
        proxy_pass https://mywebapp1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Nginx Server : 172-31-44-203
Master Server : 172-31-40-157
</code></pre>

<p><strong>I am using self-signed certs, and the CA used to generate all the certs, including the one in nginx, is the same.</strong></p>

<p>I had the same issue in our infrastructure when we used an F5 load balancer.</p>
<p>If your nodes speak to the apiserver through a load balancer, and expect to use client certificate credentials to authenticate (which is typical for nodes), the load balancer must not terminate or re-encrypt TLS, or the client certificate information will be lost and the apiserver will see the request as anonymous. </p>
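<p>In nginx terms, that means terminating at layer 4 with the <code>stream</code> module instead of an <code>http</code> <code>server</code> block that does <code>ssl</code> termination. A minimal sketch, reusing the addresses from the question (the <code>stream</code> block sits at the top level of <code>nginx.conf</code>, alongside rather than inside the <code>http</code> block, and requires nginx built with the stream module):</p>

<pre><code>stream {
    upstream apiservers {
        server 172.31.40.157:6443;
    }

    server {
        listen 6443;
        proxy_pass apiservers;
    }
}
</code></pre>

<p>With plain TCP pass-through like this, the kubelet's client certificate reaches the apiserver intact and the requests are no longer treated as <code>system:anonymous</code>.</p>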
<p>I'm new to Kubernetes and I am getting a 403 error when trying to access the cluster.</p>

<pre><code>kubectl cluster-info
Kubernetes master is running at https://x.x.x.x:6443
KubeDNS is running at https://x.x.x.x:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

"status": "Failure",
"message": "namespaces is forbidden: User \"system:anonymous\" cannot list namespaces at the cluster scope",
"reason": "Forbidden",
"details": {
  "kind": "namespaces"
},
"code": 403

kubectl get pods --all-namespaces
kube-system   calico-etcd-6629s                          1/1   Running   0   10h
kube-system   calico-kube-controllers-675684d4bb-5h28d   1/1   Running   0   10h
kube-system   calico-node-r75wv                          2/2   Running   0   10h
kube-system   etcd-sp2013a....                           1/1   Running   0   10h
kube-system   kube-apiserver-sp2013a ...                 1/1   Running   0   10h
kube-system   kube-controller-manager-sp2013a....        1/1   Running   0   10h
kube-system   kube-dns-6f4....df-fcqvt                   3/3   Running   0   10h
kube-system   kube-proxy-mpf2j                           1/1   Running   0   10h
kube-system   kube-scheduler-sp2013a......               1/1   Running   0   10h
</code></pre>

<p>Everything is running.</p>
<p>That sounds like you're being blocked by the cluster's RBAC policies. The <code>system:anonymous</code> user is being prevented from listing the namespaces in the cluster. (Along the lines of <code>kubectl get namespaces</code>)</p> <p>Running <code>kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous</code> would create a <code>clusterrolebinding</code> that adds the <code>system:anonymous</code> user to the <code>cluster-admin</code> role. </p> <p>Blindly elevating accounts to <code>cluster-admin</code> is <em>not</em> recommended in a production cluster but since you are new, this should get you up and running. </p> <p>All clusters need some form of authorization before accessing the API-server (accessing <code>kubectl</code>) like certificate authentication. RBAC is a way to <em>limit</em> the actions that users (both human users and service accounts) can take <em>in</em> the cluster. </p> <p>A great RBAC primer from the CNCF can be found <a href="https://www.cncf.io/blog/2018/08/01/demystifying-rbac-in-kubernetes/" rel="noreferrer" title="Demystifying RBAC in Kubernetes">here</a> and the official <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer" title="RBAC Docs">docs</a> are great too! Good Luck! </p>
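<p>If you would rather not hand out full admin rights while you experiment, a more limited sketch is to bind the built-in <code>view</code> ClusterRole instead, which only allows read access to most resources (whether anonymous access should be enabled at all is a separate decision for your cluster):</p>

<pre><code>kubectl create clusterrolebinding anonymous-view \
  --clusterrole=view \
  --user=system:anonymous
</code></pre>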
<p>I am trying to implement a CI/CD pipeline for my microservice using Jenkins, Kubernetes and Kubernetes Helm. I am using a Helm chart for packaging the YAML files and deploying into the Kubernetes cluster. I am now learning how Helm charts and deployments work, and while learning I found the image name definition in the deployment YAML file.</p>

<p>I have two questions:</p>

<ol>
<li>If we only define the image name, will it automatically pull from Docker Hub? Or do we need to define anything additional in the deployment chart YAML file for pulling?</li>
<li>How does the Helm Tiller communicate with the Docker Hub registry?</li>
</ol>
<p>Docker image names in Kubernetes manifests follow the same rules as everywhere else. If you have an image name like <code>postgres:9.6</code> or <code>myname/myimage:foo</code>, those will be looked up on Docker Hub like normal. If you're using a third-party repository (Google GCR, Amazon ECR, quay.io, ...) you need to include the repository name in the image name. It's the exact same string you'd give to <code>docker run</code> or <code>docker build -t</code>.</p> <p>Helm doesn't directly talk to the Docker registry. The Helm flow here is:</p> <ol> <li>The local Helm client sends the chart to the Helm Tiller.</li> <li>Tiller applies any templating in the chart, and sends it to the Kubernetes API.</li> <li>This creates a Deployment object with an embedded Pod spec.</li> <li>Kubernetes creates Pods from the Deployment, which have image name references.</li> </ol> <p>So if your Helm chart names an image that doesn't exist, all of this flow will run normally, until it creates Pods that wind up in <code>ImagePullBackOff</code> state.</p> <p>P.S.: if you're not already doing this, you should make the image tag (the part after the colon) configurable in your Helm chart, and declare your image name as something like <code>myregistry.io/myname/myimage:{{ .Values.tag }}</code>. Your CD system can then give each build a distinct tag and pass it into <code>helm install</code>. This makes it possible to roll back fairly seamlessly.</p>
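<p>As a small sketch of that last point (chart and release names here are placeholders): keep a default in <code>values.yaml</code>,</p>

<pre><code># values.yaml
tag: latest
</code></pre>

<p>reference it from the Deployment template as <code>image: myregistry.io/myname/myimage:{{ .Values.tag }}</code>, and then have the CD system pass the build tag at deploy time, e.g. <code>helm upgrade --install myrelease ./mychart --set tag=build-1234</code>. Rolling back is then just re-deploying with an earlier tag (or using <code>helm rollback</code>).</p>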
<p>I was trying to create a namespace using <code>kubectl</code>, but I got this error:</p> <blockquote> <p>Error from server (Forbidden): error when creating "namespacefoo": namespaces is forbidden: User "[email protected]" cannot create namespaces at the cluster scope</p> </blockquote> <p>Is there a concept of "scope" in Kubernetes? I couldn't find any information about different types of scope. If I cannot create namespace at the cluster scope, where can I create the namespace? How can I check which "scopes" do I have access to?</p>
<p>That depends on your Kubernetes environment.</p>

<p>This <a href="https://stackoverflow.com/a/49094802/6309">answer suggests</a> (in a <a href="https://cloud.google.com/" rel="nofollow noreferrer">Google Cloud environment</a>):</p>

<blockquote>
<p>That suggests that <code>gcloud config set container/use_client_certificate</code> is set to <code>true</code> i.e. that <code>gcloud</code> is expecting a client cluster certificate to authenticate to the cluster (this is what the 'client' in the error message refers to).</p>

<p>Unsetting <code>container/use_client_certificate</code> by issuing the following command in the <code>gcloud config</code> ends the need for a legacy certificate or credentials and prevents the error message:</p>

<pre><code>gcloud config unset container/use_client_certificate
</code></pre>

<p>Issues such as this may be more likely if you are using an older version of <code>gcloud</code> on your home workstation or elsewhere.</p>
</blockquote>

<p>Still, <a href="https://github.com/kubernetes/kubernetes/issues/62361#issuecomment-397215728" rel="nofollow noreferrer">kubernetes/kubernetes issue 62361</a> mentions the same error message.</p>
<p>I need an Akka cluster to run multiple CPU-intensive jobs. I cannot predict how much CPU power I need. Sometimes load is high, while at other times there isn't much load. I guess autoscaling is a good option, which means, for example, that I should be able to specify that I need a minimum of 2 and a maximum of 10 actors. The cluster should scale up or down, with a cool-off period, as load goes up or down. Is there a way to do that? I am guessing one could make a Docker image of the codebase and autoscale it using Kubernetes. Is that possible? Is there a native Akka solution? Thanks</p>
<p>If you consider a <a href="https://github.com/hseeberger/constructr" rel="nofollow noreferrer">project like <code>hseeberger/constructr</code></a> and its <a href="https://github.com/hseeberger/constructr/issues/179" rel="nofollow noreferrer">issue 179</a>, a native Akka solution should be based on <a href="https://github.com/akka/akka-management" rel="nofollow noreferrer"><code>akka/akka-management</code></a>:</p> <blockquote> <p>This repository contains interfaces to inspect, interact and manage various Parts of Akka, primarily Akka Cluster. Future additions may extend these concepts to other parts of Akka.</p> </blockquote> <p>There is a <a href="https://github.com/akka/akka-management/tree/master/bootstrap-joining-demo/kubernetes-api" rel="nofollow noreferrer">demo for kubernetes</a>.</p>
<p>I created a Mongodb service according to the Kubernetes <a href="http://kubernetes.io/docs/getting-started-guides/meanstack/" rel="noreferrer">tutorial</a>.</p> <p>Now my question is how do I gain access to the database itself, with a client like Robomongo or similar clients? Just for making backups or exploring what data have been entered. </p> <p>The mongo-pod and service only have an internal endpoint, and a single mount. </p> <p>Is there any way to safely access this instance with no public endpoint?</p> <p>Internally URI is <code>mongo:27***</code></p>
<p>You can use <code>kubectl port-forward mypod 27017:27017</code> and then just connect your mongodb client to <code>localhost:27017</code>.</p> <p>If you want to stop, just hit <code>Ctrl+C</code> on the same cmd window to stop the process.</p>
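<p>If you prefer not to look up an individual pod name first, recent kubectl versions can also forward to the service directly (assuming the tutorial's service is named <code>mongo</code>):</p>

<pre><code>kubectl port-forward svc/mongo 27017:27017
</code></pre>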
<p>Background: I am trying to learn and experiment a bit with Docker and Kubernetes in a "development/localhost" environment that I could later replicate "for real" on some cloud. But I'm running low on everything (disk capacity, memory, etc.) on my laptop. So I figured, "why not develop from the cloud?"</p>

<p>I know AWS has a Kubernetes service, but if my understanding is correct, it is mostly for deploying already well-configured stacks, and it is not very suited for the development of the stack configuration itself.</p>

<p>After searching a bit, I found out about Minikube, which helps us experiment with our configs by running Kubernetes deployments on a single machine. <strong>I'd like to set up a Kubernetes + Minikube (or equivalent) development environment on an EC2 instance (ideally running Amazon Linux 2 OS).</strong></p>

<p>I'm having a hard time figuring out:</p>

<ul>
<li><strong>Is it actually possible to set up Minikube on EC2?</strong></li>
<li>(If yes), how do I do it? I tried following <a href="https://stackoverflow.com/a/46756411/2832282">this answer</a> but I'm getting stuck at registering the VirtualBox repo and downloading the VirtualBox command line tools.</li>
</ul>
<p>Here's how to do it.</p>

<p>Start an EC2 instance with 8 GB of RAM and a public IP, and ensure you can SSH to this box in the normal ways. Ensure it's an Ubuntu instance (I'm using 16.04).</p>

<p>Once SSH'd into the instance, run the following to update and install Docker:</p>

<pre><code>sudo -i
apt-get update -y &amp;&amp; apt-get install docker.io
</code></pre>

<p>Install minikube:</p>

<pre><code>curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 &amp;&amp; chmod +x minikube &amp;&amp; sudo mv minikube /usr/local/bin/
</code></pre>

<p>Install the kube CLI:</p>

<pre><code>curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl &amp;&amp; chmod +x kubectl &amp;&amp; sudo mv kubectl /usr/local/bin/
</code></pre>

<p>Now verify the version, just to make sure you can see it:</p>

<pre><code>/usr/local/bin/minikube version
</code></pre>

<p>Add autocompletion to the current shell with:</p>

<pre><code>source &lt;(kubectl completion bash)
</code></pre>

<p>Start the cluster with this (note the no-VM-driver flag):</p>

<pre><code>/usr/local/bin/minikube start --vm-driver=none
</code></pre>

<p>Check it's up and running with this:</p>

<pre><code>/usr/local/bin/minikube status
</code></pre>

<p>Right, that should have you a basic cluster running with no extra nodes :)</p>

<p>If you want a nice dashboard, do the following. (I am using Windows here, making use of WSL on Windows 10; you can do this on Mac or Linux if you like, the steps are just slightly different, but as long as you can follow basic steps like setting variables you will be fine.)</p>

<p>In order to see the GUI on your local box, you are going to need to run a dashboard, and to do other useful stuff, run kubectl locally.</p>

<p><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">Please follow this to install kubectl locally</a></p>

<p>On Windows you can use chocolatey like so:</p>

<pre><code>choco install kubernetes-cli
</code></pre>

<p>Now download your admin.conf file from the EC2 instance using scp; it is located in /etc/kubernetes.</p>

<p>Now set a local variable called <code>KUBECONFIG</code> and point it to the file you just downloaded.</p>

<p>Go onto the EC2 instance and use this to install a dashboard:</p>

<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard-arm.yaml
</code></pre>

<p>This dashboard is a dev dashboard; do not use it in production :)</p>

<p>Run the following command to find out what IP address the dashboard is running on:</p>

<pre><code>/usr/local/bin/kubectl get svc --namespace kube-system
</code></pre>

<p>The output should look a bit like this:</p>

<pre><code>root@ip-172-31-39-236:~# /usr/local/bin/kubectl get svc --namespace kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      &lt;none&gt;        53/UDP,53/TCP   49m
kubernetes-dashboard   NodePort    10.109.248.81   &lt;none&gt;        80:30000/TCP    49m
</code></pre>

<p>Now run this on your local box to tunnel to the dashboard from the local machine:</p>

<pre><code>ssh -i ~/.ssh/keyfile.pem -L 8080:10.109.248.81:80 ubuntu@ec2-i-changed-this-for-this-post.eu-west-1.compute.amazonaws.com
</code></pre>

<p>Now open a web browser at:</p>

<pre><code>http://localhost:8080
</code></pre>

<p>and you should now be able to see the dashboard, which looks like this:</p>

<p><a href="https://i.stack.imgur.com/L0QJZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L0QJZ.png" alt="enter image description here"></a></p>

<p>Sorry the post is so long, but it's pretty involved. Also please note this is really only a dev machine; if you are going to need a prod instance you need to do this with better security, and probably not run stuff as root :)</p>

<p>One other thing: you may note kubectl locally isn't being used in this guide. You can use it to hit the remote API if you run (locally):</p>

<pre><code>kubectl proxy
</code></pre>

<p>There is a guide on this on the Kubernetes homepage <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/" rel="nofollow noreferrer">here</a>. Also note the admin.conf probably has localhost as the server address; it needs to be the address of the EC2 instance, and you'll need to make sure the port is accessible from your IP in your security group for the EC2 instance.</p>

<p>If you curl or browse to <code>http://localhost:8001/api</code> you should see this, or something like it :)</p>

<pre><code>{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.4.60:6443"
    }
  ]
}
</code></pre>
<p>I am new to Kubernetes. Is there any way we can fix the name of a pod? If we are creating only one replica, then I want it to be generated with the same name every time. Currently it generates a different name each time. If I want to see the logs of a container, each time I need to change the command to the newly generated pod name.</p>

<p>Following is a sample of the YAML file.</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nagendra-app-deploy1
spec:
  replicas: 1
  template:
    metadata:
      name: nagendra-app-deploy1
      labels:
        app: nagendra-app-deploy1
    spec:
      containers:
      - name: nagendra-spring-app1
        image: springbootapp:v1
        ports:
        - containerPort: 8080
      - name: nagendra-myImage
        image: myImage:v2
</code></pre>
<p>There is no way to generate the same name for a Deployment-produced pod. As far as the command is concerned, you can use <code>kubectl get po -l app=nagendra-app-deploy1 -o jsonpath={.items[0].metadata.name}</code> to get the pod's name.</p>
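<p>To avoid copying the generated name around when checking logs, one option (a sketch, reusing the <code>app=nagendra-app-deploy1</code> label and container name from the question) is to feed that lookup straight into <code>kubectl logs</code>, or to let <code>kubectl logs</code> select by label itself:</p>

<pre><code># substitute the looked-up pod name into the logs command
kubectl logs $(kubectl get po -l app=nagendra-app-deploy1 -o jsonpath={.items[0].metadata.name}) -c nagendra-spring-app1

# or select the pod by label directly
kubectl logs -l app=nagendra-app-deploy1 -c nagendra-spring-app1
</code></pre>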