<p>I have this structure:</p> <pre><code>├── root │   ├── app A │   ├── app B │   ├── app C </code></pre> <p>The root folder is initialized as a git repository so I can easily push the whole project. Now I am wondering where to place the <code>kubernetes yaml</code> files. My idea would be something like this:</p> <pre><code> ├── root │   ├── app A │   ├── app B │   ├── app C │   ├── kubernetes.conf.d ├── appA.yaml </code></pre> <p><strong>Question</strong>: is there a common location to place the config files for Kubernetes?</p>
elp
<p>This is completely subjective and you should do whatever works for you. My preference, however, is:</p> <pre><code># Each application has - opt - kubernetes - deployment.yaml - helm - values.yaml </code></pre> <p>I like using <code>./opt</code>, as it is familiar to most with Unix experience.</p>
Rawkode
<p>I'm trying to pass JVM settings to a Java application which is configured to use the JKube Maven plugin to deploy on OpenShift. I've added the following settings in the deployment.yaml file:</p> <pre><code>spec: template: spec: containers: - env: - name: JAVA_OPTS value: '-Xms128m -Xmx1024m -XX:MetaspaceSize=128M -XX:MaxMetaspaceSize=256m' </code></pre> <p>When the Java process is started in a Pod, I can see that the JAVA_OPTS values are overridden by the default values (see the second setting for <strong>-XX:MaxMetaspaceSize</strong>):</p> <pre><code>DEBUG [org.jboss.as.config] (MSC service thread 1-1) VM Arguments: -Xms128m -Xmx1024m -XX:MetaspaceSize=128M -XX:MaxMetaspaceSize=256m -XX:+UseParallelOldGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MaxMetaspaceSize=100m -XX:+ExitOnOutOfMemoryError </code></pre> <p>That eventually results in:</p> <pre><code>Terminating due to java.lang.OutOfMemoryError: Metaspace </code></pre> <p>Can you recommend a way to correctly set the JVM settings when using the JKube Maven plugin? Thanks</p>
Carla
<p>You have to use the GC_MAX_METASPACE_SIZE environment variable to define the maximum size of the Metaspace. For example:</p> <pre><code>- name: GC_MAX_METASPACE_SIZE value: "256" </code></pre> <p>Specifically, there's an example here: <a href="http://www.mastertheboss.com/other/java-stuff/solving-java-lang-outofmemoryerror-metaspace-error" rel="nofollow noreferrer">how to set the METASPACE size on Kubernetes/OpenShift</a> using JKube.</p>
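<p>For context, here is a minimal sketch (not from the original answer) of how that variable could sit alongside the existing <code>JAVA_OPTS</code> entry in the deployment fragment from the question; the container name is hypothetical:</p> <pre><code>spec:
  template:
    spec:
      containers:
        - name: my-java-app            # hypothetical container name
          env:
            - name: JAVA_OPTS
              value: '-Xms128m -Xmx1024m'
            # Metaspace is managed by the image's startup script, so it is set
            # through the dedicated variable rather than -XX flags in JAVA_OPTS.
            - name: GC_MAX_METASPACE_SIZE
              value: "256"
</code></pre>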
Francesco Marchioni
<p>I'm trying to patch the nginx ingress controller that comes with the minikube VM.</p> <p>Patching is successful using this command:</p> <pre><code>$ kubectl patch deployment nginx-ingress-controller --type 'json' --namespace kube-system -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--profiling"}]' #-&gt; deployment.extensions/nginx-ingress-controller patched </code></pre> <p>After patching, the previous state is rolled back automatically. I can see the configuration persisted if I check just after deployment (like below):</p> <pre><code>$ kubectl describe deployment/nginx-ingress-controller --namespace kube-system #--- snip Args: /nginx-ingress-controller --default-backend-service=$(POD_NAMESPACE)/default-http-backend --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services --udp-services-configmap=$(POD_NAMESPACE)/udp-services --annotations-prefix=nginx.ingress.kubernetes.io --report-node-internal-ip-address --profiling #--- </code></pre> <p>After rollback the config is reset:</p> <pre><code>$ kubectl describe deployment/nginx-ingress-controller --namespace kube-system #--- snip Args: /nginx-ingress-controller --default-backend-service=$(POD_NAMESPACE)/default-http-backend --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services --udp-services-configmap=$(POD_NAMESPACE)/udp-services --annotations-prefix=nginx.ingress.kubernetes.io --report-node-internal-ip-address #--- </code></pre> <p>I cannot see any errors in the logs that would trigger the rollback. The only thing I can see before the rollback is the deployment triggering a shutdown of the pods due to the configuration change.</p>
leifcr
<p>Due to minikube running only 1 node, and the ingress using the hostPort, rolling updates will not work for the ingress deployment.</p> <p>After patching the ingress to use recreate instead, patching the ingress config works as expected.</p> <p>Command to set the ingress controller to 'recreate':</p> <pre><code>kubectl patch deployment nginx-ingress-controller --type 'json' --namespace kube-system -p '[{"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"}, {"op": "replace", "path": "/spec/strategy/rollingUpdate", "value": null }]' </code></pre> <p>Command to set debug output logging on the nginx-ingress-controller:</p> <pre><code>kubectl patch deployment nginx-ingress-controller --type 'json' --namespace kube-system -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-v=5"}]' </code></pre> <p>The ingress controller now has debug log output, and is set to recreate if the config or image changes.</p>
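<p>If you manage the controller manifest declaratively instead of patching it, a rough sketch of the equivalent change in the Deployment spec would be the following (assuming the same deployment name and namespace as above; the apiVersion may differ on older clusters):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  # Recreate avoids the hostPort conflict on a single-node minikube:
  # the old pod is stopped before the replacement pod is scheduled.
  strategy:
    type: Recreate
</code></pre>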
leifcr
<p>I'm working with Kubernetes. I tried DigitalOcean's Kubernetes offering, which is very easy to install and access, but how can I install the metrics-server in it? How can I autoscale in Kubernetes on DigitalOcean? Please reply as soon as possible.</p>
AATHITH RAJENDRAN
<p>The Metrics Server can be installed into your cluster with Helm:</p> <p><a href="https://github.com/helm/charts/tree/master/stable/metrics-server" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/metrics-server</a></p> <pre><code>helm init helm upgrade --install metrics-server --namespace=kube-system stable/metrics-server </code></pre> <p>If your cluster has RBAC enabled, see the more comprehensive instructions for installing Helm into your cluster:</p> <p><a href="https://github.com/helm/helm/blob/master/docs/rbac.md" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/rbac.md</a></p> <p>If you wish to deploy without Helm, the manifests are available from the GitHub repository:</p> <p><a href="https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B</a></p>
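<p>To address the autoscaling part of the question: once the metrics-server is reporting CPU and memory, pods are typically scaled with a HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named <code>my-app</code> (hypothetical name):</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                       # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out above 70% average CPU
</code></pre> <p>The imperative equivalent is <code>kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=70</code>.</p>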
Rawkode
<p>I have a Django application running in a container that I would like to probe for readiness. The kubernetes version is 1.10.12. The settings.py specifies to only allow traffic from a specific domain:</p> <pre class="lang-py prettyprint-override"><code>ALLOWED_HOSTS = ['.example.net'] </code></pre> <p>If I set up my probe without setting any headers, like so: </p> <pre><code> containers: - name: django readinessProbe: httpGet: path: /readiness-path port: 8003 </code></pre> <p>then I receive a 400 response, as expected- the probe is blocked from accessing <code>readiness-path</code>:</p> <pre><code>Invalid HTTP_HOST header: '10.5.0.67:8003'. You may need to add '10.5.0.67' to ALLOWED_HOSTS. </code></pre> <p>I have tested that I can can successfully curl the readiness path as long as I manually set the host headers on the request, so I tried setting the Host headers on the httpGet, as <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes" rel="nofollow noreferrer">partially documented here</a>, like so: </p> <pre><code> readinessProbe: httpGet: path: /readiness-path port: 8003 httpHeaders: - name: Host value: local.example.net:8003 </code></pre> <p>The probe continues to fail with a 400.</p> <p>Messing around, I tried setting the httpHeader with a lowercase h, like so: </p> <pre><code> readinessProbe: httpGet: path: /django-admin port: 8003 httpHeaders: - name: host value: local.example.net:8003 </code></pre> <p>Now, the probe actually hits the server, but it's apparent from the logs that instead of overwriting the HTTP_HOST header with the correct value, it has been appended, and fails because the combined HTTP_HOST header is invalid:</p> <pre><code>Invalid HTTP_HOST header: '10.5.0.67:8003,local.example.net:8003'. The domain name provided is not valid according to RFC 1034/1035 </code></pre> <p>Why would it recognize the header here and add it, instead of replacing it? </p> <p>One suspicion I am trying to validate is that perhaps correct handling of host headers was only added to the Kubernetes httpHeaders spec after 1.10. I have been unable to find a clear answer on when host headers were added to Kubernetes- there are no specific headers described in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#httpheader-v1-core" rel="nofollow noreferrer">API documentation for 1.10</a>. </p> <p>Is it possible to set host headers on a readiness probe in Kubernetes 1.10, and if so how is it done? If not, any other pointers for getting this readiness probe to correctly hit the readiness path of my application?</p> <p><strong>Update</strong>:</p> <p>I have now tried setting the value without a port, as suggested by a commenter:</p> <pre><code> httpHeaders: - name: Host value: local.acmi.net.au </code></pre> <p>The result is identical to setting the value with a port. With a capital H the host header value is not picked up at all, with a lowercase h the host header value is appended to the existing host header value.</p>
Ali H
<p>This problem is fixed in Kubernetes 1.2.3+ β€” see <a href="https://github.com/kubernetes/kubernetes/issues/24288" rel="nofollow noreferrer">kubernetes/kubernetes#24288</a>. I've configured one of my deployments in a similar way (for similar reasons). This is for a Craft CMS instance which doesn't require authentication at the <code>/admin/login</code> url:</p> <pre><code> readinessProbe: httpGet: path: /admin/login port: 80 httpHeaders: - name: Host value: www.example.com timeoutSeconds: 5 </code></pre>
Steven Shaw
<p>Among a big stack of orchestrated k8s pods, I have following two pods of interest:</p> <ol> <li>Elasticsearch pod attached to a PV</li> <li>A tomcat based application pod that serves as administrator for all other pods</li> </ol> <p>I want to be able to query and display very minimal/basic disk availability and usage statistics of the PV (attached to pod #1) on the app running in pod #2</p> <p>Can this be achieved without having to run a web-server inside my ES pod? Since ES might be very loaded, I prefer not to add a web-server to it.</p> <p>The PV attached to ES pod also holds the logs. So I want to avoid any log-extraction-based solution to achieve getting this information over to pod #2.</p>
A.R.K.S
<p>You need to get the PV details from the Kubernetes cluster API, wherever you are running.</p> <h3>Accessing the Kubernetes cluster API from within a Pod</h3> <p>When accessing the API from within a Pod, locating and authenticating to the API server are slightly different from the external client case.</p> <p>The easiest way to use the Kubernetes API from a Pod is to use one of the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">official client libraries</a>. These libraries can automatically discover the API server and authenticate.</p> <h3>Using Official Client Libraries</h3> <p>From within a Pod, the recommended ways to connect to the Kubernetes API are:</p> <ul> <li><p>For a Go client, use the official <a href="https://github.com/kubernetes/client-go/" rel="nofollow noreferrer">Go client library</a>. The rest.InClusterConfig() function handles API host discovery and authentication automatically. See <a href="https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go" rel="nofollow noreferrer">an example here</a>.</p> </li> <li><p>For a Python client, use the official <a href="https://github.com/kubernetes-client/python/" rel="nofollow noreferrer">Python client library</a>. The config.load_incluster_config() function handles API host discovery and authentication automatically. See <a href="https://github.com/kubernetes-client/python/blob/master/examples/in_cluster_config.py" rel="nofollow noreferrer">an example here</a>.</p> </li> <li><p>There are a number of other libraries available; please refer to the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">Client Libraries</a> page.</p> </li> </ul> <p>In each case, the service account credentials of the Pod are used to communicate securely with the API server.</p> <h3>Reference</h3> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-within-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-within-a-pod</a></p>
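<p>Whichever client library you use, the Pod's service account also needs RBAC permission to read PersistentVolume and PersistentVolumeClaim objects. A rough sketch of what that could look like (all names are hypothetical):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pv-reader
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes", "persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pv-reader-binding
subjects:
  - kind: ServiceAccount
    name: admin-app        # service account used by the tomcat/admin pod
    namespace: default     # adjust to that pod's namespace
roleRef:
  kind: ClusterRole
  name: pv-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>Note that the API reports the PV spec and claim status; live disk usage comes from metrics endpoints rather than the PV object itself.</p>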
BMW
<h1><strong>Failed to connect to localhost port 80: Connection refused</strong></h1> <p>I recently switched from Windows 10 to Ubuntu 20.04.2 LTS and now I want to keep on learning Kubernetes. However, currently I cannot access my Kubernetes deployment for some reason.</p> <p><em><strong><code>Deployment.yml</code></strong></em></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello-deployment-c51e9e6b spec: replicas: 2 selector: matchLabels: app: hello-kubernetes template: metadata: labels: app: hello-kubernetes spec: containers: - image: paulbouwer/hello-kubernetes:1.7 name: hello-kubernetes ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: hello-service-9878228b spec: ports: - port: 80 targetPort: 8080 selector: app: hello-kubernetes type: LoadBalancer </code></pre> <p>I'm pretty sure that this deployment has worked before, so I assume that Kubernetes maybe does not have permission to expose the port or something like that?</p> <h3>Additional information</h3> <ul> <li>I'm running a minikube cluster.</li> <li>Output of kubectl get services: <a href="https://i.stack.imgur.com/evWbT.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/evWbT.jpg</a></li> </ul>
Chrizz
<p>Why do you think you can access it via localhost directly?</p> <p>If you need local access, do a <code>port forward</code>, such as:</p> <pre><code>kubectl port-forward service/hello-service-9878228b 80:80 </code></pre> <p>and keep the console open while you use it.</p> <p>Then you should be able to access it via http://localhost</p>
BMW
<p>I have a Kubernetes cluster and deployed Kibana using Nginx as the ingress controller. Could anyone tell me how I can access the Kibana dashboard and how I can verify that my deployment is correct?</p>
ashique
<p>The default Kibana port is 5601. You may open Kibana by browsing to <code>localhost:5601</code>, <code>IP_ADDRESS:5601</code> or <code>http://YOURDOMAIN.com:5601</code> in your browser. You may find more details here: <a href="https://www.elastic.co/guide/en/kibana/current/access.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/kibana/current/access.html</a></p>
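<p>Since the question mentions an Nginx ingress controller, a common way to expose the dashboard is an Ingress that routes a hostname to the Kibana service on 5601. A rough sketch (the service name and hostname are assumptions, adjust them to your deployment):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: kibana.yourdomain.com        # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana             # name of your Kibana Service
                port:
                  number: 5601
</code></pre> <p>To verify the deployment itself, <code>kubectl get pods</code> and <code>kubectl get svc</code> should show the Kibana pod in the Running state and its Service with populated endpoints.</p>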
Alfred
<p>We are using kustomize for our kubernetes deployments in this way:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:${IMAGE_VERSION} ports: - containerPort: 80 </code></pre> <p>and deploy this YAML, substituting the variable IMAGE_VERSION with 1.7.9:</p> <pre><code>kustomize build ./nginx/overlays/dev/ | sed -e 's|${IMAGE_VERSION}'"|1.7.9|g" | kubectl apply -f - </code></pre> <p>Since kubectl 1.14 supports kustomize, we can now do something very nice like this:</p> <pre><code>kubectl apply -k ./ </code></pre> <p>But how do we substitute the IMAGE_VERSION variable with this new command?</p>
KiteUp
<p>You have to create a <code>kustomization.yaml</code> file containing the customizations.</p> <p>For example:</p> <pre><code># kustomization.yaml bases: - ../base images: - name: nginx-pod newTag: 1.15 newName: nginx-pod-2 </code></pre> <p>And for the templates, you create a base folder containing the kustomization.yaml with references to the deployment and dependencies, for example:</p> <pre><code># ../base/kustomization.yaml resources: - deployment.yaml </code></pre> <p>and</p> <pre><code># ../base/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx-pod </code></pre> <p>Run the command, pointing it at the folder that contains the overlay's <code>kustomization.yaml</code>:</p> <p><code>kubectl apply -k &lt;folder&gt;</code></p> <p>The above command will compile the customization and generate the following yaml to be applied to the cluster:</p> <pre><code># Modified Base Resource apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: nginx-deployment spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: # The image tag has been changed for the container - name: nginx image: nginx-pod-2:1.15 </code></pre>
Diego Mendes
<p>Where are the "types" of secrets that you can create in Kubernetes documented?</p> <p>Looking at different samples I have found "generic" and "docker-registry", but I have not been able to find a pointer to documentation where the different types of secrets are documented.</p> <p>I always end up in the k8s docs: <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a> <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/" rel="noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/</a></p> <p>Thank you.</p>
Jxadro
<p>Here is a list of 'types' from the <a href="https://github.com/kubernetes/kubernetes/blob/7693a1d5fe2a35b6e2e205f03ae9b3eddcdabc6b/pkg/apis/core/types.go#L4394-L4478" rel="noreferrer">source code</a>:</p> <pre><code>SecretTypeOpaque SecretType = "Opaque" [...] SecretTypeServiceAccountToken SecretType = "kubernetes.io/service-account-token" [...] SecretTypeDockercfg SecretType = "kubernetes.io/dockercfg" [...] SecretTypeDockerConfigJson SecretType = "kubernetes.io/dockerconfigjson" [...] SecretTypeBasicAuth SecretType = "kubernetes.io/basic-auth" [...] SecretTypeSSHAuth SecretType = "kubernetes.io/ssh-auth" [...] SecretTypeTLS SecretType = "kubernetes.io/tls" [...] SecretTypeBootstrapToken SecretType = "bootstrap.kubernetes.io/token" </code></pre>
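<p>For illustration, the type is set directly in the Secret manifest; the <code>kubectl create secret generic</code> and <code>docker-registry</code> subcommands correspond to the <code>Opaque</code> and <code>kubernetes.io/dockerconfigjson</code> types respectively. A small sketch of an Opaque secret and a TLS secret (names and data are placeholders):</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-opaque-secret
type: Opaque
stringData:
  username: admin                  # placeholder value
---
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi4uLg==    # base64-encoded certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi4uLg==    # base64-encoded private key (placeholder)
</code></pre>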
Eyal Levin
<p>Our GKE cluster is shared to multiple teams in company. Each team can have different public domain (and hence want to have different CA cert setup and also different ingress gateway controller). How to do that in Istio? All the tutorial/introduction articles in Istio's website are using a shared ingress gateway. See the example shared ingress gateway that comes installed by istio-1.0.0: <a href="https://istio.io/docs/tasks/traffic-management/secure-ingress/" rel="noreferrer">https://istio.io/docs/tasks/traffic-management/secure-ingress/</a> </p> <pre><code>spec: selector: istio: ingressgateway # use istio default ingress gateway </code></pre>
Agung Pratama
<p>Okay, I found the answer after looking at the code of Istio installation via helm. So, basically the istio have an official way (but not really documented in their readme.md file) to add additional gateway (ingress and egress gateway). I know that because I found this <a href="https://github.com/istio/istio/blob/3a0daf1db0bd8a98a414abf7c21e9506a0839848/install/kubernetes/helm/istio/values-istio-gateways.yaml" rel="nofollow noreferrer">yaml file</a> in their github repo and read the comment (also looking at the <code>gateway</code> chart template code for the spec and its logic).</p> <p>So, I solved this by, for example, defining this values-custom-gateway.yaml file:</p> <pre><code># Gateways Configuration # By default (if enabled) a pair of Ingress and Egress Gateways will be created for the mesh. # You can add more gateways in addition to the defaults but make sure those are uniquely named # and that NodePorts are not conflicting. # Disable specifc gateway by setting the `enabled` to false. # gateways: enabled: true agung-ingressgateway: namespace: agung-ns enabled: true labels: app: agung-istio-ingressgateway istio: agung-ingressgateway replicaCount: 1 autoscaleMin: 1 autoscaleMax: 2 resources: {} # limits: # cpu: 100m # memory: 128Mi #requests: # cpu: 1800m # memory: 256Mi loadBalancerIP: "" serviceAnnotations: {} type: LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be ports: ## You can add custom gateway ports - port: 80 targetPort: 80 name: http2 # nodePort: 31380 - port: 443 name: https # nodePort: 31390 - port: 31400 name: tcp secretVolumes: - name: ingressgateway-certs secretName: istio-ingressgateway-certs mountPath: /etc/istio/ingressgateway-certs - name: ingressgateway-ca-certs secretName: istio-ingressgateway-ca-certs mountPath: /etc/istio/ingressgateway-ca-certs </code></pre> <p>If you take a look at yaml file above, I specified the <code>namespace</code> other than <code>istio-system</code> ns. In this case, we can have a way to customize the TLS and ca cert being used by our custom gateway. Also the <code>agung-ingressgateway</code> as the holder of the custom gateway controller spec is used as the gateway controller's name.</p> <p>Then, i just install the istio via <code>helm upgrade --install</code> so that helm can intelligently upgrade the istio with additional gateway.</p> <pre><code>helm upgrade my-istio-release-name &lt;istio-chart-folder&gt; --install </code></pre> <p>Once it upgrades successfully, I can specify custom selector to my <code>Gateway</code>:</p> <pre><code>--- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: agung-gateway namespace: agung-ns spec: selector: app: agung-istio-ingressgateway # use custom gateway # istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE serverCertificate: /etc/istio/ingressgateway-certs/tls.crt privateKey: /etc/istio/ingressgateway-certs/tls.key hosts: - "*" </code></pre>
Agung Pratama
<p>I want to add envoy proxy to an existing Kubernetes deployment as a sidecar. I tried following multiple blog posts and that did not seem to help. I was wondering if anyone has done it, and if so, how to?</p> <p>Thank you!</p>
Parvathy Geetha
<p>To add to <a href="https://stackoverflow.com/a/51777604/476917">Kun Li's answer</a>: if your Kubernetes cluster already has many services running, it is safer to set the <code>autoInjection</code> policy to <code>disabled</code> by default, and let each service owner explicitly opt in to the Istio sidecar.</p> <p>To do that, you have to:</p> <ul> <li>set the Istio helm installation flags <code>--global.proxy.autoInject=disabled --sidecarInjectorWebhook.enabled=true</code>. </li> <li>then in your namespace, set <code>kubectl label namespace bar istio-injection=enabled</code></li> </ul> <p>The <code>--sidecarInjectorWebhook.enabled=true</code> flag and labeling your namespace mean that the Istio sidecar injector webhook is activated for your namespace. But since you specified <code>global.proxy.autoInject=disabled</code>, it won't inject any pods by default. So, the service owner has to explicitly define the pod's annotation (in the deployment yaml file) like below:</p> <pre><code> template: metadata: annotations: sidecar.istio.io/inject: "true" </code></pre> <p>To check your Istio sidecar injection policy, </p> <blockquote> <p>kubectl get cm istio-sidecar-injector -n istio-system -o yaml</p> </blockquote> <p>take a look at the <code>data.config</code> value; it should contain <code>policy: disable</code> or <code>policy: enabled</code>.</p> <p>Reference: - <a href="https://istio.io/docs/setup/kubernetes/sidecar-injection/#policy" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/sidecar-injection/#policy</a> - personal hands-on experience (I've tried it)</p>
Agung Pratama
<p>I'm trying to enable <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#yaml" rel="nofollow noreferrer">Workload Identity</a> in GKE and followed the entire linked how-to. I then went through the <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting/troubleshooting-security#workload-identity" rel="nofollow noreferrer">Troubleshooting</a> guide and verified all my settings were correct. However when I dynamically create a deployment in my ruby code using <code>kubeclient</code> I keep getting <code>PermissionDeniedError</code> as follows:</p> <p><code>/usr/lib/ruby/gems/3.1.0/gems/google-cloud-storage-1.43.0/lib/google/cloud/storage/service.rb:913:in 'rescue in execute': forbidden: openc3-sa@&lt;PROJECTID&gt;.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket. (Google::Cloud::PermissionDeniedError)</code></p> <p>I've verified in the GCP IAM page that the <code>openc3-sa@&lt;PROJECTID&gt;.iam.gserviceaccount.com</code> does have the Storage Admin role which definitely has <code>storage.buckets.get</code> permissions. My original deployment uses the same default kubernetes service account and <em>does</em> have permission to access the buckets so it's something about the fact that I'm dynamically creating new deployments.</p>
Jason
<p>It turns out this was user error and the bucket I was trying to access was simply called 'config'. Since buckets have a global namespace I obviously do not have access to this bucket so the error was correct. HOWEVER, it would be nice if the bucket name was added to the error message to help with debugging. Something like:</p> <pre><code>forbidden: openc3-sa@&lt;PROJECT ID&gt;.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket named 'config'. (Google::Cloud::PermissionDeniedError) </code></pre> <p>With the bucket name I would have immediately found the issue. So if you're seeing permission denied errors, be sure you have the correct bucket name!</p>
Jason
<p>I install Jenkins via Helm charts on my Kubernetes Cluster. I follow the rules described in: <a href="https://www.jenkins.io/doc/book/installing/kubernetes/" rel="nofollow noreferrer">https://www.jenkins.io/doc/book/installing/kubernetes/</a></p> <p>When I look at the pods, I get the following error:</p> <pre><code>k get po NAME READY STATUS RESTARTS AGE jenkins-64d6449859-tgp7n 1/2 CrashLoopBackOff 3 101s k logs jenkins-64d6449859-tgp7n -c copy-default-config applying Jenkins configuration disable Setup Wizard download plugins /var/jenkins_config/apply_config.sh: 4: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/jenkins.install.UpgradeWizard.state: Permission denied /var/jenkins_config/apply_config.sh: 5: /var/jenkins_config/apply_config.sh: cannot create /var/jenkins_home/jenkins.install.InstallUtil.lastExecVersion: Permission denied cp: cannot create regular file '/var/jenkins_home/plugins.txt': Permission denied cat: /var/jenkins_home/plugins.txt: No such file or directory WARN: install-plugins.sh is deprecated, please switch to jenkins-plugin-cli Creating initial locks... Analyzing war /usr/share/jenkins/jenkins.war... Registering preinstalled plugins... Using version-specific update center: https://updates.jenkins.io/dynamic-2.248/... Downloading plugins... WAR bundled plugins: Installed plugins: *: Cleaning up locks copy plugins to shared volume cp: cannot stat '/usr/share/jenkins/ref/plugins/*': No such file or directory finished initialization </code></pre>
Stefan Papp
<p>If you use the default settings from the documentation, ensure that the PVCs are correctly set and that all objects are in the same namespace.</p> <p>The solution to my problem was:</p> <ul> <li>getting everything under the same namespace</li> <li>reverting to standard values</li> <li>when using an ingress resource, setting the corresponding path in the Helm config (jenkinsUriPrefix: &quot;/yourpath&quot;) and not jenkinsOpts: &quot;--prefix=/yourpath&quot;</li> </ul>
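<p>For reference, a hedged sketch of where that setting might live in the chart values; the exact top-level key depends on the chart version (older stable/jenkins charts nest it under <code>master</code>, newer jenkins/jenkins charts under <code>controller</code>):</p> <pre><code># values.yaml (sketch; key nesting depends on your chart version)
controller:
  jenkinsUriPrefix: "/jenkins"   # hypothetical prefix, must match your ingress path
  # do not also set jenkinsOpts: "--prefix=/jenkins"
</code></pre>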
Stefan Papp
<p>We are deploying Spring Cloud Data Flow v2.2.1.RELEASE in Kubernetes. Everything or almost seems to work but scheduling is not. In fact, even when running tasks by manual launch using the UI (or api) we see an error log. That same log is generated when trying to schedule but this time, it makes the schedule creation fails. Here is a stack trace extract:</p> <pre><code>java.lang.IllegalArgumentException: taskDefinitionName must not be null or empty at org.springframework.util.Assert.hasText(Assert.java:284) at org.springframework.cloud.dataflow.rest.resource.ScheduleInfoResource.&lt;init&gt;(ScheduleInfoResource.java:58) at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController$Assembler.instantiateResource(TaskSchedulerController.java:174) at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController$Assembler.instantiateResource(TaskSchedulerController.java:160) at org.springframework.hateoas.mvc.ResourceAssemblerSupport.createResourceWithId(ResourceAssemblerSupport.java:89) at org.springframework.hateoas.mvc.ResourceAssemblerSupport.createResourceWithId(ResourceAssemblerSupport.java:81) at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController$Assembler.toResource(TaskSchedulerController.java:168) at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController$Assembler.toResource(TaskSchedulerController.java:160) at org.springframework.data.web.PagedResourcesAssembler.createResource(PagedResourcesAssembler.java:208) at org.springframework.data.web.PagedResourcesAssembler.toResource(PagedResourcesAssembler.java:120) at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController.list(TaskSchedulerController.java:85) at sun.reflect.GeneratedMethodAccessor180.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) </code></pre> <p>...,</p> <p>We've looked at the table content, the task does have a name.</p> <p>Any idea?</p>
Eric Giguere
<p>I've finally found the source of the error by debugging live Data Flow. The problem arises when CronJob that are not created by Data Flow are present in the namespace, which is by my evaluation a problem. The scheduler launches a process that loops on Kubernetes CronJob resources and tries to process them.</p> <p>Data Flow should certainly do its processing on those using labels, like all Kubernetes native tools, to select only the elements that concerns it. Any process could use CronJob.</p> <p>So Pivotal - Data Flow people, it would probably be a good idea to enhance that part and this way prevent that kind of "invisible" problems. I say invisible because the only error we get is the validation of the Schedule item, complaining about the fact that the name is empty and that is because that CronJob was not in any way linked to an SCDF task.</p> <p>Hope that can help someone in the future.</p> <p>Bug reported: <a href="https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/issues/347" rel="nofollow noreferrer">https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/issues/347</a></p> <p>PR issued: <a href="https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/pull/348" rel="nofollow noreferrer">https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/pull/348</a></p>
Eric Giguere
<p><strong>kubectl get all -n migration:</strong></p> <pre><code>NAME READY STATUS RESTARTS AGE pod/nginx2-7b8667968c-zxtq7 0/1 ImagePullBackOff 0 5m38s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx2 0/1 1 0 5m38s NAME DESIRED CURRENT READY AGE replicaset.apps/nginx2-7b8667968c 1 1 0 5m38s </code></pre> <p><strong>kubectl describe pod nginx2-7b8667968c-zxtq7 -n migration:</strong></p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 44s default-scheduler Successfully assigned migration/nginx2-7b8667968c-zxtq7 to k8s-master01 Normal SandboxChanged 33s kubelet Pod sandbox changed, it will be killed and re-created. Normal Pulling 18s (x2 over 43s) kubelet Pulling image &quot;nginx&quot; Warning Failed 14s (x2 over 34s) kubelet Failed to pull image &quot;nginx&quot;: rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Warning Failed 14s (x2 over 34s) kubelet Error: ErrImagePull Normal BackOff 3s (x4 over 32s) kubelet Back-off pulling image &quot;nginx&quot; Warning Failed 3s (x4 over 32s) kubelet Error: ImagePullBackOff </code></pre> <p><strong>Post-logging in with different docker account:</strong></p> <p>I can manually pull an image from docker using <code>docker pull nginx</code></p> <p>But while deploying a deployment, it again shows the same error.</p> <p><strong>Deployment yaml is as:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx2 namespace: migration spec: replicas: 1 selector: matchLabels: name: nginx2 template: metadata: labels: name: nginx2 spec: containers: - name: nginx2 imagePullPolicy: Always image: nginx ports: - containerPort: 3000 volumeMounts: - name: game-demo mountPath: /usr/src/app/config - name: secret-basic-auth mountPath: /usr/src/app/secret - name: site-data2 mountPath: /var/www/html volumes: - name: game-demo configMap: name: game-demo - name: secret-basic-auth secret: secretName: secret-basic-auth - name: site-data2 persistentVolumeClaim: claimName: demo-pvc-claim2 </code></pre> <p>Also, as the nginx image is present locally, I tried with modifying <code>imagePullPolicy</code> to <code>Never</code> as well as 'IfNotPresent'.</p> <p>But nothing works. Please guide.</p>
kkpareek
<p>Here is your error:</p> <pre> Warning Failed 14s (x2 over 34s) kubelet Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit </pre> <p>Basically, Docker Hub now requires a paid plan if you exceed their rate limits. You can use the publicly hosted image of &quot;nginx&quot; from Amazon's ECR instead:</p> <pre><code>docker pull public.ecr.aws/nginx/nginx:latest </code></pre> <p>That should be the same as the one you were using, just double check over here: <a href="https://gallery.ecr.aws/nginx/nginx" rel="nofollow noreferrer">https://gallery.ecr.aws/nginx/nginx</a></p>
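<p>If you want to keep the Deployment from the question, it may be enough to point it at that mirror; a sketch of the relevant fragment:</p> <pre><code>    spec:
      containers:
        - name: nginx2
          imagePullPolicy: Always
          image: public.ecr.aws/nginx/nginx:latest   # mirror of the Docker Hub nginx image
</code></pre> <p>Alternatively, authenticating to Docker Hub with an image pull secret raises the anonymous rate limit.</p>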
OpenBSDNinja
<p>Kubernetes kubeflow scaling is not working.</p> <p>I have installed kubernetes, kubectl and ksonnet as suggested.</p> <p>I have created the kubeflow namespace and deployed the kubeflow core components.</p> <p>Then, I created the ksonnet app and namespace and the h2o3-scaling component.</p> <p>Then, I tried to run some examples. Everything is working fine. </p> <p>I have followed all the steps provided by this url <a href="https://github.com/h2oai/h2o-kubeflow" rel="nofollow noreferrer">https://github.com/h2oai/h2o-kubeflow</a></p> <p>But horizontal scaling is not working as expected.</p> <p>Thanks in advance. Please, can anyone help solve this problem?</p>
Test admin
<p>I'm not sure about H2O3, but Kubeflow itself doesn't really support autoscaling. There are a few components:</p> <ol> <li>Tf-operator - it doesn't run training itself, it runs pods that run training, and you specify the number of replicas in the TFJob definition, so no autoscaling.</li> <li>Tf-serving - potentially could do autoscaling, but we don't right now; again, you specify replicas.</li> <li>Jupyterhub - same as tf-operator, spawns pods, doesn't autoscale.</li> </ol> <p>What is the exact use case you're aiming for?</p>
inc0
<p>I have an AKS deployed in its own vnet (using Azure CNI instead of Kubenet). In the cluster there is an ingress-nginx deployed by Helm/ArgoCD with the annotation:</p> <pre><code>service.beta.kubernetes.io/azure-load-balancer-internal: true </code></pre> <p>The LoadBalancer looks good:</p> <pre><code>kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP ingress.nginx-controller LoadBalancer 10.2.1.55 10.1.2.200 </code></pre> <ul> <li><p>I can curl any of those two IPs from within the cluster (any pod) and they work</p> </li> <li><p>From a VM on the same vnet, i.e. outside the cluster, I can curl any pod directly</p> </li> <li><p>I can curl any of the nginx daemonset pods directly on their internal IPs as well.</p> </li> <li><p><strong>BUT, I can't curl with the LoadBalancer IPs!</strong></p> </li> </ul> <p>There is no traffic flowing at all, ends in a network timeout and no requests logs from nginx.</p> <p>What am I missing? I am out of ideas.</p> <p>Here's my very simple Helm input.</p> <pre><code>ingress-nginx: controller: hostPort: enabled: true kind: DaemonSet metrics: enabled: true publishService: enabled: false extraArgs: default-ssl-certificate: &quot;ingress-nginx/ingress.local&quot; service: annotations: service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot; admissionWebhooks: enabled: false </code></pre>
Sven
<p>Thanks to the blazingly fast and skillful K8s people in the r/kubernetes Reddit group this problem was solved.</p> <p>When AKS is created there is a <em>new</em> Resource Group created for the cluster, even though you ask Azure to use a specific Resource Group. The new RG is named <code>MC_rg_[cluster name]</code>. Within this group there are among other things a health check created and it doesn't work. As long as the health probes aren't working, no traffic will be let through.</p> <p>The Health Probes are found in the menu <code>Insights -&gt; Networking -&gt; Load Balancer -&gt; Kubernetes-internal</code></p> <p>Just by replacing the http/https protocols to TCP in the probes, it started working. But a permanent solution is to set hard paths to the health URLs with the setting:</p> <pre><code>service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: &quot;healthz&quot; </code></pre> <p>Here's an issue documenting the same thing: <a href="https://github.com/kubernetes/ingress-nginx/issues/8501" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/8501</a></p> <p>Hint: All logs from the cluster will end up in the new RG as well. You can write KQL queries directly to the logs for application logs, metrics, what not.</p>
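<p>In the Helm values from the question, that means adding the health-probe annotation alongside the internal-LB one. A sketch (assuming the controller serves its health endpoint at <code>/healthz</code>, as in the linked issue):</p> <pre><code>ingress-nginx:
  controller:
    service:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
</code></pre>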
Sven
<p>Assume that Kubernetes has scheduled two pods A and B on a node N. If the resource capacity of node N is (30 CPU, 70 memory) and pod A requests (5 CPU, 8 memory) and pod B requests (4 CPU, 10 memory), is it possible to share the resources among the two pods in a way that we maintain the efficiency of the cluster and maximize the allocation of pods? How can I change the code to achieve this? Assume that each pod runs 1 container.</p>
HamiBU
<p>Kubernetes already does that.</p> <p><em>Resource Requests</em> are soft reservations; that means the scheduler will consider them as a requirement when placing the pod on the node. It will allocate the resource to a pod but won't reserve it to be used exclusively by the pod that requested it.</p> <p>If a pod requests 1Gb of memory and consumes only 500Mb, other pods will be able to consume the remainder.</p> <p>The main issue is when other pods do not set limits; this prevents the scheduler from controlling the load properly, and other running pods might affect the pod. Another issue is when limits are set too high and, when fully consumed, reach the node capacity.</p> <p>To have proper balance and efficiency, requests and limits should be set appropriately to prevent over-commitment. </p> <p>This other SO question shows a nice example of it: <a href="https://stackoverflow.com/questions/35620318/allocate-or-limit-resource-for-pods-in-kubernetes?rq=1">Allocate or Limit resource for pods in Kubernetes?</a></p>
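<p>As a concrete illustration of setting requests and limits appropriately, a container spec for pod A from the question might look roughly like this (the limit values are made-up numbers, not a recommendation):</p> <pre><code>    spec:
      containers:
        - name: app-a
          image: my-app-a:latest        # hypothetical image
          resources:
            requests:                   # used by the scheduler for placement
              cpu: "5"
              memory: 8Gi
            limits:                     # hard ceiling enforced at runtime
              cpu: "6"
              memory: 10Gi
</code></pre>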
Diego Mendes
<p>I am spinning up a container (pod/Job) from a GKE.</p> <p>I have set up the appropriate Service Account on the cluster's VMs.</p> <p>Therefore, when I <strong>manually</strong> perform a <code>curl</code> to a specific CloudRun service endpoint, I can perform the request (and get authorized and have <code>200</code> in my response)</p> <p>However, when I try to automate this by setting an image to run in a <code>Job</code> as follows, I get <code>401</code></p> <pre><code> - name: pre-upgrade-job image: "google/cloud-sdk" args: - curl - -s - -X - GET - -H - "Authorization: Bearer $(gcloud auth print-identity-token)" - https://my-cloud-run-endpoint </code></pre> <p>Here are the logs on <code>Stackdriver</code></p> <pre><code>{ httpRequest: { latency: "0s" protocol: "HTTP/1.1" remoteIp: "gdt3:r787:ff3:13:99:1234:avb:1f6b" requestMethod: "GET" requestSize: "313" requestUrl: "https://my-cloud-run-endpoint" serverIp: "212.45.313.83" status: 401 userAgent: "curl/7.59.0" } insertId: "29jdnc39dhfbfb" logName: "projects/my-gcp-project/logs/run.googleapis.com%2Frequests" receiveTimestamp: "2019-09-26T16:27:30.681513204Z" resource: { labels: { configuration_name: "my-cloud-run-service" location: "us-east1" project_id: "my-gcp-project" revision_name: "my-cloudrun-service-d5dbd806-62e8-4b9c-8ab7-7d6f77fb73fb" service_name: "my-cloud-run-service" } type: "cloud_run_revision" } severity: "WARNING" textPayload: "The request was not authorized to invoke this service. Read more at https://cloud.google.com/run/docs/securing/authenticating" timestamp: "2019-09-26T16:27:30.673565Z" } </code></pre> <p>My question is how can I see if an "Authentication" header does reach the endpoint (the logs do not enlighten me much) and if it does, whether it is appropriately rendered upon image command/args invocation.</p>
pkaramol
<p>In your Job, <code>gcloud auth print-identity-token</code> likely does not return any token. The reason is that locally, gcloud uses your identity to mint a token, but in a Job, you are not logged into gcloud.</p>
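<p>One possible workaround, sketched below and not taken from the original answer, is to fetch an identity token from the GCE/GKE metadata server at request time instead of relying on gcloud credentials; the audience must be the Cloud Run URL, and whether this works as-is depends on your node and Workload Identity configuration. Also note that unless the image entrypoint is a shell, the <code>$(...)</code> substitution in the original args would have been passed to curl literally.</p> <pre><code>      containers:
        - name: pre-upgrade-job
          image: google/cloud-sdk
          command: ["/bin/sh", "-c"]
          args:
            - >
              TOKEN=$(curl -s -H "Metadata-Flavor: Google"
              "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://my-cloud-run-endpoint")
              &amp;&amp; curl -s -H "Authorization: Bearer ${TOKEN}" https://my-cloud-run-endpoint
</code></pre>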
Steren
<p>I want exposing various services with a single ingress.</p> <pre><code>rules: - http: paths: # The path is the URL prefix for the service, e.g. /api/* or just /* # Note that the service will receive the entire URL with the prefix - path: /service1/* backend: serviceName: service1 servicePort: 5000 - path: /service2/* backend: serviceName: service2 servicePort: 5000 </code></pre> <p>The problem is the whole URL including the prefix is passed to the underlying services so all requests return 404 errors: <code>service1</code> and api don't respond on <code>/service1/some/path</code> but directly on <code>/some/path</code></p> <p>How can I specify a prefix to the underlying services?</p> <p><strong>UPDATE</strong></p> <p>I tried using rewrite-target as follows. Requests are sent to the <code>rasa-nlu</code> service, but they all trigger 404 because <code>rasa-nlu</code> still gets the <code>/nlu</code> </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /nlu backend: serviceName: rasa-nlu servicePort: 5000 </code></pre>
znat
<p>This might be what you are looking for;</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.kubernetes.io/rewrite-target: / name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: echoheaders servicePort: 80 path: /something </code></pre> <p>Note the <strong>annotation</strong> to rewrite-target. </p> <p>Found this <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/rewrite" rel="noreferrer">here</a></p>
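<p>One caveat worth adding: on newer nginx-ingress releases (0.22 and later) <code>rewrite-target</code> expects a capture group. A hedged sketch of what the <code>/nlu</code> rule from the question would look like in that style:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /nlu(/|$)(.*)        # capture the remainder of the path
            backend:
              serviceName: rasa-nlu
              servicePort: 5000
</code></pre>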
Sage
<p>I'm trying to build a web app where each user gets their own instance of the app, running in its own container. I'm new to kubernetes so I'm probably not understanding something correctly.</p> <p>I will have a few physical servers to use, which in kubernetes as I understand are called nodes. For each node, there is a limitation of 100 pods. So if I am building the app so that each user gets their own pod, will I be limited to 100 users per physical server? (If I have 10 servers, I can only have 500 users?) I suppose I could run multiple VMs that act as nodes on each physical server but doesn't that defeat the purpose of containerization?</p>
dolgion
<p>The main issue with having too many pods on a node is that it degrades the node's performance and makes it slower (and sometimes unreliable) to manage the containers; each pod is managed individually, so increasing the amount takes more time and more resources. </p> <p>When you create a pod, the runtime needs to keep constant track of it: probes (readiness and liveness), monitoring, routing rules and many other small bits that add up to the load on the node.</p> <p>Containers also require processor time to run properly; even though you can allocate fractions of a CPU, adding too many containers/pods will increase context switching and degrade performance when the pods are consuming their quota.</p> <p>Each platform provider also sets their own limits to provide a good quality of service and SLAs. Overloading the nodes is also a risk, because a node is a single point of failure, and any fault in high-density nodes might have a huge impact on the cluster and applications.</p> <p>You should consider either:</p> <ul> <li>Smaller nodes, adding more nodes to the cluster, or </li> <li>Using Actors instead, where each client is one Actor and many Actors run in a single container. To balance the load around the cluster, you partition the actors into multiple container instances.</li> </ul> <p>Regarding the limits, <a href="https://github.com/kubernetes/kubernetes/issues/23349" rel="nofollow noreferrer">this thread</a> has a good discussion about the concerns.</p>
Diego Mendes
<p>Are the metrics for vertical and horizontal scaling in Kubernetes the same? That is, do the CPU, memory and custom metrics work with both concepts?</p>
Java
<p>Yes, both use the CPU or memory metrics provided by the metrics server.</p> <p>For CPU or memory metrics you can use either VPA or HPA, not both together. Using both together will have undesirable behaviour, because they will be competing to scale up/down/in/out at the same time.</p> <p>Using custom metrics it is possible to have both enabled: one can be activated by CPU or memory, the other by custom metrics (like the number of messages in a queue, or active connections).</p>
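<p>For completeness, a vertical autoscaler is declared separately from the HPA. A minimal sketch using the VPA custom resource (this requires the VPA components to be installed in the cluster; the target name is hypothetical):</p> <pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # hypothetical deployment to right-size
  updatePolicy:
    updateMode: "Auto"      # VPA applies its request recommendations automatically
</code></pre>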
Diego Mendes
<p>I'm having issues when accessing a service present in another namespace.</p> <p>I have 2 namespaces (in the same cluster): airflow-dev and dask-dev.</p> <p><img src="https://i.stack.imgur.com/UrA7u.jpg" alt="enter image description here" /></p> <p>In the dask-dev namespace, I have a dask cluster (dask scheduler and workers) deployed. I have also created a service (ClusterIP) for the dask-scheduler pod. I'm able to access the dask-scheduler pod from Chrome using the 'kubectl port-forward' command.</p> <p><code>kubectl port-forward --namespace dask-dev svc/dask-dev-scheduler 5002:80</code></p> <p>However, I am not able to access the service (or the dask-scheduler pod) from a pod (airflow-scheduler) present in the airflow-dev namespace. I get a '<strong>Host or service not found</strong>' error when trying to access it using the below:</p> <p><code>dask-dev-scheduler.dask-dev.svc.cluster.local:8786</code></p> <p>Below is the service that I have created for dask-dev-scheduler. Could you please let me know how to access the service from the airflow-dev namespace?</p> <pre><code>apiVersion: v1 kind: Service metadata: name: dask-dev-scheduler namespace: dask-dev labels: app: dask-dev app.kubernetes.io/managed-by: Helm chart: dask-dev-4.5.7 component: scheduler heritage: Helm release: dask-dev annotations: meta.helm.sh/release-name: dask-dev meta.helm.sh/release-namespace: dask-dev spec: ports: - name: dask-dev-scheduler protocol: TCP port: 8786 targetPort: 8786 - name: dask-dev-webui protocol: TCP port: 80 targetPort: 8787 selector: app: dask-dev component: scheduler release: dask-dev clusterIP: 10.0.249.111 type: ClusterIP sessionAffinity: None status: loadBalancer: {} </code></pre>
Subba
<p>You can use a local service to reference an external service (a service in a different namespace) using the service <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">externalName Type</a>.</p> <p><code>ExternalName</code> services do not have selectors, or any defined ports or endpoints, therefore, you can use an ExternalName service to direct traffic to an external service.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: service-b namespace: namespace-b spec: selector: app: my-app-b ports: - protocol: TCP port: 3000 targetPort: 3000 </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: service-b-ref namespace: namespace-a spec: type: ExternalName externalName: service-b.namespace-b.svc.cluster.local </code></pre> <p>Any traffic in <code>namespace-a</code> that connects to <code>service-b-ref:&lt;port&gt;</code> will be routed to <code>service-b</code> in <code>namespace-b</code> (<code>service-b.namespace-b.svc.cluster.local</code>) Therefore, a call to <code>service-b-ref:3000</code> will route to our service-b.</p> <hr /> <p>In your example, you'd just need to create a service in <code>airflow-dev</code> that will route traffic to the <code>dask-dev-scheduler</code> in the <code>dask-dev</code> namespace:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: dask-dev-svc namespace: airflow-dev spec: type: ExternalName externalName: dask-dev-scheduler.dask-dev.svc.cluster.local </code></pre> <p>Therefore, all <code>airflow-dev</code> resources that need to connect to the <code>dask-dev-scheduler</code> would call: <code>dask-dev-svc:8786</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 metadata: name: dask-dev-scheduler namespace: dask-dev spec: ports: - name: dask-dev-scheduler protocol: TCP port: 8786 targetPort: 8786 # ... selector: app: dask-dev </code></pre>
Highway of Life
<p>I have a set of deployments that are connected using a NetworkPolicy ingress, and it works! However, if I have to connect from outside (using the IP obtained from kubectl get ep), do I have to set another ingress to the endpoint, or an egress policy?</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: namespace: nginx annotations: kompose.cmd: ./kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.service: nginx name: nginx spec: replicas: 1 selector: matchLabels: io.kompose.service: nginx strategy: {} template: metadata: annotations: kompose.cmd: ./kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.network/nginx: &quot;true&quot; io.kompose.service: nginx spec: containers: - image: nginx name: nginx ports: - containerPort: 8000 resources: {} restartPolicy: Always status: {} --- apiVersion: apps/v1 kind: Deployment metadata: namespace: mariadb annotations: kompose.cmd: ./kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.service: mariadb name: mariadb spec: replicas: 1 selector: matchLabels: io.kompose.service: mariadb strategy: {} template: metadata: annotations: kompose.cmd: ./kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.network/nginx: &quot;true&quot; io.kompose.service: mariadb spec: containers: - image: mariadb name: mariadb ports: - containerPort: 5432 resources: {} restartPolicy: Always status: {} ... </code></pre> <p>You can see more code here <a href="http://pastie.org/p/2QpNHjFdAK9xj7SYuZvGPf" rel="nofollow noreferrer">http://pastie.org/p/2QpNHjFdAK9xj7SYuZvGPf</a></p> <p>Endpoints:</p> <pre><code>kubectl get ep -n nginx NAME ENDPOINTS AGE mariadb 192.168.112.203:5432 2d2h nginx 192.168.112.204:8000 42h </code></pre> <p>Services:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE mariadb ClusterIP 10.99.76.78 &lt;none&gt; 5432/TCP 2d2h nginx NodePort 10.111.176.21 &lt;none&gt; 8000:31604/TCP 42h </code></pre> <p>Tests from the server:</p> <pre><code>If I do curl 10.111.176.21:31604 -- No answer If I do curl 192.168.112.204:8000 -- No answer If I do curl 192.168.112.204:31604 -- No answer If I do curl 10.0.0.2:8000 or 31604 -- No answer 10.0.0.2 is a worker node IP. </code></pre> <p><strong>UPDATED</strong> If I do <code>kubectl port-forward nginx-PODXXX 8000:8000</code> I can access it from HTTP://localhost:8000</p> <p>So what am I doing wrong?</p>
sincorchetes
<p>It looks like you're using the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policy</a> <em>as</em> an ingress for incoming traffic, but what you probably want to be using is an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a> to manage <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> traffic.</p> <p>Egress is for traffic flowing outbound from your services within your cluster to external sources. Ingress is for external traffic to be directed to specific services within your cluster.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-app-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: my-example.site.tld http: paths: - path: / pathType: Prefix backend: service: name: nginx port: number: 8000 </code></pre>
Highway of Life
<p>I have implemented a CLI using Go that displays the status of Kubernetes cells. The command is <code>cellery ps</code>.</p> <pre><code>func ps() error { cmd := exec.Command("kubectl", "get", "cells") stdoutReader, _ := cmd.StdoutPipe() stdoutScanner := bufio.NewScanner(stdoutReader) go func() { for stdoutScanner.Scan() { fmt.Println(stdoutScanner.Text()) } }() stderrReader, _ := cmd.StderrPipe() stderrScanner := bufio.NewScanner(stderrReader) go func() { for stderrScanner.Scan() { fmt.Println(stderrScanner.Text()) if (stderrScanner.Text() == "No resources found.") { os.Exit(0) } } }() err := cmd.Start() if err != nil { fmt.Printf("Error in executing cell ps: %v \n", err) os.Exit(1) } err = cmd.Wait() if err != nil { fmt.Printf("\x1b[31;1m Cell ps finished with error: \x1b[0m %v \n", err) os.Exit(1) } return nil } </code></pre> <p>However, cells need time to get into the ready state when they are deployed. Therefore I need to provide a flag (wait) which would update the CLI output.</p> <p>The command would be <code>cellery ps -w</code>. However, the Kubernetes API has not implemented this yet, so I will have to come up with a command.</p>
Madhuka Wickramapala
<p>Basically what you want is to listen for the event of a cell becoming ready. You can register for events in the cluster and act upon them. A good example can be found <a href="https://github.com/bitnami-labs/kubewatch" rel="nofollow noreferrer">here</a>.</p>
avivl
<p>How can I get more details about what the actual problem is?</p> <pre><code>kubectl logs foo-app-5695559f9c-ntrqf Error from server (BadRequest): container &quot;foo&quot; in pod &quot;foo-app-5695559f9c-ntrqf&quot; is waiting to start: trying and failing to pull image </code></pre> <p>I would like to see the HTTP traffic between K8s and the container registry.</p>
guettli
<p>If a container has not started, then there are no container logs from that pod to view, as appears to be the case.</p> <p>To get more information about the pod or why the container may not be starting, you can use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe" rel="nofollow noreferrer"><code>kubectl describe pod</code></a> which should show you both the pod status and the events relevant to the given pod:</p> <pre><code>kubectl describe pod &lt;pod-name&gt; --namespace &lt;namespace&gt; </code></pre> <p>The most common error is an access issue to the registry. Make sure you have an <code>imagePullSecrets</code> set for the registry that you're trying to pull from.</p> <p>See: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">How to pull image from a private registry.</a></p>
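<p>For reference, if the image lives in a private registry, the pull secret is wired into the pod spec roughly like this (the secret <code>my-registry-cred</code> and the image are hypothetical; the secret would be created beforehand from your registry credentials):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: foo-app
spec:
  imagePullSecrets:
    - name: my-registry-cred                     # docker-registry type secret
  containers:
    - name: foo
      image: registry.example.com/foo:latest     # hypothetical private image
</code></pre>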
Highway of Life
<p>I sent 2000 short-lived jobs to my kube cluster very quickly, and I observed a couple of minutes delay between a job was created and a pod for the job started pending. Does anybody have any clue about what may be the bottleneck?</p> <p>Could etcd be the bottleneck?</p>
tiancai
<p>From a 10,000 foot view, the process is:</p> <ul> <li><p>Every time you schedule a pod/job, it gets added to a queue.</p></li> <li><p>The scheduler reads that queue and assigns the POD to a node.</p></li> <li><p>When a node receives an assignment of a pod, it handles the creation by calling the runtime and requesting the creation.</p></li> </ul> <p>Given the above, the delay might be either:</p> <ul> <li>the scheduler waiting for a node to be available (status report) to schedule the pod</li> <li>the runtime scheduling the pods in the nodes</li> </ul> <p>An ETCD bottleneck might also be an issue, but it is less likely; if it was ETCD you would probably have noticed it while creating the jobs.</p> <p>Also, it is worth mentioning that each node has a limit on how many pods it can run at the same time; on <strong><a href="https://kubernetes.io/docs/setup/cluster-large/" rel="nofollow noreferrer">V1.14 <em>no more than 100 pods per node can run at the same time</em></a></strong>, no matter how large the node is. In this case, you would need at least 21 nodes to run them all at the same time: 20 for the requested pods and 1 extra node to account for system pods. If you are running k8s in a cloud provider, the limit might be different for each provider.</p> <p>Without investigation it is hard to say where the problem is.</p> <p>In summary:</p> <p>There is a work queue to guarantee the reliability of the cluster (API/scheduler/ETCD) and prevent burst calls from affecting the availability of the services; after the pods are scheduled, the node runtime will download the images and make sure the PODs run as desired, at their own pace.</p> <p>If the issue is the limit of pods running at the same time on a node, it is likely slowing down because the scheduler is waiting for a node to finish a pod before running another; adding more nodes will improve the results.</p> <p><a href="https://coreos.com/blog/improving-kubernetes-scheduler-performance.html" rel="nofollow noreferrer">This link</a> details some examples of k8s scheduler performance issues.</p> <p><a href="https://jvns.ca/blog/2017/07/27/how-does-the-kubernetes-scheduler-work/" rel="nofollow noreferrer">This link</a> describes the entire flow in a bit more detail.</p>
Diego Mendes
<p>What is the significance of having this section - <code>spec.template.metadata</code>? It does not seem to be mandatory. However I would like to know where it would be very useful! Otherwise what is the point of repeating all the selectors?</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello spec: selector: matchLabels: app: hello tier: backend track: stable replicas: 7 template: metadata: labels: app: hello tier: backend track: stable spec: containers: - name: hello image: "gcr.io/google-samples/hello-go-gke:1.0" ports: - name: http containerPort: 80 </code></pre>
KitKarson
<p>What makes you think it is not required?</p> <p>If you don't provide the <code>Metadata</code> for a deployment template, it will fail with a message like this:</p> <pre><code>The Deployment &quot;nginx&quot; is invalid: spec.template.metadata.labels: Invalid value: map[string]string(nil): `selector` does not match template `labels`
</code></pre> <p>Or if the metadata does not match the selector, it will fail with a message like this:</p> <pre><code>The Deployment &quot;nginx&quot; is invalid: spec.template.metadata.labels: Invalid value: map[string]string{&quot;run&quot;:&quot;nginxv1&quot;}: `selector` does not match template `labels`
</code></pre> <p>Also, if you do not provide the <code>selector</code> it will error with a message like this:</p> <pre><code>error validating &quot;STDIN&quot;: error validating data: ValidationError(Deployment.spec): missing required field &quot;selector&quot; in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre> <p>The yaml used is below:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      labels:
        run: nginxv1
    spec:
      containers:
      - image: nginx
        name: nginx
</code></pre> <p>When you read the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer">docs</a>, the description for the <code>selector</code> says:</p> <blockquote> <p>The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app: nginx). However, more sophisticated selection rules are possible, <strong>as long as the Pod template itself satisfies the rule</strong>.</p> </blockquote> <h1><a href="https://kubernetes.io/docs/reference/federation/v1/definitions/#_v1_objectmeta" rel="noreferrer">Metadata</a></h1> <p>Most objects in Kubernetes have metadata; it is responsible for storing information about the resource, like name, labels, annotations and so on.</p> <p>When you create a deployment, the template is needed for creation/update of the ReplicaSet and PODs; in this case, they need to match the selector, otherwise you would end up with orphan resources around your cluster, and the metadata stores the data used to link them.</p> <p>It was designed this way so that resources are loosely coupled from each other: if you make a simple change to the label of a pod created by this deployment/ReplicaSet, you will notice that the old POD keeps running, but a new one is created, because the old one no longer matches the selector rule and the ReplicaSet creates a new one to keep the desired number of replicas.</p>
Diego Mendes
<p>What is the simplest way to start a Kubernetes job with an HTTP request (webhook)? I need to build a docker image after a push to GitHub and have to do it inside the cluster.</p>
Jonas
<p>I think you are looking for <a href="https://www.knative.dev/docs/" rel="nofollow noreferrer">KNative</a>. Mainly the <a href="https://github.com/knative/build" rel="nofollow noreferrer">Build</a> part of it.</p> <p>KNative is still in its early stages, but it is pretty much what you need. If the build feature does not meet your needs, you can still use the other features like <a href="https://www.knative.dev/docs/serving/" rel="nofollow noreferrer">Serving</a> to trigger the container image from HTTP calls and run the tools you need.</p> <p>Here is the description from the Build Docs:</p> <blockquote> <p>A <strong>Knative Build</strong> extends Kubernetes and utilizes existing Kubernetes primitives to provide you with the ability to run on-cluster container builds from source. For example, you can write a build that uses Kubernetes-native resources to obtain your source code from a repository, build a container image, then run that image.</p> <p>While Knative builds are optimized for building, testing, and deploying source code, you are still responsible for developing the corresponding components that:</p> <ul> <li>Retrieve source code from repositories.</li> <li>Run multiple sequential jobs against a shared filesystem, for example: <ul> <li>Install dependencies.</li> <li>Run unit and integration tests.</li> </ul></li> <li>Build container images.</li> <li>Push container images to an image registry, or deploy them to a cluster.</li> </ul> <p>The goal of a Knative build is to provide a standard, portable, reusable, and performance optimized method for defining and running on-cluster container image builds. By providing the β€œboring but difficult” task of running builds on Kubernetes, Knative saves you from having to independently develop and reproduce these common Kubernetes-based development processes.</p> </blockquote>
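<p>As a rough illustration only (the exact API has changed across Knative releases, so treat the field names, the repository URL and the builder image below as assumptions to verify against the Build docs), a <code>Build</code> resource looked roughly like this:</p> <pre><code>apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      # repository and revision are placeholders
      url: https://github.com/example/app.git
      revision: master
  steps:
  - name: build-and-push
    # kaniko builds the image in-cluster without needing a Docker daemon
    image: gcr.io/kaniko-project/executor
    args:
    - --dockerfile=/workspace/Dockerfile
    - --destination=gcr.io/example-project/app:latest
</code></pre>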
Diego Mendes
<p>I have started to learn Minikube using <a href="https://kubernetes.io/docs/setup/minikube/" rel="nofollow noreferrer">some of this tutorial</a> and <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="nofollow noreferrer">a bit of this one</a>. My plan is to use the "none" driver to use Docker rather than the standard Virtual Box.</p> <p>My purpose is to learn some infra/operations techniques that are more flexible than Docker Swarm. There are a few <code>docker run</code> switches that Swarm does not support, so I am looking at alternatives.</p> <p>When setting this up, I had a couple of false starts, as I did not specify the <code>--vm-driver=none</code> initially, and I had to do a <code>sudo -rf ~/.minikube</code> and/or a <code>sudo minikube delete</code> to not use VirtualBox. (Although I don't think it is relevant, I will mention anyway that I am working <em>inside</em> a VirtualBox Linux Mint VM as a matter of long-standing security preference).</p> <p>So, I think I have a mostly working installation of Minikube, but something is not right with the dashboard, and since the Hello World tutorial asks me to get that working, I would like to persist with this.</p> <p>Here is the command and error:</p> <pre><code>$ sudo minikube dashboard πŸ”Œ Enabling dashboard ... πŸ€” Verifying dashboard health ... πŸš€ Launching proxy ... πŸ€” Verifying proxy health ... πŸ’£ http://127.0.0.1:41303/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ is not responding properly: Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 Temporary Error: unexpected response code: 503 {snipped many more of these} </code></pre> <p>Minikube itself looks OK:</p> <pre><code>$ sudo minikube status host: Running kubelet: Running apiserver: Running kubectl: Correctly Configured: pointing to minikube-vm at 10.0.2.15 </code></pre> <p>However it looks like some components have not been able to start, but there is no indication why they are having trouble:</p> <pre><code>$ sudo kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-fb8b8dccf-2br2c 0/1 CrashLoopBackOff 16 62m kube-system coredns-fb8b8dccf-nq4b8 0/1 CrashLoopBackOff 16 62m kube-system etcd-minikube 1/1 Running 2 60m kube-system kube-addon-manager-minikube 1/1 Running 3 61m kube-system kube-apiserver-minikube 1/1 Running 2 61m kube-system kube-controller-manager-minikube 1/1 Running 3 61m kube-system kube-proxy-dzqsr 1/1 Running 0 56m kube-system kube-scheduler-minikube 1/1 Running 2 60m kube-system kubernetes-dashboard-79dd6bfc48-94c8l 0/1 CrashLoopBackOff 12 40m kube-system storage-provisioner 1/1 Running 3 62m </code></pre> <p>I am assuming that a zero in the <code>READY</code> column means that something was not able to start.</p> <p>I have been issuing commands either with or without <code>sudo</code>, so that <em>might</em> be related. 
Sometimes there are config files in my non-root <code>~/.minikube</code> folder that are owned by root, and I have been forced to use <code>sudo</code> to progress further.</p> <p>This seems to look OK:</p> <pre><code>Kubernetes master is running at https://10.0.2.15:8443 KubeDNS is running at https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. </code></pre> <p>Incidentally, I don't really know what these various status commands do, or whether they are relevant - I have found some similar posts here and on GitHub, and their respective authors used these commands to write questions and bug reports.</p> <p>This API status looks like it is in a pickle, but I don't know whether it is relevant (I found it through semi-random digging):</p> <pre><code>https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "services \"kube-dns:dns\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"", "reason": "Forbidden", "details": { "name": "kube-dns:dns", "kind": "services" }, "code": 403 } </code></pre> <p>I also managed to cause a Go crash too, seen in <code>sudo minikube logs</code>:</p> <pre><code>panic: secrets is forbidden: User "system:serviceaccount:kube-system:default" cannot create resource "secrets" in API group "" in the namespace "kube-system" goroutine 1 [running]: github.com/kubernetes/dashboard/src/app/backend/auth/jwe.(*rsaKeyHolder).init(0xc42011c2e0) /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:131 +0x35e github.com/kubernetes/dashboard/src/app/backend/auth/jwe.NewRSAKeyHolder(0x1367500, 0xc4200d0120, 0xc4200d0120, 0x1213a6e) /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/auth/jwe/keyholder.go:170 +0x64 main.initAuthManager(0x13663e0, 0xc420301b00, 0xc4204cdcd8, 0x1) /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:185 +0x12c main.main() /home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:103 +0x26b </code></pre> <p>I expect that would correspond to the 503 I am getting, which is a server error of some kind.</p> <p>Some versions:</p> <pre><code>$ minikube version minikube version: v1.0.0 $ docker --version Docker version 18.09.2, build 6247962 $ sudo kubectl version Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} $ kubeadm version kubeadm version: &amp;version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Related links:</p> <ul> <li><a href="https://github.com/kubernetes/minikube/issues/3884" 
rel="nofollow noreferrer">503 dashboard errors on a Mac with Hyperkit</a> - I am on Linux Mint and not using Hyperkit.</li> <li><a href="https://github.com/kubernetes/minikube/issues/1682" rel="nofollow noreferrer">Lots of folks with 503 dashboard errors</a> - the main advice here is to delete the cluster with <code>minikube delete</code>, which I have already done for other reasons.</li> </ul> <p>What can I try next to debug this?</p>
halfer
<p>It looks like I needed the rubber-ducking of this question in order to find an answer. The Go crash was the thing to have researched, and is <a href="https://github.com/kubernetes/minikube/issues/2510" rel="nofollow noreferrer">documented in this bug report</a>.</p> <p>The command to create the missing role is:</p> <pre><code>$ kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kube-system-cluster-admin created
</code></pre> <p>Then we need to get the name of the system pod for the dashboard:</p> <pre><code>$ sudo kubectl get pods -n kube-system
</code></pre> <p>Finally, use the ID of the dashboard pod instead of <code>kubernetes-dashboard-5498ccf677-dq2ct</code>:</p> <pre><code>$ kubectl delete pods -n kube-system kubernetes-dashboard-5498ccf677-dq2ct
pod "kubernetes-dashboard-5498ccf677-dq2ct" deleted
</code></pre> <p>I think this removes the misconfigured dashboard, leaving a new one to spawn in its place when you issue this command:</p> <pre><code>sudo minikube dashboard
</code></pre> <p>To my mind, the Go error looks sufficiently naked and unhandled that it needs catching, but then I am not <em>au fait</em> with Go. The bug report has been auto-closed by a CI bot, and several attempts to reopen it seem to have failed.</p> <p>At a guess, I could have avoided this pain by setting the role config to start with. However, this is not noted in the Hello World tutorial, so it would not be reasonable to expect beginners not to step into this trap:</p> <pre><code>sudo minikube start --vm-driver=none --extra-config='apiserver.Authorization.Mode=RBAC'
</code></pre>
halfer
<p>I have a dilemma choosing the right request and limit settings for a pod in Openshift. Some data:</p> <ol> <li>during start up, the application requires at least 600 millicores to be able to fulfill the readiness check within 150 seconds.</li> <li>after start up, 200 millicores should be sufficient for the application to stay in idle state.</li> </ol> <p>So my understanding from documentation:</p> <p><strong>CPU Requests</strong></p> <blockquote> <p>Each container in a pod can specify the amount of CPU it requests on a node. The scheduler uses CPU requests to find a node with an appropriate fit for a container. The CPU request represents a minimum amount of CPU that your container may consume, but if there is no contention for CPU, it can use all available CPU on the node. If there is CPU contention on the node, CPU requests provide a relative weight across all containers on the system for how much CPU time the container may use. On the node, CPU requests map to Kernel CFS shares to enforce this behavior.</p> </blockquote> <p>Note that the scheduler will refer to the requested CPU to perform allocation on the node, and then it is a guaranteed resource once allocated. On the other side, I might allocate extra CPU as the 600 millicores might only be required during start up.</p> <p>So should I go for</p> <pre><code>resources:
  limits:
    cpu: 1
  requests:
    cpu: 600m
</code></pre> <p>for guaranteed resources or</p> <pre><code>resources:
  limits:
    cpu: 1
  requests:
    cpu: 200m
</code></pre> <p>for better cpu saving?</p>
bLaXjack
<p>I think you didn't get the idea of <em>Requests vs Limits</em>; I would recommend you take a look at the <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="noreferrer">docs</a> before you take that decision.</p> <p>In a brief explanation,</p> <p><strong>Request</strong> is how much resource will be virtually allocated to the container; it is a guarantee that you can use it when you need it, but it does not mean it is kept reserved exclusively for the container. With that said, if you request 200mb of RAM but only use 100mb, the other 100mb will be "borrowed" by other containers when they consume all their requested memory, and will be "claimed back" when your container needs it.</p> <p><strong>Limit</strong>, in simple terms, is how much the container can consume, requested + borrowed from other containers, before it is shut down for consuming too many resources. </p> <ol> <li>If a Container exceeds its memory <strong><em>limit</em></strong>, it will <em>probably</em> be terminated.</li> <li>If a Container exceeds its memory <strong><em>request</em></strong>, it is <em>likely</em> that its Pod will be evicted whenever the <em>node runs out of memory</em>.</li> </ol> <p>In simple terms, <strong>the limit is an absolute value, it should be equal to or higher than the request</strong>, and the good practice is to avoid having limits higher than the request for all containers, doing so only for workloads that really need it. This is because most containers can consume more resources (ie: memory) than they requested, and suddenly PODs will start to be evicted from the node in an unpredictable way, which makes it worse than having a fixed limit for each one.</p> <p>There is also a nice post in the <a href="https://docs.docker.com/config/containers/resource_constraints/" rel="noreferrer">docker docs</a> about resource limits.</p> <p>The <strong>scheduling</strong> rule is the same for CPU and Memory: K8s will only assign a POD to a node if the node has enough CPU and Memory allocatable to fit all resources <strong>requested</strong> by the containers within a pod.</p> <p>The <strong>execution</strong> rule is a bit different:</p> <p>Memory is a limited resource in the node and the capacity is an absolute limit; the containers can't consume more than the node has capacity for.</p> <p>The CPU, on the other hand, is measured as CPU time: when you reserve CPU capacity, you are telling how much CPU time a container can use. If the container needs more time than requested, it can be throttled and go to an execution queue until other containers have consumed their allocated time or finished their work. In summary it is very similar to memory, but it is very unlikely that the container will be killed for consuming too much CPU. The container will be able to use more CPU when the other containers do not use the full CPU time allocated to them. The main issue is that when a container uses more CPU than was allocated, the throttling will degrade the performance of the application and at a certain point it might stop working properly. If you do not provide limits, the containers will start affecting other resources in the node.</p> <p>Regarding the values to be used, there is no right value or right formula; each application requires a different approach, and only by measuring multiple times can you find the right value. The advice I give to you is to identify the min and the max and adjust somewhere in the middle, then keep monitoring to see how it behaves; if you feel it is wasting/lacking resources you can reduce/increase to an optimal value. If the service is something crucial, start with higher values and reduce afterwards. </p> <p>For the readiness check, you should not use it as a parameter to specify these values; you can delay the readiness using the <code>initialDelaySeconds</code> parameter in the probe to give extra time to start the POD containers.</p> <p>PS: I quoted the terms "Borrow" and "Claimed back" because the container is not actually borrowing from another container; in general, the node has a pool of memory and gives a chunk of memory to the container when it needs it, so the memory is not technically borrowed from another container but from the pool.</p>
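<p>To make the above concrete, here is a minimal sketch of how requests and limits are declared on a container; the image and the numbers are placeholders you would tune by measuring, not a recommendation:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: example/app:1.0   # placeholder image
    resources:
      requests:
        cpu: 250m        # what the scheduler uses to place the pod
        memory: 256Mi
      limits:
        cpu: 500m        # exceeding this only throttles, it does not kill
        memory: 256Mi    # exceeding this can get the container terminated
</code></pre>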
Diego Mendes
<p>While architecting an application I have two constraints:</p> <ol> <li>I have to use Microservice architecture</li> <li>I have to deploy using Kubernetes</li> </ol> <p>I was thinking of deploying serverless because scalability and availability are the main drivers for my application. As far as I know, when I use a serverless deployment I usually need to purchase β€œFunctions as a Service” (FaaS) from service providers and there is no way to manage the internals of the deployment. I wonder if I can use Kubernetes to control the deployment even when I deploy serverless.</p> <p>I am a newbie in this area. Please guide me if I am missing anything.</p>
Sazzad Hissain Khan
<p>Disclaimer: I work on that project </p> <p>Have you taken a look at Knative <a href="https://github.com/knative/docs" rel="nofollow noreferrer">here</a>?</p> <p>Serverless on k8s is very much what Knative does. It extends Kubernetes through CRDs and provides a more app/service-developer-friendly interface with autoscaling, config/route management, and a growing list of event sources. Take a look.</p>
mchmarny
<p>I have a Deployment with three replicas, each one started on a different node, behind an ingress. For tests and troubleshooting, I want to see which pod/node served my request. How is this possible? </p> <p>The only way I know is to open the logs on all of the pods, make my request and search for the pod that has my request in the access log. But this is complicated and error prone, especially on production apps with requests from other users. </p> <p>I'm looking for something like an HTTP response header like this:</p> <pre><code>X-Kubernetes-Pod: mypod-abcdef-23874
X-Kubernetes-Node: kubw02
</code></pre>
Daniel
<p>AFAIK, there is no feature like that out of the box.</p> <p>The easiest way I can think of, is adding these information as headers yourself from your API.</p> <p>You technically have to <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Expose Pod Information to Containers Through Environment Variables</a> and get it from code to add the headers to the response.</p> <p>Would be something like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: dapi-envars-fieldref spec: containers: - name: test-container image: k8s.gcr.io/busybox command: [ "sh", "-c"] args: - while true; do echo -en '\n'; printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE; printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT; sleep 10; done; env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName restartPolicy: Never </code></pre> <p>And from the API you get the information and insert into the header.</p>
Diego Mendes
<p>To be specific, here is the permalink to the relevant line of code: <a href="https://github.com/istio/istio/blob/e3a376610c2f28aef40296aac722c587629123c1/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml#L84" rel="nofollow noreferrer">https://github.com/istio/istio/blob/e3a376610c2f28aef40296aac722c587629123c1/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml#L84</a></p> <blockquote> <p>{{ "[[ .ProxyConfig.ZipkinAddress ]]" }}</p> </blockquote> <p>The <code>[[</code> and <code>]]</code> seem alien to me, and the helm chart developer guide doc <a href="https://docs.helm.sh/chart_template_guide/#the-chart-template-developer-s-guide" rel="nofollow noreferrer">here</a> doesn't show any example or documentation about the <code>[[</code> and <code>]]</code> syntax. </p> <p>Also, when I tried to render my istio installation (using the <code>helm template</code> command), the <code>{{ "[[ .ProxyConfig.ZipkinAddress ]]" }}</code> part only rendered as <code>[[ .ProxyConfig.ZipkinAddress ]]</code>. So I guess that <code>[[</code> and <code>]]</code> are not part of the helm template syntax. My guess is that it is some Istio-internal syntax, but I don't know exactly what it is.</p> <p>Any idea?</p>
Agung Pratama
<p>I got the answer after posting the same question on Istio's google group <a href="https://groups.google.com/forum/#!topic/istio-users/0dfU_y06n1Q" rel="nofollow noreferrer">here</a>. Without discrediting the author who answered me in the google group, the answer is: yes, it is a template of a template. The template syntax is used by sidecar injection, described here: <a href="https://istio.io/docs/setup/kubernetes/sidecar-injection/#template" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/sidecar-injection/#template</a></p>
Agung Pratama
<p>I want to reject all docker registries except my own one. I'm looking for some kind of policy for docker registries and their images.</p> <p>For example my registry name is <code>registry.my.com</code>. I want to make kubernetes pull/run images only from <code>registry.my.com</code>, so:</p> <pre><code>image: prometheus:2.6.1
</code></pre> <p>or any other should be rejected, while:</p> <pre><code>image: registry.my.com/prometheus:2.6.1
</code></pre> <p>shouldn't.</p> <p>Is there a way to do that?</p>
Konstantin Vustin
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="noreferrer">Admission Controllers</a> is what you are looking for.</p> <p>Admission controllers intercept operations to validate what should happen before the operation is committed by the api-server.</p> <p>An example is the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook" rel="noreferrer">ImagePolicyWebhook</a>, an admission controller that intercept Image operations to validate if it should be allowed or rejected.</p> <p>It will make a call to an REST endpoint with a payload like:</p> <pre><code>{ "apiVersion":"imagepolicy.k8s.io/v1alpha1", "kind":"ImageReview", "spec":{ "containers":[ { "image":"myrepo/myimage:v1" }, { "image":"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed" } ], "annotations":[ "mycluster.image-policy.k8s.io/ticket-1234": "break-glass" ], "namespace":"mynamespace" } } </code></pre> <p>and the API answer with <strong>Allowed</strong>:</p> <pre><code>{ "apiVersion": "imagepolicy.k8s.io/v1alpha1", "kind": "ImageReview", "status": { "allowed": true } } </code></pre> <p>or <strong>Rejected</strong>:</p> <pre><code>{ "apiVersion": "imagepolicy.k8s.io/v1alpha1", "kind": "ImageReview", "status": { "allowed": false, "reason": "image currently blacklisted" } } </code></pre> <p>The endpoint could be a Lambda function or a container running in the cluster.</p> <p>This github repo <a href="https://github.com/flavio/kube-image-bouncer" rel="noreferrer">github.com/flavio/kube-image-bouncer</a> implements a sample using <a href="https://github.com/flavio/kube-image-bouncer/blob/master/handlers/image_policy.go" rel="noreferrer"><strong>ImagePolicyWebhook</strong></a> to reject containers using the tag "Latest". </p> <p>There is also the option to use the flag <em><code>registry-whitelist</code></em> on startup to a pass a comma separated list of allowed registries, this will be used by the <a href="https://github.com/flavio/kube-image-bouncer/blob/master/handlers/validating_admission.go" rel="noreferrer"><strong>ValidatingAdmissionWebhook</strong></a> to validate if the registry is whitelisted.</p> <p>.</p> <p>The other alternative is the project <a href="https://github.com/open-policy-agent/kubernetes-policy-controller" rel="noreferrer">Open Policy Agent</a>[OPA].</p> <p>OPA is a flexible engine used to create policies based on rules to match resources and take decisions according to the result of these expressions. It is a mutating and a validating webhook that gets called for matching Kubernetes API server requests by the admission controller mentioned above. In summary, the operation would work similarly as described above, the only difference is that the rules are written as configuration instead of code. 
The same example above rewritten to use OPA would be similar to this:</p> <pre><code>package admission

import data.k8s.matches

deny[{
    "id": "container-image-whitelist",      # identifies type of violation
    "resource": {
        "kind": "pods",                      # identifies kind of resource
        "namespace": namespace,              # identifies namespace of resource
        "name": name                         # identifies name of resource
    },
    "resolution": {"message": msg},          # provides human-readable message to display
}] {
    matches[["pods", namespace, name, matched_pod]]
    container = matched_pod.spec.containers[_]
    not re_match("^registry.acmecorp.com/.+$", container.image) # The actual validation
    msg := sprintf("invalid container registry image %q", [container.image])
}
</code></pre> <p>The above translates to: <em>deny any pod where the container image does not match the following registry <code>registry.acmecorp.com</code></em></p>
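<p>If you go the <strong>ImagePolicyWebhook</strong> route instead, the api-server needs to be told about your webhook through an admission configuration file passed via <code>--admission-control-config-file</code>; a rough sketch follows, with placeholder paths and values that should be checked against the docs for your Kubernetes version:</p> <pre><code>apiVersion: apiserver.config.k8s.io/v1   # older clusters use apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      # kubeconfig pointing at the webhook endpoint (placeholder path)
      kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false   # reject images if the webhook is unreachable
</code></pre>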
Diego Mendes
<p>How can I enable the <strong><em>record</em></strong> parameter by default each time I want to create a new pod? My goal is to change the default behaviour of the record parameter in order to avoid using --record=true each time I want to instantiate a new pod.</p> <p>This is an example:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/mhausenblas/kbe/master/specs/deployments/d09.yaml --record=true
</code></pre> <p>Otherwise, if it is not possible to change the default behaviour of <strong><em>kubectl create</em></strong>, is there a possibility to add the record option to my yaml configuration file?</p> <p>Thank you.</p>
carlo.zermo
<p>AFAIK, you can't define default values for command parameters.</p> <p>Your alternatives are:</p> <ul> <li><p>create a bash function with the default parameters and call it with the parameters you want</p> <p><strong><code>diego@PC:/$</code></strong><code>k8s() { kubectl $1 $2 $3 --record=true;}</code></p> <p><strong><code>diego@PC:/$</code></strong><code>k8s create -f https://test</code></p></li> <li><p>Create <a href="https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/" rel="nofollow noreferrer">kubectl plugins</a> and write your custom command to replace the <code>create</code> subcommand with your own parameter set, and internally call kubectl create.</p> <p>The idea is similar to the above, but you would still use kubectl,</p> <p>i.e: <code>kubectl createrec -f https://raw.githubusercontent.com/../d09.yaml</code></p></li> <li><p>The other alternative is to download the source, change the default value and compile a new version</p></li> </ul>
Diego Mendes
<p>I'm trying to use a directory config map as a mounted volume inside of my docker container running a spring boot application. I am passing some of the mounted paths to the things like the spring application.yaml, but it doesn't appear the mount is working as expected as it can't find the config. For example</p> <p>Create the configmap like so</p> <pre><code>kubectl create configmap example-config-dir \ --from-file=/example/config/ </code></pre> <p>Kubernetes yaml</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: example labels: app: example spec: replicas: 1 selector: matchLabels: app: example template: metadata: labels: app: example spec: containers: - name: example image: example:latest ports: - containerPort: 8443 volumeMounts: - name: config-vol mountPath: /config volumes: - name: config-vol configMap: name: example-config-dir </code></pre> <p>And the Dockerfile (there are other steps which copy the jar file in which I haven't detailed)</p> <pre><code>VOLUME /tmp RUN echo "java -Dspring.config.location=file:///config/ -jar myjarfile.jar" &gt; ./start-spring-boot-app.sh" CMD ["sh", "start-spring-boot-app.sh"] </code></pre>
PDStat
<p>As explained in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-directories" rel="nofollow noreferrer">Create ConfigMaps from Directories</a> and <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">Create ConfigMaps from files</a>, when you create a ConfigMap using <code>--from-file</code>, <strong>the filename becomes a key stored in the data section of the ConfigMap</strong>. The file contents become the key’s value.</p> <p>To do it the way you want, a better approach would be to create the yaml like this</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
</code></pre> <p>and then apply it like this:</p> <pre><code>kubectl create -f https://k8s.io/examples/configmap/configmap-multikeys.yaml
</code></pre> <p>When the pod runs, the command <code>ls /config</code> produces the output below:</p> <pre><code>SPECIAL_LEVEL
SPECIAL_TYPE
</code></pre> <p>The way you did it, each key should generate a file with the same name as your original file, with the contents of that file inside it.</p>
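<p>For illustration, a ConfigMap created with <code>--from-file=/example/config/</code> ends up with one key per file in that directory, roughly like the sketch below (the file names and contents are made up, not taken from the question):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config-dir
  namespace: default
data:
  # each key is a file name from /example/config/; the value is that file's content
  application.yaml: |
    server:
      port: 8443
  logging.properties: |
    level=INFO
</code></pre> <p>When mounted at <code>/config</code>, each of those keys shows up as a file, which is what <code>-Dspring.config.location=file:///config/</code> would then read.</p>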
Diego Mendes
<p>I'm using config maps to inject env variables into my containers. Some of the variables are created by concatenating variables, for example:</p> <p>~/.env file</p> <pre><code>HELLO=hello WORLD=world HELLO_WORLD=${HELLO}_${WORLD} </code></pre> <p>I then create the config map</p> <p><code>kubectl create configmap env-variables --from-env-file ~/.env</code></p> <p>The deployment manifests reference the config map.</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: my-app spec: template: spec: containers: - name: my-image image: us.gcr.io/my-image envFrom: - configMapRef: name: env-variables </code></pre> <p>When I exec into my running pods, and execute the command</p> <p><code>$ printenv HELLO_WORLD</code></p> <p>I expect to see <code>hello_world</code>, but instead I see <code>${HELLO}_${WORLD}</code>. The variables aren't expanded, and therefore my applications that refer to these variables will get the unexpanded value.</p> <p>How do I ensure the variables get expanded?</p> <p>If it matters, my images are using alpine.</p>
Eric Guan
<p>I can't find any documentation on interpolating environment variables, but I was able to get this to work by removing the interpolated variable from the configmap and listing it directly in the deployment. It also works if all variables are listed directly in the deployment. It looks like kubernetes doesn't apply interpolation to variables loaded from configmaps.</p> <p>For instance, this will work:</p> <p>Configmap</p> <pre><code>apiVersion: v1 data: HELLO: hello WORLD: world kind: ConfigMap metadata: name: env-variables namespace: default </code></pre> <p>Deployment:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: my-app spec: template: spec: containers: - name: my-image image: us.gcr.io/my-image envFrom: - configMapRef: name: env-variables env: - name: HELLO_WORLD value: $(HELLO)_$(WORLD) </code></pre>
Grant David Bachman
<p>I am deploying sample springboot application using fabric8 maven deploy. The build fails with SSLHandshakeException.</p> <pre><code>F8: Cannot access cluster for detecting mode: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build (default) on project fuse-camel-sb-rest: Execution default of goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build failed: An error has occurred. sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -&gt; [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build (default) on project fuse-camel-sb-rest: Execution default of goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build failed: An error has occurred. </code></pre> <p>So, I downloaded the public certificate from the Openshift webconsole and added it to JVM using </p> <pre><code>C:\...\jdk.\bin&gt;keytool -import -alias rootcert -file C:\sample\RootCert.cer -keystore cacerts </code></pre> <p>and got message that its successfully added to the keystore and the list command shows the certificates added.</p> <pre><code> C:\...\jdk.\bin&gt;keytool -list -keystore cacerts Enter keystore password: Keystore type: JKS Keystore provider: SUN Your keystore contains 2 entries rootcert, May 18, 2018, trustedCertEntry, Certificate fingerprint (SHA1): XX:XX:XX:.......... </code></pre> <p>But the mvn:fabric8 deploy build still fails with the same exception.</p> <p>Can someone shed some light on this issue? Am I missing anything?</p>
jack
<p>The following works on MacOS:</p> <p>The certificate to install is the one found on the browser URL bar;<br> On Firefox (at least) click the padlock <a href="https://i.stack.imgur.com/5H4i9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5H4i9.png" alt="padlock icon"></a> to the left of the URL, then proceed down <em>Connection ></em> / <em>More Information</em> / <em>View Certificate</em> / <em>Details</em>, finally <strong>Export...</strong> allows you to save the certificate locally.</p> <p>On the command-line, determine which JRE maven is using:</p> <pre><code>$ mvn --version Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T19:33:14+01:00) Maven home: /Users/.../apache-maven-3.5.4 Java version: 1.8.0_171, vendor: Oracle Corporation, runtime: /Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home/jre Default locale: en_GB, platform encoding: UTF-8 OS name: "mac os x", version: "10.12.6", arch: "x86_64", family: "mac" </code></pre> <p>You will likely need to be 'root' to update the cacerts file. </p> <pre><code>$ sudo keytool -import -alias my-openshift-clustername -file /Users/.../downloads/my-cluster-cert.crt -keystore $JAVA_HOME/jre/lib/security/cacerts Password: {your password for sudo} Enter keystore password: {JRE cacerts password, default is changeit} ... keytool prints certificate details... Trust this certificate? [no]: yes Certificate was added to keystore </code></pre> <p>Verify that the certificate was indeed added successfully:</p> <pre><code>$ keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts Enter keystore password: changeit Keystore type: JKS Keystore provider: SUN Your keystore contains 106 entries ...other entries, in no particular order... my-openshift-clustername, 25-Feb-2019, trustedCertEntry, Certificate fingerprint (SHA1): F4:17:3B:D8:E1:4E:0F:AD:16:D3:FF:0F:22:73:40:AE:A2:67:B2:AB ...other entries... </code></pre>
Ed Randall
<p>I am trying to write a CronJob for executing a shell script within a ConfigMap for Kafka.</p> <p>My intention is to reassign partitions at specific intervals of time.</p> <p>However, I am facing issues with it. I am very new to it. Any help would be appreciated.</p> <p>cron-job.yaml</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: partition-cron spec: schedule: "*/10 * * * *" startingDeadlineSeconds: 20 successfulJobsHistoryLimit: 5 jobTemplate: spec: completions: 2 template: spec: containers: - name: partition-reassignment image: busybox command: ["/configmap/runtimeConfig.sh"] volumeMounts: - name: configmap mountPath: /configmap restartPolicy: Never volumes: - name: configmap configMap: name: configmap-config </code></pre> <p>configmap-config.yaml</p> <pre><code>{{- if .Values.topics -}} {{- $zk := include "zookeeper.url" . -}} apiVersion: v1 kind: ConfigMap metadata: labels: app: {{ template "kafka.fullname" . }} chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" heritage: "{{ .Release.Service }}" release: "{{ .Release.Name }}" name: {{ template "kafka.fullname" . }}-config data: runtimeConfig.sh: | #!/bin/bash set -e cd /usr/bin until kafka-configs --zookeeper {{ $zk }} --entity-type topics --describe || (( count++ &gt;= 6 )) do echo "Waiting for Zookeeper..." sleep 20 done until nc -z {{ template "kafka.fullname" . }} 9092 || (( retries++ &gt;= 6 )) do echo "Waiting for Kafka..." sleep 20 done echo "Applying runtime configuration using {{ .Values.image }}:{{ .Values.imageTag }}" {{- range $n, $topic := .Values.topics }} {{- if and $topic.partitions $topic.replicationFactor $topic.reassignPartitions }} cat &lt;&lt; EOF &gt; {{ $topic.name }}-increase-replication-factor.json {"version":1, "partitions":[ {{- $partitions := (int $topic.partitions) }} {{- $replicas := (int $topic.replicationFactor) }} {{- range $i := until $partitions }} {"topic":"{{ $topic.name }}","partition":{{ $i }},"replicas":[{{- range $j := until $replicas }}{{ $j }}{{- if ne $j (sub $replicas 1) }},{{- end }}{{- end }}]}{{- if ne $i (sub $partitions 1) }},{{- end }} {{- end }} ]} EOF kafka-reassign-partitions --zookeeper {{ $zk }} --reassignment-json-file {{ $topic.name }}-increase-replication-factor.json --execute kafka-reassign-partitions --zookeeper {{ $zk }} --reassignment-json-file {{ $topic.name }}-increase-replication-factor.json --verify {{- end }} {{- end -}} </code></pre> <p>My intention is to run the runtimeConfig.sh script as a cron job at regular intervals for partition reassignment in Kafka.</p> <p>I am not sure if my approach is correct.</p> <p>Also, I have randomly put <strong>image: busybox</strong> in the cron-job.yaml file. 
I am not sure about what should I be putting in there.</p> <p>Information Part</p> <pre><code>$ kubectl get cronjobs NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE partition-cron */10 * * * * False 1 5m 12m $ kubectl get pods NAME READY STATUS RESTARTS AGE elegant-hedgehog-metrics-server-58995fcf8b-2vzg6 1/1 Running 0 5d my-kafka-0 1/1 Running 1 12m my-kafka-1 1/1 Running 0 10m my-kafka-2 1/1 Running 0 9m my-kafka-config-644f815a-pbpl8 0/1 Completed 0 12m my-kafka-zookeeper-0 1/1 Running 0 12m partition-cron-1548672000-w728w 0/1 ContainerCreating 0 5m $ kubectl logs partition-cron-1548672000-w728w Error from server (BadRequest): container "partition-reassignment" in pod "partition-cron-1548672000-w728w" is waiting to start: ContainerCreating </code></pre> <p>Modified Cron Job YAML</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: partition-cron spec: schedule: "*/5 * * * *" startingDeadlineSeconds: 20 successfulJobsHistoryLimit: 5 jobTemplate: spec: completions: 1 template: spec: containers: - name: partition-reassignment image: busybox command: ["/configmap/runtimeConfig.sh"] volumeMounts: - name: configmap mountPath: /configmap restartPolicy: Never volumes: - name: configmap configMap: name: {{ template "kafka.fullname" . }}-config </code></pre> <p>Now, I am getting Status of Cron Job pods as <strong>ContainerCannotRun</strong>.</p>
amankedia
<p>You've set the ConfigMap to <code>name: {{ template "kafka.fullname" . }}-config</code> but in the job you are mounting <code>configmap-config</code>. Unless you installed the Helm chart using <code>configmap</code> as the name of the release, that Job will never start. </p> <p>One way to fix it would be to define the volume as:</p> <pre><code> volumes: - name: configmap configMap: name: {{ template "kafka.fullname" . }}-config </code></pre>
Stefan P.
<p>I'm new to Kubernetes and Rancher, but have a cluster setup and a workload deployed. I'm looking at setting up an ingress, but am confused by what my DNS should look like.</p> <p>I'll keep it simple: I have a domain (example.com) and I want to be able to configure the DNS so that it's routed through to the correct IP in my 3 node cluster, then to the right ingress and load balancer, eventually to the workload.</p> <p>I'm not interested in this xip.io stuff as I need something real-world, not a sandbox, and there's no documentation on the Rancher site that points to what I should do.</p> <p>Should I run my own DNS via Kubernetes? I'm using DigitalOcean droplets and haven't found any way to get Rancher to set up DNS records for me (as it purports to do for other cloud providers).</p> <p>It's really frustrating as it's basically the first and only thing you need to do... "expose an application to the outside world", and this is somehow not trivial.</p> <p>Would love any help, or for someone to explain to me how fundamentally dumb I am and what I'm missing!</p> <p>Thanks.</p>
Steadman
<p>You aren't dumb, man. This stuff gets complicated. Are you using AWS or GKE? Most methods of deploying kubernetes will deploy an internal DNS resolver by default for intra-cluster communication. These URLs are only useful inside the cluster. They take the form of <code>&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local</code> and have no meaning to the outside world.</p> <p>However, exposing a service to the outside world is a different story. On AWS you may do this by setting the service's ServiceType to LoadBalancer, where kubernetes will automatically spin up an AWS LoadBalancer, and along with it a public domain name, and configure it to point to the service inside the cluster. From here, you can then configure any domain name that you own to point to that loadbalancer.</p>
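<p>For illustration, a minimal sketch of a <code>LoadBalancer</code> Service (the service name, selector and ports are placeholders for your own workload):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80          # port exposed by the load balancer
    targetPort: 8080  # port your pods listen on
</code></pre> <p>Once the cloud provider provisions the load balancer, you point a DNS record for your domain at the external IP or hostname shown by <code>kubectl get service my-app</code>.</p>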
Grant David Bachman
<p>Why am I getting an error when I try to change the <code>apiVersion</code> of a deployment via <code>kubectl edit deployment example</code> ? </p> <p>Do I have to delete and recreate the object?</p>
Chris Stryczynski
<p>You're getting this because there are only certain attributes of a resource that you may change once it's created. ApiVersion, Kind, and Name are some of the prime identifiers of a resource so they can't be changed without deleting/recreating them.</p>
Grant David Bachman
<p>Can we interact with and troubleshoot containers inside kubernetes without command line access? Or will reading logs be sufficient for debugging? Is there any way to debug the containers without a command line (kubectl)?</p>
FNU
<p>Unfortunately the containers created <em>FROM Scratch</em> are not simple to debug; the best you can do is add logging and telemetry in the container so that you don't have to debug it. The other option is to use minimal images like busybox.</p> <p>The K8s team has a <a href="https://github.com/verb/community/blob/473c49d6cb75aa8f9fe3017bada74fe69206e18e/contributors/design-proposals/node/troubleshoot-running-pods.md" rel="nofollow noreferrer">proposal</a> for a <code>kubectl debug target-pod</code> command, but it is not something you can use yet.</p> <p>In the worst scenarios you can try <a href="https://github.com/kubernetes/contrib/blob/master/scratch-debugger/README.md" rel="nofollow noreferrer">Scratch-debugger</a>; it will create a busybox pod on the same node as the pod being debugged and call docker to inject the filesystem into the existing container.</p>
Diego Mendes
<p>I am following this <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">concept guide</a> on the kubernetes docs to connect to a service in a different namespace using the fully qualified domain name for the service.</p> <p><strong>service.yml</strong></p> <pre><code>--- # declare front service kind: Service apiVersion: v1 metadata: name: traefik-frontend-service namespace: traefik spec: selector: k8s-app: traefik-ingress-lb tier: reverse-proxy ports: - port: 80 targetPort: 8080 type: NodePort </code></pre> <p><strong>ingress.yml</strong></p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: traefik-web-ui-ingress namespace: traefik annotations: kubernetes.io/ingress.class: traefik traefik.frontend.passHostHeader: "false" traefik.frontend.priority: "1" spec: rules: - host: example.com http: paths: - path: / backend: serviceName: traefik-frontend-service.traefik.svc.cluster.local servicePort: 80 </code></pre> <p>But I keep getting this error:</p> <blockquote> <p>The Ingress "traefik-web-ui-ingress" is invalid: spec.rules[0].http.backend.serviceName: Invalid value: "traefik-frontend-service.traefik.svc.cluster.local": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation is 'a-z?')</p> </blockquote> <p>The service name of <code>traefik-frontend-service.traefik.svc.cluster.local</code>:</p> <ul> <li>starts with an alphanumeric character</li> <li>ends with an alphanumeric character</li> <li>only contains alphanumeric numbers or <code>-</code></li> </ul> <p>Not sure what I'm doing wrong here... <strong>unless a new ingress has to be created for each namespace</strong>.</p>
Clement
<p>This is by design to avoid cross-namespace exposure, In this <a href="https://github.com/kubernetes/kubernetes/issues/17088" rel="nofollow noreferrer">thread</a> is explained why this limitation on the ingress specification was intentional.</p> <p>That means, the <strong>Ingress can only expose services within the same namespace</strong>.</p> <p><strong><em>The values provided should be the service name, not the FQDN.</em></strong></p> <p>If you really need to design this way, your other alternatives are:</p> <ul> <li>Expose Traefik as a LB Service and then create a data service to provide the routing rules to traefik.</li> <li><p>Use <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md#across-namespaces" rel="nofollow noreferrer">Contour Ingress (by heptio)</a> to delegate the routing to other namespaces.</p> <p>Using Contour would be something like this:</p> <pre><code># root.ingressroute.yaml apiVersion: contour.heptio.com/v1beta1 kind: IngressRoute metadata: name: namespace-delegation-root namespace: default spec: virtualhost: fqdn: ns-root.bar.com routes: - match: / services: - name: s1 port: 80 # delegate the subpath, `/blog` to the IngressRoute object in the marketing namespace with the name `blog` - match: /blog delegate: name: blog namespace: marketing ------------------------------------------------------------ # blog.ingressroute.yaml apiVersion: contour.heptio.com/v1beta1 kind: IngressRoute metadata: name: blog namespace: marketing spec: routes: - match: /blog services: - name: s2 port: 80 </code></pre></li> </ul>
Diego Mendes
<p>I would like to run Node-RED as a service on Kubernetes to be able to build a custom API using the HTTP IN nodes. The goal is to be able to push any number of different flows to an arbitrary container running Node-RED using the Node-RED API.</p> <p>I have tried running Node-RED as a service with 5 replicas and built a flow through the UI that has an HTTP in and HTTP out node. When I try hitting the service using curl on the minikube ip (e.g. curl <a href="http://192.168.64.2:30001/test" rel="nofollow noreferrer">http://192.168.64.2:30001/test</a>), it will only return the results if the load balancer happens to land on the container that has the flow. Otherwise, it will return an error with HTML.</p> <p>Any advice on how I should go about solving this issue? Thanks!</p>
kturcios
<p>This is working as expected. If you are interacting with the Node-RED editor via the load balancer you are only editing the flow on that instance.</p> <p>If you have 5 instances of Node-RED and only one of them is running a flow with the HTTP endpoints defined then calls to that endpoint will only succeed 1 time in 5.</p> <p>You need to make sure that all instances have the same endpoints defined in their flows.</p> <p>There are several ways you can do this, some examples would be:</p> <ul> <li>Use the Node-RED Admin API to push the flows to each of the Node-RED instances in turn. You will probably need to do this via the private IP Address of each instance to prevent the load balancer getting in the way.</li> <li>Use a custom Storage plugin to store the flow in a database and have all the Node-RED instances load the same flow. You would need to restart the instances to force the flow to be reloaded should you change it.</li> </ul>
hardillb
<p>I am just curious to know how k8s master/scheduler will handle this.</p> <p>Lets consider I have a k8s master with 2 nodes. Assume that each node has 8GB RAM and each node running a pod which consumes 3GB RAM.</p> <pre><code>node A - 8GB - pod A - 3GB node B - 8GB - pod B - 3GB </code></pre> <p>Now I would like to schedule another pod, say pod C, which requires 6GB RAM. </p> <p><strong>Question:</strong></p> <ol> <li>Will the k8s master shift pod A or B to other node to accommodate the pod C in the cluster or will the pod C be in the pending status? </li> <li>If the pod C is going to be in pending status, how to use the resources efficiently with k8s?</li> </ol> <p>Unfortunately I could not try this with my minikube. If you know how k8s scheduler assigns the nodes, please clarify.</p>
KitKarson
<p>Most of the Kubernetes components are split by responsibility and workload assignment is no different. We could define the workload assignment process as <strong><em>Scheduling</em></strong> and <strong><em>Execution</em></strong>.</p> <p>The <strong>Scheduler</strong>, as the name suggests, is responsible for the <strong><em>Scheduling</em></strong> step. The process can be briefly described as: "get a list of pods; if a pod is not scheduled to a node, assign it to a node with capacity to run the pod". There is a nice blog post from Julia Evans <a href="https://jvns.ca/blog/2017/07/27/how-does-the-kubernetes-scheduler-work/" rel="nofollow noreferrer">here</a> explaining schedulers.</p> <p>And <strong>Kubelet</strong> is responsible for the <strong><em>Execution</em></strong> of pods scheduled to its node. It will get the list of POD definitions allocated to its node, make sure they are running with the right configuration, and if they are not running, start them.</p> <p>With that in mind, the scenario you described will have the expected behavior: the POD will not be scheduled, because you don't have a node with capacity available for the POD.</p> <p>Resource balancing is mainly decided at the scheduling level; a nice way to see it is that when you add a new node to the cluster, if there are no PODs pending allocation, the node will not receive any pods. A brief view of the logic used for resource balancing can be seen in <a href="https://github.com/kubernetes/kubernetes/pull/6150" rel="nofollow noreferrer">this PR</a></p> <p>The solutions:</p> <p>Kubernetes ships with a default scheduler. If the default scheduler does not suit your needs you can implement your own scheduler as described <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">here</a>. The idea would be to implement an extension for the scheduler to reschedule PODs already running when the cluster has capacity but it is not well distributed to allocate the new load.</p> <p>Another option is to use tools created for scenarios like this; the <a href="https://github.com/kubernetes-incubator/descheduler" rel="nofollow noreferrer">Descheduler</a> is one, it will monitor the cluster and evict pods from nodes to make the scheduler re-allocate the PODs with a better balance. There is a nice blog post <a href="https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7" rel="nofollow noreferrer">here</a> describing these scenarios.</p> <p>PS: Keep in mind that the total memory of a node is not allocatable; depending on which provider you use, the allocatable capacity will be much lower than the total. Take a look at this SO: <a href="https://stackoverflow.com/questions/54786341/cannot-create-a-deployment-that-requests-more-than-2gi-memory/54786781#54786781">Cannot create a deployment that requests more than 2Gi memory</a></p>
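<p>For illustration, the memory the scheduler reasons about is whatever the containers <em>request</em>; a sketch of pod C from the question would look something like this (the names and image are made up):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-c
spec:
  containers:
  - name: app
    image: example/app:1.0   # placeholder image
    resources:
      requests:
        memory: 6Gi   # no single node has 6Gi allocatable left, so the pod stays Pending
</code></pre> <p>With 3Gi already requested on each 8Gi node (and some capacity reserved for the system), neither node has 6Gi allocatable left, so pod C stays Pending until resources are freed or a node is added.</p>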
Diego Mendes
<p>I have a backend pod running quite a long migration script on startup (1 minute or more). How do I avoid K8s thinking that the pod failed to start and trying to re-launch it? </p>
pditommaso
<p>I'm assuming you have a liveness probe set on your Pod. This is what k8s is looking at when it decides whether a pod needs to be restarted. You can fix this by setting the <code>initialDelaySeconds</code> attribute on your probe to something beyond the length of time the migration script needs to run. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</a></p> <p>An alternative is to use an initContainer for the Pod that runs the migration script. This is what init containers are used for. <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
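<p>A minimal sketch of both options (the image name, the <code>migrate.sh</code> script and the health endpoint are assumptions, not taken from the question):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  # option 2: run the migration before the main container starts
  initContainers:
  - name: migrate
    image: my-backend:latest          # assumed image
    command: ["sh", "-c", "./migrate.sh"]
  containers:
  - name: backend
    image: my-backend:latest
    # option 1: give the probe enough headroom for a slow startup
    livenessProbe:
      httpGet:
        path: /health                 # assumed endpoint
        port: 8080
      initialDelaySeconds: 120        # longer than the migration needs
      periodSeconds: 10
</code></pre>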
Grant David Bachman
<p>I'm new to using Helm and I'm not sure which is the best approach when you have two deployments. I've created a chart for my application. It contains two deployments:</p> <ol> <li>app-nginx-phpfpm.yaml</li> <li>app-mysql.yaml</li> </ol> <p>Should I keep them in the same chart or should I create a sub-chart for app-mysql.yaml? </p>
Agustin Castro
<p>You can have both, depending on how you want to structure your deployments.</p> <p>You should keep in mind the following</p> <h1>Considerations</h1> <h2>Single chart benefits</h2> <ul> <li>Easier to deploy: only deploy once, single diffing</li> <li>Single version, so rollback/upgrades happen on a single element</li> <li>You can uninstall parts by using feature flags</li> <li>Installing a new component without touching the rest of the elements may prove tricky</li> </ul> <h2>Single chart caveats</h2> <ul> <li>Harder to deploy uncoupled services, e.g., a mock service for data access while upgrading the database</li> <li>Harder to decouple and test each instance</li> <li>Harder to name and make sense of each component (in different releases your <code>{{.Release.Name}}</code> would already change for each "app").</li> <li>Harder to provide/keep track of different release cycles for different components</li> <li>Versions stored in a single ConfigMap, which may lead to size limit problems if you have charts which contain, for example, testing data embedded</li> </ul> <h1>Note on version control</h1> <p>You can have a master chart that you use for testing with all subcharts, and package the subcharts independently but still have everything on the same repo.</p> <p>For example, I usually keep things like either:</p> <pre><code>. / helm / charts / whatever / charts / subchart1 . / helm / charts / whatever / charts / subchart2 . / helm / charts / whatever / values.yaml </code></pre> <p>or</p> <pre><code>. / helm / charts / whatever-master / values.yaml . / helm / charts / whatever-master / requirements.yaml . / helm / charts / whatever-subchart1 / values.yaml . / helm / charts / whatever-subchart2 / values.yaml </code></pre> <p>And use the requirements.yaml on the master chart to pull from <code>file://../whatever-subchartx</code>.</p> <p>This way I can have <code>whatever-stress-test</code> and <code>whatever-subcomponent-unit-test</code> while still being flexible to deploy separately components that have different release cycles if so wanted.</p> <p>This will in the end also depend on your upgrade strategy. Canary upgrades will probably require you to handle stateful microservices in a more specific way than you can have with a single chart, so plan accordingly.</p>
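<p>For the second layout above, a rough sketch of the master chart's <code>requirements.yaml</code> (Helm 2 style; chart names, versions and the feature flag are placeholders):</p> <pre><code># helm/charts/whatever-master/requirements.yaml
dependencies:
- name: whatever-subchart1
  version: 0.1.0
  repository: "file://../whatever-subchart1"
- name: whatever-subchart2
  version: 0.1.0
  repository: "file://../whatever-subchart2"
  condition: subchart2.enabled      # lets you switch parts off via values.yaml
</code></pre>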
ssice
<p>I have the following questions regarding request/limit quota for ns:</p> <p>Considering the following namespace resource setup: - request: 1 core/1GiB - limit: 2 core/2GiB</p> <ol> <li><p>Does it mean a namespace is guaranteed to have 1/1GiB? How is it achieved physically on cluster nodes? Does it mean k8s somehow strictly reserve these values for a ns (at a time it's created)? At which point of time reservation takes place?</p></li> <li><p>Limit 2 core/2GiB - does it mean it's not guaranteed for a ns and depends on current cluster's state? Like if currently cluster has only 100MiB of free ram available, but in runtime pod needs 200Mib more above a resource request - pod will be restarted? Where does k8s take this resource if pod needs to go above it's request? </p></li> <li><p>Regarding <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces" rel="nofollow noreferrer">namespace granularity</a> and k8s horizontal auto scaling: consider we have 2 applications and 2 namespaces - 1 ns per each app. We set both ns quota as such that there's some free buffer for 2 extra pods and horizontal auto scaling up to 2 pods with certain CPU threshold. So, is there really a point in doing such a set up? My concern is that if NS reserves it's resources and no other ns can utilize them - we can just create 2 extra pods in each ns replica set with no auto scaling, using these pods constantly. I can see a point in using auto scaling if we have more than 1 application in 1 ns, so that these apps could share same resource buffer for scaling. Is this assumption correct?</p></li> <li><p>How do you think is this a good practice to have 1 ns per app? Why?</p></li> </ol> <p>p.s. i know what resource request/limit are and difference between them. In most info sources there's just very high level explanation of the concept.</p> <p>Thanks in advance.</p>
Jan Lobau
<p>The <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">docs</a> clearly state the following:</p> <blockquote> <p>In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces, there may be contention for resources. This is handled on a first-come-first-served basis.</p> </blockquote> <p>and</p> <blockquote> <p>ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.</p> </blockquote> <p>and</p> <blockquote> <p>resource quota divides up aggregate cluster resources, but it creates no restrictions around nodes: pods from several namespaces may run on the same node</p> </blockquote> <p><strong><em>ResourceQuotas</em></strong> are constraints set on a namespace and do not reserve capacity; they just set a limit on the resources that can be consumed by each namespace.</p> <p>To effectively "reserve" capacity, you have to set the restrictions on all namespaces, so that other namespaces do not use more resources than your cluster can provide. This way you have better guarantees that a namespace will have capacity available to run its load.</p> <p>The docs suggest:</p> <ul> <li>Proportionally divide total cluster resources among several teams(namespaces).</li> <li>Allow each team to grow resource usage as needed, but have a generous limit to prevent accidental resource exhaustion.</li> <li>Detect demand from one namespace, add nodes, and increase quota.</li> </ul> <p>Given that, the answers to your questions are:</p> <ol> <li><p>It is not reserved capacity; the reservation happens on resource (pod) creation.</p></li> <li><p>Running resources are not affected after reservation. New resources are rejected if creating them would overcommit the quota (limits).</p></li> <li><p>As stated in the docs, if the limits are higher than the capacity, the reservation happens on a first-come-first-served basis.</p></li> <li><p>This could be its own question on SO; in simple terms, it is done for resource isolation and management.</p></li> </ol>
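<p>For reference, the namespace setup from the question would be expressed roughly like this (values copied from the question, namespace name assumed):</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a             # assumed namespace name
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
</code></pre>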
Diego Mendes
<p>I want to use a local folder for a container with kubernetes on docker for windows. When I used hostPath in nginx.yaml, it works well. But I used persistentVolumeClaim instead of hostPath, the container could not mount the volume.</p> <p>pv.yaml</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: nfs-os012 labels: type: local spec: capacity: storage: 1Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Recycle storageClassName: hostpath mountOptions: - hard # hostPath: # path: "/c/k" nfs: path: /c/k/share server: localhost </code></pre> <p>pvc.yaml</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi storageClassName: hostpath </code></pre> <p>nginx.yaml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx spec: replicas: 2 template: metadata: labels: run: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 volumeMounts: - mountPath: "/usr/share/nginx/html" name: mydate volumes: - name: mydate persistentVolumeClaim: claimName: nfs-claim1 </code></pre> <p>log of "kubectl describe pod nginx"</p> <pre><code>Name: nginx-dep-6b46bc497f-cqkl7 Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: docker-desktop/192.168.65.3 Start Time: Wed, 27 Mar 2019 16:45:22 +0900 Labels: pod-template-hash=6b46bc497f run=nginx Annotations: &lt;none&gt; Status: Pending IP: Controlled By: ReplicaSet/nginx-dep-6b46bc497f Containers: nginx: Container ID: Image: nginx Image ID: Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /usr/share/nginx/html from mydate (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-dccpv (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: mydate: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: nfs-claim11 ReadOnly: false default-token-dccpv: Type: Secret (a volume populated by a Secret) SecretName: default-token-dccpv Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedMount 23m (x41 over 173m) kubelet, docker-desktop MountVolume.SetUp failed for volume "nfs-os012" : mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs -o hard localhost:/c/k/share /var/lib/kubelet/pods/46d3fd62-5064-11e9-830a-00155d429c07/volumes/kubernetes.io~nfs/nfs-os012 Output: mount.nfs: Connection refused Warning FailedMount 3m54s (x76 over 173m) kubelet, docker-desktop Unable to mount volumes for pod "nginx-dep-6b46bc497f-cqkl7_default(46d3fd62-5064-11e9-830a-00155d429c07)": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx-dep-6b46bc497f-cqkl7". list of unmounted volumes=[mydate]. list of unattached volumes=[mydate default-token-dccpv] </code></pre> <p>Please tell me how to fix this problem.</p>
k_trader
<p>You can either try <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> set directly in the Pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pd spec: containers: - image: k8s.gcr.io/test-webserver name: test-container volumeMounts: - mountPath: /test-pd name: test-volume volumes: - name: test-volume hostPath: # directory location on host path: /data # this field is optional type: Directory </code></pre> <p>PS: I am not sure <strong>hostpath</strong> is meant to be used as a storage class on a PersistentVolume; you should try the <strong>local-storage</strong> class instead.</p> <p>or <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local</a>:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: example-pv spec: capacity: storage: 100Gi # volumeMode field requires BlockVolume Alpha feature gate to be enabled. volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage local: path: /mnt/disks/ssd1 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node </code></pre>
Diego Mendes
<p>seeing an odd behaviour with kubernetes-dashboard where the exec option is not taking me into a shell. Instead it shows me a snippet of the dashboard UI in it? Has anyone noticed this? I cannot see any errors in the logs for the same.</p> <p>I am using the following dashboard yaml: <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml</a></p>
devops84uk
<p>It is a problem with the library <code>hterm</code> and Firefox that has been fixed in version 2 of the Dashboard (as it now uses <code>xterm</code>).</p> <p>You can read more in the <a href="https://github.com/kubernetes/dashboard/issues/3541" rel="nofollow noreferrer">github issue 3541</a>.</p> <p>If you are stick to a Dashboard with the problem, you can work around it using another browser like Chrome.</p>
PhoneixS
<p>I have actually a spring boot application with a MQTT client into it that is subscribed to topic.</p> <p>I encounter a problem when i put 2 instances of my application ( 2 containers/pods ) because it creates 2 connections to the publisher ! The problem is that I record things in a database for each message, so I receive the data 2 times ! One from a pod, and one from the second one..and so 2 record in database...</p> <p>This is my actual code :</p> <pre><code>. .. ... .... @Bean public MqttConnectOptions getReceiverMqttConnectOptions() { MqttConnectOptions mqttConnectOptions = new MqttConnectOptions(); mqttConnectOptions.setCleanSession(true); mqttConnectOptions.setConnectionTimeout(30); mqttConnectOptions.setKeepAliveInterval(60); mqttConnectOptions.setAutomaticReconnect(true); mqttConnectOptions.setUserName(bean.getProperty(&quot;username&quot;)); String password = bean.getProperty(&quot;password&quot;); String hostUrl = bean.getProperty(&quot;url&quot;); mqttConnectOptions.setPassword(password.toCharArray()); mqttConnectOptions.setServerURIs(new String[] { hostUrl }); return mqttConnectOptions; } @Bean public MqttPahoClientFactory mqttClientFactory() { DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory(); factory.setConnectionOptions(getReceiverMqttConnectOptions()); return factory; } @Bean public MessageChannel mqttInputChannel() { return new DirectChannel(); } @Bean public MessageProducer inbound() { String clientId = &quot;client-id&quot; + UUID.randomUUID().toString(); MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter(clientId, mqttClientFactory(), &quot;jenkins&quot;); adapter.setCompletionTimeout(20000); adapter.setConverter(new DefaultPahoMessageConverter()); ... .. . </code></pre> <p>If any of you have a solution to be able to use 2 pod of my application without creating 2 MQTT connection.. Thanks</p>
KΓ©vin
<p>You need to use a broker that supports Shared Subscriptions (this is a feature added in the MQTT v5 standard, but some brokers supported non-standard versions of it at v3).</p> <p>Shared Subscriptions allow a group of clients to subscribe to a topic (or wildcard topic), and any given message published to that topic will only be delivered to one client in the group.</p> <p>You can read more about Shared Subscriptions <a href="https://www.hivemq.com/blog/mqtt5-essentials-part7-shared-subscriptions/" rel="nofollow noreferrer">here</a></p>
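<p>For example (assuming a broker and clients that support MQTT v5 shared subscriptions; the broker host and group name are placeholders), every IoT server instance would subscribe through the same share group, so each message on the question's <code>jenkins</code> topic reaches only one instance:</p> <pre><code># run the same subscription from each IoT server instance
mosquitto_sub -V mqttv5 -h broker.example.com -t '$share/iot-servers/jenkins'
</code></pre>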
hardillb
<p>If I have 2 pods is there a way for them to talk to each other without any other resource created and used?</p> <p>The question goes for the both situations - if they are in the same namespace or in different ones.</p>
LIvanov
<p>Yes, they can!</p> <p>Assuming you don't have any network policies restricting the calls, a pod just needs to know the other pod's DNS name. This is how it works:</p> <ul> <li>the cluster must have DNS enabled</li> <li>if the pods are manually created in the same namespace (not via a deployment), you just need to make a call to the pod name, which acts as the host. <ul> <li>POD1 running in namespace NS1, exposing the container port 31333</li> <li>POD2 running in namespace NS1 </li> <li>POD2 calls POD1 via <a href="http://POD1:31333" rel="nofollow noreferrer">http://POD1:31333</a></li> </ul></li> <li>if the pods are in different namespaces, you need to include the namespace in the host. <ul> <li>POD1 running in namespace NS1, exposing the container port 31333</li> <li>POD2 running in namespace NS2 </li> <li>POD2 calls POD1 via <a href="http://POD1.NS1:31333" rel="nofollow noreferrer">http://POD1.NS1:31333</a></li> </ul></li> <li>if the pod is created by a deployment, its name is dynamic and hard to predict; in this case, you need a service to expose the pods to others under a common name (the service) <ul> <li>DEPLOYMENT1 deployed to namespace NS1 will create a pod named with the format deploymentname-hash (example: DEPLOYMENT1-f82hsh)</li> <li>DEPLOYMENT1-f82hsh is the pod created by the deployment, running in namespace NS1 and exposing the container port 31333</li> <li>POD2 running in namespace NS2 </li> <li>POD2 could call DEPLOYMENT1-f82hsh via <a href="http://DEPLOYMENT1-f82hsh.NS1:31333" rel="nofollow noreferrer">http://DEPLOYMENT1-f82hsh.NS1:31333</a>, but because the name is dynamic, at any time it could change to something else and break POD2</li> <li>The solution is to deploy a service SVC1 that forwards requests to the DEPLOYMENT1 pods</li> <li>POD2 then calls <a href="http://SVC1:31333" rel="nofollow noreferrer">http://SVC1:31333</a>, which forwards the call to DEPLOYMENT1-f82hsh or whatever pod is available in DEPLOYMENT1.</li> </ul></li> </ul> <p>The scenarios above assume you <strong>haven't</strong> set the hostname or subdomain in the pod and are using the default configuration.</p> <p>In more advanced scenarios you would also use the cluster DNS suffix to call these services. The following docs describe everything in more detail: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
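<p>As a small illustration of the last point (SVC1/NS1/DEPLOYMENT1 are the placeholder names used above, and the label selector is an assumption), a Service gives the deployment's pods a stable DNS name:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: svc1
  namespace: ns1
spec:
  selector:
    app: deployment1        # must match the labels on DEPLOYMENT1's pods
  ports:
  - port: 31333
    targetPort: 31333
# from another namespace the service resolves as
#   http://svc1.ns1:31333   (full form: svc1.ns1.svc.cluster.local)
</code></pre>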
Diego Mendes
<p>I am in the process of migrating an application from Windows server to Kubernetes running on cloud.I was able to launch the application successfully in Kubernetes. Next is restoration and there is a script provided to restore the data to the new instance, but that should be run when the instance of application is shutdown. There is a script called stop.sh to shutdown the instance of that application. Both restore and shutdown script should be run inside the POD. I got into the POD using "Kubectl exec" , and then i tried to shutdown the instance using stop.sh. Then the instance will shutdown, but the POD will also exit along with it and i am not able run the restore.sh script inside the POD. So is there a way where i can keep the POD alive even after my application instance is shutdown to run the restore script.</p> <p>Regards, John </p>
John
<p>A POD is meant to stay alive while the main process is running; when the main process shuts down, the pod is terminated.</p> <p>What you want is <strong>not</strong> how to keep the POD alive, but how to refactor your application to work with Kubernetes properly.</p> <p>The restore phase you previously had in a script will likely run as an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a>. Init containers are specialized containers that run before the other containers in a Pod, so you could run your restore logic before the main application starts.</p> <p>The next phase is starting the application. The application should start by itself inside the container when the container is created, so the init container shouldn't affect its lifecycle. The application will assume the init container has done its work and the environment is set up properly.</p> <p>The next phase is termination. When a container is started or stopped, Kubernetes triggers <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">container hooks</a> for <code>PostStart</code> and <code>PreStop</code>. You likely need to use the <code>PreStop</code> hook to execute the custom script.</p> <p>The scenario above assumes minimal changes to the application. If you are willing to refactor the application to deploy it to Kubernetes, there are other ways this can be achieved, like using persistent volumes to store the data and re-use it when the container starts, so you won't need to do the backup and restore every time.</p>
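<p>A rough sketch of how that could look in the pod spec (the image name is an assumption; <code>restore.sh</code> and <code>stop.sh</code> are the scripts mentioned in the question):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: migrated-app
spec:
  initContainers:
  - name: restore
    image: my-app:latest              # assumed image
    command: ["sh", "-c", "./restore.sh"]
  containers:
  - name: app
    image: my-app:latest
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "./stop.sh"]
</code></pre>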
Diego Mendes
<p>We want our Prometheus installation to scrape the metrics of both containers within a pod. One container exposes the metrics via HTTPS at port 443, whereas the other container exposes them via HTTP at port 8080. Both containers provide the metrics at the same path, namely <code>/metrics</code>.</p> <p>If we declare the <em>prometheus.io/scheme</em> to be either http or https, only one container will be scraped. For the other one we always receive: <code>server returned HTTP status 400 Bad Request</code> The same happens if we do not define the <em>prometheus.io/scheme</em> at all. Prometheus will then use http for both ports, and fail for the container that exposes the metrics at port 443 as it would expect HTTPS requests only.</p> <p><strong>Is there a way to tell prometheus how exactly it shall scrape the individual containers within our deployment? What are feasible workarounds to acquire the metrics of both containers?</strong></p> <h3>Versions</h3> <p>Kubernetes: 1.10.2</p> <p>Prometheus: 2.2.1</p> <h3>Deployment excerpt</h3> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: xxx namespace: xxx spec: selector: matchLabels: app: xxx template: metadata: labels: app: xxx annotations: prometheus.io/scrape: "true" prometheus.io/path: "/metrics" spec: containers: - name: container-1 image: xxx ports: - containerPort: 443 - name: container-2 image: xxx ports: - containerPort: 8080 </code></pre> <h3>Prometheus configuration:</h3> <pre><code>- job_name: kubernetes-pods scrape_interval: 1m scrape_timeout: 10s metrics_path: /metrics scheme: http kubernetes_sd_configs: - api_server: null role: pod namespaces: names: [] relabel_configs: - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] separator: ; regex: "true" replacement: $1 action: keep - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] separator: ; regex: (.+) target_label: __metrics_path__ replacement: $1 action: replace - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] separator: ; regex: ([^:]+)(?::\d+)?;(\d+) target_label: __address__ replacement: $1:$2 action: replace - separator: ; regex: __meta_kubernetes_pod_label_(.+) replacement: $1 action: labelmap - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: kubernetes_namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: kubernetes_pod_name replacement: $1 action: replace </code></pre>
croeck
<p>I found a GIST snippet that takes the port from the container directly if it is named "metrics", instead of relying on a per-pod annotation. It also contains a comments to make this a regex for any port that starts with "metrics".</p> <p>Maybe you can extend it to also extract the schema from the port name, like "metrics-http" and "metrics-https". </p> <p><a href="https://gist.github.com/bakins/5bf7d4e719f36c1c555d81134d8887eb" rel="nofollow noreferrer">https://gist.github.com/bakins/5bf7d4e719f36c1c555d81134d8887eb</a></p> <pre><code># Example scrape config for pods # # The relabeling allows the actual pod scrape endpoint to be configured via the # following annotations: # # * `prometheus.io/scrape`: Only scrape pods that have a value of `true` # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. This # will be the same for every container in the pod that is scraped. # * this will scrape every container in a pod with `prometheus.io/scrape` set to true and the port is name `metrics` in the container # * note `prometheus.io/port` is no longer honored. You must name the port(s) to scrape `metrics` # Also, in some of the issues I read, there was mention of a container role, but I couldn't get # that to work - or find any more info on it. - job_name: 'kubernetes-pods' kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] action: keep regex: true - source_labels: [__meta_kubernetes_pod_container_port_name] action: keep regex: metrics - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] action: replace target_label: __metrics_path__ regex: (.+) - source_labels: [ __address__, __meta_kubernetes_pod_container_port_number] action: replace regex: (.+):(?:\d+);(\d+) replacement: ${1}:${2} target_label: __address__ - action: labelmap regex: __meta_kubernetes_pod_label_(.+) - source_labels: [__meta_kubernetes_namespace] action: replace target_label: kubernetes_namespace - source_labels: [__meta_kubernetes_pod_name] action: replace target_label: kubernetes_pod_name </code></pre>
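<p>With that approach, the deployment from the question would only need its container ports named to match the relabeling, for example (a sketch; the HTTPS port still needs the regex/scheme extension mentioned above):</p> <pre><code>containers:
- name: container-1
  image: xxx
  ports:
  - name: metrics-https       # only matched if you extend the regex/scheme handling
    containerPort: 443
- name: container-2
  image: xxx
  ports:
  - name: metrics
    containerPort: 8080
</code></pre>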
ahus1
<p>We are using <code>Mosquitto</code> MQTT broker, an IoT server implementation and devices which connect to the Broker as an Client. Both device and IoT server will be publishing to the broker.</p> <p>The devices publish requests to the topic : <code>/req/&lt;device-id&gt;/&lt;server-id&gt;</code></p> <p>The devices subscribe to responses : <code>/resp/&lt;device-id&gt;/&lt;server-id&gt;</code></p> <p>And if IoT server sends a notification to a particular device, it publish to notification topic to which device also subscribes:</p> <pre><code>/req/&lt;server-id&gt;/&lt;device-id&gt; </code></pre> <p><a href="https://i.stack.imgur.com/zLI0M.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zLI0M.jpg" alt="enter image description here" /></a></p> <p>So this scenario is working fine uptill now, but we want to shift our implementation on <code>Kubernetes</code> such that both MQTT broker and IoT server will have multiple pods running.</p> <p>So now any device can connect to any <code>mosquitto</code> pod instance and any <code>IoT server</code> pod would be connecting to <code>mosquitto</code> pod instance.</p> <p>So a device connected to <code>mosquitto</code> pod1 and <code>IoT server</code> pod1 might not receive notifications if pod2 of <code>IoT server</code> generates notification and sends it to <code>mosquitto</code> pod instance to which our device wasn't connected to.</p> <p>So <code>IoT server</code> will need the awareness that which <code>device</code> is connected to which pod instance to send notifications.</p> <p>How to achieve this in <code>Kubernetes</code> environment??</p>
Siddharth Trikha
<p>The short answer to this is: You don't, easily.</p> <p>The longer answer is:</p> <p>You will probably need to pick a different MQTT broker than mosquitto: mosquitto does not support clustering, so there is no (simple) way to run multiple instances that sessions and messages can be distributed across.</p> <p>You can set up bridging between multiple broker instances to ensure messages end up on all instances, but the best way to do this is with a star formation, with a &quot;central&quot; broker that then redistributes all the messages to the points of the star. The devices would then connect to these star instances. This does not solve any of the problems with distributed sessions.</p> <p>You will also probably need to look at shared subscriptions so that messages are only consumed by a single instance of the IoT Server.</p> <p>There are several other MQTT broker implementations that support proper clustering, e.g. HiveMQ and EMQX.</p>
hardillb
<p>After some readings, it seems there is no sustainable solution for <strong>auto-scaling</strong> Redis on Kubernetes without adding a controller like <a href="https://github.com/adenda/maestro/wiki/Kubernetes-Redis-controller-for-autoscaling-a-Redis-cluster" rel="nofollow noreferrer">Maestro</a>. Unfortunatly the project seems a bit dead. </p> <p>What are some alternatives for autoscaling Redis ? </p> <p>Edit: Redis is a statefull app.</p>
LoΓ―c Guzzetta
<p>If you want to autoscale anything on Kubernetes, it requires some type of controller. For general autoscaling, the community is rallying around the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>. By default, you configure it to scale based on CPU utilization.</p> <p>If you want to scale based on metrics other than CPU utilization and you're using the <a href="https://github.com/helm/charts/tree/master/stable/redis" rel="nofollow noreferrer">Redis helm chart</a>, you can easily configure it to run a Prometheus metric sidecar and can set the autoscaler to scale based on one of those values.</p>
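<p>For completeness, a minimal HPA targeting a Redis deployment could look like the sketch below (names and thresholds are assumptions; note that scaling a stateful Redis <em>cluster</em> safely still needs application-level coordination, which is what controllers like Maestro tried to provide):</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: redis-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: redis                 # assumed workload name
  minReplicas: 3
  maxReplicas: 6
  targetCPUUtilizationPercentage: 70
</code></pre>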
Grant David Bachman
<p>I'm looking for a command like "gcloud config get-value project" that retrieves a project's name, but for a pod (it can retrieve any pod name that is running). I know you can get multiple pods with "kubectl get pods", but I would just like one pod name as the result.</p> <p>I'm having to do this all the time: </p> <pre><code>kubectl get pods # add one of the pod names in next line kubectl logs -f some-pod-frontend-3931629792-g589c some-app </code></pre> <p>I'm thinking along the lines of "gcloud config get-value pod". Is there a command to do that correctly?</p>
sjsc
<p>There are many ways, here are some examples of solutions:</p> <p><code>kubectl get pods -o name --no-headers=true </code></p> <p><code>kubectl get pods -o=name --all-namespaces | grep kube-proxy</code></p> <p><code>kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{&quot;\n&quot;}}{{end}}'</code></p> <p>For additional reading, please take a look to these links:</p> <p><a href="https://stackoverflow.com/questions/35797906/kubernetes-list-all-running-pods-name">kubernetes list all running pods name</a></p> <p><a href="https://stackoverflow.com/questions/35773731/kubernetes-list-all-container-id?noredirect=1&amp;lq=1">Kubernetes list all container id</a></p> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/</a></p>
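<p>If the goal is specifically &quot;give me one pod name to feed into <code>kubectl logs</code>&quot;, something along these lines avoids the copy-paste step (the label selector is a placeholder):</p> <pre><code># grab the first matching pod name
POD=$(kubectl get pods -l run=nginx -o jsonpath='{.items[0].metadata.name}')
kubectl logs -f "$POD" some-app
</code></pre>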
Diego Mendes
<p>I installed kubectl and tried enable shell autocompletion for zsh. When I'm using <code>kubectl</code> autocompletion works fine. Howewer when I'm trying use autocompletion with alias <code>k</code> then shell return me</p> <pre><code>k g...(eval):1: command not found: __start_kubectl ξ‚² 8:45 (eval):1: command not found: __start_kubectl (eval):1: command not found: __start_kubectl </code></pre> <p>In my <code>.zshrc</code> file I have:</p> <pre><code>source &lt;(kubectl completion zsh) alias k=kubectl compdef __start_kubectl k </code></pre>
Dawid Krok
<p>If you use Oh My Zsh, what fixed it for me was updating:</p> <pre><code>omz update ... lots of output source ~/.zshrc </code></pre>
Jerry
<p>While I was running kubectl command in my ubuntu 16.04 os which is a 32 bit machine, I was getting</p> <blockquote> <p>cannot execute binary file: Exec format error</p> </blockquote> <p>Can some one tell me whether Kubernetes works on 32 bit machine or not ?</p>
Mammu yedukondalu
<p>Currently there are no ready-made binaries for 32bit systems at: <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.7.md#downloads-for-v1710" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.7.md#downloads-for-v1710</a></p> <p>You can build kubernetes from source though: <a href="https://kubernetes.io/docs/getting-started-guides/binary_release/#building-from-source" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/binary_release/#building-from-source</a></p> <p>As a commenter mentioned, there is support for 32bit systems for the client tool, kubectl: <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.7.md#client-binaries" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.7.md#client-binaries</a></p>
vascop
<p>If I want to save credential information in K8s and then retrieve it to use out of k8s, can I do it? and how?</p>
Ya He
<p>Yes you can, <em>but you probably shouldn't</em>.</p> <p>When you run <code>kubectl get secret</code> command, what it does behind the scenes is an api call to kubernetes api server.</p> <p>To access the secret outside the cluster you will need to:</p> <ul> <li>Have the Kubernetes api exposed to the client(if not in same network)</li> <li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Setup authentication</a> in order to create credentials used by external clients</li> <li>Call the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#secret-v1-core" rel="nofollow noreferrer">secrets endpoint</a>. The endpoint is something like this <code>/api/v1/namespaces/{namespace}/secrets</code></li> </ul> <p>As said previous, you probably shouldn't do it, there are many tools available in the market to do secret management, they would be better suited for this kind of situation.</p>
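<p>As a rough sketch of what that call could look like from outside the cluster (the API server address, token and secret name are placeholders, and the token needs RBAC permission to read secrets):</p> <pre><code>APISERVER=https://my-cluster.example.com:6443    # assumed endpoint
TOKEN=...                                        # credentials for the external client

curl -s --cacert ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/namespaces/default/secrets/my-secret"
</code></pre>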
Diego Mendes
<p>My Kubernetes application uses an Ingress to proxy requests to different servers, according to the URL given: I want a fanout configuration. I want the URLs of the requests <strong>not to be rewritten</strong> when forwarded to the servers. How do I do that?</p> <p>I want all the <code>/api</code> URLs forwarded to the <code>be</code> service, and all others forwarded to the <code>fe</code> service. But I want the URLs forwarded unchanged. For example</p> <ul> <li>a request for <code>/api/users</code> should be forwarded to the <code>be</code> service as a request for <code>/api/users</code>.</li> <li>a request for <code>/foo</code> should be forwarded to the <code>fe</code> service as a request for <code>/foo</code>.</li> </ul> <p>My current Ingress resource is like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: ... spec: ... rules: - host: ... - http: paths: - path: /api backend: serviceName: be servicePort: 8080 - path: / backend: serviceName: fe servicePort: 80 </code></pre> <p>but that does not work; it gives 404 Not Found for requests.</p>
Raedwald
<p>The Kubernetes ingress isn't rewriting your request URLs, the ingress controller is doing this (whatever you happen to be using). For instance, if your ingress controller is Nginx, you can control this behavior with <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">annotations</a> on the ingress.</p>
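<p>With ingress-nginx in particular, simply omitting any <code>rewrite-target</code> annotation leaves the matched path untouched, so a fanout like the one in the question could be written roughly as (service names and ports copied from the question):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout
  annotations:
    kubernetes.io/ingress.class: nginx    # note: no rewrite-target annotation
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: be
          servicePort: 8080
      - path: /
        backend:
          serviceName: fe
          servicePort: 80
</code></pre>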
Grant David Bachman
<p>Need to deploy cluster to GCP and setup, helm, ingress and some other stuff without manually running gcloud command. Tried many ways google_container_cluster with and without certs and user/pass. I get two kind of results:</p> <p><code>Error: serviceaccounts is forbidden: User "system:anonymous" cannot create resource "serviceaccounts" in API group "" in the namespace "kube-system"</code> or <code>Error: serviceaccounts is forbidden: User "client" cannot create resource "serviceaccounts" in API group "" in the namespace "kube-system"</code>.</p> <p>What I managed to understand is if I generate certs gke will have default user "client" corresponding to cert it will create otherwise it will keep default user "anonymous" - no user.</p> <p>My issue is I cannot find way to tell <code>google_container_cluster</code> to use specific account nor tell <code>provider "kubernetes"</code> to take any user. Also cannot find a way to apply RBAC file to cluster without authenticating via <code>gcloud</code>.</p>
Aram
<p>I solved this issue by updating how Terraform connects to the Kubernetes cluster. When I changed the backend to "remote" (Terraform Cloud) it stopped working and I got the same kind of error message; this is because with the "remote" backend Terraform doesn't use the local kubectl config.</p> <p>See for example: <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/347" rel="nofollow noreferrer">https://github.com/terraform-providers/terraform-provider-kubernetes/issues/347</a></p> <p>So I added a data block to get the config:</p> <pre><code>data "google_client_config" "default" { } </code></pre> <p>Then I updated the provider from using "client_certificate" and "client_key" to "token":</p> <pre><code>provider "kubernetes" { load_config_file = false host = data.google_container_cluster.gke-cluster.endpoint token = data.google_client_config.default.access_token cluster_ca_certificate = base64decode(data.google_container_cluster.gke-cluster.master_auth.0.cluster_ca_certificate) } </code></pre> <p>Hope this is useful for someone else.</p>
Neoh59
<p>I have deployed <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html" rel="nofollow noreferrer">ECK</a> on my kubernetes cluster(all vagrant VMs). The cluster has following config.</p> <pre><code>NAME STATUS ROLES AGE VERSION kmaster1 Ready control-plane,master 27d v1.21.1 kworker1 Ready &lt;none&gt; 27d v1.21.1 kworker2 Ready &lt;none&gt; 27d v1.21.1 </code></pre> <p>I have also setup a loadbalancer with HAProxy. The loadbalancer config is as following(created my own private cert)</p> <pre><code>frontend http_front bind *:80 stats uri /haproxy?stats default_backend http_back frontend https_front bind *:443 ssl crt /etc/ssl/private/mydomain.pem stats uri /haproxy?stats default_backend https_back backend http_back balance roundrobin server kworker1 172.16.16.201:31953 server kworker2 172.16.16.202:31953 backend https_back balance roundrobin server kworker1 172.16.16.201:31503 check-ssl ssl verify none server kworker2 172.16.16.202:31503 check-ssl ssl verify none </code></pre> <p>I have also deployed an nginx ingress controller and 31953 is the http port of the nginx controller 31503 is the https port of nginx controller</p> <pre><code>nginx-ingress nginx-ingress-controller-service NodePort 10.103.189.197 &lt;none&gt; 80:31953/TCP,443:31503/TCP 8d app=nginx-ingress </code></pre> <p>I am trying to make the kibana dashboard available outside of the cluster on https. It works fine and I can access it within the cluster. However I am unable to access it via the loadbalancer.</p> <p>Kibana Pod:</p> <pre><code>default quickstart-kb-f74c666b9-nnn27 1/1 Running 4 27d 192.168.41.145 kworker1 &lt;none&gt; &lt;none&gt; </code></pre> <p>I have mapped the loadbalancer to the host</p> <pre><code>172.16.16.100 elastic.kubekluster.com </code></pre> <p>Any request to <a href="https://elastic.kubekluster.com" rel="nofollow noreferrer">https://elastic.kubekluster.com</a> results in the following error(logs from nginx ingress controller pod)</p> <pre><code> 10.0.2.15 - - [20/Jun/2021:17:38:14 +0000] &quot;GET / HTTP/1.1&quot; 502 157 &quot;-&quot; &quot;Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0&quot; &quot;-&quot; 2021/06/20 17:38:14 [error] 178#178: *566 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.15, server: elastic.kubekluster.com, request: &quot;GET / H TTP/1.1&quot;, upstream: &quot;http://192.168.41.145:5601/&quot;, host: &quot;elastic.kubekluster.com&quot; </code></pre> <p>HAproxy logs are following</p> <pre><code>Jun 20 18:11:45 loadbalancer haproxy[18285]: 172.16.16.1:48662 [20/Jun/2021:18:11:45.782] https_front~ https_back/kworker2 0/0/0/4/4 502 294 - - ---- 1/1/0/0/0 0/0 &quot;GET / HTTP/1.1&quot; </code></pre> <p>The ingress is as following</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: kubekluster-elastic-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; nginx.ingress.kubernetes.io/default-backend: quickstart-kb-http nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/proxy-connect-timeout: &quot;600s&quot; nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;600s&quot; nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;600s&quot; nginx.ingress.kubernetes.io/proxy-body-size: 20m spec: tls: - hosts: - elastic.kubekluster.com rules: - host: elastic.kubekluster.com http: paths: - path: / pathType: Prefix backend: service: name: quickstart-kb-http port: 
number: 5601 </code></pre> <p>I think the request is not reaching the kibana pod because I don't see any logs in the pod. Also I don't understand why Haproxy is sending the request as HTTP instead of HTTPS. Could you please point to any issues with my configuration?</p>
bluelurker
<p>I hope this helps ... Here is how I set a &quot;LoadBalancer&quot; using nginx and forward traffic to HTTPS services:</p> <pre><code> kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME asd-master-1 Ready master 72d v1.19.8 192.168.1.163 213.95.154.199 Ubuntu 20.04.2 LTS 5.8.0-45-generic docker://20.10.6 asd-node-1 Ready &lt;none&gt; 72d v1.19.8 192.168.1.101 &lt;none&gt; Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15 asd-node-2 Ready &lt;none&gt; 72d v1.19.8 192.168.0.5 &lt;none&gt; Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15 asd-node-3 Ready &lt;none&gt; 15d v1.19.8 192.168.2.190 &lt;none&gt; Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15 </code></pre> <p>This is the service for nginx:</p> <pre><code># kubectl get service -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx NodePort 10.101.161.113 &lt;none&gt; 80:30337/TCP,443:31996/TCP 72d </code></pre> <p>And this is the LoadBalancer configuration:</p> <pre><code># cat /etc/nginx/nginx.conf ... trimmed ... stream { upstream nginx_http { least_conn; server asd-master-1:30337 max_fails=3 fail_timeout=5s; server asd-node-1:30337 max_fails=3 fail_timeout=5s; server asd-node-2:30337 max_fails=3 fail_timeout=5s; } server { listen 80; proxy_pass nginx_http; proxy_protocol on; } upstream nginx_https { least_conn; server 192.168.1.163:31996 max_fails=3 fail_timeout=5s; server 192.168.1.101:31996 max_fails=3 fail_timeout=5s; server 192.168.0.5:31996 max_fails=3 fail_timeout=5s; } server { listen 443; proxy_pass nginx_https; proxy_protocol on; } } </code></pre> <p>The relevant part is that I am sending the proxy protocol. You will need to configure nginx ingress (in the configuration map) to accept this, and maybe add the correct syntax to haproxy configuration.</p> <p>This might be something like:</p> <pre><code>backend https_back balance roundrobin server kworker1 172.16.16.201:31503 check-ssl ssl verify none send-proxy-v2 server kworker2 172.16.16.202:31503 check-ssl ssl verify none send-proxy-v2 </code></pre> <p>Nginx Ingress configuration should be:</p> <pre><code># kubectl get configmap -n ingress-nginx nginx-configuration -o yaml apiVersion: v1 data: use-proxy-protocol: &quot;true&quot; kind: ConfigMap metadata: ... </code></pre> <p>I hope this puts you on the right track.</p>
oz123
<p>We have hundreds of deployment and in the config we have imagePullPolicy set as β€œifnotpresent” for most of them and for few it is set to β€œalways” now I want to modify all deployment which has <strong>ifnotpresent</strong> to <strong>always</strong>.</p> <p>How can we achieve this with at a stroke?</p> <p>Ex:</p> <pre><code>kubectl get deployment -n test -o json | jq β€˜.spec.template.spec.contianer[0].imagePullPolicy=β€œifnotpresent”| kubectl -n test replace -f - </code></pre> <p>The above command helps to reset it for one particular deployment.</p>
Nishanth
<p>Kubernetes doesn't natively offer mass update capabilities. For that you'd have to use other CLI tools. That being said, for modifying existing resources, you can also use the <code>kubectl patch</code> function.</p> <p>The script below isn't pretty, but will update all deployments in the namespace.</p> <pre><code>kubectl get deployments -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch deployment {} --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]' </code></pre> <p>Note: I used <code>sed</code> to strip the resource type from the name as kubectl doesn't recognize operations performed on resources of type <code>deployment.extensions</code> (and probably others).</p>
Grant David Bachman
<p>Looking if the below scenario is possible or not -</p> <p>Lets say user(<code>user1</code>) have access only to namespaces <code>default</code> and <code>marketing</code>. </p> <p>When we perform <code>kubectl get ns</code> it should display both namespaces.</p> <p>No other namespaces should be displayed even if they exists because the <code>user1</code> does not have access to any other namespaces.</p> <p>We could relate this scenario with the databases where a user can see only the databases they have access to when <code>show databases</code> is performed</p>
Avinash Reddy
<p>This isn't possible in Kubernetes. Namespaces are the resources providing the scoping mechanism to limit visibility into other resources. There's no meta-namespace that provides scoping rules for namespaces.</p>
Grant David Bachman
<p>When we run <code>helm install ./ --name release1 --namespace namespace1</code>, it creates the chart only if none of the deployments exist then it fails saying that the deployment or secret or any other objects already exist. </p> <p>I want the functionality to create Kubernetes deployment or objects as part of helm install only those objects or deployments already not exists if exists helm should apply the templates instead of creating.</p> <p>I have already tried 'helm install' by having a secret and the same secret is also there in the helm templates, so helm installs fail.</p>
Chandra
<p>In recent helm versions you can run <code>helm upgrade --install</code> which does an upgrade-or-install.</p> <p>Another alternative is you can use <code>helm template</code> to generate a template and pipe it to <code>kubectl apply -f -</code>. This way you can install or upgrade with the same command.</p>
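<p>Using the release from the question, the two approaches look roughly like this:</p> <pre><code># install the first time, upgrade afterwards - same command
helm upgrade --install release1 ./ --namespace namespace1

# or render the chart and let kubectl reconcile whatever already exists
helm template ./ | kubectl apply -f -
</code></pre>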
Jonathan
<p>I have an Airflow 1.10.15 (a.k.a. Bridge Release) in my AWS Kubernetes cluster. It uses KubernetesExecutor.</p> <p>I have a Hello World KubernetesExecutor DAG which should print Hello World. When triggering the DAG, it creates a pod but it never prints the Hello World.</p> <p>Here are all the logs after the pod has been completed running: <a href="https://i.stack.imgur.com/LP51M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LP51M.png" alt="enter image description here" /></a></p> <p>Describing the pod will give me logs which has no errors or failures: <a href="https://i.stack.imgur.com/2ptQu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ptQu.png" alt="enter image description here" /></a></p>
Felix Labayen
<p>You should check the task logs, not the Kubernetes logs. Kubernetes logs keep information about &quot;attempting to run&quot; the task (and it looks like that is all OK here).</p> <p>Now, when you log anything in the running tasks, it does not go to the K8s logs - it goes to the task logs. By default, when you configure Airflow, the logs for tasks are stored separately - basically every task has its own log. This is so that you can pull the logs and see them in the Airflow UI when you click on &quot;logs&quot; for this particular task execution.</p> <p>Just check it in the UI or in the &quot;${AIRFLOW_HOME}/logs&quot; folder.</p>
Jarek Potiuk
<p>I am very new creating CD pipeline to grape image from Azure Container Registry(ACR) and push it into the Azure Kubernetes(AKS), In first part like in CI pipeline I am able to push my .netcore api image into the ACR, now my aim is to </p> <blockquote> <p>Create CD pipeline to grape that image and deploy it to Kubernetes</p> </blockquote> <p>Although I have created Kubernetes cluster in Azure with running 3 agents. I want to make it very simple without involving any deployment.yaml file etc, Can any one help me out how i can achieve this goal and </p> <blockquote> <p>What is the exact tasks in my CD pipeline ?</p> </blockquote> <p>Thanks for the help in advance</p>
Saad Awan
<p>Creating the YAML file is critical for being able to redeploy and track what is happening. If you don't want to create YAML then you have limited options. You could execute the imperative command from Azure DevOps by using a kubectl task.</p> <pre><code>kubectl create deployment &lt;name&gt; --image=&lt;image&gt;.azureacr.io </code></pre> <p>Or you can use the Kubernetes provider for Terraform to avoid creating YAML directly. </p> <p>Follow up:</p> <p>So if you are familiar with the Kubernetes imperative commands, you can use them to generate your YAML with the --dry-run and --output options, like so:</p> <pre><code>kubectl create deployment &lt;name&gt; --image=&lt;image&gt;.azureacr.io --dry-run --output yaml &gt; example.yaml </code></pre> <p>That would produce something that looks like this, which you can use to bootstrap your manifest file.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: example name: example spec: replicas: 1 selector: matchLabels: app: example strategy: {} template: metadata: creationTimestamp: null labels: app: example spec: containers: - image: nginx name: nginx resources: {} status: {} </code></pre> <p>Now you can pull that repo or an artifact that contains that manifest into your Azure DevOps Release Pipeline and add the "Deploy to Kubernetes Cluster" task.</p> <p><a href="https://i.stack.imgur.com/feHUv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/feHUv.png" alt="enter image description here"></a></p> <p>This should get you pretty close to completing a pipeline.</p>
Jamie
<p>I am using <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">kubernetes/client-go</a> to retrieve some pod logs. I am able to retrieve logs should the pod have one container as such</p> <pre class="lang-golang prettyprint-override"><code>req := client.CoreV1().Pods("namespace").GetLogs("mypod", &amp;corev1.PodLogOptions{}) logs, err := req.Stream() [...] </code></pre> <p>This works well, until I encounter a pod that has <em>more than one container,</em> to which I get the following error</p> <blockquote> <p>a container name must be specified for pod xxx, choose one of: [aaa bbb] or one of the init containers: [aaa bbb]</p> </blockquote> <p>I was hoping to find an accommodating field on the <a href="https://godoc.org/k8s.io/api/core/v1#PodLogOptions" rel="nofollow noreferrer"><code>corev1.PodLogOptions</code></a> object, but am only finding a specific <code>Container</code> field.</p> <p>I'm searching for an <code>--all-containers</code> equivalent as offered with the REST client. </p> <pre class="lang-sh prettyprint-override"><code>$ kubectl logs mypod --all-containers </code></pre> <p>Is this possible? Any alternatives?</p>
scniro
<p>If you take a look at the <code>kubectl</code> code, it just gets all relevant containers in a pod and then iterates over them, gathering logs container by container. So I don't think there's a REST API endpoint that would do that for you.</p> <p>See here: <a href="https://github.com/kubernetes/kubectl/blob/19fd05792d8c806a5024d6bbbdd7d66d3234cbcb/pkg/polymorphichelpers/logsforobject.go#L86" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/blob/19fd05792d8c806a5024d6bbbdd7d66d3234cbcb/pkg/polymorphichelpers/logsforobject.go#L86</a> </p>
blami
<p>I have a multistage pipeline with the following</p> <p>Stage build:</p> <ol> <li>build docker image</li> <li>push image to ACR</li> <li>package helm chart</li> <li>push helm chart to ACR</li> </ol> <p>Stage deployment:</p> <ol> <li>helm upgrade</li> </ol> <p><strong>Push helm chart to AKS:</strong></p> <pre><code> task: HelmDeploy@0 displayName: 'helm publish' inputs: azureSubscriptionForACR: '$(azureSubscription)' azureResourceGroupForACR: '$(resourceGroup)' azureContainerRegistry: '$(containerRegistry)' command: 'save' arguments: '--app-version $(Version)' chartNameForACR: 'charts/$(imageRepository):$(Version)' chartPathForACR: $(chartPath) </code></pre> <p><strong>Deploy helm chart to AKS:</strong></p> <pre><code> task: HelmDeploy@0 inputs: connectionType: 'Kubernetes Service Connection' kubernetesServiceConnection: '$(kubernetesServiceConnection)' command: 'upgrade' chartType: 'Name' chartName: '$(containerRegistry)/charts/$(imageRepository):$(Version)' chartVersion: '$(Version)' azureSubscriptionForACR: '$(azureSubscription)' azureResourceGroupForACR: '$(resourceGroup)' azureContainerRegistry: '$(containerRegistry)' install: true releaseName: $(Version) </code></pre> <p><strong>Error:</strong></p> <pre><code>failed to download &quot;&lt;ACR&gt;/charts/&lt;repository&gt;:0.9.26&quot; at version &quot;0.9.26&quot; (hint: running `helm repo update` may help) </code></pre> <p><strong>ACR:</strong> <code>az acr repository show-manifests --name &lt;org&gt; --repository helm/charts/&lt;repository&gt; --detail</code></p> <pre><code> { &quot;changeableAttributes&quot;: { &quot;deleteEnabled&quot;: true, &quot;listEnabled&quot;: true, &quot;readEnabled&quot;: true, &quot;writeEnabled&quot;: true }, &quot;configMediaType&quot;: &quot;application/vnd.cncf.helm.config.v1+json&quot;, &quot;createdTime&quot;: &quot;2021-02-02T11:54:54.1623765Z&quot;, &quot;digest&quot;: &quot;sha256:fe7924415c4e76df370630bbb0248c9296f27186742e9272eeb87b2322095c83&quot;, &quot;imageSize&quot;: 3296, &quot;lastUpdateTime&quot;: &quot;2021-02-02T11:54:54.1623765Z&quot;, &quot;mediaType&quot;: &quot;application/vnd.oci.image.manifest.v1+json&quot;, &quot;tags&quot;: [ &quot;0.9.26&quot; ] } </code></pre> <p>What am I doing wrong? Do I have to <code>export</code> the helm chart from ACR before I can deploy it?</p>
Michael
<p>The answer from @sshepel actually helped somewhat, you need to login to the registry before being able to pull. However, it is sufficient with a simple AzureCLI login.</p> <pre><code> - task: AzureCLI@2 displayName: Login to Azure Container Registry inputs: azureSubscription: &lt;Azure Resource Manager service connection to your subscription and resource group&gt; scriptType: bash scriptLocation: inlineScript inlineScript: | az acr login --name &lt;container registry name&gt;.azurecr.io </code></pre> <p>After that it worked perfectly with the undocumented HelmDeploy task.</p>
JMag
<p>I have a container that has a ping endpoint (returns pong) and I want to probe the ping endpoint and see if I get a pong back. If it was just to check 200 , I could have added a liveliness check in my pod like this -></p> <pre><code>livenessProbe: initialDelaySeconds: 2 periodSeconds: 5 httpGet: path: /ping port: 9876 </code></pre> <p>How do I modify this to check to see if I get a <code>pong</code> response back? </p>
Illusionist
<p>As the HTTP probe only checks the status code of the response, you need to use an exec probe to run a command in the container. Something like this, which requires <code>curl</code> to be installed in the container:</p> <pre><code>livenessProbe: initialDelaySeconds: 2 periodSeconds: 5 exec: command: - sh - -c - curl -s http://localhost:9876/ping | grep pong </code></pre>
doelleri
<p>airflow 1.10.10 <br /> minikube 1.22.0 <br /> amazon emr</p> <p>I am running airflow on kubernetes(minikube). Dags are synced from github. spark-submit on Amazon EMR as a CLI mode.</p> <p>In order to do that, I attach EMR pem key. So, I get pem key from AWS S3 while ExtraInitContainer is getting image awscli and mount the volume at airlfow/sshpem</p> <p>error is reported when I make a connection from airflow WebUI as &quot;con_type&quot;: &quot;ssh&quot; &quot;key_file&quot;: &quot;/opt/sshepm/emr.pem&quot;</p> <pre><code>SSH operator error: [Errno 2] No such file or directory: '/opt/airflow/sshpem/emr.pem' </code></pre> <p>it is there. I think it is related to some PATH or permission issue since I get emr.pem on ExtraInitContainer and it's permission was root. Although I temporarily changed a user as 1000:1000 there is some issue airflow WebUI can't get this directory while getting a key.</p> <p>Full log is below</p> <pre><code>&gt; Traceback (most recent call last): File &gt; &quot;/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/operators/ssh_operator.py&quot;, &gt; line 108, in execute &gt; with self.ssh_hook.get_conn() as ssh_client: File &quot;/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/hooks/ssh_hook.py&quot;, &gt; line 194, in get_conn &gt; client.connect(**connect_kwargs) File &quot;/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py&quot;, &gt; line 446, in connect &gt; passphrase, File &quot;/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py&quot;, &gt; line 677, in _auth &gt; key_filename, pkey_class, passphrase File &quot;/home/airflow/.local/lib/python3.6/site-packages/paramiko/client.py&quot;, &gt; line 586, in _key_from_filepath &gt; key = klass.from_private_key_file(key_path, password) File &quot;/home/airflow/.local/lib/python3.6/site-packages/paramiko/pkey.py&quot;, &gt; line 235, in from_private_key_file &gt; key = cls(filename=filename, password=password) File &quot;/home/airflow/.local/lib/python3.6/site-packages/paramiko/rsakey.py&quot;, &gt; line 55, in __init__ &gt; self._from_private_key_file(filename, password) File &quot;/home/airflow/.local/lib/python3.6/site-packages/paramiko/rsakey.py&quot;, &gt; line 175, in _from_private_key_file &gt; data = self._read_private_key_file(&quot;RSA&quot;, filename, password) File &gt; &quot;/home/airflow/.local/lib/python3.6/site-packages/paramiko/pkey.py&quot;, &gt; line 307, in _read_private_key_file &gt; with open(filename, &quot;r&quot;) as f: FileNotFoundError: [Errno 2] No such file or directory: '/opt/airflow/sshpem/emr-pa.pem' &gt; &gt; During handling of the above exception, another exception occurred: &gt; &gt; Traceback (most recent call last): File &gt; &quot;/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py&quot;, &gt; line 979, in _run_raw_task &gt; result = task_copy.execute(context=context) File &quot;/opt/airflow/class101-airflow/plugins/operators/emr_ssh_operator.py&quot;, &gt; line 107, in execute &gt; super().execute(context) File &quot;/home/airflow/.local/lib/python3.6/site-packages/airflow/contrib/operators/ssh_operator.py&quot;, &gt; line 177, in execute &gt; raise AirflowException(&quot;SSH operator error: {0}&quot;.format(str(e))) airflow.exceptions.AirflowException: SSH operator error: [Errno 2] No &gt; such file or directory: '/opt/airflow/sshpem/emr-pa.pem' [2021-07-14 &gt; 05:40:31,624] Marking task as UP_FOR_RETRY. 
dag_id=test_staging, &gt; task_id=extract_categories_from_mongo, execution_date=20210712T190000, &gt; start_date=20210714T054031, end_date=20210714T054031 [2021-07-14 &gt; 05:40:36,303] Task exited with return code 1 </code></pre> <p>airflow home: /opt/airflow <br /> dags : /opt/airflow//dags <br /> pemkey : /opt/sshpem/ <br /> airflow.cfg: /opt/airflow <br /> airflow_env: export PATH=&quot;/home/airflow/.local/bin:$PATH&quot;</p> <p>my yaml file</p> <pre><code>airflow: image: repository: airflow executor: KubernetesExecutor extraVolumeMounts: - name: sshpem mountPath: /opt/airflow/sshpem extraVolumes: - name: sshpem emptyDir: {} scheduler: extraInitContainers: - name: emr-key-file-download image: amazon/aws-cli command: [ &quot;sh&quot;, &quot;-c&quot;, &quot;aws s3 cp s3://mykeyfile/path.my.pem&amp;&amp; \ chown -R 1000:1000 /opt/airflow/sshpem/&quot; volumeMounts: - mountPath: /opt/airflow/sshpem name: sshpem </code></pre>
Yong Rhee
<p>Are you using KubernetesExecutor or CeleryExecutor?</p> <p>If the former, you have to make sure the extra init container is added to the pod_template you are using (tasks in KubernetesExecutor run as separate pods).</p> <p>If the latter, you should make sure the extra init container is also added for the workers, not only for the scheduler.</p> <p>BTW, Airflow 1.10 reached end-of-life on June 17th, 2021 and it will not receive even critical security fixes. You can watch our talk from the recent Airflow Summit &quot;Keep your Airflow Secure&quot; - <a href="https://airflowsummit.org/sessions/2021/panel-airflow-security/" rel="nofollow noreferrer">https://airflowsummit.org/sessions/2021/panel-airflow-security/</a> to learn why it is important to upgrade to Airflow 2.</p>
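<p>For illustration only - a hedged sketch of how the same init container could be declared for the worker pods in the chart values. The exact keys (<code>workers.extraInitContainers</code>) and the destination file name are assumptions that depend on which Helm chart and executor you use:</p> <pre><code>workers:
  extraInitContainers:
    - name: emr-key-file-download
      image: amazon/aws-cli
      # copy the key and make it readable by the airflow user (uid 1000)
      command:
        - sh
        - -c
        - "aws s3 cp s3://mykeyfile/path.my.pem /opt/airflow/sshpem/emr.pem &amp;&amp; chown -R 1000:1000 /opt/airflow/sshpem/"
      volumeMounts:
        - mountPath: /opt/airflow/sshpem
          name: sshpem
</code></pre>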
Jarek Potiuk
<p>I have set up a Kubernetes cluster using kops on AWS.</p> <p>It has 4 worker nodes and one master node.</p> <p>It has deployments for each microservice, i.e. the customer deployment has two pods.</p> <p>I need to make calls to some API from these pods.</p> <p>Whenever I make a request from these pods, the source IP is by default the node's IP.</p> <p>I want a unified IP address for any outgoing request from the cluster.</p> <p>I am already using an <code>internet gateway</code> and <code>ingress nginx controller</code> for incoming requests.</p> <p>Someone suggested creating a <code>NAT gateway</code>.</p> <p>I created it and allocated an elastic IP address. Still, it's not working and the requests keep using the IP of the node on which the pod is deployed.</p>
confusedWarrior
<p>I think the tool you want here is an egress IP. I don't know the specifics for AWS/kops, but an egress IP has worked for Azure Kubernetes Service for the same situation.</p>
Danny Staple
<p>I'm running SystemTap on CentOS Linux release 7.6.1810. The version of SystemTap is:</p> <pre class="lang-sh prettyprint-override"><code>$ stap -V Systemtap translator/driver (version 4.0/0.172/0.176, rpm 4.0-11.el7) Copyright (C) 2005-2018 Red Hat, Inc. and others This is free software; see the source for copying conditions. tested kernel versions: 2.6.18 ... 4.19-rc7 enabled features: AVAHI BOOST_STRING_REF DYNINST BPF JAVA PYTHON2 LIBRPM LIBSQLITE3 LIBVIRT LIBXML2 NLS NSS READLINE $ uname -rm 3.10.0-957.21.3.el7.x86_64 x86_64 $ rpm -qa | grep kernel-devel kernel-devel-3.10.0-957.21.3.el7.x86_64 $ rpm -qa | grep kernel-debuginfo kernel-debuginfo-3.10.0-957.21.3.el7.x86_64 kernel-debuginfo-common-x86_64-3.10.0-957.21.3.el7.x86_64 </code></pre> <p>I have a systemTap script named sg.stp, which use to monitor why k8s pods of a rabbitmq cluster terminated with exit code 137 occasionally:</p> <pre class="lang-c prettyprint-override"><code>global target_pid = 32719 probe signal.send{ if (sig_pid == target_pid) { printf(&quot;%s(%d) send %s to %s(%d)\n&quot;, execname(), pid(), sig_name, pid_name, sig_pid); printf(&quot;parent of sender: %s(%d)\n&quot;, pexecname(), ppid()) printf(&quot;task_ancestry:%s\n&quot;, task_ancestry(pid2task(pid()), 1)); } } </code></pre> <p>When I run the script, it reported an error after a while:</p> <pre class="lang-sh prettyprint-override"><code>$ stap sg.stp ERROR: read fault [man error::fault] at 0x4a8 near operator '@cast' at /usr/share/systemtap/tapset/linux/task.stpm:2:5 epmd(29073) send SIGCHLD to rabbitmq-server(32719) parent of sender: rabbitmq-server(32719) WARNING: Number of errors: 1, skipped probes: 0 WARNING: /usr/bin/staprun exited with status: 1 Pass 5: run failed. [man error::pass5] </code></pre>
visionken
<p><code>pid2task()</code> can return NULL.</p> <p>Check for <code>pid2task(pid())</code> or <code>current_task()</code> returning NULL, like this:</p> <pre><code>task = pid2task(pid()); if (task) { printf(&quot;task_ancestry:%s\n&quot;, task_ancestry(task, 1)); } else { printf(&quot;task_ancestry not available\n&quot;); } </code></pre> <hr /> <p>Note that I am not completely sure about the following explanation:</p> <p>It can happen that the task_struct is no longer available, even when you are in the context of the running <code>pid()</code>, because the process has already died and the task_struct was cleaned up because it is no longer needed.</p> <p>In that case <code>pid2task()</code> returns NULL. AFAICS this can happen to <code>pid()</code> in the following two situations (and perhaps more):</p> <ul> <li><p>Your probe is asynchronous to the running process - with signal probes, as in your case, this seems to be what happens.</p> </li> <li><p>The <code>.return</code> probe executes too late, perhaps because it was stuck in the kernel too long (as with blocking calls).</p> </li> </ul> <p>For the latter there seems to be an easy workaround:</p> <p>Instead of <code>task_ancestry(current_task())</code> use <code>@entry(task_ancestry(current_task()))</code>. This way the data is gathered at the entry point of the syscall, where it is very likely that the process is still perfectly alive.</p> <p>However, in your signal case I do not see such a simple workaround, hence you must check for NULL.</p> <hr /> <p>Note that I am not completely sure that this is your problem, nor that checking for NULL without some page locking is the perfect solution. Even if you get a pointer to some structure, the pages which contain the structure might go away in the middle of the probe, thanks to SMP. Perhaps <code>stap</code> somehow protects against this, but I doubt it. Race conditions like this are really weird to debug and avoid.</p>
Tino
<p>I am new to K8s and trying to create a Helm chart to set up my application.</p> <p>I want a frictionless experience for users setting up the application, without much manual intervention.</p> <p>Creating the Helm chart, I was pleased with the provided templating functionality but found one essential thing missing: creating passwords.</p> <p>I don't want the user to have to create the passwords for my API to talk to Redis etc.</p> <p>Setting up Vault is also one of the more difficult parts, as its key has to be created initially, it then needs to be unsealed, and resources like userpass and other engines have to be created.</p> <p>For a docker-compose setup of the same app I have an &quot;install container&quot; that generates the passwords, creates resources on Vault with its API, etc.</p> <p>Is there another possibility using kubernetes/helm?</p> <p>Thanks</p>
firstdorsal
<p>You could try <a href="https://github.com/bitnami-labs/sealed-secrets" rel="nofollow noreferrer">Sealed Secrets</a>. It stores secrets encrypted with asymmetric keys, so the secrets can only be restored by whoever holds the proper keys.</p>
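<p>As a rough sketch of the workflow (secret name, key and file names below are placeholders, and the exact <code>kubeseal</code> flags can vary slightly between versions): you generate a plain Secret locally, encrypt it against the controller's public key, and commit only the sealed version.</p> <pre><code># plain Secret, generated locally and never committed
kubectl create secret generic redis-pass \
  --from-literal=password=S3cr3t \
  --dry-run=client -o yaml &gt; secret.yaml

# encrypt it; only the SealedSecret goes into git
kubeseal --format yaml &lt; secret.yaml &gt; sealed-secret.yaml

kubectl apply -f sealed-secret.yaml
</code></pre> <p>The controller running in the cluster is then the only party able to decrypt it back into a normal Secret.</p>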
Gonzalo Matheu
<p>I updated my k8 cluster to 1.18 recently. Afterwards I had to recreate a (previously functional) loadBalancer service. It seemed to come up properly but I was unable to access the external ip afterwards. Looking at the dump from <code>kubectl describe service</code> I don't see a field for &quot;loadbalancer ingress&quot; that I see on other services that didn't get restarted.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: search-master labels: app: search role: master spec: selector: app: search role: master ports: - protocol: TCP port: 9200 targetPort: 9200 name: serviceport - port: 9300 targetPort: 9300 name: dataport type: LoadBalancer loadBalancerIP: 10.95.96.43 </code></pre> <p>I tried adding this (to no avail):</p> <pre><code>status: loadBalancer: ingress: - ip: 10.95.96.43 </code></pre> <p>What have I missed here?</p> <hr /> <h1>Updates:</h1> <ul> <li>Cluster is running in a datacenter. 10 machines + 1 master (vm)</li> <li>&quot;No resources found&quot;</li> </ul> <p>Another odd thing: when I dump the service as yaml I get this entry at the top:</p> <pre><code>apiVersion: v1 items: - apiVersion: v1 kind: Service ... spec: clusterIP: &lt;internal address&gt; ... type: LoadBalancer status: loadBalancer: {} kind: List metadata: resourceVersion: &quot;&quot; selfLink: &quot;&quot; </code></pre> <p>Something wrong with my yml?</p>
ethrbunny
<p>For anyone finding this later - this was likely due to a MetalLB version conflict. Note that 1.17 -&gt; 1.18 introduces some breaking changes.</p>
ethrbunny
<p>I configured an automatic build of my Angular 6 app and deployment in Kubernetes each time a push is made to my code repository (Google Cloud Repository).</p> <p>Dev environment variables are classically stored in an environment.ts file like this:</p> <pre><code>export const environment = { production: false, api_key: "my_dev_api_key" }; </code></pre> <p>But I don't want to put my Prod secrets in my repository, so I figured I could use Kubernetes secrets.</p> <p>So, I create a secret in Kubernetes:</p> <pre><code>kubectl create secret generic literal-token --from-literal api_key=my_prod_api_key </code></pre> <p>But how do I use it in my Angular app?</p>
Manuel RODRIGUEZ
<p>No matter what you do, your Angular app is a <em>client</em> application, i.e. the user's browser downloads the source code of the app (a bunch of CSS/JS/HTML files, images etc.) and executes it on the user's machine. So you can't hide anything the way you can when implementing a <em>client/server</em> app. In client/server applications all the secrets reside in the server part. If you put the secret in a k8s secret you will not commit it to the repository, but you will still expose it to all of your users anyway.</p> <p>If you still want to populate a configuration based on environment variables (which is a legit use-case), I've seen and used the following approach. The app is Angular 6 and is served to the browser by an <code>nginx</code> server. The startup script in the docker container is a bit odd and looks similar to the lines below:</p> <pre><code>envsubst &lt; /usr/share/nginx/html/assets/config.json.tpl &gt; /usr/share/nginx/html/assets/config.json rm /usr/share/nginx/html/assets/config.json.tpl echo "Configuration:" cat /usr/share/nginx/html/assets/config.json nginx -g 'daemon off;' </code></pre> <p>As you see, we've used <code>envsubst</code> to substitute a config template in the assets folder. The <code>config.json.tpl</code> may look like this:</p> <pre><code>{ "apiUrl": "${API_URL}" } </code></pre> <p><code>envsubst</code> will substitute the environment variables with their real values and you will have a valid JSON configuration snippet in your assets. Then <code>nginx</code> starts up.</p>
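<p>If you do go this route, the environment variable that <code>envsubst</code> substitutes can be wired to the Kubernetes secret from the question in the deployment spec (the template would then contain <code>&quot;apiKey&quot;: &quot;${API_KEY}&quot;</code>). This is only a sketch - the container name and image are placeholders, and the value will still be visible to every browser that loads the app:</p> <pre><code>containers:
  - name: angular-app
    image: my-registry/angular-app:latest
    env:
      # read by envsubst in the container startup script
      - name: API_KEY
        valueFrom:
          secretKeyRef:
            name: literal-token
            key: api_key
</code></pre>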
Lachezar Balev
<p>I have a pod running on Google Cloud Kubernetes and I have a MongoDB cluster running on Atlas. The issue is quite simple:</p> <p><strong>If I allow IP from ANYWHERE on Atlas MongoDB, I can connect. If I add the IP of the pod (so not from ANYWHERE anymore), it doesn't work.</strong></p> <p>I also tried locally and from a docker running locally as well, it works.</p> <p>I got the IP (YY.YYY.YYY.YY) of my pod using:</p> <pre><code>MacBook-Pro-de-Emixam23:plop-service emixam23$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE polop-service LoadBalancer XX.XX.X.XXX YY.YYY.YYY.YY ZZZZZ:32633/TCP,ZZZZZ:32712/TCP 172m kubernetes ClusterIP XX.X.X.X &lt;none&gt; 443/TCP 3h24m </code></pre> <p>But by the behavior I get.. I feel like this EXTERNAL-IP isn't the IP from where my requests are sent from.</p> <p>Can anyone explain to me what can be the issue?</p>
Emixam23
<p>The IP exposed to Mongo Atlas should be an Internet-accessible IP (in other words, a public IP). </p> <p>Normally it will be the network gateway's IPs (or the proxy server's IPs, if you go through a proxy). </p> <p>One quick way to check the IP is to run the command below inside the pods:</p> <pre><code>curl ifconfig.me </code></pre> <p>If your pod doesn't have this command, you can <code>kubectl exec -ti &lt;pod_name&gt; -- sh</code> into it and install it.</p> <p><strong>Remember</strong>: normally there is more than one IP; there may be 3 or more public-facing IPs via the gateway, and you need to find them all and add them to the Mongo Atlas whitelist.</p>
BMW
<p>I use minikube on windows 10 and try to generate Persistent Volume with minikube dashboard. Belows are my PV yaml file contents.</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: blog-pv labels: type: local spec: storageClassName: manual capacity: storage: 1Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Recycle hostPath: path: "/mnt/data" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: blog-pv-claim spec: storageClassName: manual volumeName: blog-pv accessModes: - ReadWriteOnce resources: requests: storage: 500Mi </code></pre> <p>But minikube dashboard throw the following errors.</p> <pre><code>## Deploying file has failed the server could not find the requested resource </code></pre> <p>But I can generate PV with kubectl command as executing the following command</p> <pre><code>kubectl apply -f pod-pvc-test.yaml </code></pre> <p>For your information, the version of kubectl.exe is</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>How can I generate the Persistent Volume with minikube dashboard as well as kubectl command?</p> <p><strong>== Updated Part==</strong></p> <pre><code>&gt; kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE blog-pv 1Gi RWO Recycle Bound default/blog-pv-claim manual 5m1s </code></pre>
Joseph Hwang
<p>First, apply the resources one by one, so that you can make sure the problem is isolated to either the PV (PersistentVolume) or the PVC (PersistentVolumeClaim).</p> <p>Second, please adjust the hostPath to something else. <code>/mnt/data</code> is normally a mounted or NFS folder; maybe that's the issue, so you can point it at some other real path for testing.</p> <p>After you have applied them, please show the output of</p> <pre><code>kubectl get pv,pvc </code></pre> <p>That should make it possible to pin down the root cause. </p>
BMW
<p>I am trying to build a kubernetes master with kubelet and kube-api server running as a static pod.</p> <p>My unit for kubelet is:</p> <pre><code>[Unit] Description=Kubernetes Kubelet Documentation=https://github.com/GoogleCloudPlatform/kubernetes After=docker.service Requires=docker.service [Service] ExecStart=/usr/bin/kubelet \ --cloud-provider=external \ --config=/var/lib/kubelet/config.yaml \ --network-plugin=cni \ --register-node=false \ --kubeconfig=/var/lib/kubelet/kubeconfig.yaml \ --v=2 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target </code></pre> <p>When I start the kubelet I see the following errors:</p> <pre><code>.0.1:6443/api/v1/nodes/master-3-tm?resourceVersion=0&amp;timeout=10s: dial tcp 127.0.0.1:6443: connect: connection refused Nov 25 15:40:14 master-3-tm kubelet[2584]: E1125 15:40:14.254850 2584 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "master-3-tm": Get https://127.0.0.1:6443/api/v1/nodes/master-3-tm?timeout=10s: dial tcp 127.0.0.1:6443: connect: connection refused Nov 25 15:40:14 master-3-tm kubelet[2584]: E1125 15:40:14.255466 2584 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "master-3-tm": Get https://127.0.0.1:6443/api/v1/nodes/master-3-tm?timeout=10s: dial tcp 127.0.0.1:6443: connect: connection refused Nov 25 15:40:14 master-3-tm kubelet[2584]: E1125 15:40:14.255956 2584 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "master-3-tm": Get https://127.0.0.1:6443/api/v1/nodes/master-3-tm?timeout=10s: dial tcp 127.0.0.1:6443: connect: connection refused Nov 25 15:40:14 master-3-tm kubelet[2584]: E1125 15:40:14.256403 2584 kubelet_node_status.go:391] Error updating node status, will retry: error getting node "master-3-tm": Get https://127.0.0.1:6443/api/v1/nodes/master-3-tm?timeout=10s: dial tcp 127.0.0.1:6443: connect: connection refused Nov 25 15:40:14 master-3-tm kubelet[2584]: E1125 15:40:14.256696 2584 kubelet_node_status.go:379] Unable to update node status: update node status exceeds retry count Nov 25 15:40:14 master-3-tm kubelet[2584]: W1125 15:40:14.604686 2584 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d Nov 25 15:40:14 master-3-tm kubelet[2584]: E1125 15:40:14.604828 2584 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized </code></pre> <p>Which make sense, because the kube-api server is still not running. 
But the question is how do I get it to running?</p> <p>I have the following manifests:</p> <pre><code>root@master-3-tm:/home/ubuntu# cat /etc/kubernetes/manifests/kube-api-server.yaml apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: null labels: component: kube-apiserver tier: control-plane name: kube-apiserver namespace: kube-system spec: containers: - command: - kube-apiserver - --authorization-mode=Node,RBAC - --advertise-address=10.32.192.20 - --allow-privileged=true - --audit-log-maxage=30 - --audit-log-maxbackup=3 - --audit-log-maxsize=100 - --audit-log-path=/var/log/kubernetes/audit.log - --bind-address=10.32.192.20 - --client-ca-file=/var/lib/kubernetes/ca.pem - --cloud-config=/etc/kubernetes/cloud.conf - --cloud-provider=openstack - --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota - --enable-bootstrap-token-auth=true - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt - --etcd-certfile=/etc/kubernetes/pki/api-etcd-client.crt - --etcd-keyfile=/etc/kubernetes/pki/api-etcd-client.key - --etcd-servers=master-1-tm=https://10.32.192.69:2380,master-3-tm=https://10.32.192.20:2380,master-2-tm=https://10.32.192.76:2380 - --insecure-port=0 - --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem - --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem - --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem - --kubelet-https=true - --secure-port=6443 - --service-account-key-file=/var/lib/kubernetes/service-accounts.pem - --service-cluster-ip-range=10.32.0.0/16 - --service-node-port-range=30000-32767 - --runtime-config=api/all - --tls-cert-file=/var/lib/kubernetes/api.cert - --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem - --token-auth-file=/var/lib/kubernetes/token.csv - --v=2 - --insecure-bind-address=127.0.0.1 image: k8s.gcr.io/kube-apiserver-amd64:v1.11.4 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 10.32.192.20 path: /healthz port: 6443 scheme: HTTPS initialDelaySeconds: 15 timeoutSeconds: 15 name: kube-apiserver resources: requests: cpu: 250m volumeMounts: - mountPath: /etc/kubernetes/pki name: k8s-certs readOnly: true - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /usr/share/ca-certificates name: usr-share-ca-certificates readOnly: true - mountPath: /usr/local/share/ca-certificates name: usr-local-share-ca-certificates readOnly: true - mountPath: /etc/ca-certificates name: etc-ca-certificates readOnly: true - mountPath: /var/lib/kubernetes readOnly: true name: var-lib-kubernetes - mountPath: /var/log/kubernetes name: var-log-kubernetes hostNetwork: true priorityClassName: system-cluster-critical volumes: - hostPath: path: /etc/ca-certificates type: DirectoryOrCreate name: etc-ca-certificates - hostPath: path: /etc/kubernetes/pki type: DirectoryOrCreate name: k8s-certs - hostPath: path: /etc/ssl/certs type: DirectoryOrCreate name: ca-certs - hostPath: path: /usr/share/ca-certificates type: DirectoryOrCreate name: usr-share-ca-certificates - hostPath: path: /usr/local/share/ca-certificates type: DirectoryOrCreate name: usr-local-share-ca-certificates - hostPath: path: /var/lib/kuberentes type: DirectoryOrCreate - hostPath: path: /var/log/kuberentes type: DirectoryOrCreate status: {} root@master-3-tm:/home/ubuntu# cat /etc/kubernetes/manifests/etcd.yml apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: 
null labels: component: etcd tier: control-plane name: etcd namespace: kube-system spec: containers: - command: - etcd - --cert-file=/etc/kubernetes/pki/etcd/server.crt - --client-cert-auth=true - --key-file=/etc/kubernetes/pki/etcd/server.key - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt - --peer-client-cert-auth=true - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt - --snapshot-count=10000 - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt env: - name: ETCD_NAME value: master-3-tm - name: ETCD_DATA_DIR value: /var/lib/data - name: ETCD_INITIAL_CLUSTER_STATE value: new - name: ETCD_INITIAL_CLUSTER_TOKEN value: k8s-cluster - name: ETCD_INITIAL_CLUSTER value: master-1-tm=https://10.32.192.69:2380,master-3-tm=https://10.32.192.20:2380,master-2-tm=https://10.32.192.76:2380 - name: ETCD_ADVERTISE_CLIENT_URLS value: https://10.32.192.20:2379 - name: ETCD_LISTEN_PEER_URLS value: https://10.32.192.20:2380 - name: ETCD_LISTEN_CLIENT_URLS value: https://10.32.192.20:2379 - name: ETCD_INITIAL_ADVERTISE_PEER_URLS value: https://10.32.192.20:2380 image: quay.io/coreos/etcd:v3.3.10 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - /bin/sh - -ec - ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo failureThreshold: 8 initialDelaySeconds: 15 timeoutSeconds: 15 name: etcd resources: {} volumeMounts: - mountPath: /var/lib/etcd name: etcd-data - mountPath: /etc/kubernetes/pki/etcd name: etcd-certs hostNetwork: true priorityClassName: system-cluster-critical volumes: - hostPath: path: /etc/kubernetes/pki/etcd type: DirectoryOrCreate name: etcd-certs - hostPath: path: /var/lib/etcd type: DirectoryOrCreate name: etcd-data status: {} </code></pre> <p>Oddly, kubelet will start etcd, but not the api server...</p> <p>Also worth noting:</p> <pre><code> * the kubelet isn't trying to register, at least according to the flag: Nov 25 15:50:43 master-3-tm kubelet[3440]: I1125 15:50:43.578457 3440 flags.go:27] FLAG: --register-node="false" Nov 25 15:50:43 master-3-tm kubelet[3440]: I1125 15:50:43.578464 3440 flags.go:27] FLAG: --register-schedulable="true" Nov 25 15:50:43 master-3-tm kubelet[3440]: I1125 15:50:43.578471 3440 flags.go:27] FLAG: --register-with-taints="" </code></pre> <p>How do I start the api pod before the kubelet service? or how do I find why kubelet won't start that specific pod?</p> <h3>update</h3> <p>The kubelet should find the static pods in the correct place:</p> <pre><code>ubuntu@master-3-tm:~$ grep manifests /var/lib/kubelet/config.yaml staticPodPath: /etc/kubernetes/manifests </code></pre>
oz123
<p>As is all too often the case, the problem was in the details ...</p> <p>The faulty line is:</p> <pre><code>- --cloud-config=/etc/kubernetes/cloud.conf </code></pre> <p>This file was missing inside the container because I forgot to mount the proper volume, and without it kube-apiserver fails to start.</p> <p>This does show up in the kubelet logs, but it is very easy to miss since there are so many messages.</p> <p>The error is:</p> <pre><code>Nov 29 11:43:08 master-1-test3 kubelet[2645]: F1129 11:43:08.602166 2645 plugins.go:122] Couldn't open cloud provider configuration /etc/kubernetes/cloud.conf: &amp;os.PathError{Op:"open", Path:"/etc/kubernetes/cloud.conf", Err:0x2} </code></pre>
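<p>For completeness, the fix is to mount that file from the host into the kube-apiserver static pod - a hedged sketch, assuming the file lives at <code>/etc/kubernetes/cloud.conf</code> on the node:</p> <pre><code>    volumeMounts:
      - mountPath: /etc/kubernetes/cloud.conf
        name: cloud-config
        readOnly: true
  volumes:
    - hostPath:
        path: /etc/kubernetes/cloud.conf
        type: File
      name: cloud-config
</code></pre>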
oz123
<p><strong>Context</strong></p> <p>I'm using <strong>GCP</strong>, more specifically <strong>GKE</strong>, to deploy my app in a container/pod. The app I'm trying to deploy is in Node.js (Express.js). This app connects to <strong>MongoDB Atlas</strong> <em>(free tier M0)</em>. </p> <p><strong>There is no issue when running the project locally.</strong> It connects to the database and I can add/remove documents without any problems. </p> <p>I allowed my MongoDB Atlas cluster to be accessed by anyone (0.0.0.0/0) to make debugging easier. </p> <p>When I deploy my project with my CI/CD to GKE, everything goes smoothly.</p> <p><strong>Problem</strong></p> <p>Things start to get tricky once I've deployed my project. I get a <strong>CrashLoopBackOff</strong>. It keeps crashing, and after checking the logs here is what I've found: </p> <p><code>error: Server selection timed out after 30000 ms {"name":"MongooseTimeoutError","reason":{"name":"MongoNetworkError"}}</code></p> <p><strong>Leads</strong></p> <p>I believe the issue is that <strong>my pod can't connect to MongoDB Atlas</strong> through its regular port 27017, giving me a timeout error.</p> <p>Here is what I've tried:</p> <ul> <li><p>Adding a new <strong>firewall rule</strong> in my VPC network in GCP: <code>gcloud compute firewall-rules create allow-mongodb --allow tcp:27017</code></p></li> <li><p>Adding the following key/value in my deployment.yml: <code>dnsPolicy: Default</code></p></li> </ul> <p><strong>Conclusion</strong></p> <p>After spending hours on this problem, I still haven't found any solution and I'm running out of ideas. FYI, I'm new to GCP and to Kubernetes, so I might be missing something big here but I'm not sure what.</p> <p>If some kind person ends up on this post and knows the answer, I would be glad if he/she could help me out here.</p> <p>Have a good one.</p> <p>Cheers,</p>
iji
<p>If you think the network policy has been set properly with the GCP firewall rules, let's work it out in Kubernetes step by step.</p> <ol> <li>Check if you can connect to MongoDB Atlas from the containers themselves: </li> </ol> <pre><code>kubectl exec &lt;node_app_pod&gt; -- curl &lt;mongo_url&gt;:27017 </code></pre> <ol start="2"> <li>Check the network policy in Kubernetes. To keep it simple, allow all egress:</li> </ol> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all spec: podSelector: {} egress: - {} policyTypes: - Egress </code></pre> <p>If you want to allow port 27017 only, you can adjust it following this document (a sketch is shown below): </p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a></p> <p>Let me know if it is better now. </p>
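<p>A hedged sketch of such a port-restricted egress policy; you will most likely also need to allow DNS (port 53) so the Atlas hostnames can still be resolved:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-mongodb-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # allow outbound traffic to the MongoDB port only
    - ports:
        - protocol: TCP
          port: 27017
</code></pre>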
BMW
<p>Can you help me? I want to deploy an Ingress in front of a NodePort service, but I can't figure out whether this is possible. </p> <p>I tried to find some information on Google, but I only got Ingress for load balancing or some complicated examples such as Ingress with Ruby on Rails, etc. </p>
noute
<p>I'll try to provide the simplest example that I can think of below. I will use the <code>nginxdemos/hello</code> docker image for my example. Locally this works as this:</p> <pre><code>$docker run -p 80:80 -d nginxdemos/hello ... $curl -I http://localhost HTTP/1.1 200 OK Server: nginx/1.13.8 Date: Tue, 08 Oct 2019 06:14:52 GMT Content-Type: text/html Connection: keep-alive Expires: Tue, 08 Oct 2019 06:14:51 GMT Cache-Control: no-cache </code></pre> <p>Cool. Here is our backend deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-backend namespace: java2days spec: replicas: 2 selector: matchLabels: app: nginx-server template: metadata: labels: app: nginx-server spec: containers: - name: nginx-server image: nginxdemos/hello ports: - containerPort: 80 name: server-port livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 15 periodSeconds: 15 timeoutSeconds: 3 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 15 periodSeconds: 15 timeoutSeconds: 3 </code></pre> <p>Shortly we will have 2 replicas of an nginx server. Up and running on nodes somewhere in the cloud:</p> <pre><code>$kubectl get pods NAME READY STATUS RESTARTS AGE nginx-backend-dfcdb9797-dnx7x 1/1 Running 0 21m nginx-backend-dfcdb9797-pnrhn 1/1 Running 0 21m </code></pre> <p>Let's create a NodePort service now. Here is the service yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx-service namespace: java2days spec: ports: - port: 80 protocol: TCP targetPort: 80 name: http selector: app: nginx-server type: NodePort </code></pre> <p>Note the selector, it matches the backend service. Now it is time for the ingress controller.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress namespace: java2days spec: rules: - http: paths: - backend: serviceName: nginx-service servicePort: 80 path: /* </code></pre> <p>You will have to wait for about 5-10 minutes until GCloud provisions an IP address for this Ingress. Finally it is done, something like this:</p> <pre><code>$kubectl get ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * x.y.z.p 80 15m </code></pre> <p>From my local machine now:</p> <pre><code>$curl -I http://x.y.z.p HTTP/1.1 200 OK </code></pre> <p>Great it is working. If you open it in the browser and refresh multiple times you will see that the server ID changes and load balancing works. Additional entry point for reading - <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">here</a>.</p> <p>Do not forget to clean up the resources when you finish experimenting.</p>
Lachezar Balev
<p>We have deployed the etcd of our k8s cluster using static pods - 3 of them. We want to upgrade the pods to define some labels and a readiness probe for them. I have searched but found no questions or articles mentioning this, so I'd like to know the best practice for upgrading static pods.</p> <p>For example, I found that modifying the yaml file directly may leave the pod unscheduled for a long time; maybe I should remove the old file and create a new file?</p>
xudifsd
<p>You need to recreate the pod if you want to define a readiness probe for it; for labels, an edit should suffice.</p> <p>The following error is thrown by Kubernetes when editing the readinessProbe:</p> <pre><code># * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations) </code></pre> <p>See also <a href="https://stackoverflow.com/a/40363057/499839">https://stackoverflow.com/a/40363057/499839</a></p> <p>Have you considered using DaemonSets? <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/</a></p>
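<p>As a sketch of the recreate step for a static pod (the path assumes the default <code>staticPodPath</code>, as in the question):</p> <pre><code># moving the manifest out of the watched directory deletes the mirror pod
mv /etc/kubernetes/manifests/etcd.yaml /tmp/etcd.yaml

# edit /tmp/etcd.yaml here (add labels / readinessProbe), then put it back;
# the kubelet recreates the pod from the new spec
mv /tmp/etcd.yaml /etc/kubernetes/manifests/etcd.yaml
</code></pre>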
Mika Vatanen
<p>I have added mysql in requirements.yaml. Helm dependency downloads the mysql chart</p> <pre><code>helm dependency update Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "nginx" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ⎈Happy Helming!⎈ Saving 1 charts Downloading mysql from repo &lt;our private repository&gt; Deleting outdated charts </code></pre> <p>But when I do helm install my_app_chart ../my_app_chart It gives error </p> <pre><code>Error: found in Chart.yaml, but missing in charts/ directory: mysql </code></pre>
Komal Kadam
<p>You don't have to add the downloaded charts to the version control system; you just download them again if for some reason you have lost them (for example when you clone the repository). To do this, execute the command:</p> <p><code>helm dependency update</code></p> <p>The above command downloads the dependencies you've defined in the <code>requirements.yaml</code> file or in the <code>dependencies</code> entry of <code>Chart.yaml</code> into the <code>charts</code> folder. This way the requirements are kept up to date and you'll have the correct dependencies without worrying about whether they were also updated in version control.</p>
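<p>For reference, a typical dependency declaration looks like this (with Helm 3 it goes under <code>dependencies</code> in <code>Chart.yaml</code>; with Helm 2 the same block lives in <code>requirements.yaml</code> - the version and repository URL below are placeholders):</p> <pre><code>dependencies:
  - name: mysql
    version: 1.6.9
    repository: https://your.private.repo/charts
</code></pre>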
PhoneixS
<p>I'm setting up an InnoDB Cluster using <code>mysqlsh</code>. This is in Kubernetes, but I think this question applies more generally.</p> <p>When I use <code>cluster.configureInstance()</code> I see messages that includes:</p> <blockquote> <p>This instance reports its own address as node-2:3306</p> </blockquote> <p>However, the nodes can only find <em>each other</em> through DNS at an address like <code>node-2.cluster:3306</code>. The problem comes when adding instances to the cluster; they try to find the other nodes without the qualified name. Errors are of the form:</p> <pre><code>[GCS] Error on opening a connection to peer node node-0:33061 when joining a group. My local port is: 33061. </code></pre> <p>It is using <code>node-n:33061</code> rather than <code>node-n.cluster:33061</code>.</p> <p>If it matters, the &quot;DNS&quot; is set up as a headless service in Kubernetes that provides consistent addresses as pods come and go. It's very simple, and I named it &quot;cluster&quot; to created addresses of the form <code>node-n.cluster</code>. I don't want to cloud this question with detail I don't think matters, however, as surely other configurations require the instances in the cluster to use DNS as well.</p> <p>I thought that setting <code>localAddress</code> when creating the cluster and adding the nodes would solve the problem. Indeed, after I added that to the <code>createCluster</code> options, I can look in the database and see</p> <pre><code>| group_replication_local_address | node-0.cluster:33061 | </code></pre> <p>After I create the cluster and look at the topology, it seems that the local address setting has no effect whatsoever:</p> <pre><code>{ &quot;clusterName&quot;: &quot;mycluster&quot;, &quot;defaultReplicaSet&quot;: { &quot;name&quot;: &quot;default&quot;, &quot;primary&quot;: &quot;node-0:3306&quot;, &quot;ssl&quot;: &quot;REQUIRED&quot;, &quot;status&quot;: &quot;OK_NO_TOLERANCE&quot;, &quot;statusText&quot;: &quot;Cluster is NOT tolerant to any failures.&quot;, &quot;topology&quot;: { &quot;node-0:3306&quot;: { &quot;address&quot;: &quot;node-0:3306&quot;, &quot;memberRole&quot;: &quot;PRIMARY&quot;, &quot;mode&quot;: &quot;R/W&quot;, &quot;readReplicas&quot;: {}, &quot;replicationLag&quot;: null, &quot;role&quot;: &quot;HA&quot;, &quot;status&quot;: &quot;ONLINE&quot;, &quot;version&quot;: &quot;8.0.29&quot; } }, &quot;topologyMode&quot;: &quot;Single-Primary&quot; }, &quot;groupInformationSourceMember&quot;: &quot;node-0:3306&quot; } </code></pre> <p>And adding more instances continues to fail with the same communication errors.</p> <p>How do I convince each instance that the address it needs to advertise is different? I will try other permutations of the <code>localAddress</code> setting, but it doesn't look like it's intended to fix the problem I'm having. How do I reconcile the address the instance reports for itself with the address that's actually useful for other instances to find it?</p> <p>Edit to add: Maybe it is a Kubernetes thing? Or a Docker thing at any rate. There is an environment variable set in the container:</p> <pre><code>HOSTNAME=node-0 </code></pre> <p>Does the containerized MySQL use that? If so, how do I override it?</p>
Jerry
<p>Apparently this value has to be set at startup. For my setup, passing the option</p> <pre><code>--report-host=${HOSTNAME}.cluster </code></pre> <p>when starting the MySQL instances resolved the issue.</p> <p>Specifically for Kubernetes, an example is at <a href="https://github.com/adamelliotfields/kubernetes/blob/master/mysql/mysql.yaml" rel="nofollow noreferrer">https://github.com/adamelliotfields/kubernetes/blob/master/mysql/mysql.yaml</a></p>
Jerry
<p>I'm creating a configuration to host some apps in a Kubernetes cluster on AWS. I have two different apps, with separate service/pod/selector but I want to expose them with a single ingress for the moment.</p> <p>So I created the following ingress controller</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /foo backend: serviceName: foo servicePort: 8080 - path: /bar backend: serviceName: bar servicePort: 8080 </code></pre> <p>and the ingress obtain the ELB from AWS without any problem, but when I try to browse the app (Java application using Tomcat appserver) I always receive the following page</p> <p><a href="https://i.stack.imgur.com/rLaHu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rLaHu.png" alt="Tomcat"></a></p> <p>It's the classic old Tomcat welcome page but every request always returns the index.html (no css/img loaded) and also if I try to use the correct context path for the application I receive this page.</p> <p>If I expose the apps using a Service (LoadBalancer) I can use it without these problems, so I think there is something wrong with ingress configuration.</p> <p>Any ideas?</p> <hr> <p>UPDATE</p> <p>If I use an ingress with a single path like this</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: / backend: serviceName: foo servicePort: 8080 </code></pre> <p>Using INGRESSHOST url I can see the Tomcat home with img/css and if I browse to INGRESSHOST/APPCONTEXT I can use the app without problem</p>
Federico Paparoni
<p>If you have recently changed the version of your nginx-ingress controller then maybe the cause can be a recent change done to it. Now it uses regex rewrite rules and maybe your rewrite target is just always being rewritten to "/". I think the changes were introduced in version 0.22 in January.</p> <p>The new correct syntax for your ingress would be:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - path: /foo(.*) backend: serviceName: foo servicePort: 8080 - path: /bar(.*) backend: serviceName: bar servicePort: 8080 </code></pre>
Yervand Aghababyan
<p><a href="https://www.docker.com/" rel="nofollow noreferrer">Docker</a> provides a way to run the container using <code>docker run</code></p> <p>Or just pull the container image using <code>docker pull</code></p> <p>Found a <a href="https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/" rel="nofollow noreferrer">doc</a> showing mapping between docker commands and kubectl.</p> <p>Can't find <code>docker pull</code> equivalent in this doc.</p> <p>If there is no any such equivalent to <code>docker pull</code>, then is there any way to just pull an image using <code>kubectl</code> cli.</p>
mchawre
<p><code>crictl pull &lt;image name&gt;</code></p>
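<p>Note that <code>crictl</code> talks to the container runtime on a node rather than to the API server, so it has to be run on the node itself (for example over SSH). A small usage sketch, with an arbitrary image tag:</p> <pre><code># run on the node that should have the image cached
crictl pull nginx:1.25
crictl images | grep nginx
</code></pre>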
Metalstorm
<p>I'm setting up a Prometheus exporter for my ASP.NET Core 3.1 app.</p> <p>I've imported</p> <p><code>&lt;PackageReference Include=&quot;prometheus-net.AspNetCore&quot; Version=&quot;4.1.1&quot; /&gt;</code></p> <p>And this is what I have configured:</p> <pre><code>public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { ... app.UseRouting(); app.UseHttpMetrics(); app.UseEndpoints(endpoints =&gt; { endpoints.MapControllers(); endpoints.MapMetrics(); }); } </code></pre> <p>This will expose the metrics endpoint on the same port as the rest of the ASP.NET Core application, for example: <code>my.api.com:80/metrics</code>.</p> <p>What do I need to do to expose the <code>/metrics</code> endpoint on another port? I would like to have my API running on port 80, and the <code>/metrics</code> endpoint on port 9102.</p> <p>Can't really find any docs about that.</p> <p><strong>Edit</strong></p> <p>I'm deploying this into Kubernetes</p>
Joel
<p>I ended up doing like this:</p> <pre><code>public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { ... app.UseMetricServer(9102); app.UseRouting(); app.UseHttpMetrics(); ... } </code></pre> <p>And then for my Kubernetes <code>Deployment</code> I had to add both port 80 and 9102 to <code>containerPort</code>s under <code>ports</code>.</p> <p>Additionally I had to set the <code>ASPNETCORE_URLS</code> environment variable to <code>http://+:80;http://+:9102</code></p> <p>That way, <code>/metrics</code> is only exposed on port 9102. (However the rest of my API is exposed on both port 80 and 9102).</p>
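<p>For reference, the Kubernetes side described above might look roughly like this (container name and image are placeholders):</p> <pre><code>containers:
  - name: my-api
    image: my-registry/my-api:latest
    ports:
      - containerPort: 80    # application traffic
      - containerPort: 9102  # /metrics only
    env:
      - name: ASPNETCORE_URLS
        value: "http://+:80;http://+:9102"
</code></pre>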
Joel
<p>I have a Kubernetes cluster running in AWS. I used <code>kops</code> to set up and start the cluster. </p> <p>I defined a minimum and maximum number of nodes in the nodes instance group: </p> <pre><code>apiVersion: kops/v1alpha2 kind: InstanceGroup metadata: creationTimestamp: 2017-07-03T15:37:59Z labels: kops.k8s.io/cluster: k8s.tst.test-cluster.com name: nodes spec: image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02 machineType: t2.large maxSize: 7 minSize: 5 role: Node subnets: - eu-central-1b </code></pre> <p>Currently the cluster has 5 nodes running. After some deployments in the cluster, pods/containers cannot start because there are no nodes available with enough resources. </p> <p>So I thought that, when there is a resource problem, k8s automatically scales the cluster and starts more nodes, since the maximum number of nodes is 7.</p> <p>Am I missing any configuration? </p> <p><strong>UPDATE</strong></p> <p>As @kichik mentioned, the autoscaler addon is already installed. Nevertheless, it doesn't work. Kube-dns is also restarting frequently because of resource problems. </p>
CPA
<p>Someone opened a <a href="https://github.com/kubernetes/kops/issues/341" rel="nofollow noreferrer">ticket for this on GitHub</a> and it suggests you have to install the <a href="https://github.com/kubernetes/kops/tree/master/addons/cluster-autoscaler" rel="nofollow noreferrer">autoscaler addon</a>. Check if it's already installed with:</p> <pre><code>kubectl get deployments --namespace kube-system | grep autoscaler </code></pre> <p>If it's not, you can install it with the following script. Make sure <code>AWS_REGION</code>, <code>GROUP_NAME</code>, <code>MIN_NODES</code> and <code>MAX_NODES</code> have the right values.</p> <pre><code>CLOUD_PROVIDER=aws IMAGE=gcr.io/google_containers/cluster-autoscaler:v0.5.4 MIN_NODES=5 MAX_NODES=7 AWS_REGION=us-east-1 GROUP_NAME="nodes.k8s.example.com" SSL_CERT_PATH="/etc/ssl/certs/ca-certificates.crt" # (/etc/ssl/certs for gce) addon=cluster-autoscaler.yml wget -O ${addon} https://raw.githubusercontent.com/kubernetes/kops/master/addons/cluster-autoscaler/v1.6.0.yaml sed -i -e "s@{{CLOUD_PROVIDER}}@${CLOUD_PROVIDER}@g" "${addon}" sed -i -e "s@{{IMAGE}}@${IMAGE}@g" "${addon}" sed -i -e "s@{{MIN_NODES}}@${MIN_NODES}@g" "${addon}" sed -i -e "s@{{MAX_NODES}}@${MAX_NODES}@g" "${addon}" sed -i -e "s@{{GROUP_NAME}}@${GROUP_NAME}@g" "${addon}" sed -i -e "s@{{AWS_REGION}}@${AWS_REGION}@g" "${addon}" sed -i -e "s@{{SSL_CERT_PATH}}@${SSL_CERT_PATH}@g" "${addon}" kubectl apply -f ${addon} </code></pre>
kichik
<p>I am new to Istio. Istio intercepts all traffic between two services through istio-proxy/Envoy. Is it possible to configure Istio so that it ignores certain types of traffic, for example:</p> <ul> <li>when serviceA makes an https call directly to serviceB on a certain port</li> <li>UDP traffic</li> </ul> <p>Thanks</p>
user674669
<p>As per Istio <a href="https://github.com/istio/istio/blob/master/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml" rel="nofollow noreferrer">sidecar injection configuration</a> you can exclude ports from Envoy &amp; iptables rules using the <code>includeInboundPorts</code> and <code>excludeInboundPorts</code> annotations.</p> <p>Example:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: podinfo namespace: test labels: app: podinfo spec: selector: matchLabels: app: podinfo template: metadata: annotations: traffic.sidecar.istio.io/includeInboundPorts: "*" traffic.sidecar.istio.io/excludeInboundPorts: "9999,9229" labels: app: podinfo spec: containers: - name: podinfod image: quay.io/stefanprodan/podinfo:1.4.0 imagePullPolicy: IfNotPresent ports: - containerPort: 9898 name: http protocol: TCP - containerPort: 9999 # &lt;- excluded port protocol: UDP - containerPort: 9229 # &lt;- excluded port protocol: TCP command: - ./podinfo - --port=9898 - --level=info </code></pre>
Stefan P.
<p>I have a docker image for a Spring Boot app with the log file location as <code>--logging.config=/conf/logs/logback.xml</code> and the log file is as follows.</p> <p>I am able to get the logs as</p> <blockquote> <p>kubectl log POD_NAME</p> </blockquote> <p>But, unable to find the log file when I log in to the pod. Is there any default location where the log file is placed as I haven't mentioned the logging location in the logback.xml file.</p> <p>Logback file:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; ?&gt; &lt;configuration&gt; &lt;property name=&quot;server.encoder.pattern&quot; value=&quot;%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} %-5level : loggerName=&amp;quot;%logger{36}&amp;quot; threadName=&amp;quot;%thread&amp;quot; txnId=&amp;quot;%X{txnId}&amp;quot; %msg%n&quot; /&gt; &lt;property name=&quot;metrics.encoder.pattern&quot; value=&quot;%d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} %-5level : %msg%n&quot; /&gt; &lt;!-- Enable LevelChangePropagator for jul-to-slf4j optimization --&gt; &lt;contextListener class=&quot;ch.qos.logback.classic.jul.LevelChangePropagator&quot; /&gt; &lt;appender name=&quot;METRICS&quot; class=&quot;ch.qos.logback.core.ConsoleAppender&quot;&gt; &lt;encoder&gt; &lt;pattern&gt;${metrics.encoder.pattern}&lt;/pattern&gt; &lt;/encoder&gt; &lt;/appender&gt; &lt;logger name=&quot;appengAluminumMetricsLogger&quot; additivity=&quot;false&quot;&gt; &lt;appender-ref ref=&quot;METRICS&quot; /&gt; &lt;/logger&gt; &lt;appender name=&quot;SERVER&quot; class=&quot;ch.qos.logback.core.ConsoleAppender&quot;&gt; &lt;encoder&gt; &lt;pattern&gt;${server.encoder.pattern}&lt;/pattern&gt; &lt;/encoder&gt; &lt;/appender&gt; &lt;root level=&quot;INFO&quot;&gt; &lt;appender-ref ref=&quot;SERVER&quot; /&gt; &lt;/root&gt; &lt;/configuration&gt; </code></pre>
user1578872
<p>What you see from <code>kubectl logs</code> is the console output of your service. Only console output can be seen that way - it is surfaced through the Docker logging support - and your logback configuration only defines console appenders, so no log file is written anywhere in the pod.</p>
manojlds
<p>I have a problem with Istio Request Routing directly behind the Istio Ingress Gateway: </p> <p><a href="https://i.stack.imgur.com/YBfhx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YBfhx.png" alt="request routing"></a></p> <p>I have simple node.js app (web-api) in 2 versions (v1, v2) with an Istio Ingress Gateway directly in frontand an Istio VirtualService that is supposed to do a 80/20 distribution between version 1 and 2 but it doesn't. Kiali shows a 50/50 distribution.</p> <p>When I add a simple frontend service that just passes the request through, everything works as expected. <a href="https://i.stack.imgur.com/n2o8h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n2o8h.png" alt="frontend added"></a> According to the Istio documentation using an Istio ingress allows for request routing rules in user-facing services. But for me it doesn't and I don't understand why?</p> <p>deployment.yaml:</p> <pre><code>apiVersion: apps/v1beta2 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: web-api-v1 spec: selector: matchLabels: app: web-api project: istio-test version: v1 replicas: 1 strategy: type: Recreate template: metadata: labels: app: web-api project: istio-test version: v1 spec: containers: - image: web-api:1 name: web-api-v1 env: - name: VERS value: "=&gt; Version 1" ports: - containerPort: 3000 name: http restartPolicy: Always --- apiVersion: apps/v1beta2 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: web-api-v2 spec: selector: matchLabels: app: web-api project: istio-test version: v2 replicas: 1 strategy: type: Recreate template: metadata: labels: app: web-api project: istio-test version: v2 spec: containers: - image: web-api:1 name: web-api-v1 env: - name: VERS value: "=&gt; Version 2" ports: - containerPort: 3000 name: http restartPolicy: Always --- </code></pre> <p>service.yaml </p> <pre><code>apiVersion: v1 kind: Service metadata: name: web-api labels: app: web-api project: istio-test spec: type: NodePort ports: - port: 3000 name: http protocol: TCP selector: app: web-api --- </code></pre> <p>istio-ingress.yaml:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: default-gateway-ingress spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtualservice-ingress spec: hosts: - "*" gateways: - default-gateway-ingress http: - match: - uri: exact: /test route: - destination: host: web-api port: number: 3000 --- </code></pre> <p>istio-virtualservice.yaml:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: web-api spec: hosts: - web-api http: - route: - destination: host: web-api subset: v1 weight: 80 - destination: host: web-api subset: v2 weight: 20 --- </code></pre> <p>I have put this example on <a href="https://github.com/Harald-U/istio-test" rel="nofollow noreferrer">https://github.com/Harald-U/istio-test</a></p>
Harald Uebele
<p>You have to attach the web-api virtual service to the gateway and delete the virtualservice-ingress object.</p> <p>Here is how the web-api virtual service should look like:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: web-api spec: hosts: - "*" gateways: - default-gateway-ingress http: - route: - destination: host: web-api subset: v1 weight: 80 - destination: host: web-api subset: v2 weight: 20 </code></pre>
Stefan P.
<p>I have added the following command-line arguments to <code>kube-apiserver</code> to enable audit logging:</p> <pre><code>- --audit-log-path=/tmp/k8s-audit.log - --audit-policy-file=/etc/kubernetes/audit.yaml - --audit-log-maxage=1 - --audit-log-maxsize=100 - --audit-log-maxbackup=1 </code></pre> <p>The contents of <code>/etc/kubernetes/audit.yaml</code> is:</p> <pre><code>apiVersion: audit.k8s.io/v1 kind: Policy omitStages: - "ResponseStarted" - "ResponseComplete" rules: - level: RequestResponse </code></pre> <p>I have run a command with verbose logging, so that I can see the request body:</p> <pre><code>$ kubectl --v=10 uncordon cluster-worker2 </code></pre> <p>And the kubectl command logs the request body as follows:</p> <pre><code>I0328 09:00:07.591869 47228 request.go:942] Request Body: {"spec":{"unschedulable":null}} </code></pre> <p>But I don't see this request body anywhere in the audit log file on the kubernetes server. What's wrong with my configuration?</p>
Robin Green
<p>The request is actually only logged in the <code>ResponseComplete</code> stage, somewhat unexpectedly. Even though Kubernetes <em>could</em> theoretically log the request as soon as it receives it, it doesn't.</p> <p>So it's necessary to remove the <code>ResponseComplete</code> line from <code>omitStages</code> in the policy configuration file (<code>audit.yaml</code>).</p>
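<p>In other words, the policy from the question reduces to something like this:</p> <pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "ResponseStarted"
rules:
  - level: RequestResponse
</code></pre>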
Robin Green
<p>I am trying to implement a CI/CD pipeline using Kubernetes and Jenkins. Right now I am exploring the CI part using a Jenkinsfile, an SVN repository and Docker Hub. After pushing the Docker image into the Docker Hub registry, I need to deploy it into a Kubernetes cluster with 3 master nodes and 15 worker machines/nodes.</p> <p>While reading about deploying into a Kubernetes cluster I have several doubts:</p> <p>Every deployment definition within Jenkins is written using shell scripts. If I need to create my Deployments and Services for those deployments, how can I define them in Jenkins? Where should I create the YAML/YML files for the ReplicaSets, Deployments and Services? Do I need to use shell scripting for this, or is there another method?</p>
Mr.DevEng
<p>You can use <a href="https://helm.sh/" rel="nofollow noreferrer">Kubernetes Helm</a> to define what you want to spin up, in parameterisable modules called Helm charts. Many charts are available for common software like nginx and postgresql. This allows an "infrastructure as code" way of working - declaratively specifying what you want, instead of using a script to manually get the cluster into the desired state from whatever state it is currently in, just rely on Helm to do that for you! This is a good use case for Helm.</p>
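<p>As a quick illustration of that declarative workflow (Helm 3 syntax; the chart, repository and release names are just examples):</p> <pre><code>helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# install a release from a published chart, overriding values declaratively
helm install my-db bitnami/postgresql -f values.yaml

# later, roll out changes by editing values.yaml and upgrading
helm upgrade my-db bitnami/postgresql -f values.yaml
</code></pre>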
Robin Green
<p>I am quite confused how IPs and ports work in kubernetes.</p> <p>Here is my yaml for a very basic service:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: httpbin-pod labels: app.kubernetes.io/name: httpbin spec: containers: - name: httpbin-container image: kennethreitz/httpbin ports: - containerPort: 8080 name: backend-port --- apiVersion: v1 kind: Service metadata: name: httpbin-service spec: selector: app.kubernetes.io/name: httpbin ports: - name: httpbin-port protocol: TCP port: 8090 targetPort: backend-port nodePort: 30001 type: NodePort </code></pre> <p>I deploy it on minikube.</p> <p>So my understaning is that when user sends a request at port 30001, then svc would direct it to httpbin service running at targetPort(ie 8080) in container.</p> <p>But what ip or host name should I use to access it? I tried with localhost, minikube ip, and even svc endpoint url. But none works.</p>
Mandroid
<p>At least part of the problem you have here is that the image you're using <code>kennethreitz/httpbin</code> listens on port 80 rather than 8080.</p> <p>If you change the value of <code>containerPort: 8080</code> in the pod definition to <code>containerPort: 80</code> that'll work fine in terms of getting the traffic to the node.</p> <p>You might find minikube's <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#getting-the-nodeport-using-the-service-command" rel="nofollow noreferrer">instructions on service ports</a> useful here as well to get the exact destination.</p>
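<p>With the container port corrected, you can get the exact URL from minikube and test it, for example (the manifest file name is assumed):</p> <pre><code>kubectl apply -f httpbin.yaml
minikube service httpbin-service --url   # prints something like http://192.168.49.2:30001
curl "$(minikube service httpbin-service --url)/get"
</code></pre>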
Rory McCune
<p>I'm trying to enable 'auditing'. <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/audit/</a> mentions:</p> <blockquote> <p>You can pass a file with the policy to kube-apiserver using the --audit-policy-file flag. If the flag is omitted, no events are logged.</p> </blockquote> <p>I've used kubeadm to configure the cluster (running in 3 VMs in total).</p> <p>However where is this set when using kubeadm ? I don't see where it interacts with kube-apiserver.</p>
Chris Stryczynski
<p>For a recent version of Kubernetes, add this to the <code>kind: ClusterConfiguration</code> section:</p> <pre><code> apiServer: extraArgs: audit-log-path: /tmp/k8s-audit.log audit-policy-file: /etc/kubernetes/audit.yaml audit-log-maxage: "1" audit-log-maxsize: "100" audit-log-maxbackup: "1" extraVolumes: - name: audit hostPath: /host/audit.yaml mountPath: /etc/kubernetes/audit.yaml readOnly: true pathType: File </code></pre> <p>Example <code>/host/audit.yaml</code> file which logs all request and response bodies:</p> <pre><code>apiVersion: audit.k8s.io/v1 kind: Policy omitStages: - "ResponseStarted" rules: - level: RequestResponse </code></pre>
Robin Green
<p>We are having issues when we run an import in our system. We have a pod with 6 replicas that calls a service backed by a 5-replica pod. Nevertheless, our metrics indicate that, under load, only one of the 5 pods is getting the requests. We are using a ClusterIP service to "route" the calls.</p> <p>We found <a href="https://github.com/kubernetes/kubernetes/issues/38456" rel="nofollow noreferrer">this</a>, more specifically <a href="https://github.com/kubernetes/kubernetes/issues/38456#issuecomment-266104862" rel="nofollow noreferrer">this post</a>, but it is not clear if the problems reported there apply only to long-lived connections.</p> <p>We disabled the connection pool and we are still facing this unbalanced behavior under load.</p> <p>We are running out of alternatives, so this is my question: is this a known behavior (limitation?) of k8s services? Is it documented somewhere?</p> <p>PS: this service is only reachable from inside the cluster. PS2: Service definition</p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: "2019-05-16T16:29:46Z" name: my-service namespace: my-ns ..... spec: clusterIP: &lt;MyIp&gt; ports: - port: 8080 protocol: TCP targetPort: 8080 selector: app: &lt;my-selector&gt; sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre>
JSBach
<p>I found out the answer. I deployed an app that would call an endpoint through the service and get back the node that answered. I noticed that for a time frame of around 30s only the same node would answer, then it would switch to another node for another time frame, and then back. It seems weird, but this appears to be the behavior of the service load balancing. At the end of the day you will get a balanced load if you have a consistent stream of requests. Our request profile was "bursts" of requests in a short time, which is the reason we got unbalanced loads.</p>
JSBach
<p>I noticed some of my clusters were reporting a CPUThrottlingHigh alert for metrics-server-nanny container (image: gke.gcr.io/addon-resizer:1.8.11-gke.0) in GKE. I couldn't see a way to configure this container to give it more CPU because it's automatically deployed as part of the metrics-server pod, and Google automatically resets any changes to the deployment/pod resource settings.</p> <p>So out of curiosity, I created a small kubernetes cluster in GKE (3 standard nodes) with autoscaling turned on to scale up to 5 nodes. No apps or anything installed. Then I installed the kube-prometheus monitoring stack (<a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus</a>) which includes the CPUThrottlingHigh alert. Soon after installing the monitoring stack, this same alert popped up for this container. I don't see anything in the logs of this container or the related metrics-server-nanny container.</p> <p>Also, I don't notice this same issue on AWS or Azure because while they do have a similar metrics-server pod in the kube-system namespace, they do not contain the sidecar metrics-server-nanny container in the pod.</p> <p>Has anyone seen this or something similar? Is there a way to give this thing more resources without Google overwriting config changes?</p>
pgier
<p><a href="https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-on-metrics-server-(Prometheus-alert)" rel="nofollow noreferrer">This is a known issue with GKE metrics-server.</a></p> <p>You can't fix the error on GKE as GKE controls the metric-server configuration and any changes you make are reverted.</p> <p><a href="https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-on-metrics-server-(Prometheus-alert)#recommended-remediation-for-gke-clusters" rel="nofollow noreferrer">You should silence the alert on GKE</a> or update to a GKE cluster version that fixes this.</p>
Natan Yellin