Columns: Question (string, 65–39.6k chars) | QuestionAuthor (string, 3–30 chars) | Answer (string, 38–29.1k chars) | AnswerAuthor (string, 3–30 chars)
<p>I have a deployment of Jenkins in Kubernetes with 2 replicas, exposed as a service under the nginx-ingress. After creating a project, the next refresh would yield no result for it, as if it was never created; the third refresh would show the created project again.</p> <p>I am new to Jenkins and Kubernetes, so I am not really sure what is happening.</p> <p>Maybe each time the service is routing to a different pod, so just one of them has the project created and the other does not. If this is the case, how could I fix it?</p> <p>PS: I reduced the replicas to 1 and it works as intended, but I am trying to make this a failure-tolerant project.</p>
Diego Ramirez
<p>To my knowledge Jenkins doesn't support HA by design. You can't scale it up just by adding more replicas. <a href="https://stackoverflow.com/questions/36173214/how-to-setup-jenkins-with-ha">Here is a similar question to yours on Stack Overflow</a>.</p> <p>Nginx is load balancing between the two Jenkins replicas you created. These two instances are not aware of each other and have separate data, so you alternate between two totally separate Jenkins instances.</p> <p>One way you can try solving this is by setting <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#session-affinity" rel="nofollow noreferrer">session affinity</a> on the ingress object:</p> <pre><code>nginx.ingress.kubernetes.io/affinity: cookie </code></pre> <p>This way your browser session sticks to one pod.</p> <p>Also remember to share the <code>$JENKINS_HOME</code> directory between these pods, e.g. using NFS volumes.</p> <p>And let me know if you find this helpful.</p>
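<p>For reference, a minimal sketch of what such an ingress could look like (the host, service name and port below are placeholders, not taken from your setup):</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: jenkins                  # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # stick each browser session to the pod that served the first request
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
  - host: jenkins.example.com    # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins   # placeholder service
          servicePort: 8080
</code></pre> <p>Keep in mind this only makes the routing sticky; it does not turn the two Jenkins instances into a real HA pair.</p>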
Matt
<p>I've set up a "hello world" ingress on Minikube as explained in <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">the tutorial</a>. The only difference is that I removed the specific hostname to use '*' instead. However, it only seems to work with the ingress controller provided by minikube (<code>minikube addons enable ingress</code>). When I try to disable it and use <code>helm install nginx-ingress stable/nginx-ingress</code> instead, I can no longer access the Hello World sample website. I'm getting a "connection refused" error instead:</p> <pre><code>$ kubectl get ingress NAME HOSTS ADDRESS PORTS AGE example-ingress * 192.168.64.6 80 6m23s $ minikube ip 192.168.64.6 $ curl -iv "192.168.64.6" * Rebuilt URL to: 192.168.64.6/ * Hostname was NOT found in DNS cache * Trying 192.168.64.6... * connect to 192.168.64.6 port 80 failed: Connection refused * Failed to connect to 192.168.64.6 port 80: Connection refused * Closing connection 0 curl: (7) Failed to connect to 192.168.64.6 port 80: Connection refused </code></pre> <p>If I switch back to the built-in addon, it works again:</p> <pre><code>$ helm uninstall nginx-ingress release "nginx-ingress" uninstalled $ minikube addons enable ingress ✅ ingress was successfully enabled $ curl -iv "192.168.64.6" * Rebuilt URL to: 192.168.64.6/ * Hostname was NOT found in DNS cache * Trying 192.168.64.6... * Connected to 192.168.64.6 (192.168.64.6) port 80 (#0) &gt; GET / HTTP/1.1 &gt; User-Agent: curl/7.38.0 &gt; Host: 192.168.64.6 &gt; Accept: */* &gt; &lt; HTTP/1.1 200 OK HTTP/1.1 200 OK * Server openresty/1.15.8.2 is not blacklisted &lt; Server: openresty/1.15.8.2 Server: openresty/1.15.8.2 &lt; Date: Sun, 09 Feb 2020 07:06:58 GMT Date: Sun, 09 Feb 2020 07:06:58 GMT &lt; Content-Type: text/plain; charset=utf-8 Content-Type: text/plain; charset=utf-8 &lt; Content-Length: 59 Content-Length: 59 &lt; Connection: keep-alive Connection: keep-alive &lt; Hello, world! Version: 1.0.0 Hostname: web-9bbd7b488-wsvsw * Connection #0 to host 192.168.64.6 left intact </code></pre> <p>Is it possible to install and use this helm chart on minikube correctly?</p>
Sagi Mann
<p>I disabled the ingress addon and installed nginx ingress with the helm chart you mentioned. I tested it and have a solution for you.</p> <p>When you run:</p> <pre><code>$ kubectl get services nginx-ingress-controller </code></pre> <p>you should see this output:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-controller LoadBalancer 10.96.245.213 &lt;pending&gt; 80:30240/TCP,443:31224/TCP 50s </code></pre> <p>Notice that EXTERNAL-IP is in pending state.</p> <p>minikube won't assign this IP by itself; you need to do it manually.</p> <p>Run <code>kubectl edit svc nginx-ingress-controller</code> and add an <code>externalIPs</code> field under <code>spec:</code> like the following:</p> <pre><code>spec: externalIPs: - 192.168.39.241 # minikube ip </code></pre> <p>Now let's see why this works. Normally, when running Kubernetes in a cloud environment and creating a service of type LoadBalancer, the cloud controller would create a load balancer and update the IP of the service. But because you are running it on minikube, where no cloud-specific features are available, you need to add the address manually.</p> <p>This can be any IP of any interface associated with your cluster, so it should also work when you have more nodes. You can add the IP of any interface of your nodes; Kubernetes will bind the port on this interface, and from then on you can send traffic to it and it will get forwarded to the appropriate service/pod.</p> <p>Let me know if it was helpful.</p>
Matt
<p>I'm trying to connect to Hyperkit to check containers running on this VM.</p> <p>All I'm getting now is <code>[screen is terminating]</code></p> <p>Here is what I do:</p> <pre><code>MacBook-Pro-Karol: ~ → minikube start --driver=hyperkit 😄 minikube v1.12.3 na Darwin 10.15.6 ✨ Using the hyperkit driver based on user configuration 👍 Starting control plane node minikube in cluster minikube 🔥 Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ... 🐳 preparing Kubernetes v1.18.3 on Docker 19.03.12... 🔎 Verifying Kubernetes components... 🌟 Enabled addons: default-storageclass, storage-provisioner 🏄 Ready! kubectl is configured to be used with &quot;minikube&quot;. MacBook-Pro-Karol: ~ → sudo screen /Users/karol/.minikube/machines/minikube/tty Password: [screen is terminating] MacBook-Pro-Karol: ~ → screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty [screen is terminating] Cannot exec '/Users/karol/Library/Containers/com.docker.docker/Data/vms/0/tty': Permission denied → sudo screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty Password: [screen is terminating] Cannot exec '/Users/karol/Library/Containers/com.docker.docker/Data/vms/0/tty': Operation not permitted </code></pre> <p>Any help would be appreciated.</p>
Karol Gasienica
<p>You can use <code>minikube ssh</code> to <a href="https://minikube.sigs.k8s.io/docs/commands/ssh/" rel="nofollow noreferrer">log in</a> to the VM that minikube runs in:</p> <blockquote> <p>Log into or run a command on a machine with SSH; similar to ‘docker-machine ssh’.</p> </blockquote> <pre><code>minikube ssh [flags] </code></pre> <p>and then use <code>docker ps</code> to check the running containers inside this VM:</p> <pre><code>$ docker ps | grep kube-api f53aebd26287 7e28efa976bd &quot;kube-apiserver --ad…&quot; 16 minutes ago k8s_kube-apiserver_kube-apiserver-minikube_kube-system_8009646ba816631d0677c2668886baad_1 12188a523d12 k8s.gcr.io/pause:3.2 &quot;/pause&quot; 16 minutes ago k8s_POD_kube-apiserver-minikube_kube-system_8009646ba816631d0677c2668886baad_1 </code></pre>
acid_fuji
<p>I am new to kubernetes and I finally realized how to launch the metrics-server as documented in <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">kubernetes-sigs/metrics-server</a>. In case someone else wonders: you need to deploy it on the master node and also have at least one worker in the cluster.</p> <p>So I get this error:</p> <pre class="lang-sh prettyprint-override"><code>E0818 15:25:22.835094 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:&lt;hostname-master&gt;: unable to fetch metrics from Kubelet &lt;hostname-master&gt; (&lt;hostname-master&gt;): Get https://&lt;hostname-master&gt;:10250/stats/summary?only_cpu_and_memory=true: x509: certificate signed by unknown authority, unable to fully scrape metrics from source kubelet_summary:&lt;hostname-worker&gt;: unable to fetch metrics from Kubelet &lt;hostname-worker&gt; (&lt;hostname-worker&gt;): Get https://&lt;hostname-worker&gt;:10250/stats/summary?only_cpu_and_memory=true: x509: certificate signed by unknown authority] </code></pre> <p>I am using my own CAs (not self signed) and I have modified the components.yml file (sample):</p> <pre class="lang-sh prettyprint-override"><code>args: - --cert-dir=/tmp/metricsServerCas - --secure-port=4443 - --kubelet-preferred-address-types=Hostname </code></pre> <p>I know that I can disable TLS by using the flag <code>--kubelet-insecure-tls</code>; I have already tried it. I want to use my own CAs for extra security.</p> <p>I have seen many other relevant questions (a few samples), e.g.:</p> <p><a href="https://stackoverflow.com/questions/36939381/x509-certificate-signed-by-unknown-authority-kubernetes">x509 certificate signed by unknown authority- Kubernetes</a> and <a href="https://stackoverflow.com/questions/46234295/kubectl-unable-to-connect-to-server-x509-certificate-signed-by-unknown-authori">kubectl unable to connect to server: x509: certificate signed by unknown authority</a></p> <p>Although I have already applied chown to my <code>$HOME/.kube/config</code>, I still see this error.</p> <p>Where am I going wrong?</p> <p><strong>Update:</strong> On the worker I am creating a directory e.g. <code>/tmp/ca</code> and I add the ca file(s) in the directory.</p> <p>I am not really good yet with mount points and I assume that I am doing something wrong. The default syntax of the images can be found here <a href="https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.3.7" rel="nofollow noreferrer">kubernetes-sigs/metrics-server/v0.3.7</a> (see components.yml file).</p> <p>I tried to create a directory on my worker e.g. /tmp/ca and I modified the flag <code>--cert-dir=/tmp/ca</code> and <code>mountPath: /tmp/ca</code></p> <p>When I deploy the file, e.g.:</p> <pre class="lang-sh prettyprint-override"><code>kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml </code></pre> <p>I keep getting this error from the metrics-server-xxxx pod:</p> <pre class="lang-sh prettyprint-override"><code>panic: open /tmp/client-ca-file805316981: read-only file system </code></pre> <p>Although I have given full access to the directory e.g.:</p> <pre class="lang-sh prettyprint-override"><code>$ ls -la /tmp/ca total 8 drwxr-xr-x. 2 user user 20 Aug 19 16:59 . drwxrwxrwt. 18 root root 4096 Aug 19 17:34 .. -rwxr-xr-x. 
1 user user 1025 Aug 19 16:59 ca.crt </code></pre> <p>I am not sure where I am going wrong.</p> <p>How is meant to be configured so someone can use non self signed certificates? I can see that most people are using non SSL which I would like to avoid.</p> <p>Sample of my args in the image:</p> <pre class="lang-sh prettyprint-override"><code>spec: selector: matchLabels: k8s-app: metrics-server template: metadata: name: metrics-server labels: k8s-app: metrics-server spec: serviceAccountName: metrics-server volumes: # mount in tmp so we can safely use from-scratch images and/or read-only containers - name: tmp-dir emptyDir: {} containers: - name: metrics-server image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7 imagePullPolicy: IfNotPresent args: - --cert-dir=/tmp/ca - --secure-port=4443 - --kubelet-preferred-address-types=Hostname ports: - name: main-port containerPort: 4443 protocol: TCP securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - name: tmp-dir mountPath: /tmp/ca nodeSelector: kubernetes.io/os: linux kubernetes.io/arch: &quot;amd64&quot; </code></pre> <p><strong>Update 2:</strong> Adding curl command from Master to Worker including error output:</p> <pre class="lang-sh prettyprint-override"><code>$ curl --cacert /etc/kubernetes/pki/ca.crt https://node_hostname:10250/stats/summary?only_cpu_and_memory=true curl: (60) Peer's certificate issuer has been marked as not trusted by the user. More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a &quot;bundle&quot; of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. </code></pre>
Thanos
<p>Posting this answer as a community wiki to give better visibility as the solution was posted in the comments.</p> <blockquote> <p>The version that I used before was 1.18.2 and metrics server v0.3.6. Deployment was through kubeadm. Yes all requirements was exactly as the metrics-server/requirements. The good news is that I got it running by upgrading my k8s version on 1.19.0 and using the latest version v0.3.7. It works with self signed certificates.</p> </blockquote> <p>The issue was resolved by upgrading:</p> <ul> <li><code>Kubernetes</code>: <code>1.18.2</code> -&gt; <code>1.19.0</code></li> <li><code>Metrics-server</code>: <code>0.3.6</code> -&gt; <code>0.3.7</code></li> </ul> <p>This upgrade allowed to run <code>metrics-server</code> with <code>tls</code> enabled (self-signed certificates).</p> <hr /> <p>Additional resources that could help when deploying <code>metrics-server</code> with <code>tls</code>:</p> <ul> <li><em><a href="https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-to-run-metrics-server-securely" rel="nofollow noreferrer">Github.com: Kubernetes-sigs: Metrics-server: FAQ: How to run metrics-server-securely</a></em></li> </ul> <blockquote> <p>How to run metrics-server securely? Suggested configuration:</p> <ul> <li>Cluster with <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> enabled</li> <li>Kubelet <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options" rel="nofollow noreferrer">read-only port</a> port disabled</li> <li>Validate kubelet certificate by mounting CA file and providing --kubelet-certificate-authority flag to metrics server</li> <li>Avoid passing insecure flags to metrics server (--deprecated-kubelet-completely-insecure, --kubelet-insecure-tls)</li> <li>Consider using your own certificates (--tls-cert-file, --tls-private-key-file)</li> </ul> </blockquote> <ul> <li><em><a href="https://github.com/kubernetes-sigs/metrics-server/issues/146" rel="nofollow noreferrer">Github.com: Metrics-server: x509: certificate signed by unknown authority</a></em></li> <li><em><a href="https://ftclausen.github.io/general/setting_up_k8s_with_metrics_server/" rel="nofollow noreferrer">Ftclausen.github.io: Setting up K8S with metrics-server</a></em></li> </ul>
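<p>For illustration, the &quot;mount a CA and point <code>--kubelet-certificate-authority</code> at it&quot; suggestion from the FAQ could look roughly like this (the ConfigMap name <code>kubelet-ca</code> and the mount path are assumptions, not part of the official manifest):</p> <pre><code>spec:
  containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7
    args:
    - --secure-port=4443
    - --kubelet-preferred-address-types=Hostname
    # CA bundle that signed the kubelets' serving certificates
    - --kubelet-certificate-authority=/ca/ca.crt
    volumeMounts:
    - name: kubelet-ca
      mountPath: /ca
      readOnly: true
  volumes:
  - name: kubelet-ca
    configMap:
      name: kubelet-ca            # hypothetical ConfigMap containing ca.crt
</code></pre>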
Dawid Kruk
<p>I am trying to configure a Redis cluster onto Kubernetes with an Istio Mesh installed. The Redis Cluster is able to be created without Istio and each Pods are auto-injected with an Istio Proxy (Envoy). However, with Istio installed and the Istio proxy attached to each Redis Pods, the Redis cluster is not able to "meet" correctly through the CLUSTER MEET command from the CLI.</p> <p>For instance, I have Redis Pod A (slot 0 - 10919) and Redis Pod B (slot 10920 - 16383). This is the result after attempting a CLUSTER MEET command between them (cluster meet ClusterIPForRedisPodB 6379). </p> <p>For Redis Pod A, the cluster info is updated and includes Redis Pod B:</p> <p><a href="https://i.stack.imgur.com/OkRU4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OkRU4.png" alt="Redis Pod A"></a></p> <p>On the contrary, for Redis Pod B, the cluster info is not updated and does not include Redis Pod A:</p> <p><a href="https://i.stack.imgur.com/UYnp0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UYnp0.png" alt="Redis Pod B"></a></p> <p>I am able to send curl and netcat responses between the two Pods for Port 16379 and 6379. In addition, Envoy appears to have these ports opened as well. </p>
Kevin Chow
<p>I have replicated your issue and found a solution to your problem.</p> <p>Let me start by explaining the cause of your problem.</p> <p>Redis gossip protocol works this way: when you type <code>cluster meet &lt;ip&gt; &lt;port&gt;</code> on <em>redis1</em>, <em>redis1</em> opens a tcp connection to <em>redis2</em>. In the normal case, when <em>redis2</em> receives a connection, it accepts it, looks up the source IP address of who is connecting, and also opens a tcp connection to that address, so to <em>redis1</em> in this case. (More on how the gossip protocol works in redis can be found in the <a href="https://redis.io/topics/cluster-spec" rel="nofollow noreferrer">redis documentation</a>, or in <a href="https://medium.com/@Alibaba_Cloud/in-depth-analysis-of-redis-cluster-gossip-protocol-344b01f71c03" rel="nofollow noreferrer">this article</a>)</p> <p>Here comes the <em>istio</em> part. Istio by default configures envoy as a typical proxy and as you can read in the <a href="https://istio.io/docs/reference/config/istio.mesh.v1alpha1/#ProxyConfig-InboundInterceptionMode" rel="nofollow noreferrer">istio documentation</a>:<br><br> <a href="https://i.stack.imgur.com/AwfJB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AwfJB.png" alt="interception mode"></a></p> <p>Istio by default is using <code>REDIRECT</code> proxying and, as stated in the documentation:</p> <blockquote> <p>This mode loses source IP addresses during redirection</p> </blockquote> <p>This is the source of our problem.</p> <p>When <em>redis2</em> receives a connection, it sees it as coming from localhost. Envoy has lost the source IP address of <em>redis1</em>, and <em>redis2</em> is now unable to open a connection back to <em>redis1</em>.</p> <p>Now, we have some options:</p> <ol> <li>you can try to change the proxying mode to <code>TPROXY</code> (I tried it but couldn't make it work)</li> <li>use redis built-in config variable</li> </ol> <p>Let's take a look at the second option a bit closer because this is the one that worked for me. In the <a href="https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf" rel="nofollow noreferrer"><code>redis.conf</code></a> file you can find this section:</p> <blockquote> <h3>CLUSTER DOCKER/NAT support</h3> <p>In certain deployments, Redis Cluster nodes address discovery fails, because addresses are NAT-ted or because ports are forwarded (the typical case is Docker and other containers).</p> <p>In order to make Redis Cluster working in such environments, a static configuration where each node knows its public address is needed. The following two options are used for this scope, and are:</p> <ul> <li>cluster-announce-ip</li> <li>cluster-announce-port</li> <li>cluster-announce-bus-port</li> </ul> <p>Each instruct the node about its address, client port, and cluster message bus port. The information is then published in the header of the bus packets so that other nodes will be able to correctly map the address of the node publishing the information.</p> <p>If the above options are not used, the normal Redis Cluster auto-detection will be used instead.</p> <p>Note that when remapped, the bus port may not be at the fixed offset of clients port + 10000, so you can specify any port and bus-port depending on how they get remapped. 
If the bus-port is not set, a fixed offset of 10000 will be used as usually.</p> <p>Example:</p> <p>cluster-announce-ip 10.1.1.5<br> cluster-announce-port 6379<br> cluster-announce-bus-port 6380<br></p> </blockquote> <p><strong>We need to set the <code>cluster-announce-ip</code> variable to the redis pod's own IP address.</strong></p> <p>You can do it, for example, by modifying the <code>redis-cluster</code> ConfigMap like this (it's a modified redis ConfigMap from <a href="https://rancher.com/blog/2019/deploying-redis-cluster/" rel="nofollow noreferrer">this article</a>):</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: redis-cluster data: update-node.sh: | #!/bin/sh REDIS_NODES="/data/nodes.conf" sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES} cp /conf/redis.conf /redis.conf # &lt;------HERE----- sed -i "s/MY_IP/${POD_IP}/" /redis.conf # &lt;------HERE----- exec "$@" redis.conf: |+ cluster-enabled yes cluster-require-full-coverage no cluster-node-timeout 15000 cluster-config-file /data/nodes.conf cluster-migration-barrier 1 appendonly yes protected-mode no cluster-announce-ip MY_IP # &lt;------HERE----- </code></pre> <p>Also remember to change your container's <code>command</code> like this to point to the right <code>redis.conf</code> file:</p> <pre><code>command: ["/conf/update-node.sh", "redis-server", "/redis.conf"] </code></pre> <p>Every redis node will now advertise this address as its own, so the other redis nodes will know how to connect to it.</p> <p>Let me know if it helped.</p>
Matt
<p>I'm trying to install SonarQube in a Kubernetes environment, which needs PostgreSQL. I'm using an external Postgres instance and I have the credentials kv secret set in Vault. The SonarQube helm chart creates an environment variable in the container which takes the username and password for Postgres.</p> <p>How can I inject the secret from my Vault into an environment variable of the SonarQube pod running on Kubernetes?</p> <p>Creating a Kubernetes secret and using the secret in the helm chart works, but we are managing all secrets in Vault and need Vault secrets to be injected into pods.</p> <p>Thanks</p>
Krishna Arani
<p>There are 2 ways to inject vault secrets into the k8s pod as ENV vars.</p> <h1>1) Use the Vault Agent Injector</h1> <p>A template should be created that exports a Vault secret as an environment variable.</p> <pre><code>spec: template: metadata: annotations: # Environment variable export template vault.hashicorp.com/agent-inject-template-config: | {{ with secret &quot;secret/data/web&quot; -}} export api_key=&quot;{{ .Data.data.payments_api_key }}&quot; {{- end }} </code></pre> <p>And the application container should source those files during startup.</p> <pre><code>args: ['sh', '-c', 'source /vault/secrets/config &amp;&amp; &lt;entrypoint script&gt;'] </code></pre> <p>Reference: <a href="https://www.vaultproject.io/docs/platform/k8s/injector/examples#environment-variable-example" rel="noreferrer">https://www.vaultproject.io/docs/platform/k8s/injector/examples#environment-variable-example</a></p> <h1>2) Use banzaicloud bank-vaults</h1> <p>Reference: <a href="https://banzaicloud.com/blog/inject-secrets-into-pods-vault-revisited/" rel="noreferrer">https://banzaicloud.com/blog/inject-secrets-into-pods-vault-revisited/</a>.</p> <h1>Comments:</h1> <p>Both methods bypass Kubernetes Secrets, so the secret values are not stored in etcd. In addition, pods are unaware of Vault in both methods, so either one can be adopted without a deep comparison.</p> <p><strong>For vault-k8s and vault-helm users, I recommend the first method.</strong></p>
James Wang
<p>I am trying to run minikube on Ubuntu 18.04 and getting an error while starting minikube. Please help. I tried minikube delete and start again, but it doesn't work.</p> <pre><code>Aspire-E5-573G:~$ minikube start --driver=podman --container-runtime=cri-o 😄 minikube v1.13.0 on Ubuntu 18.04 ❗ Using podman 2 is not supported yet. your version is &quot;2.0.6&quot;. minikube might not work. use at your own risk. ✨ Using the podman (experimental) driver based on existing profile 👍 Starting control plane node minikube in cluster minikube 💾 Downloading Kubernetes v1.19.0 preload ... &gt; preloaded-images-k8s-v6-v1.19.0-cri-o-overlay-amd64.tar.lz4: 551.13 MiB / 🔄 Restarting existing podman container for &quot;minikube&quot; ... 🤦 StartHost failed, but will try again: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125 stdout: stderr: Error: error inspecting object: no such container minikube 🔄 Restarting existing podman container for &quot;minikube&quot; ... 😿 Failed to start podman container. Running &quot;minikube delete&quot; may fix it: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125 stdout: stderr: Error: error inspecting object: no such container minikube ❌ Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: sudo -n podman container inspect -f minikube: exit status 125 stdout: stderr: Error: error inspecting object: no such container minikube 😿 If the above advice does not help, please let us know: 👉 https://github.com/kubernetes/minikube/issues/new/choose </code></pre>
Ashwin Agarkhed
<p>As the error already indicates, podman 2 is not yet supported.</p> <pre><code>Using podman 2 is not supported yet. your version is &quot;2.0.6&quot;. minikube might not work. use at your own risk. </code></pre> <p>The workaround for this, as described <a href="https://github.com/kubernetes/minikube/issues/9120" rel="nofollow noreferrer">here</a>, is to use podman version 1.9.3.</p> <p>Here's the <a href="https://github.com/kubernetes/minikube/pull/8784/files/0d78fe56afd08ca789b91b18fc40b18329f63374" rel="nofollow noreferrer">merge</a> that was done to warn about podman version 2.</p>
acid_fuji
<p>This is a purely theoretical question. A standard Kubernetes cluster is given with autoscaling in place. If memory goes above a certain targetMemUtilizationPercentage, then a new pod is started and it takes on the flow of requests that is coming to the contained service. The number of minReplicas is set to 1 and the number of maxReplicas is set to 5.</p> <p>What happens when the number of pods that are online reaches the maximum (5 in our case) and requests from clients are still coming towards the node? Are these requests buffered somewhere or are they discarded? Can I take any actions to avoid request loss?</p>
otto
<p>Natively, Kubernetes does not support message queue buffering. Depending on the scenario and setup you use, your requests will most likely time out. To manage those efficiently you'll need a custom resource running inside the Kubernetes cluster.</p> <p>In such situations it is very common to use a message broker, which ensures that communication between microservices is reliable and stable, that the messages are managed and monitored within the system, and that messages don’t get lost.</p> <p><a href="https://www.rabbitmq.com/" rel="nofollow noreferrer">RabbitMQ</a>, <a href="https://kafka.apache.org/" rel="nofollow noreferrer">Kafka</a> and <a href="https://redis.io/" rel="nofollow noreferrer">Redis</a> appear to be the most popular, but choosing the right one will heavily depend on your requirements and the features needed.</p> <p>Worth noting, since Kubernetes essentially runs on Linux, is that Linux itself also manages/limits the requests coming in on a socket. You may want to read more about it <a href="https://man7.org/linux/man-pages/man2/listen.2.html" rel="nofollow noreferrer">here</a>.</p> <p>Another thing is that if you have pod limits set or a lack of resources, it is likely that pods will be restarted or the cluster will become unstable. Usually you can prevent this by configuring some kind of &quot;circuit breaker&quot; to limit the amount of requests that can go to the backend without overloading it. If the amount of requests goes beyond the circuit breaker threshold, the excess requests will be dropped.</p> <p>It is better to drop some requests than to have a <a href="https://en.wikipedia.org/wiki/Cascading_failure" rel="nofollow noreferrer">cascading failure</a>.</p>
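<p>As an illustration only: if the service happens to be exposed through ingress-nginx, a simple edge-level limit can be expressed with annotations like the ones below (the ingress name, host and service are placeholders, and the numbers are arbitrary):</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-service                 # placeholder
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # reject excess traffic at the edge instead of overloading the backend
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
spec:
  rules:
  - host: my-service.example.com   # placeholder
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service  # placeholder
          servicePort: 80
</code></pre> <p>Requests above the limit get an HTTP 503 from nginx, which is the &quot;drop some requests&quot; trade-off described above.</p>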
acid_fuji
<p>I have a kubernetes pod that is staying in Pending state. When I describe the pod, I am not seeing why it fails to start, I can just see <code>Back-off restarting failed container</code>.</p> <p>This is what I can see when I describe the pod.</p> <p><code>kubectl describe po jenkins-68d5474964-slpkj -n infrastructure</code></p> <pre><code>Name: jenkins-68d5474964-slpkj Namespace: infrastructure Priority: 0 PriorityClassName: &lt;none&gt; Node: ip-172-20-120-29.eu-west-1.compute.internal/172.20.120.29 Start Time: Fri, 05 Feb 2021 17:10:34 +0100 Labels: app=jenkins chart=jenkins-0.35.0 component=jenkins-jenkins-master heritage=Tiller pod-template-hash=2481030520 release=jenkins Annotations: checksum/config=fc546aa316b7bb9bd6a7cbeb69562ca9f224dbfe53973411f97fea27e90cd4d7 Status: Pending IP: 100.125.247.153 Controlled By: ReplicaSet/jenkins-68d5474964 Init Containers: copy-default-config: Container ID: docker://a6ce91864c181d4fc851afdd4a6dc2258c23e75bbed6981fe1cafad74a764ff2 Image: jenkins/jenkins:2.248 Image ID: docker-pullable://jenkins/jenkins@sha256:352f10079331b1e63c170b6f4b5dc5e2367728f0da00b6ad34424b2b2476426a Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sh /var/jenkins_config/apply_config.sh State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Fri, 05 Feb 2021 17:15:16 +0100 Finished: Fri, 05 Feb 2021 17:15:36 +0100 Ready: False Restart Count: 5 Limits: cpu: 2560m memory: 2Gi Requests: cpu: 50m memory: 256Mi Environment: ADMIN_PASSWORD: &lt;set to the key 'jenkins-admin-password' in secret 'jenkins'&gt; Optional: false ADMIN_USER: &lt;set to the key 'jenkins-admin-user' in secret 'jenkins'&gt; Optional: false Mounts: /usr/share/jenkins/ref/secrets/ from secrets-dir (rw) /var/jenkins_config from jenkins-config (rw) /var/jenkins_home from jenkins-home (rw) /var/jenkins_plugins from plugin-dir (rw) /var/run/docker.sock from docker-sock (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-5tbbb (rw) Containers: jenkins: Container ID: Image: jenkins/jenkins:2.248 Image ID: Ports: 8080/TCP, 50000/TCP Host Ports: 0/TCP, 0/TCP Args: --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD) --argumentsRealm.roles.$(ADMIN_USER)=admin State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Limits: cpu: 2560m memory: 2Gi Requests: cpu: 50m memory: 256Mi Environment: JAVA_OPTS: JENKINS_OPTS: JENKINS_SLAVE_AGENT_PORT: 50000 ADMIN_PASSWORD: &lt;set to the key 'jenkins-admin-password' in secret 'jenkins'&gt; Optional: false ADMIN_USER: &lt;set to the key 'jenkins-admin-user' in secret 'jenkins'&gt; Optional: false Mounts: /usr/share/jenkins/ref/plugins/ from plugin-dir (rw) /usr/share/jenkins/ref/secrets/ from secrets-dir (rw) /var/jenkins_config from jenkins-config (ro) /var/jenkins_home from jenkins-home (rw) /var/run/docker.sock from docker-sock (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-5tbbb (rw) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: jenkins-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: jenkins Optional: false plugin-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: secrets-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: jenkins-home: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: jenkins ReadOnly: false default-token-5tbbb: Type: Secret (a volume populated by a Secret) SecretName: 
default-token-5tbbb Optional: false docker-sock: Type: HostPath (bare host directory volume) Path: /var/run/docker.sock HostPathType: QoS Class: Burstable Node-Selectors: nodePool=ci Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 7m default-scheduler Successfully assigned infrastructure/jenkins-68d5474964-slpkj to ip-172-20-120-29.eu-west-1.compute.internal Normal Started 5m (x4 over 7m) kubelet, ip-172-20-120-29.eu-west-1.compute.internal Started container Normal Pulling 4m (x5 over 7m) kubelet, ip-172-20-120-29.eu-west-1.compute.internal pulling image &quot;jenkins/jenkins:2.248&quot; Normal Pulled 4m (x5 over 7m) kubelet, ip-172-20-120-29.eu-west-1.compute.internal Successfully pulled image &quot;jenkins/jenkins:2.248&quot; Normal Created 4m (x5 over 7m) kubelet, ip-172-20-120-29.eu-west-1.compute.internal Created container Warning BackOff 2m (x14 over 6m) kubelet, ip-172-20-120-29.eu-west-1.compute.internal Back-off restarting failed container </code></pre> <p>Once I run <code>helm upgrade</code> for that container, I can see:</p> <pre><code>RESOURCES: ==&gt; v1/ConfigMap NAME DATA AGE jenkins 5 441d jenkins-configs 1 441d jenkins-tests 1 441d ==&gt; v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE jenkins 0/1 1 0 441d ==&gt; v1/PersistentVolumeClaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE jenkins Bound pvc-8813319f-0d37-11ea-9864-0a7b1d347c8a 4Gi RWO aws-efs 441d ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE jenkins-7b85495f65-2w5mv 0/1 Init:0/1 3 2m9s ==&gt; v1/Secret NAME TYPE DATA AGE jenkins Opaque 2 441d jenkins-secrets Opaque 3 441d ==&gt; v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE jenkins LoadBalancer 100.65.2.235 a881a20a40d37... 8080:31962/TCP 441d jenkins-agent ClusterIP 100.64.69.113 &lt;none&gt; 50000/TCP 441d ==&gt; v1/ServiceAccount NAME SECRETS AGE jenkins 1 441d ==&gt; v1beta1/ClusterRoleBinding NAME AGE jenkins-role-binding 441d </code></pre> <p>Can someone advice?</p>
Bob
<p>Now you cannot get any logs with <code>kubectl logs pod_name</code> because the pod status is initializing. When you use the <code>kubectl logs</code> command:</p> <ul> <li>If the pod has multiple containers, you have to specify the container name explicitly.</li> <li>If you have only one container, there is no need to specify the container name.</li> <li>If you want to get logs of initContainers, you need to specify the initContainer name.</li> </ul> <p>In your case, the pod has one init container and it seems to be stuck now.</p> <pre><code>Init Containers: copy-default-config: Command: sh /var/jenkins_config/apply_config.sh </code></pre> <p>You can check the log of this container.</p> <pre><code>kubectl logs jenkins-68d5474964-slpkj copy-default-config </code></pre>
James Wang
<p>I'm running a pipeline on gitlab which uses .gitlab-ci.yml and a Dockerfile to build an image and push it to the GCP container registry. Then I use that image to make a deployment on Kubernetes.</p> <p>I'm using CI/CD private variables to inject the <strong>username (service_now_qa_username)</strong> and <strong>password (service_now_qa_password)</strong> and passing them as arguments to the docker build command, which are then used in the <strong>Dockerfile</strong>.</p> <p>This is what my gitlab-ci.yml file looks like:</p> <pre><code>variables: BUILD_ARGS: &quot;--build-arg USERNAME=$service_now_qa_username --build-arg PASSWORD=$service_now_qa_password&quot; script: - docker build -t &quot;$IMAGE_NAME:$CI_COMMIT_REF_NAME&quot; -t &quot;$IMAGE_NAME:latest&quot; $BUILD_ARGS . </code></pre> <p>This is what my Dockerfile looks like:</p> <pre><code>ARG USERNAME ARG PASSWORD ENV USER=$USERNAME ENV PASS=$PASSWORD ENTRYPOINT java -jar servicenowapi.jar -conf config.json -username ${USER} -password ${PASS} </code></pre> <p>Now I want to override these <strong>username</strong> and <strong>password</strong> arguments using the Kubernetes <strong>Deployment.yaml</strong> file.</p> <p>My end goal is to use the Kubernetes Deployment.yaml to supply the <strong>username</strong> and <strong>password</strong> and have them passed as arguments to the Dockerfile.</p>
M Usama Alvi
<p><code>ARG</code>s are build-time variables, so there is no way to pass values to them at runtime.</p> <p>By the way, you can override the <code>entrypoint</code> as follows in a Kubernetes deployment and consume environment variables (<code>ENTRYPOINT</code> in a Dockerfile corresponds to <code>command</code> in the Kubernetes container spec).</p> <pre class="lang-bash prettyprint-override"><code>containers: - name: xxxx env: - name: USER value: &quot;Input your user&quot; - name: PASS value: &quot;Input your pass&quot; command: [&quot;java&quot;, &quot;-jar&quot;, &quot;servicenowapi.jar&quot;, &quot;-conf&quot;, &quot;config.json&quot;, &quot;-username&quot;, &quot;$(USER)&quot;, &quot;-password&quot;, &quot;$(PASS)&quot;] </code></pre> <p>The main things here are to define env vars, to use <code>$()</code> instead of <code>${}</code>, and to split the command into separate array elements, since no shell is involved when using <code>command</code>.</p>
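<p>If you would rather not keep the credentials as plain values in the manifest, the same environment variables could be populated from a Kubernetes Secret instead (the Secret name <code>servicenow-credentials</code> and its keys below are hypothetical):</p> <pre><code>env:
- name: USER
  valueFrom:
    secretKeyRef:
      name: servicenow-credentials   # hypothetical Secret
      key: username
- name: PASS
  valueFrom:
    secretKeyRef:
      name: servicenow-credentials
      key: password
</code></pre>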
James Wang
<p>I need to configure the probes in kubernetes. In my probe endpoints I use HTTP and basic auth. Here is an example from my deployment.yml:</p> <pre><code>livenessProbe: httpGet: path: /actuator/health/liveness port: 8080 scheme: HTTP httpHeaders: - name: Authorization value: Basic dXNlcjpwYXNzd29yZA== </code></pre> <p>Is it possible to configure it to use a secret value in the Authorization header? As far as I know the header must be 'user:password' encoded in base64, otherwise it won't work. I keep the user and password in my application's application.yml; the ConfigMap for this project was created on the basis of application.yml. Is it possible to automatically retrieve the user and password from the ConfigMap, base64-encode them, and assign them to a variable or secret? Or is there any possibility to keep the user and password in a Secret and use them in the probe? (I don't mean keeping 'user:password' already base64-encoded in a Secret, only basic auth.)</p> <p>I will be very grateful for any answer.</p>
annaskwq
<blockquote> <p>Is it possible to automatically retrieve the user and password from the ConfigMap, base64-encode them, and assign them to a variable or secret?</p> </blockquote> <p>The short answer is no. If you look into the docs you will notice that Kubernetes does not have this functionality. All available options are listed in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#httpheader-v1-core" rel="nofollow noreferrer">Kubernetes API</a> reference.</p> <p>You will have to create a custom solution to make this functionality possible, for example a script of your own that reads that ConfigMap (e.g. via the API) and builds the header; one possible workaround is sketched below.</p>
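<p>A rough sketch of such a workaround, assuming <code>curl</code> is available in the container image and the credentials are stored in a hypothetical Secret named <code>probe-credentials</code>: expose them as environment variables and switch to an <code>exec</code> probe, so the header never has to be hard-coded in the manifest.</p> <pre><code>env:
- name: PROBE_USER
  valueFrom:
    secretKeyRef:
      name: probe-credentials   # hypothetical Secret
      key: username
- name: PROBE_PASS
  valueFrom:
    secretKeyRef:
      name: probe-credentials
      key: password
livenessProbe:
  exec:
    command:
    - sh
    - -c
    # curl builds the Basic auth header from the env vars at probe time
    - 'curl -sf -u "$PROBE_USER:$PROBE_PASS" http://localhost:8080/actuator/health/liveness'
</code></pre>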
acid_fuji
<p>I am trying to migrate <code>cert-manager</code> to API v1, I was able to migrate the Issuer to ClusterIssue (the first part of the YAML). However, I am dealing with a breaking change that there is no more <code>acme</code> on kind Certificate</p> <pre><code>apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-myapp-issuer namespace: cert-manager spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: wildcard-myapp-com solvers: - dns01: cloudDNS: serviceAccountSecretRef: name: clouddns-service-account key: key.json project: project-id --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: myapp-com-tls namespace: default spec: secretName: myapp-com-tls issuerRef: name: letsencrypt-myapp-issuer commonName: '*.myapp.com' dnsNames: - myapp.com acme: config: - dns01: provider: google-dns domains: - '*.myapp.com' - myapp.com </code></pre> <p>When I run kubectl apply I got the error:</p> <blockquote> <p>error validating data: ValidationError(Certificate.spec): unknown field &quot;acme&quot; in io.cert-manager.v1.Certificate.spec</p> </blockquote> <p>How can I migrate to the new version of cert-manager?</p>
Rodrigo
<p>As part of v0.8, a new format for configuring ACME Certificate resources has been introduced. Notably, challenge solver configuration has moved from the Certificate resource (under <code>certificate.spec.acme</code>) and now resides on your configured Issuer resource, under <code>issuer.spec.acme.solvers</code>.</p> <p>So the resulting manifests should be as follows:</p> <pre><code>apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-myapp-issuer namespace: cert-manager spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: wildcard-myapp-com solvers: - selector: dnsNames: - '*.myapp.com' - myapp.com dns01: cloudDNS: serviceAccountSecretRef: name: clouddns-service-account key: key.json project: project-id --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: myapp-com-tls namespace: default spec: secretName: myapp-com-tls issuerRef: name: letsencrypt-myapp-issuer commonName: '*.myapp.com' dnsNames: - myapp.com </code></pre>
James Wang
<p>I am working on migrating my applications to Kubernetes. I am using EKS.</p> <p>I want to distribute my pods to different nodes, to avoid having a single point of failure. I read about <code>pod-affinity</code> and <code>anti-affinity</code> and <code>required</code> and <code>preferred</code> mode.</p> <p><a href="https://stackoverflow.com/a/49900137/3333052">This answer</a> gives a very nice way to accomplish this.</p> <p>But my doubt is, let's say if I have 3 nodes, of which 2 are already full(resource-wise). If I use <code>requiredDuringSchedulingIgnoredDuringExecution</code>, k8s will spin up new nodes and will distribute the pods to each node. And if I use <code>preferredDuringSchedulingIgnoredDuringExecution</code>, it will check for preferred-nodes, and not finding different nodes, will deploy all pods on the third node only. In which case, it will again become a single point of failure.</p> <p>How do I solve this condition?</p> <p>One way I can think of is to have an over-provisioned cluster, so that there are always some extra nodes.</p> <p>The second way, I am not sure how to do this, but I think there should be a way of using both <code>requiredDuringSchedulingIgnoredDuringExecution</code> and <code>preferredDuringSchedulingIgnoredDuringExecution</code>.</p> <p>Can anyone help me with this? Am I missing something? How do people work with this condition?</p> <p>I am new to Kubernetes, so feel free to correct me if I am wrong or missing something.</p> <p>Thanks in advance</p> <p><strong>Note:</strong></p> <p>I don't have a problem running a few similar pods on the same node, just don't want all of the pods to be running on the same node, just because there was only one node available to deploy.</p>
kadamb
<p>I see you are trying to make sure that k8s will never schedule all pod replicas on the same node.</p> <p>It's not possible to create a hard requirement like this for the Kubernetes scheduler.</p> <p>The scheduler will try its best to schedule your application as evenly as possible, but in a situation where you have 2 nodes without spare resources and 1 node where all pod replicas would be scheduled, k8s can take one of the following actions (depending on configuration):</p> <ol> <li>schedule your pods on one node (best effort/default)</li> <li>run one pod and not schedule the rest of the pods at all (<code>anti-affinity</code> + <code>requiredDuringSchedulingIgnoredDuringExecution</code>)</li> <li>create new nodes for pods if needed (<code>anti-affinity</code> + <code>requiredDuringSchedulingIgnoredDuringExecution</code> + <code>cluster autoscaler</code>)</li> <li>start deleting pods from nodes to free resources for high-priority pods (<a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer"><code>priority based preemption</code></a>) and reschedule preempted pods if possible</li> </ol> <p>Also read this <a href="https://medium.com/expedia-group-tech/how-to-keep-your-kubernetes-deployments-balanced-across-multiple-zones-dfe719847b41" rel="nofollow noreferrer">article</a> to get a better understanding of how the scheduler makes its decisions.</p> <p>You can also use a <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="nofollow noreferrer">PodDisruptionBudget</a> to tell Kubernetes to make sure a specified number of replicas are always working (a small example follows below); remember though that:</p> <blockquote> <p>A disruption budget does not truly guarantee that the specified number/percentage of pods will always be up.</p> </blockquote> <p>Kubernetes will take it into consideration when making scheduling decisions.</p>
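<p>For completeness, a minimal PodDisruptionBudget could look like this (the name and label are placeholders, and it only protects against voluntary disruptions such as node drains, not against scheduling decisions):</p> <pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb            # placeholder name
spec:
  minAvailable: 2             # keep at least 2 replicas running during voluntary disruptions
  selector:
    matchLabels:
      app: my-app             # placeholder label, must match your Deployment's pod labels
</code></pre>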
Matt
<p>I need advice on k3s architecture. I would like to create a small cluster with one master and 3 agent nodes, but in my opinion the master node should be on a separate server so it has resources only for itself. However, I can't see <code>--disable-agent</code> in the k3s documentation anymore, and I read that it was buggy so they removed it. So I am wondering how I can have a server-only setup on one node, and whether that is a good practice at all?</p>
Most31
<p>Having the master node separated is a <a href="https://www.suse.com/c/kubernetes-cluster-vs-master-node/#:%7E:text=A%20master%20node%20is%20a,resembles%20a%20cluster%20in%20Kubernetes.&amp;text=All%20external%20communication%20to%20the,controllers%20for%20the%20running%20cluster." rel="nofollow noreferrer">typical</a> architecture that Kubernetes utilizes, since the master runs all the vital components (API server, controller manager, etcd and scheduler) necessary to manage your cluster. So it is a good idea to have it running on its own node (in K8s it is the only way, although it is possible to schedule pods on the master node if you untaint it).</p> <p>Here's a good <a href="https://levelup.gitconnected.com/kubernetes-cluster-with-k3s-and-multipass-7532361affa3" rel="nofollow noreferrer">article</a> about a multi-node k3s cluster that relates to your desired state.</p> <p>An alternative way would be the solution suggested in this <a href="https://github.com/rancher/k3s/issues/978" rel="nofollow noreferrer">github</a> issue related to <code>--disable-agent</code>: taint the master with the <code>NoExecute</code> key.</p>
acid_fuji
<p>I am trying to use ingress-nginx to implement nginx's proxy_pass specified with a URI to distribute different requests to different backend services. How do I configure it?</p> <p>The version of ingress-nginx I am using is 0.22. I tried using the nginx.ingress.kubernetes.io/rewrite-target annotation, but it does not have the effect I want.</p> <p>I want to use ingress-nginx to implement the following nginx configuration, where the proxy_pass directive is specified with a URI: </p> <pre><code>location /asset/api { proxy_pass http://asset_api/; } </code></pre>
xudahuo
<p>My fault: using nginx.ingress.kubernetes.io/rewrite-target can achieve the functionality above.</p> <p>Routing: </p> <pre><code>example.com/asset/web/(.*) </code></pre> <p>Then use the ingress annotation feature:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: '$1' </code></pre>
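<p>Assembled into a full manifest, the idea looks roughly like this (the ingress name, host and backend service are placeholders; on ingress-nginx 0.22+ the rewrite target must reference a capture group from the path):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: asset-web                 # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # $1 is the part captured by (.*) in the path below
    nginx.ingress.kubernetes.io/rewrite-target: '$1'
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /asset/web/(.*)
        backend:
          serviceName: asset-web  # placeholder service
          servicePort: 80
</code></pre>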
xudahuo
<p>Following <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/quick-start-guide.md#installation" rel="nofollow noreferrer">this</a> guide, I have deployed a "spark-on-k8s" operator inside my Kubernetes cluster.</p> <p>In the guide, it mentions how you can deploy Spark applications using kubectl commands. </p> <p>My question is whether it is possible to deploy Spark applications from inside a different pod instead of kubectl commands? Say, from some data pipeline applications such as Apache NiFi or Streamsets.</p>
toerq
<p>Yes, you can create a pod from inside another pod.</p> <p>All you need is to create a <em>ServiceAccount</em> with an appropriate <em>Role</em> that allows creating pods and assign it to the pod; then you can authenticate to the Kubernetes API server using the REST API or one of the k8s client libraries to create your pod.</p> <p>Read more on how to do it using the <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">kubernetes api</a> in the Kubernetes documentation.</p> <p>Also read <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">here</a> on how to create roles.</p> <p>And take a look <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">here</a> for a list of k8s client libraries.</p> <p>Let me know if it was helpful.</p>
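<p>As a rough sketch of the RBAC side (names and namespace are placeholders, and the verbs shown are just a common minimum for creating and watching pods):</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark-launcher            # placeholder name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-launcher-pod-creator
  namespace: default
subjects:
- kind: ServiceAccount
  name: spark-launcher
  namespace: default
roleRef:
  kind: Role
  name: pod-creator
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>The pod that launches the others would then set <code>serviceAccountName: spark-launcher</code> in its spec. If you go through the spark-on-k8s operator instead of creating raw pods, the resource to grant would be the operator's SparkApplication CRD rather than <code>pods</code>.</p>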
Matt
<p>I have a service deployed in Google Kubernetes Engine and have gotten the request to support TLS 1.3 connections on that service. Currently I do not get higher than TLS 1.2. Do I need to define my ingress differently?</p> <p>My ingress is</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: service-tls-__CI_ENVIRONMENT_SLUG__ namespace: __KUBE_NAMESPACE__ labels: app: __CI_ENVIRONMENT_SLUG__ ref: __CI_ENVIRONMENT_SLUG__ annotations: kubernetes.io/tls-acme: &quot;true&quot; kubernetes.io/ingress.class: &quot;nginx&quot; cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot; spec: tls: - hosts: - __SERVICE_TLS_ENDPOINT__ secretName: __CI_ENVIRONMENT_SLUG__-service-cert rules: - host: __SERVICE_TLS_ENDPOINT__ http: paths: - path: / backend: serviceName: service-__CI_ENVIRONMENT_SLUG__ servicePort: 8080 </code></pre> <p>Master version 1.17.13-gke.600 Pool version 1.17.13-gke.600</p>
Eelke
<p>Your <code>Ingress</code> resource looks good. I used the same setup as yours and received a message that <code>TLS 1.3</code> was supported.</p> <p>The official documentation states:</p> <blockquote> <h3>Default TLS Version and Ciphers</h3> <p>To provide the most secure baseline configuration possible,</p> <p>nginx-ingress defaults to using TLS 1.2 and <strong>1.3</strong> only, with a <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-ciphers" rel="nofollow noreferrer">secure set of TLS ciphers</a>.</p> </blockquote> <p>Please check which version of <code>nginx-ingress-controller</code> you are running:</p> <ul> <li><em><a href="https://kubernetes.github.io/ingress-nginx/deploy/#detect-installed-version" rel="nofollow noreferrer">Kubernetes.github.io: Ingress-nginx: Deploy: Detect installed version </a></em></li> </ul> <p>Also you can check if <code>TLS 1.3</code> is enabled in <code>nginx.conf</code> of your <code>nginx-ingress-controller</code> pod (<code>ssl_protocols TLSv1.2 TLSv1.3;</code>). You will need to <code>exec</code> into the pod.</p> <hr /> <h2>Troubleshooting steps for ensuring support for <code>TLS 1.3</code></h2> <hr /> <h3>Does your server (<code>nginx-ingress</code>) supports <code>TLS 1.3</code>?</h3> <p>You can check if your <code>Ingress</code> controller supports it by running an online analysis:</p> <ul> <li><em><a href="https://www.ssllabs.com/ssltest/analyze.html" rel="nofollow noreferrer">SSLLabs.com: SSLTest: Analyze</a></em></li> </ul> <p>You should get a message stating that <code>TLS 1.3</code> is supported:</p> <p><a href="https://i.stack.imgur.com/jzyV6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jzyV6.png" alt="ANALYSIS" /></a></p> <p>You can also use alternative online tools:</p> <ul> <li><em><a href="https://gf.dev/tls-test" rel="nofollow noreferrer">Geekflare.dev: TLS test</a></em></li> <li><em><a href="https://geekflare.com/ssl-test-certificate/" rel="nofollow noreferrer">Geekflare.com: 10 Online Tool to Test SSL, TLS and Latest Vulnerability</a></em></li> </ul> <hr /> <h3>Does your client supports <code>TLS 1.3</code>?</h3> <p>Please make sure that the client connecting to your <code>Ingress</code> supports <code>TLS 1.3</code>.</p> <p>The client connecting to the server was not mentioned in the question:</p> <ul> <li>Assuming that it's a <strong>web browser</strong>, you can check it with a similar tool to the one used for a server: <ul> <li><em><a href="https://clienttest.ssllabs.com:8443/ssltest/viewMyClient.html" rel="nofollow noreferrer">Clienttest.ssllabs.com:8443: SSLTest: ViewMyClient</a></em></li> </ul> </li> <li>Assuming that it is some other tool (<code>curl</code>, <code>nmap</code>, <code>openssl</code>, etc.) please check its documentation for more reference.</li> </ul> <hr /> <p>Additional reference:</p> <ul> <li><em><a href="https://github.com/kubernetes/ingress-nginx/issues/2384" rel="nofollow noreferrer">Github.com: Kubernetes: Ingress nginx: Enable tls 1.3 in the nginx image</a></em></li> <li><em><a href="https://en.wikipedia.org/wiki/Transport_Layer_Security_Adoption" rel="nofollow noreferrer">En.wikipedia.org: Wiki: Transport Layer Security Adoption</a></em></li> </ul>
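<p>If it turns out that <code>TLSv1.3</code> is missing from the controller's <code>ssl-protocols</code>, it can be enabled explicitly through the controller's ConfigMap. A minimal sketch (the ConfigMap name and namespace depend on how the controller was installed, so treat the ones below as placeholders):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration     # placeholder, use the ConfigMap your controller actually reads
  namespace: ingress-nginx      # placeholder namespace
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
</code></pre>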
Dawid Kruk
<p>I'm using the OSS <code>ingress-nginx</code> Ingress controller and trying to create a rewrite-target rule such that I can append a path string before my string match.</p> <p>If I wanted to create a rewrite rule with regex that matches <code>/matched/path</code> and rewrites that to <code>/prefix/matched/path</code>, how might I be able to do that?</p> <p>I've tried something like the following but it's no good, and I'm just confused about the syntax of this ingress definition:</p> <pre><code>metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: rules: - path: /(/prefix/)(/|$)(/matched/path)(.*) backend: serviceName: webapp1 </code></pre>
Xenyal
<blockquote> <p>If I wanted to create a rewrite rule with regex that matches <code>/matched/path</code> and rewrites that to <code>/prefix/matched/path</code>, how might I be able to do that?</p> </blockquote> <p>In order to achieve this you have to add <code>/prefix</code> into your <code>rewrite-target</code>. Here's a working example with ingress syntax from k8s v1.18:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: example-ingress-v118 annotations: nginx.ingress.kubernetes.io/rewrite-target: /prefix/$1 spec: rules: - http: paths: - path: /(matched/path/?.*) backend: serviceName: test servicePort: 80 </code></pre> <p>Since the syntax for the new ingress changed in 1.19 (see <a href="https://kubernetes.io/docs/setup/release/notes/" rel="nofollow noreferrer">release notes</a> and some small info at the end) I'm also including an example with it:</p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-ingress-v119 annotations: nginx.ingress.kubernetes.io/rewrite-target: /prefix/$1 spec: rules: - http: paths: - path: /(matched/path/?.*) pathType: Prefix backend: service: name: test port: number: 80 </code></pre> <p>Here is a test with an http echo server:</p> <pre><code>➜ ~ curl 172.17.0.4/matched/path { &quot;path&quot;: &quot;/prefix/matched/path&quot;, &quot;headers&quot;: { &quot;host&quot;: &quot;172.17.0.4&quot;, &quot;x-request-id&quot;: &quot;011585443ebc6adcf913db1c506abbe6&quot;, &quot;x-real-ip&quot;: &quot;172.17.0.1&quot;, &quot;x-forwarded-for&quot;: &quot;172.17.0.1&quot;, &quot;x-forwarded-host&quot;: &quot;172.17.0.4&quot;, &quot;x-forwarded-port&quot;: &quot;80&quot;, &quot;x-forwarded-proto&quot;: &quot;http&quot;, &quot;x-scheme&quot;: &quot;http&quot;, &quot;user-agent&quot;: &quot;curl/7.52.1&quot;, &quot;accept&quot;: &quot;*/*&quot; }, </code></pre> <p>This rule will also ignore the <code>/</code> at the end of the request:</p> <pre><code>➜ ~ curl 172.17.0.4/matched/path/ { &quot;path&quot;: &quot;/prefix/matched/path/&quot;, &quot;headers&quot;: { &quot;host&quot;: &quot;172.17.0.4&quot;, &quot;x-request-id&quot;: &quot;0575e9022d814ba07457395f78dbe0fb&quot;, &quot;x-real-ip&quot;: &quot;172.17.0.1&quot;, &quot;x-forwarded-for&quot;: &quot;172.17.0.1&quot;, &quot;x-forwarded-host&quot;: &quot;172.17.0.4&quot;, &quot;x-forwarded-port&quot;: &quot;80&quot;, &quot;x-forwarded-proto&quot;: &quot;http&quot;, &quot;x-scheme&quot;: &quot;http&quot;, &quot;user-agent&quot;: &quot;curl/7.52.1&quot;, &quot;accept&quot;: &quot;*/*&quot; }, </code></pre> <hr /> <p>Worth mentioning are some notable differences/changes in the new ingress syntax:</p> <ul> <li><code>spec.backend</code> -&gt; <code>spec.defaultBackend</code></li> <li><code>serviceName</code> -&gt; <code>service.name</code></li> <li><code>servicePort</code> -&gt; <code>service.port.name</code> (for string values)</li> <li><code>servicePort</code> -&gt; <code>service.port.number</code> (for numeric values)</li> <li><code>pathType</code> no longer has a default value in v1; &quot;Exact&quot;, &quot;Prefix&quot;, or &quot;ImplementationSpecific&quot; must be specified</li> <li>backends can now be resource or service backends</li> <li><code>path</code> is no longer required to be a valid regular expression (#89778, @cmluciano) [SIG API Machinery, Apps, CLI, Network and Testing]</li> </ul>
acid_fuji
<p>The <code>caBundle</code> for <code>MutatingWebhookConfiguration</code> is defined <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#webhookclientconfig-v1-admissionregistration-k8s-io" rel="nofollow noreferrer">here</a> as:</p> <blockquote> <p><code>caBundle</code> is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used.</p> </blockquote> <p>I am getting the PEM encoded CA bundle with this command.</p> <pre class="lang-sh prettyprint-override"><code>kubectl config view --raw --minify --flatten \ -o jsonpath='{.clusters[].cluster.certificate-authority-data}' </code></pre> <p>The resulting value is saved in a variable that is used in a <code>sed</code> command to replace the <code>CA_BUNDLE</code> string in a 'template' YAML as shown below.</p> <pre><code>apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: name: WEBHOOK_APP labels: app: WEBHOOK_APP webhooks: - name: com.demo.NAMESPACE.WEBHOOK_APP sideEffects: None admissionReviewVersions: ["v1", "v1beta1"] matchPolicy: Equivalent failurePolicy: Fail clientConfig: caBundle: CA_BUNDLE service: name: WEBHOOK_APP namespace: NAMESPACE path: "/mutate" rules: - operations: [ "CREATE", "UPDATE" ] apiGroups: [""] apiVersions: ["v1"] resources: ["pods"] scope: "*" </code></pre> <p>What is the way in Helm chart to pass on the <code>CA_BUNDLE</code>?</p>
cogitoergosum
<p>Reading a variable directly from an environment variable in your Helm chart is not possible for security reasons; this functionality was deliberately not implemented, as stated in <a href="https://github.com/technosophos/k8s-helm/blob/master/docs/charts_tips_and_tricks.md#know-your-template-functions" rel="nofollow noreferrer">this document</a>.</p> <p>In a Helm chart you can always create a variable, e.g. <code>myCAbundleVariable</code>, in the <code>values.yaml</code> file that will hold your PEM encoded CA, and then use the value of this variable in the chart like this:</p> <pre><code>webhooks: - ... clientConfig: caBundle: {{ .Values.myCAbundleVariable }} </code></pre> <p>If you want to pass the value at runtime when running the helm command, you can use the <code>--set</code> parameter.</p> <p>So your helm command would look like this:</p> <pre><code>helm install ... --set myCAbundleVariable=$(kubectl config view --raw --minify --flatten \ -o jsonpath='{.clusters[].cluster.certificate-authority-data}') </code></pre> <p>Let me know if it was helpful.</p>
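<hr /> <p>For completeness, here is a minimal sketch of how the pieces could fit together. The variable name <code>myCAbundleVariable</code> and the file layout are only illustrative assumptions:</p>
<pre class="lang-yaml prettyprint-override"><code># values.yaml -- empty default, the real value is injected with --set at install time
myCAbundleVariable: &quot;&quot;
</code></pre>
<pre class="lang-yaml prettyprint-override"><code># templates/mutatingwebhook.yaml (fragment)
webhooks:
  - name: com.demo.mynamespace.mywebhook   # illustrative name
    clientConfig:
      caBundle: {{ .Values.myCAbundleVariable }}
      service:
        name: mywebhook
        namespace: mynamespace
        path: &quot;/mutate&quot;
</code></pre>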
Matt
<p>I am using terraform aws eks registry module <a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/12.1.0?tab=inputs" rel="nofollow noreferrer">https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/12.1.0?tab=inputs</a></p> <p>Today with a new change to TF configs (unrelated to EKS) I saw that my EKS worker nodes are going to be rebuilt due to AMI updates which I am trying to prevent. </p> <pre><code> # module.kubernetes.module.eks-cluster.aws_launch_configuration.workers[0] must be replaced +/- resource "aws_launch_configuration" "workers" { ~ arn = "arn:aws:autoscaling:us-east-2:555065427312:launchConfiguration:6c59fac6-5912-4079-8cc9-268a7f7fc98b:launchConfigurationName/edna-dev-eks-02020061119383942580000000b" -&gt; (known after apply) associate_public_ip_address = false ebs_optimized = true enable_monitoring = true iam_instance_profile = "edna-dev-eks20200611193836418800000007" ~ id = "edna-dev-eks-02020061119383942580000000b" -&gt; (known after apply) ~ image_id = "ami-05fc7ae9bc84e5708" -&gt; "ami-073f227b0cd9507f9" # forces replacement instance_type = "t3.medium" + key_name = (known after apply) ~ name = "edna-dev-eks-02020061119383942580000000b" -&gt; (known after apply) name_prefix = "edna-dev-eks-0" security_groups = [ "sg-09b14dfce82015a63", ] </code></pre> <p>The rebuild happens because EKS got updated version of the AMI for worker nodes of the cluster. </p> <p>This is my EKS terraform config</p> <pre><code>################################################################################### # EKS CLUSTER # # # # This module contains configuration for EKS cluster running various applications # ################################################################################### module "eks_label" { source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master" namespace = var.project environment = var.environment attributes = [var.component] name = "eks" } data "aws_eks_cluster" "cluster" { name = module.eks-cluster.cluster_id } data "aws_eks_cluster_auth" "cluster" { name = module.eks-cluster.cluster_id } provider "kubernetes" { host = data.aws_eks_cluster.cluster.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) token = data.aws_eks_cluster_auth.cluster.token load_config_file = false version = "~&gt; 1.9" } module "eks-cluster" { source = "terraform-aws-modules/eks/aws" cluster_name = module.eks_label.id cluster_version = "1.16" subnets = var.subnets vpc_id = var.vpc_id worker_groups = [ { instance_type = var.cluster_node_type asg_max_size = var.cluster_node_count } ] tags = var.tags } </code></pre> <p>If I am trying to add <strong>lifecycle</strong> block in the module config</p> <pre><code>lifecycle { ignore_changes = [image_id] } </code></pre> <p>I get error:</p> <pre><code>➜ terraform plan Error: Reserved block type name in module block on modules/kubernetes/main.tf line 45, in module "eks-cluster": 45: lifecycle { The block type name "lifecycle" is reserved for use by Terraform in a future version. </code></pre> <h2>Any ideas?</h2>
DmitrySemenov
<p>What about trying to use the <code>worker_ami_name_filter</code> variable for <code>terraform-aws-modules/eks/aws</code> to specifically find only your current AMI?</p> <p>For example:</p> <pre><code>module "eks-cluster" { source = "terraform-aws-modules/eks/aws" cluster_name = module.eks_label.id &lt;...snip...&gt; worker_ami_name_filter = "amazon-eks-node-1.16-v20200531" } </code></pre> <p>You can use AWS web console or cli to map the AMI IDs to their names:</p> <pre><code>user@localhost:~$ aws ec2 describe-images --filters "Name=name,Values=amazon-eks-node-1.16*" --region us-east-2 --output json | jq '.Images[] | "\(.Name) \(.ImageId)"' "amazon-eks-node-1.16-v20200423 ami-01782c0e32657accf" "amazon-eks-node-1.16-v20200531 ami-05fc7ae9bc84e5708" "amazon-eks-node-1.16-v20200609 ami-073f227b0cd9507f9" "amazon-eks-node-1.16-v20200507 ami-0edc51bc2f03c9dc2" </code></pre> <p>But why are you trying to prevent the Auto Scaling Group from using a newer AMI? It will only apply the newer AMI to new nodes. It won't terminate existing nodes just to update them.</p>
weichung.shaw
<p>I want to deploy nextcloud with Helm and a custom <code>values.yaml</code> file that fits my needs. Do I have to specify all values given in the original <code>values.yaml</code>, or is it possible to change only the values needed? E.g. if the only thing I want to change is the host address, my file can look like this:</p> <pre class="lang-yaml prettyprint-override"><code>nextcloud: host: 192.168.178.10 </code></pre> <p>instead of copying <a href="https://github.com/nextcloud/helm/blob/master/charts/nextcloud/values.yaml" rel="nofollow noreferrer">this file</a> and changing only a few values.</p>
8bit
<p>As the underlying issue was resolved by the answer of user @Kun Li, I wanted to add some examples of customizing Helm charts as well as some additional reference.</p> <p>As asked in the question:</p> <blockquote> <p>Do I have to specify all values given in the original values.yaml or is it possible to change only the values needed</p> </blockquote> <p>In short, you don't need to specify all of the values. You can change only some of them (like the <code>host</code> from your question). The ways to change the values are the following:</p> <blockquote> <ul> <li>Individual parameters passed with <code>--set</code> (such as <code>helm install --set foo=bar ./mychart</code>)</li> <li>A values file if passed into <code>helm install</code> or <code>helm upgrade</code> with the <code>-f</code> flag (<code>helm install -f myvals.yaml ./mychart</code>)</li> <li>If this is a subchart, the <code>values.yaml</code> file of a parent chart</li> <li>The <code>values.yaml</code> file in the chart</li> </ul> </blockquote> <p>You can read more about it by following the official <code>Helm</code> documentation:</p> <ul> <li><em><a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">Helm.sh: Docs: Chart template guide: Values files</a></em></li> </ul> <blockquote> <p><strong>A side note!</strong></p> <p>The points above are listed in order of priority. The first one (<code>--set</code>) has the highest priority when overriding the values.</p> </blockquote> <hr /> <h3>Example</h3> <blockquote> <p><strong>A side note!</strong></p> <p>These examples assume that you are in the directory of a pulled Helm chart and that you are using Helm v3.</p> </blockquote> <p>Using the <code>nextcloud</code> Helm chart from the question, you can set the <code>nextcloud.host</code> value by:</p> <ul> <li>Pulling the Helm chart and editing the values.yaml</li> <li>Creating an additional <code>new-values.yaml</code> to pass in (the <code>values.yaml</code> from the Helm chart will still be used, with lower priority): <ul> <li><code>$ helm install NAME . -f new-values.yaml</code></li> </ul> </li> </ul> <p><code>new-values.yaml</code></p> <pre class="lang-yaml prettyprint-override"><code>nextcloud: host: 192.168.0.2 </code></pre> <ul> <li>Setting the value with <code>helm install NAME . --set nextcloud.host=192.168.0.2</code></li> </ul> <p>You can check if the changes were done correctly by either:</p> <ul> <li><code>$ helm template .</code> - as pointed out by user @David Maze</li> <li><code>$ helm install NAME . --dry-run --debug</code></li> </ul>
Dawid Kruk
<h2>Files structure (minimized)</h2> <p>There is a <em>charts</em> folder containing multiple charts.</p> <pre><code>charts/ foo-chart/ templates/ deployment.yml secrets.yml bar-chart/ templates/ configmaps/ script.yml </code></pre> <h3>secrets.yml</h3> <p>Defines a token:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: {{ .Release.Name }}-secret labels: app: {{ include "metrics.name" . }} chart: {{ include "metrics.chart" . }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} type: Opaque data: # NOTE: Service token has to fit the NIST requirement serviceToken: {{ randAscii 40 | b64enc }} </code></pre> <h3>deployment.yml</h3> <p>Runs a command which uses an environmental variable which uses a secret:</p> <pre><code>containers: command: - fancy-binary - -token - $(AUTH_TOKEN) env: - name: AUTH_TOKEN valueFrom: secretKeyRef: name: {{ .Release.Name }}-secret key: serviceToken </code></pre> <h3>script.yml</h3> <p>Is supposed to run bash command (Django admin-command) and use environmental variable as well:</p> <pre><code># Create a Service Token django-admin service_token_add $(AUTH_TOKEN) </code></pre> <hr> <h2>Issues</h2> <ol> <li>Is the <code>AUTH_TOKEN</code> going to be visible in <em>script.yml</em>?</li> <li>Does the <code>env</code> <code>valueFrom</code> auto-set the value of <code>AUTH_TOKEN</code> (is deployment going to work)?</li> </ol>
0leg
<p>Answering your first question: environment variables passed through the <code>env</code> field of a container will be visible everywhere in your container, including in the scripts you run, unless you explicitly unset them.</p> <p>You can check it by creating this (you should be able to copy-paste the example):</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque data: serviceToken: MTIzNDU2Nzg5MAo= # base64 encoded string: &quot;1234567890&quot; --- apiVersion: v1 kind: Pod metadata: name: test spec: containers: - args: - echo - hello - $(AUTH_TOKEN) name: test env: - name: AUTH_TOKEN valueFrom: secretKeyRef: name: test-secret key: serviceToken image: centos:7 restartPolicy: Never </code></pre> <p>and then, when the pod completes, check the logs and you will see your token:</p> <pre><code>$ kubectl logs test hello 1234567890 </code></pre> <p>The same applies to scripts.</p> <p>Answering your second question: as you probably already saw in the example above, using <code>env</code> with <code>valueFrom</code> will indeed auto-set your env to the value from the secret.</p> <p>Let me know if it was helpful.</p>
Matt
<p>I'm using the <a href="https://github.com/kubernetes/autoscaler" rel="nofollow noreferrer">Kubernetes autoscaler</a> for <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws" rel="nofollow noreferrer">AWS</a>. I've deployed it using the following commands:</p> <pre><code> command: - ./cluster-autoscaler - --v=4 - --stderrthreshold=info - --cloud-provider=aws - --skip-nodes-with-local-storage=false - --nodes=1:10:nodes.k8s-1-17.dev.platform </code></pre> <p>However, the autoscaler can't seem to initiate scaledown. The logs show it finds an unused node, but then doesn't scale it down and doesn't give me an error (the nodes that show "no node group config" are the master nodes).</p> <pre><code>I0610 22:09:37.164102 1 static_autoscaler.go:147] Starting main loop I0610 22:09:37.164462 1 utils.go:471] Removing autoscaler soft taint when creating template from node ip-10-141-10-176.ec2.internal I0610 22:09:37.164805 1 utils.go:626] No pod using affinity / antiaffinity found in cluster, disabling affinity predicate for this loop I0610 22:09:37.164823 1 static_autoscaler.go:303] Filtering out schedulables I0610 22:09:37.165083 1 static_autoscaler.go:320] No schedulable pods I0610 22:09:37.165106 1 static_autoscaler.go:328] No unschedulable pods I0610 22:09:37.165123 1 static_autoscaler.go:375] Calculating unneeded nodes I0610 22:09:37.165141 1 utils.go:574] Skipping ip-10-141-12-194.ec2.internal - no node group config I0610 22:09:37.165155 1 utils.go:574] Skipping ip-10-141-15-159.ec2.internal - no node group config I0610 22:09:37.165167 1 utils.go:574] Skipping ip-10-141-11-28.ec2.internal - no node group config I0610 22:09:37.165181 1 utils.go:574] Skipping ip-10-141-13-239.ec2.internal - no node group config I0610 22:09:37.165197 1 utils.go:574] Skipping ip-10-141-10-69.ec2.internal - no node group config I0610 22:09:37.165378 1 scale_down.go:379] Scale-down calculation: ignoring 4 nodes unremovable in the last 5m0s I0610 22:09:37.165397 1 scale_down.go:410] Node ip-10-141-10-176.ec2.internal - utilization 0.023750 I0610 22:09:37.165692 1 cluster.go:90] Fast evaluation: ip-10-141-10-176.ec2.internal for removal I0610 22:09:37.166115 1 cluster.go:225] Pod metrics-storage/querier-6bdfd7c6cf-wm7r8 can be moved to ip-10-141-13-253.ec2.internal I0610 22:09:37.166227 1 cluster.go:225] Pod metrics-storage/querier-75588cb7dc-cwqpv can be moved to ip-10-141-12-116.ec2.internal I0610 22:09:37.166398 1 cluster.go:121] Fast evaluation: node ip-10-141-10-176.ec2.internal may be removed I0610 22:09:37.166553 1 static_autoscaler.go:391] ip-10-141-10-176.ec2.internal is unneeded since 2020-06-10 22:06:55.528567955 +0000 UTC m=+1306.007780301 duration 2m41.635504026s I0610 22:09:37.166608 1 static_autoscaler.go:402] Scale down status: unneededOnly=true lastScaleUpTime=2020-06-10 21:45:31.739421421 +0000 UTC m=+22.218633767 lastScaleDownDeleteTime=2020-06-10 21:45:31.739421531 +0000 UTC m=+22.218633877 lastScaleDownFailTime=2020-06-10 22:06:44.128044684 +0000 UTC m=+1294.607257070 scaleDownForbidden=false isDeleteInProgress=false </code></pre> <p>Why is the autoscaler not scaling down nodes?</p>
djsumdog
<p>It looks to me <code>cluster-autoscaler</code> is behaving correctly so far. It has decided one of the nodes can be scaled down:</p> <pre><code> 1 cluster.go:121] Fast evaluation: node ip-10-141-10-176.ec2.internal may be removed I0610 22:09:37.166553 1 static_autoscaler.go:391] ip-10-141-10-176.ec2.internal is unneeded since 2020-06-10 22:06:55.528567955 +0000 UTC m=+1306.007780301 duration 2m41.635504026s </code></pre> <p>However, by default <code>cluster-autoscaler</code> will wait 10 minutes before it actually does terminate the node. See "How does scale-down work": <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work</a></p> <p>From the first logs above, it says that your node has been unneeded for <code>duration 2m41</code> - when it reaches 10 minutes, the scale down will occur.</p> <p>After 10 minutes, you should see something like:</p> <pre><code>I0611 14:58:02.384101 1 static_autoscaler.go:382] &lt;node_name&gt; is unneeded since 2020-06-11 14:47:59.621770178 +0000 UTC m=+1299856.757452427 duration 10m2.760318899s &lt;...snip...&gt; I0611 14:58:02.385035 1 scale_down.go:754] Scale-down: removing node &lt;node_name&gt;, utilization: {0.8316326530612245 0.34302838802551344 0.8316326530612245}, pods to reschedule: &lt;...snip...&gt; I0611 14:58:02.386146 1 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"cluster-autoscaler", Name:"cluster-autoscaler-status", UID:"31a72ce9-9c4e-11ea-a0a8-0201be076001", APIVersion:"v1", ResourceVersion:"13431409", FieldPath:""}): type: 'Normal' reason: 'ScaleDown' Scale-down: removing node &lt;node_name&gt;, utilization: {0.8316326530612245 0.34302838802551344 0.8316326530612245}, pods to reschedule: &lt;...snip...&gt; </code></pre> <p>I believe this set up is to prevent thrashing.</p>
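<hr /> <p>If the default 10 minute window does not suit you, it can be tuned with the autoscaler's flags. Below is a sketch of the relevant container args, based on the configuration from the question; the 5m values are only an example, and the full list of flags and their defaults is in the FAQ linked above:</p>
<pre class="lang-yaml prettyprint-override"><code>  command:
    - ./cluster-autoscaler
    - --v=4
    - --stderrthreshold=info
    - --cloud-provider=aws
    - --skip-nodes-with-local-storage=false
    - --nodes=1:10:nodes.k8s-1-17.dev.platform
    # how long a node must be unneeded before it is eligible for removal (default 10m)
    - --scale-down-unneeded-time=5m
    # how long to wait after a scale-up before considering scale-down again (default 10m)
    - --scale-down-delay-after-add=5m
</code></pre>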
weichung.shaw
<p>So in the last few days, I tried to find a way to dynamically attach ingress names (like <code>game-1.myapp.com</code>) to solve TCP &amp; UDP for Steam Dedicated Servers on Kubernetes. I have attached the following diagram on how I planned it, but there are some issues I encountered.</p> <p>I can dynamically create Namespaces, Pods (controlled by Stateful Sets), PVCs, Services, and Ingresses for each individual game server using the Kubernetes API. Each game server lies in its own namespace, completely separated by the others. I assured that the server runs under the hood, the Pod is also Running and active, the logs are good.</p> <p>I got locked out when I needed to assign the Stateful Set service to an Ingress that is able to continuously reply to TCP/UDP traffic by using a namespaced DNS, that routes to the cluster's Ingress Controller (in Minikube; for Production an ALB/NLB should be used, AFAIK).</p> <p>Somehow, I need a way to ingress the <code>game-xxxxx.myapp.com</code> to the specific <code>game-xxxxx</code> namespace's pod. It doesn't really matter if they will have appended ports or not.</p> <p>For this, I can simply just API-call the DNS solver for <code>myapp.com</code> and add or remove <code>A Records</code> when it's needed. This seems okay, but I have found out that I can use ExternalDNS (<a href="https://github.com/bitnami/charts/tree/master/bitnami/external-dns" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/external-dns</a>) to do this automatically for me, based on the existent services.</p> <p>What I have tried, no luck yet:</p> <h3>NGINX</h3> <p>Setting up NGINX, but I had to define the exposed ports for each Service. Based on their documentation (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services</a>), it is <strong>OVERKILL</strong> to modify that ConfigMap and Recreate the NGINX pods each time, since there might be many changes and this does not seem viable. Plus, I highly doubt that NGINX will be a breeze under heavy load, I find it more suitable for web servers rather than game servers.</p> <p>Also, I might need a way to make sure that I can have duplicated ports. For example, I cannot assign in NGINX the same <code>28015</code> port to many other servers, even when they are in different namespaces. If I use Agones (<a href="https://github.com/googleforgames/agones/blob/release-1.9.0/examples/gameserver.yaml" rel="nofollow noreferrer">https://github.com/googleforgames/agones/blob/release-1.9.0/examples/gameserver.yaml</a>) to assign random ports, at some point I might run out of them to assign.</p> <h3>Traefik</h3> <p>I have tried to use Traefik, but had no luck. The IngressRoute allows the TCP/UDP routing from a Router to and EntryPoint than then routes it to the service assigned. I am not really sure how this works, I tried setting annotations to services &amp; defining entry points, but it still refuses to work: <a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroutetcp" rel="nofollow noreferrer">https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroutetcp</a></p> <h3>Agones</h3> <p>Agones should be working for game servers and it supports <code>TCPUDP</code> protocol for service ports, but again, no luck with this.</p> <h3>Flow</h3> <p>I have posted below the diagram on how things should work. 
I also have this following YAML file that will create the Stateful Set, a PVC, and the Service. You can clearly see I tried ExternalName setup so maybe I can set the Minikube IP to that name and be able to connect, yet again, no luck:</p> <p><a href="https://i.stack.imgur.com/VENAy.png" rel="nofollow noreferrer">Steam Dedicated Server workflow</a></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: rust-service labels: game: rust spec: # type: ExternalName # externalName: rust-1.rust.coal.app # clusterIP: &quot;&quot; selector: game: rust ports: - name: rust-server-tcp protocol: TCP port: 28015 targetPort: 28015 - name: rust-server-udp protocol: UDP port: 28015 targetPort: 28015 --- apiVersion: apps/v1 kind: StatefulSet metadata: name: rust-server spec: selector: matchLabels: game: rust replicas: 1 serviceName: rust-service template: metadata: name: rust-server labels: game: rust spec: containers: - name: rust image: didstopia/rust-server:latest ports: - name: rust-server-tcp protocol: TCP containerPort: 28015 - name: rust-server-udp protocol: UDP containerPort: 28015 volumeClaimTemplates: - metadata: name: local-disk spec: resources: requests: storage: &quot;10Gi&quot; accessModes: [&quot;ReadWriteOnce&quot;] </code></pre> <p>Edit: bump</p>
Alex Renoki
<blockquote> <p><strong>A side note!</strong></p> <p>If <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource is used/mentioned it's referring to <code>HTTP</code>/<code>HTTPS</code> traffic.</p> </blockquote> <hr /> <p>The diagram that you've posted is looking like a good opportunity to use <code>Service</code> of type <code>LoadBalancer</code>.</p> <p>Service of type <code>LoadBalancer</code> is used to handle external <code>TCP</code>/<code>UDP</code> traffic (Layer 4).</p> <blockquote> <p>Disclaimer!</p> <p>This solution <a href="https://github.com/kubernetes/kubernetes/issues/2995" rel="nofollow noreferrer">supports only one</a> at the time protocol, <strong>either</strong> <code>TCP</code> or <code>UDP</code>.</p> <p>To have both protocol on the same port you will need to fallback to <code>Service</code> of type <code>NodePort</code> (which allocates port on a node from <code>30000</code> to <code>32767</code>).</p> <p>You can read more about creating cloud agnostic LoadBalancer that is using <code>NodePort</code> type of service by following this link:</p> <ul> <li><em><a href="https://medium.com/asl19-developers/build-your-own-cloud-agnostic-tcp-udp-loadbalancer-for-your-kubernetes-apps-3959335f4ec3" rel="nofollow noreferrer">Medium.com: Build your own cloud agnostic tcp udp loadbalancer for Kubernetes apps</a></em></li> </ul> </blockquote> <p>In this setup <code>Ingress</code> controllers like <code>Traefik</code> or <code>Nginx</code> are not needed as they will only be an additional step between your <code>Client</code> and a <code>Pod</code>.</p> <p>The example of such <code>LoadBalancer</code> you already have in your <code>YAML</code> definition (I slightly modified it):</p> <pre><code>apiVersion: v1 kind: Service metadata: name: rust-service labels: game: rust spec: type: LoadBalancer # &lt;-- THE CHANGE selector: game: rust ports: - name: rust-server-tcp protocol: TCP port: 28015 targetPort: 28015 </code></pre> <p>If you are intending to use <code>AWS</code> with it's <code>EKS</code> please refer to it's documentation:</p> <ul> <li><em><a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="nofollow noreferrer">Docs.aws.amazon.com: EKS: Userguide: Getting Started</a></em></li> </ul> <p>Example of a possible setup (steps):</p> <ul> <li>For each &quot;game-X&quot;: <ul> <li>Create a <code>namespace</code> &quot;game-X-namespace&quot;</li> <li>Create a <code>deployment</code> &quot;game-X-deployment&quot;</li> <li><strong>Create a <code>service</code> of type <code>LoadBalancer</code> &quot;game-X-&quot; that would point to a &quot;game-X-deployment&quot;</strong></li> <li><strong>Create a <code>DNS</code> record pointing &quot;game-X.com&quot; to IP of <code>LoadBalancer</code> created in previous step.</strong></li> </ul> </li> </ul> <p>Each <code>LoadBalancer</code> would have it's own IP and the <code>DNS</code> name associated with it like:</p> <ul> <li><code>awesome-game.com</code> with IP of <code>123.123.123.123</code> and port to connect to <code>28015/TCP</code></li> <li><code>magnificent-game.com</code> with IP of <code>234.234.234.234</code> and port to connect to <code>28015/TCP</code></li> </ul> <hr /> <p>I reckon this medium guide to create dedicated Steam server could prove useful:</p> <ul> <li><em><a href="https://medium.com/alterway/deploying-a-steam-dedicated-server-on-kubernetes-645099d063e0" rel="nofollow noreferrer">Medium.com: Deploying a steam dedicated 
server on Kubernetes</a></em></li> </ul> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Service</a></em></li> </ul>
Dawid Kruk
<p>As described <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">here</a>, this is a <a href="https://github.com/kubernetes/kubernetes/blob/v1.13.0/test/images/webhook/main.go" rel="nofollow noreferrer">reference implementation</a> of a webhook server as used in kubernetes e2e test. In the <code>main</code> function, a number of endpoints have been defined to handle different requests for mutation. However, there is no clear <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#contacting-the-webhook" rel="nofollow noreferrer">documentation</a> as to which endpoint gets invoked when.</p> <p>So, how do we know which endpoint is invoked when?</p>
cogitoergosum
<p>I see you are trying to understand the ordering of execution of mutating webhooks.</p> <p>I have found <a href="https://github.com/kubernetes/kubernetes/blob/3edbc6afff17ea8dfe5c10b2677dcdc8767f67e2/staging/src/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go#L83-L86" rel="nofollow noreferrer">this piece of code in the kubernetes repo</a>. Based on this you can see that webhooks are sorted by name to give a deterministic order.</p> <p>A single ordering of mutating admission plugins (including webhooks) does not work for all cases, so take a look at the <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/00xx-admission-webhooks-to-ga.md#mutating-plugin-ordering" rel="nofollow noreferrer">mutating plugin ordering</a> section in the Admission webhook proposal for an explanation of how it's handled.</p> <p>Also notice there are no "pod only endpoints" or "endpoints that get called for pods". Let's say you have your webhook server and want to mutate pods, and your server has only one endpoint: <code>/</code>. If you want to mutate pods with it you need to specify it under <code>rules</code>. So setting <code>rules[].resources: ["pods"]</code> and <code>rules[].operations: ["CREATE"]</code> in your webhook config will run your mutating webhook whenever there is a pod to be created.</p> <p>Let me know if it helped.</p>
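<hr /> <p>To illustrate the <code>rules</code> part, here is a minimal sketch of a webhook configuration that makes the API server call the <code>/</code> endpoint of your server for pod creations (the webhook, service and namespace names are made up for the example):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-mutating-webhook
webhooks:
  - name: pod-mutator.example.com   # ordering between webhooks is based on this name
    admissionReviewVersions: [&quot;v1&quot;, &quot;v1beta1&quot;]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      caBundle: &lt;base64-encoded-CA&gt;
      service:
        name: webhook-server
        namespace: default
        path: &quot;/&quot;   # the single endpoint mentioned above
    rules:
      - operations: [&quot;CREATE&quot;]
        apiGroups: [&quot;&quot;]
        apiVersions: [&quot;v1&quot;]
        resources: [&quot;pods&quot;]
</code></pre>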
Matt
<p>On AWS EKS, I have three pods in a cluster each of which have been exposed by services. The problem is the services can not communicate with each other as discussed here <a href="https://stackoverflow.com/questions/71434784/error-while-doing-inter-pod-communication-on-eks">Error while doing inter pod communication on EKS</a>. It has not been answered yet but further search said that it can be done through Ingress. I am having confusion as to how to do it? Can anybody help ?</p> <p>Code:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: namespace: test name: ingress-test annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Prefix backend: service: name: server-service port: number: 8000 </code></pre> <p>My server-service has APIs like /api/v1/getAll, /api/v1/updateAll, etc. So, what should I write in path and for a database service what should I do??</p> <p>And say in future I make another microservice and open another service which has APIs like /api/v1/showImage, /api/v1/deleteImage will I have to write all paths in ingress or is their another way for it to work?</p>
Mareek Roy
<p>An Ingress is a really good solution for exposing both a frontend and a backend at the same domain with different paths (but, reading your other question, it will be of no help in exposing the database).</p> <p>With this said, you don't have to write all the paths in the Ingress (unless you want to), as you can instead use <code>pathType: Prefix</code> as is already done in your example.</p> <p>Let me link you to the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#examples" rel="nofollow noreferrer">documentation Examples</a> which explain how it works really well. Basically, you can add a rule with:</p> <pre><code>path: /api pathType: Prefix </code></pre> <p>in order to expose your backend under <code>/api</code> and all of its child paths.</p> <hr /> <p>The case where you add a second backend that also wants to be exposed under <code>/api</code> is much more complex. If two Pods want to be exposed at the same path, you will probably need to list all the subpaths in a way that differentiates them.</p> <p>For example:</p> <p>Backend A</p> <pre><code>/api/v1/foo/listAll /api/v1/foo/save /api/v1/foo/delete </code></pre> <p>Backend B</p> <pre><code>/api/v1/bar/listAll /api/v1/bar/save /api/v1/bar/delete </code></pre> <p>Then you could expose one under subPath <code>/api/v1/foo</code> (Prefix) and the other under <code>/api/v1/bar</code> (Prefix).</p> <p>As another alternative, you may want to expose the backends at different paths from what they actually expect using a <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">rewrite target rule</a>.</p>
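<hr /> <p>For example, a rough sketch of the two-backend case described above (service names and ports are assumptions made up for the example):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api/v1/foo        # everything under /api/v1/foo goes to backend A
        pathType: Prefix
        backend:
          service:
            name: backend-a-svc
            port:
              number: 8000
      - path: /api/v1/bar        # everything under /api/v1/bar goes to backend B
        pathType: Prefix
        backend:
          service:
            name: backend-b-svc
            port:
              number: 8000
</code></pre>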
AndD
<p>I have the following kubernetes manifest</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: traefik-external traefik.ingress.kubernetes.io/router.entrypoints: websecure, web traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip name: ingressname namespace: thenamespace spec: rules: - host: my.host http: paths: - backend: serviceName: theservice servicePort: 8080 path: /api </code></pre> <p>Having a service, <code>theservice</code>, that listens on <code>/</code>, I would expect the URL <code>my.host/api/something/anotherthing</code> to map to <code>/something/anotherthing</code> in <code>theservice</code>. That doesn't happen for me though; I get a 404 back.</p> <p>Any ideas what might be wrong?</p>
Tomas Jansson
<p>During the transition from v1 to v2, a number of internal pieces and components of Traefik were rewritten and reorganized. As such, the combination of core notions such as frontends and backends has been replaced with the combination of <a href="https://doc.traefik.io/traefik/v2.0/routing/routers/" rel="nofollow noreferrer">routers</a>, <a href="https://doc.traefik.io/traefik/v2.0/routing/services/" rel="nofollow noreferrer">services</a>, and <a href="https://doc.traefik.io/traefik/v2.0/middlewares/overview/" rel="nofollow noreferrer">middlewares</a>.</p> <p>With v2 transforming the URL path prefix of incoming requests is configured with <a href="https://doc.traefik.io/traefik/v2.0/middlewares/overview/" rel="nofollow noreferrer">middlewares</a> object, after the routing step with <a href="https://docs.traefik.io/v2.0/routing/routers/#rule" rel="nofollow noreferrer">router rule <code>PathPrefix</code></a>.</p> <p>With v1 it is defined at ingress level:</p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: traefik annotations: kubernetes.io/ingress.class: traefik traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip spec: rules: - host: company.org http: paths: - path: /admin backend: serviceName: admin-svc servicePort: admin </code></pre> <p>With v2 you have define also middleware object alongside ingress-route:</p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: http-redirect-ingressroute namespace: admin-web spec: entryPoints: - web routes: - match: Host(`company.org`) &amp;&amp; PathPrefix(`/admin`) kind: Rule services: - name: admin-svc port: admin middlewares: - name: admin-stripprefix --- apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: admin-stripprefix spec: stripPrefix: prefixes: - /admin </code></pre> <p>More information can be found here: <a href="https://doc.traefik.io/traefik/v2.0/migration/v1-to-v2/#frontends-and-backends-are-dead-long-live-routers-middlewares-and-services" rel="nofollow noreferrer">Frontends and Backends Are Dead...<br /> ... Long Live Routers, Middlewares, and Services</a></p>
acid_fuji
<p>My backend services is working great with ingress nginx. I'm trying without success to add a frontend SPA react app to my ingress.</p> <p>I did manage to get it work but I can't get both my backend AND front end to works.</p> <p>My ingress yml is</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' nginx.ingress.kubernetes.io/rewrite-target: /$2 #nginx.ingress.kubernetes.io/add-base-url: &quot;true&quot; spec: rules: - host: accounting.easydeal.dev http: paths: - path: / pathType: Prefix backend: service: name: frontend-srv port: number: 3000 - host: api.easydeal.dev http: paths: - path: / pathType: Prefix backend: service: name: docker-hello-world-svc port: number: 8088 - path: /accounting(/|$)(.*) pathType: Prefix backend: service: name: accounting-srv port: number: 80 - path: /company(/|$)(.*) pathType: Prefix backend: service: name: dealers-srv port: number: 80 </code></pre> <p>With this yml above i'm able to poke my backend like so -&gt; api.easydeal.dev/helloword or api.easydeal.dev/company/* and it work !.</p> <p>However my react app (accounting.easydeal.dev) end up with a white page a console log with this error -&gt;</p> <pre><code>Uncaught SyntaxError: Unexpected token '&lt;' </code></pre> <p>The only way i'm able to make my react app work is to change rewrite-target: <strong>/$2 to /</strong> . However doing so prevent to route correctly my other apis.</p> <p><em>I did set the homepage for the react app to &quot;.&quot; but still have the error and I also try to set path to /?(*) for my front end</em></p> <p>here is my dockerfile</p> <pre><code># pull the base image FROM node:alpine # set the working direction WORKDIR /app # add `/app/node_modules/.bin` to $PATH ENV PATH /app/node_modules/.bin:$PATH # install app dependencies COPY package.json ./ COPY package-lock.json ./ RUN npm install COPY . ./ EXPOSE 3000 CMD [&quot;npm&quot;, &quot;start&quot;] </code></pre>
Pilouk
<p>As pointed in the comments by original poster:</p> <blockquote> <p>Doing 2 ingress services sold this issue.</p> </blockquote> <p>The solution to this issue was to create 2 separate <code>Ingress</code> resources.</p> <p>The underlying issue was that the workload required 2 different <code>nginx.ingress.kubernetes.io/rewrite-target:</code> parameters.</p> <p><strong>Above annotations can be set per <code>Ingress</code> resource and not per path.</strong></p> <p>You can create 2 <code>Ingress</code> resources that will be separate entities (will have different annotations) and they will work &quot;together&quot;.</p> <p>More reference can be found in the links below:</p> <ul> <li><em><a href="https://stackoverflow.com/a/60750408/12257134">Stackoverflow.com: Answer: Apply nginx-ingress annotations at path level </a></em></li> <li><em><a href="https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: User guide: Basic usage</a></em></li> </ul> <hr /> <h3>Being specific to <code>nginx-ingress</code>:</h3> <p><strong>By default</strong> when you provision/deploy <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">NGINX Ingress controller</a> you are telling your Kubernetes cluster to create <code>Service</code> of type <code>LoadBalancer</code>. This <code>Service</code> will requests the IP address from the cloud provider (<code>GKE</code>, <code>EKS</code>, <code>AKS</code>) and will route the traffic from this IP to your <code>Ingress</code> controller where the requests will be evaluated and send further according to your <code>Ingress</code> resources definitions.</p> <blockquote> <p><strong>A side note!</strong></p> <p><code>By default</code> was not used without a reason as there are other methods to expose your <code>Ingress</code> controller to the external traffic. You can read more about it by following below link:</p> <ul> <li><em><a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: Deploy: Baremetal</a></em></li> </ul> </blockquote> <p>Your <code>Ingress</code> controller will have <strong>single</strong> IP address to expose your workload:</p> <ul> <li><code>$ kubectl get service -n ingress-nginx ingress-nginx-controller</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.32.6.63 AA.BB.CC.DD 80:30828/TCP,443:30664/TCP 19m </code></pre> <p><code>Ingress</code> resource that are using <code>kubernetes.io/ingress.class: &quot;nginx&quot;</code> will use that address.</p> <p><code>Ingress</code> resources created in this way will look like following when issuing:</p> <ul> <li><code>$ kubectl get ingress</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>NAME HOSTS ADDRESS PORTS AGE goodbye-ingress goodbye.domain.name AA.BB.CC.DD 80 19m hello-ingress hello.domain.name AA.BB.CC.DD 80 19m </code></pre> <blockquote> <p><strong>A second side note!</strong></p> <p>If you are using a managed Kubernetes cluster, please refer to it's documentation for more reference on using <code>Ingress</code> resources as there could be major differences.</p> </blockquote>
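<hr /> <p>As a rough sketch, the two separate <code>Ingress</code> resources for the manifest from the question could look like this (only the relevant rules are shown; note that only the API <code>Ingress</code> keeps the rewrite annotation):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: accounting.easydeal.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-srv
                port:
                  number: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: api.easydeal.dev
      http:
        paths:
          - path: /accounting(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: accounting-srv
                port:
                  number: 80
</code></pre>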
Dawid Kruk
<p>I keep getting this error when I try to setup liveness &amp; readiness prob for my awx_web container</p> <pre><code>Liveness probe failed: Get http://POD_IP:8052/: dial tcp POD_IP:8052: connect: connection refused </code></pre> <p>Liveness &amp; Readiness section in my deployment for the container awx_web</p> <pre><code> ports: - name: http containerPort: 8052 # the port of the container awx_web protocol: TCP livenessProbe: httpGet: path: / port: 8052 initialDelaySeconds: 5 periodSeconds: 5 readinessProbe: httpGet: path: / port: 8052 initialDelaySeconds: 5 periodSeconds: 5 </code></pre> <p>if I test if the port 8052 is open or not from another pod in the same namespace as the pod that contains the container awx_web or if I test using a container deployed in the same pod as the container awx_web i get this (port is open)</p> <pre><code>/ # nc -vz POD_IP 8052 POD_IP (POD_IP :8052) open </code></pre> <p>I get the same result (port 8052 is open) if I use netcat (nc) from the worker node where pod containing the container awx_web is deployed.</p> <p>for info I use a NodePort service that redirect traffic to that container (awx_web)</p> <pre><code>type: NodePort ports: - name: http port: 80 targetPort: 8052 nodePort: 30100 </code></pre>
Abderrahmane
<p>I recreated your issue and it looks like your problem is caused by too small a value of <code>initialDelaySeconds</code> for the liveness probe.</p> <p>It takes more than 5s for the awx container to open port 8052. You need to wait a bit longer for it to start. I found that setting it to 15s is enough for me, but you may need some tweaking.</p>
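<hr /> <p>For reference, the adjusted probes could look like the snippet below; the 15s value is what worked in my reproduction, so treat it as a starting point rather than a fixed number:</p>
<pre class="lang-yaml prettyprint-override"><code>    livenessProbe:
      httpGet:
        path: /
        port: 8052
      initialDelaySeconds: 15   # was 5; give the awx_web container time to open the port
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /
        port: 8052
      initialDelaySeconds: 15
      periodSeconds: 5
</code></pre>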
Matt
<p>I have a running pod that was created with the following <code>pod-definition.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: microservice-one-pod-name labels: app: microservice-one-app-label type: front-end spec: containers: - name: microservice-one image: vismarkjuarez1994/microserviceone ports: - containerPort: 2019 </code></pre> <p>I then created a Service using the following <code>service-definition.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>kind: Service apiVersion: v1 metadata: name: microserviceone-service spec: ports: - port: 30008 targetPort: 2019 protocol: TCP selector: app: microservice-one-app-label type: NodePort </code></pre> <p>I then ran <code>kubectl describe node minikube</code> to find the Node IP I should be connecting to -- which yielded:</p> <pre class="lang-sh prettyprint-override"><code>Addresses: InternalIP: 192.168.49.2 Hostname: minikube </code></pre> <p>But I get no response when I run the following curl command:</p> <pre class="lang-sh prettyprint-override"><code>curl 192.168.49.2:30008 </code></pre> <p>The request also times out when I try to access <code>192.168.49.2:30008</code> from a browser.</p> <p>The pod logs show that the container is up and running. Why can't I access my Service?</p>
Vismark Juarez
<p>The problem is that you are trying to access your service at the <code>port</code> parameter, which is the internal port at which the service is exposed, even when using the <code>NodePort</code> type.</p> <p>The parameter you were searching for is called <code>nodePort</code>, which can optionally be specified together with <code>port</code> and <code>targetPort</code>. Quoting the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer">documentation</a>:</p> <blockquote> <p>By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: <code>30000-32767</code>)</p> </blockquote> <p>Since you didn't specify the <code>nodePort</code>, one in the range was automatically picked up. You can check which one by:</p> <pre><code>kubectl get svc -owide </code></pre> <p>And then access your service externally at that port.</p> <p>As an alternative, you can change your service definition to be something like:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: microserviceone-service spec: ports: - port: 30008 targetPort: 2019 nodePort: 30008 protocol: TCP selector: app: microservice-one-app-label type: NodePort </code></pre> <p>But keep in mind that you may need to delete your service and create it again in order to change the <code>nodePort</code> allocated.</p>
AndD
<p>I have an local Kubernetes environment and I basically copy <strong>.kube/config</strong> file to my local and added &quot;<strong>context</strong>&quot;, &quot;<strong>users</strong>&quot;, and &quot;<strong>cluster</strong>&quot; informations to my current &quot;<strong>.kube/config</strong>&quot; file. That's ok, I can connect to my local file.</p> <p>But I want to add these informations to my local config file with commands.</p> <p>So regarding to this page, I can use &quot;<strong>certificate-authority-data</strong>&quot; as parameter like below: ---&gt; <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</a></p> <pre><code>PS C:\Users\user\.kube&gt; kubectl config --kubeconfig=config set-cluster local-kubernetes --server=https://10.10.10.10:6443 --certificate-authority-data=LS0tLSAASDASDADAXXXSDETRDFDJHFJWEtGCmx0YVR2SE45Rm9IVjAvQkdwRUM2bnFNTjg0akd2a3R4VUpabQotLS0tLUVORCBDADADADDAADS0tXXXCg== Error: unknown flag: --certificate-authority-data See 'kubectl config set-cluster --help' for usage. PS C:\Users\user\.kube&gt; </code></pre> <p>But it throws error like above. I'm using kubernetes latest version.</p> <p>How can I add these informations to my local file with kubectl config command?</p> <p>Thanks!</p>
yatta
<p>A possible solution for that is to use the <code>--flatten</code> flag with the config command:</p> <pre><code>➜ ~ kubectl config view --flatten=true </code></pre> <blockquote> <p>flatten the resulting kubeconfig file into self contained output (useful for creating portable kubeconfig files)</p> </blockquote> <p>That can also be exported to a file (portable config):</p> <pre class="lang-sh prettyprint-override"><code>kubectl config view --flatten &gt; out.txt </code></pre> <p>You can read more about kube config in the <a href="https://ahmet.im/blog/mastering-kubeconfig/" rel="noreferrer">Mastering the KUBECONFIG file</a> document.</p> <p>Once you run this command on the server where the appropriate certificates are present, you will receive the base64 encoded keys: <code>certificate-authority-data</code>, <code>client-certificate-data</code> and <code>client-key-data</code>.</p> <p>Then you can use the command provided in the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#config" rel="noreferrer">official config document</a>:</p> <pre><code>➜ ~ kubectl config set clusters.my-cluster.certificate-authority-data $(echo &quot;cert_data_here&quot; | base64 -i -) </code></pre> <p>Then you have to replace <code>$(echo &quot;cert_data_here&quot; | base64 -i -)</code> with the data from the flattened config file.</p> <p>It's worth mentioning that this info is also available via the <code>--help</code> flag for <code>kubectl config</code>:</p> <pre><code>kubectl config set --help Sets an individual value in a kubeconfig file PROPERTY_VALUE is the new value you wish to set. Binary fields such as 'certificate-authority-data' expect a base64 encoded string unless the --set-raw-bytes flag is used. Specifying a attribute name that already exists will merge new fields on top of existing values. Examples: # Set certificate-authority-data field on the my-cluster cluster. kubectl config set clusters.my-cluster.certificate-authority-data $(echo &quot;cert_data_here&quot; | base64 -i -) </code></pre>
acid_fuji
<p>I've installed microk8s on Ubuntu to have a simple Kubernetes cluster for test purposes.</p> <p>I have a usecase where I have to execute a command in a container (in a kubernetes pod) with another user than the one which is used to run the container.</p> <p>Since kubectl does not provide such a possibility, the workaround for docker environment is to use <code>docker exec -u</code>. But the Kubernetes cluster installed by microk8s does not use docker as container runtime, but only containerd.</p> <p>I did not find a possibility to execute a command (as it is possible with docker) in a container as another user with containerd's ctr cli.</p> <p>Is there a possibility?</p>
buderu
<p>As noted in the comment:</p> <blockquote> <p>@buderu I'm afraid this will not be possible with containerd's ctrl cli as per this <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl" rel="nofollow noreferrer">documentation</a>.</p> </blockquote> <p>Citing above documentation:</p> <blockquote> <h3>Mapping from docker cli to crictl</h3> <p>The exact versions for below mapping table are for docker cli v1.40 and crictl v1.19.0.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>docker cli</th> <th>crictl</th> <th>Description</th> <th><strong>Unsupported Features</strong></th> </tr> </thead> <tbody> <tr> <td>attach</td> <td>attach</td> <td>Attach to a running container</td> <td>--detach-keys, --sig-proxy</td> </tr> <tr> <td>exec</td> <td>exec</td> <td>Run a command in a running container</td> <td>--privileged, <strong>--user</strong>, --detach-keys</td> </tr> </tbody> </table> </div></blockquote> <hr /> <p>A way to approach the problem would be the following: use <code>crictl exec</code> to run a UID-changing program which in turn runs the desired payload; for example, to run a login <code>bash</code> shell as the user with UID 1000:</p> <ul> <li><code>$ crictl exec -i -t my-container gosu 1000 bash -l</code></li> </ul> <p>A word about <code>gosu</code>. It's a Go-based <code>setuid</code>+<code>setgid</code>+<code>setgroups</code>+<code>exec</code> program:</p> <pre class="lang-sh prettyprint-override"><code>$ gosu Usage: ./gosu user-spec command [args] eg: ./gosu tianon bash ./gosu nobody:root bash -c 'whoami &amp;&amp; id' ./gosu 1000:1 id ./gosu version: 1.1 (go1.3.1 on linux/amd64; gc) </code></pre> <p>You can read more about it by following its GitHub page:</p> <ul> <li><em><a href="https://github.com/tianon/gosu" rel="nofollow noreferrer">Github.com: Tianon: gosu</a></em></li> </ul> <p><strong>It's worth noting that the solution above won't work with a generic container.</strong></p> <p>The user is required to install the mentioned program by either:</p> <ul> <li>Including the <a href="https://github.com/tianon/gosu/blob/master/INSTALL.md" rel="nofollow noreferrer">installation part in the Dockerfile</a> when creating the container's image.</li> <li>Downloading it into the container (provided that the container has the ability to download files with <code>curl</code> or <code>wget</code>): <ul> <li><code>$ crictl exec my-container wget -O /gosu https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64</code></li> <li><code>$ crictl exec -i -t my-container /gosu 1000 some-other-command</code></li> </ul> </li> </ul> <blockquote> <p><strong>A side note!</strong></p> <p>Using the second option (downloading straight into the container) also requires making the binary executable inside the container:</p> <ul> <li><code>$ crictl exec my-container chmod +x /gosu</code></li> </ul> </blockquote> <hr /> <p>Additional notes to consider:</p> <ul> <li><p><code>su</code> and <code>sudo</code> are meant for a full-fledged UNIX system, and likely won't work unless PAM is installed and the user to switch to is listed in <code>/etc/passwd</code></p> </li> <li><p><code>gosu</code> and <code>setpriv</code> are much simpler and will basically only run the Linux <code>setuid()</code> syscall and then execute the specified payload</p> </li> <li><p><code>gosu</code>, being a Go program, can be easily compiled statically, which simplifies deployment (just copy the binary into the container)</p> </li> </ul>
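<hr /> <p>As an illustration of the first option, a minimal <code>Dockerfile</code> fragment that bakes <code>gosu</code> into the image could look like this (the base image and release version are only examples; see the gosu install instructions linked above for a variant that also verifies the GPG signature):</p>
<pre><code>FROM ubuntu:20.04

# download a gosu release binary and make it executable
RUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends wget ca-certificates \
 &amp;&amp; wget -O /usr/local/bin/gosu https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64 \
 &amp;&amp; chmod +x /usr/local/bin/gosu \
 &amp;&amp; rm -rf /var/lib/apt/lists/*

# the image itself can keep running as root; individual commands can then
# be executed as another user, e.g.:
#   gosu 1000 whoami
</code></pre>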
Dawid Kruk
<p>I am looking for a way to get environment variable in data section of configmap. In the below yml configuration, I have assigned $NODE_NAME which didn't help me. Is there any way to get this work</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: config namespace: kube-system data: test.conf: | { &quot;push&quot;: 5, &quot;test&quot;: $NODE_NAME } </code></pre>
JibinNajeeb
<p>One way to achieve this would be by using <a href="https://stackoverflow.com/a/14157575/12201084">envsubst</a> as follows:</p> <pre><code>$ export NODE_NAME=my-node-name $ cat &lt;&lt; EOF | envsubst | kubectl apply -f- apiVersion: v1 kind: ConfigMap metadata: name: config namespace: kube-system data: test.conf: | { &quot;push&quot;: 5, &quot;test&quot;: $NODE_NAME } EOF </code></pre> <hr /> <p>But something tells me that you want to use this in a pod and populate the config with an environment variable.</p> <p>Have a look at this example:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: config namespace: kube-system data: test.conf: | { &quot;push&quot;: 5, &quot;test&quot;: $NODE_NAME } --- apiVersion: v1 kind: Pod metadata: labels: run: example-pod name: example-pod spec: initContainers: - args: - sh - -c - cat /test.conf | envsubst &gt; /data/test.conf image: bhgedigital/envsubst name: envsubst env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName volumeMounts: - mountPath: /data name: data-volume - mountPath: /test.conf subPath: test.conf name: config-volume containers: - image: busybox name: busybox args: - sleep - &quot;1000&quot; volumeMounts: - mountPath: /data name: data-volume volumes: - name: data-volume emptyDir: {} - name: config-volume configMap: name: config </code></pre> <p>When you apply the above yaml you can check if the file was substituted correctly as follows:</p> <pre><code>$ kubectl exec -it example-pod -- cat /data/test.conf { &quot;push&quot;: 5, &quot;test&quot;: minikube } </code></pre> <p>As you can see, I was testing it in minikube (hence nodeName = minikube in my case).</p>
Matt
<p>I am using the Nginx annotations in Helm like so:</p> <pre><code>ingress: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 hosts: - host: &quot;example.com&quot; paths: - path: /api(/?)(.*) </code></pre> <p>When visiting <code>example.com/api/</code>, my URL is rewritten as expected and I am forwarded to my application.</p> <p>However, when the trailing slash is omitted, e.g <code>example.com/api</code>, this no longer is the case. What could I do to ensure that the scenario without a trailing slash included is correctly evaluated?</p>
a clue
<p>I think you are searching for regex alternatives?</p> <pre><code>ingress: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 hosts: - host: &quot;example.com&quot; paths: - path: /api(/|$)(.*) </code></pre> <p>Either after <code>/api</code> there's another <code>/</code> with whatever (captured by the <code>$2</code>) or there is the end of the line, which will make <code>/api</code> be rewritten to <code>/</code>.</p>
AndD
<p>I'm trying to deploy mongodb with the kubernetes operator on AWS EKS with EFS for the storage class. Just following the documentation examples here:</p> <p><a href="https://github.com/mongodb/mongodb-kubernetes-operator" rel="nofollow noreferrer">https://github.com/mongodb/mongodb-kubernetes-operator</a></p> <p>I don't understand how to define the PVC naming properly. I've looked through the Github issues and Stack Overflow. Just not finding the example to resolve what seems to be a simple issue.</p> <hr /> <pre><code>apiVersion: mongodb.com/v1 kind: MongoDB metadata: name: mongodb spec: members: 1 type: ReplicaSet version: &quot;4.2.6&quot; security: authentication: modes: [&quot;SCRAM&quot;] users: - name: my-user db: admin passwordSecretRef: # a reference to the secret that will be used to generate the user's password name: my-user-password roles: - name: clusterAdmin db: admin - name: userAdminAnyDatabase db: admin scramCredentialsSecretName: my-scram statefulSet: spec: volumeClaimTemplates: - metadata: name: data-volume spec: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 1Gi storageClassName: &quot;efs-sc&quot; </code></pre> <p>Events:</p> <pre><code>create Pod mongodb-0 in StatefulSet mongodb failed error: failed to create PVC -mongodb-0: PersistentVolumeClaim &quot;-mongodb-0&quot; is invalid: metadata.name: Invalid value: &quot;-mongodb-0&quot;: a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*') </code></pre>
miqueltango
<p>I replicated this error and I found an error in your yaml file. There is missing indentation on the <code>volumeClaimTemplates</code> name field.</p> <p>This is your yaml file:</p> <pre class="lang-yaml prettyprint-override"><code> volumeClaimTemplates: - metadata: 👉 name: data-volume spec: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 1Gi storageClassName: &quot;efs-sc&quot; </code></pre> <p>And this is the correct part with fixed indentation:</p> <pre class="lang-yaml prettyprint-override"><code> volumeClaimTemplates: - metadata: 👉 name: data-volume spec: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 1Gi storageClassName: &quot;efs-sc&quot; </code></pre> <p>It appears that because of this error the operator could not get the name correctly and it was trying to create volumes with its default template. You can verify that yourself with <code>kubectl get sts mongodb -o yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code> volumeClaimTemplates: - apiVersion: v1 kind: PersistentVolumeClaim metadata: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi volumeMode: Filesystem - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 10G volumeMode: Filesystem </code></pre>
acid_fuji
<p>I'm trying to update the <code>tls-cipher-suites</code> for the <code>daemonset.apps/node-exporter</code> in the <code>openshift-monitoring</code> namespace using <code>oc edit daemonset.apps/node-exporter -n openshift-monitoring</code></p> <pre><code>. . . - args: - --secure-listen-address=:9100 - --upstream=http://127.0.0.1:9101/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 image: quay.io/coreos/kube-rbac-proxy:v0.3.1 imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: . . . </code></pre> <p>Once the <code>tls-cipher-suites</code> are updated, I see that the node-exporter pods are getting re-deployed. But when I check the <code>daemonset.apps/node-exporter</code> using <code>oc get -o yaml daemonset.apps/node-exporter -n openshift-monitoring</code> I see that the updates done to <code>tls-cipher-suites</code> are lost and it has been reset to the old value. How can I set this value permanently?</p> <p>Note: The purpose of updating <code>tls-cipher-suites</code> is that a Nessus scan has reported a <code>SWEET32</code> vulnerability on port 9100 for the medium strength ciphers: ECDHE-RSA-DES-CBC3-SHA and DES-CBC3-SHA.</p>
Rakesh Kotian
<p>Openshift 3.11 seems to indeed be using <a href="https://github.com/openshift/openshift-ansible/tree/release-3.11/roles/openshift_cluster_monitoring_operator" rel="nofollow noreferrer">openshift_cluster_monitoring_operator</a>. This is why, when you delete or change anything, it reverts it back to the defaults.</p> <p>It manages the node-exporter installation and doesn't seem to allow customizing it. Take a look at <a href="https://github.com/openshift/cluster-monitoring-operator/blob/ca646cfdb92b202289c3faf92227fcaddd19c8a1/Documentation/user-guides/configuring-cluster-monitoring.md#nodeexporterconfig" rel="nofollow noreferrer">the cluster-monitoring-operator docs</a>.</p> <p>My recommendation would be to <a href="https://github.com/openshift/openshift-ansible/tree/release-3.11/roles/openshift_cluster_monitoring_operator#installation" rel="nofollow noreferrer">uninstall the openshift monitoring operator</a> and install node-exporter yourself from the <a href="https://github.com/prometheus-operator/kube-prometheus/blob/master/manifests/node-exporter-daemonset.yaml" rel="nofollow noreferrer">official node-exporter repository</a> or with a <a href="https://github.com/prometheus-community/helm-charts" rel="nofollow noreferrer">helm chart</a>, where you actually have full control over the deployment.</p>
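<p>If you do end up managing the DaemonSet yourself, the relevant part of the pod spec would look roughly like the sketch below. It only reuses the kube-rbac-proxy container and flags you already have in your cluster with a trimmed cipher list; the node-exporter container, volumes and the rest of the DaemonSet are omitted and should be taken from the kube-prometheus manifest linked above:</p>
<pre class="lang-yaml prettyprint-override"><code>      containers:
        - name: kube-rbac-proxy
          # same image you are already running, pin whatever version you actually use
          image: quay.io/coreos/kube-rbac-proxy:v0.3.1
          args:
            - --secure-listen-address=:9100
            - --upstream=http://127.0.0.1:9101/
            - --tls-cert-file=/etc/tls/private/tls.crt
            - --tls-private-key-file=/etc/tls/private/tls.key
            # keep only the cipher suites you want to expose on port 9100
            - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
          ports:
            - containerPort: 9100
              name: https
</code></pre>
<p>Since nothing reconciles a DaemonSet that you deploy yourself, the args will stay exactly as you set them.</p>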
Matt
<p>When running a specific command from a linux terminal, the command is the following:</p> <pre><code>/opt/front/arena/sbin/ads_start ads -db_server vmazfassql01 -db_name Test1 </code></pre> <p>In a regular docker compose yaml file we define it like this:</p> <pre><code>ENTRYPOINT [&quot;/opt/front/arena/sbin/ads_start&quot;, &quot;ads&quot; ] command: [&quot;-db_server vwmazfassql01&quot;,&quot;-db_name Test1&quot;] </code></pre> <p>Then I tried to convert it to Kubernetes:</p> <pre><code>command: [&quot;/opt/front/arena/sbin/ads_start&quot;,&quot;ads&quot;] args: [&quot;-db_server vwmazfassql01&quot;,&quot;-db_name Test1&quot;] </code></pre> <p>or without quotes for args</p> <pre><code>command: [&quot;/opt/front/arena/sbin/ads_start&quot;,&quot;ads&quot;] args: [-db_server vwmazfassql01,-db_name Test1] </code></pre> <p>but I got errors for both cases:</p> <pre><code>Unknown parameter value '-db_server vwmazfassql01' Unknown parameter value '-db_name Test1' </code></pre> <p>I then tried to remove the dashes from the args, but then it seems those values are being ignored and not set up. During the Initialization values process, during the container start, those properties seem to have their default values, e.g. db_name: &quot;ads&quot;. At least that is how it is printed out in the log during the Initialization.</p> <p>I tried a few more possibilities. To define all of them in command:</p> <pre><code>command: - /opt/front/arena/sbin/ads_start - ads - db_server vwmazfassql01 - db_name Test1 </code></pre> <p>To define them in a slightly different way:</p> <pre><code>command: [&quot;/opt/front/arena/sbin/ads_start&quot;,&quot;ads&quot;] args: - db_server vwmazfassql01 - db_name Test1 command: [&quot;/opt/front/arena/sbin/ads_start&quot;,&quot;ads&quot;] args: [db_server vwmazfassql01,db_name Test1] </code></pre> <p>Again they are being ignored and not set. Am I doing something wrong? How can I work around this? Thanks</p>
vel
<p>I would try separating the args, following the documentation example (<a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#run-a-command-in-a-shell" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#run-a-command-in-a-shell</a>)</p> <p>Something like:</p> <pre><code>command: [&quot;/opt/front/arena/sbin/ads_start&quot;, &quot;ads&quot;] args: [&quot;-db_server&quot;, &quot;vwmazfassql01&quot;, &quot;-db_name&quot;, &quot;Test1&quot;] </code></pre> <p>Or maybe, it would work even like this and it looks more clean:</p> <pre><code>command: [&quot;/opt/front/arena/sbin/ads_start&quot;] args: [&quot;ads&quot;, &quot;-db_server&quot;, &quot;vwmazfassql01&quot;, &quot;-db_name&quot;, &quot;Test1&quot;] </code></pre> <p>This follows the general approach of running an external command from code (a random example is python subprocess module) where you specify each single piece of the command that means something on its own.</p>
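<p>For completeness, in a full Pod manifest that would look something like the sketch below. The image name is just a placeholder for whatever image you are actually deploying; the command and args are the ones from above, split into separate tokens:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: ads
spec:
  containers:
    - name: ads
      # placeholder image, use your real one
      image: your-registry/your-ads-image:tag
      command: [&quot;/opt/front/arena/sbin/ads_start&quot;]
      args: [&quot;ads&quot;, &quot;-db_server&quot;, &quot;vwmazfassql01&quot;, &quot;-db_name&quot;, &quot;Test1&quot;]
</code></pre>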
AndD
<p>Kubernetes v19</p> <p>Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 80 of other Pods in the same namespace.</p> <p>Ensure that the new NetworkPolicy:</p> <ul> <li>does not allow access to Pods not listening on port 80</li> <li>does not allow access from Pods not in namespace internal</li> </ul> <p>I need to know if I can do it without adding labels to the namespace and pod or not?</p>
Inforedaster
<p>In the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods" rel="nofollow noreferrer">k8s networkpolicy docs</a> you read:</p> <blockquote> <p>By default, pods are non-isolated; they accept traffic from any source.</p> <p>Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)</p> <p>Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.</p> </blockquote> <p>This means that once a NetworkPolicy selects a pod, you never set deny rules, because everything is denied by default. You only specify allow rules.</p> <p>With this explained, let's go back to the k8s docs where you can read the following:</p> <blockquote> <p>There are four kinds of selectors that can be specified in an ingress from section or egress to section:</p> <p><strong>podSelector</strong>: This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations.</p> <p><strong>namespaceSelector</strong>: This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations.</p> <p><strong>namespaceSelector</strong> and <strong>podSelector</strong>: A single to/from entry that specifies both namespaceSelector and podSelector selects particular Pods within particular namespaces [...]</p> </blockquote> <p>I am not going to paste all the docs here, check the rest <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="nofollow noreferrer">here</a>.</p> <hr /> <p>Now to answer your question: <code>&quot;I need to know if I can do it without adding labels to the namespace and pod or not?&quot;</code></p> <p>What you should notice in the docs mentioned above is that you can only target namespaces and pods using labels.</p> <p>And when you don't use a namespace label selector, the selector defaults to the namespace where the networkpolicy is deployed.</p> <p>So, yes, you can do it without adding labels to a namespace as long as you deploy the network policy in the namespace you want to target. And you can also do it without adding labels to a pod as long as this is the only pod in the namespace.</p>
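<p>To make that concrete, here is one way the policy from the question could look. This is only a sketch that assumes the policy is created in the <code>internal</code> namespace; because the <code>from.podSelector</code> is empty and no <code>namespaceSelector</code> is used, it only matches pods from that same namespace, and only port 80 is allowed:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  # empty selector: selects (and therefore isolates) every pod in the namespace
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        # any pod, but only from this same namespace
        - podSelector: {}
      ports:
        - protocol: TCP
          port: 80
</code></pre>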
Matt
<p>I am trying to write a Go program to get pod logs from the cluster. I am using AKS kubernetes cluster. How can I access the kubeconfig file inside my script? Following is my code:</p> <pre><code>package main import ( &quot;context&quot; &quot;flag&quot; &quot;fmt&quot; &quot;time&quot; &quot;os&quot; &quot;path/filepath&quot; &quot;k8s.io/apimachinery/pkg/api/errors&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/client-go/kubernetes&quot; //&quot;k8s.io/client-go/rest&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; // // Uncomment to load all auth plugins // _ &quot;k8s.io/client-go/plugin/pkg/client/auth&quot; // // Or uncomment to load specific auth plugins // _ &quot;k8s.io/client-go/plugin/pkg/client/auth/azure&quot; // _ &quot;k8s.io/client-go/plugin/pkg/client/auth/gcp&quot; // _ &quot;k8s.io/client-go/plugin/pkg/client/auth/oidc&quot; // _ &quot;k8s.io/client-go/plugin/pkg/client/auth/openstack&quot; ) func main() { /*// creates the in-cluster config config, err := rest.InClusterConfig() if err != nil { panic(err.Error()) }*/ fmt.Printf(&quot;Creating cluster config&quot;) kubePtr := flag.Bool(&quot;use-kubeconfig&quot;, false, &quot;use kubeconfig on local system&quot;) flag.Parse() fmt.Printf(&quot;Updating the existing config&quot;) var kubeconfig string if *kubePtr == true { kubeconfig = filepath.Join(os.Getenv(&quot;HOME&quot;), &quot;.kube&quot;, &quot;config&quot;) } else { kubeconfig = &quot;&quot; } fmt.Printf(&quot;Building config from flags&quot;) config, err := clientcmd.BuildConfigFromKubeconfigGetter(&quot;&quot;, kubeconfig) fmt.Printf(&quot;creating the clientset&quot;) // creates the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } for { // get pods in all the namespaces by omitting namespace // Or specify namespace to get pods in particular namespace pods, err := clientset.CoreV1().Pods(&quot;&quot;).List(context.TODO(), metav1.ListOptions{}) if err != nil { panic(err.Error()) } fmt.Printf(&quot;There are %d pods in the cluster\n&quot;, len(pods.Items)) // Examples for error handling: // - Use helper functions e.g. errors.IsNotFound() // - And/or cast to StatusError and use its properties like e.g. ErrStatus.Message _, err = clientset.CoreV1().Pods(&quot;default&quot;).Get(context.TODO(), &quot;example-xxxxx&quot;, metav1.GetOptions{}) if errors.IsNotFound(err) { fmt.Printf(&quot;Pod example-xxxxx not found in default namespace\n&quot;) } else if statusError, isStatus := err.(*errors.StatusError); isStatus { fmt.Printf(&quot;Error getting pod %v\n&quot;, statusError.ErrStatus.Message) } else if err != nil { panic(err.Error()) } else { fmt.Printf(&quot;Found example-xxxxx pod in default namespace\n&quot;) } time.Sleep(10 * time.Second) } } </code></pre> <p>I am getting error in line 51. Following is my error:</p> <pre><code>error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined </code></pre> <p>Where do I find <code>KUBERNETES_SERVICE_HOST</code> and <code>KUBERNETES_SERVICE_PORT</code> and how do I pass it? I could not find any example.</p>
Sormita Chakraborty
<p>The first thing I noticed is that you didn't mention how you are starting your program.</p> <p>Looking into the code I saw you are creating a <code>kubePtr</code> flag that defaults to <code>false</code>. When the flag is set to <code>true</code> it sets the kubeconfig variable to the kubeconfig path, but because it is false (by default) it sets it to &quot;&quot; and that's why it can't find the config.</p> <hr /> <p>After you set this flag to true you saw the following error:</p> <blockquote> <p>cannot use kubeconfig (type string) as type clientcmd.KubeconfigGetter in argument to clientcmd.BuildConfigFromKubeconfigGetter</p> </blockquote> <p>which means that you have a type mismatch. Let's have a look at the <a href="https://godoc.org/k8s.io/client-go/tools/clientcmd#BuildConfigFromKubeconfigGetter" rel="nofollow noreferrer">BuildConfigFromKubeconfigGetter() function</a> argument types:</p> <pre><code>func BuildConfigFromKubeconfigGetter(masterUrl string, kubeconfigGetter KubeconfigGetter) (*restclient.Config, error) </code></pre> <p>Notice that you are passing a <em>string</em> as an argument that is expected to be of type <em>KubeconfigGetter</em>.</p> <p>Preferably use a different function like <a href="https://godoc.org/k8s.io/client-go/tools/clientcmd#BuildConfigFromFlags" rel="nofollow noreferrer">clientcmd.BuildConfigFromFlags()</a>, which expects a path (<em>string</em>) to the kubeconfig file as an argument.</p> <p>In the official client-go library repository on GitHub you can find <a href="https://github.com/kubernetes/client-go/tree/master/examples" rel="nofollow noreferrer">several examples</a> that can help you get started with the client-go library.</p> <p>E.g. take a look at <a href="https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go#L44-L62" rel="nofollow noreferrer">this official example</a>, and notice how the client is configured.</p>
Matt
<p>I have a microservice that is working on my laptop. However, I am using docker compose. I am working to deploy to a kubernetes cluster which I have already set up. I am stuck on making data persistent. E.g here is my mongodb in docker-compose</p> <pre><code>systemdb: container_name: system-db image: mongo:4.4.1 restart: always ports: - '9000:27017' volumes: - ./system_db:/data/db networks: - backend </code></pre> <p>Since it is an on premise solution, I went with an NFS server. I have created a Persistent Volume and Persistent Volume Claim (pvc-nfs-pv1) which seem to work well when testing with nginx. However, I don't know how to deploy a mongodb statefulset to use the pvc. I am not implementing a replicaset.</p> <p>Here is my yaml:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: mongod spec: serviceName: mongodb-service replicas: 1 selector: matchLabels: role: mongo template: metadata: labels: role: mongo environment: test spec: terminationGracePeriodSeconds: 10 containers: - name: mongod-container image: mongo resources: requests: cpu: &quot;0.2&quot; memory: 200Mi ports: - containerPort: 27017 volumeMounts: - name: pvc-nfs-pv1 mountPath: /data/db volumeClaimTemplates: - metadata: name: pvc-nfs-pv1 annotations: volume.beta.kubernetes.io/storage-class: &quot;standard&quot; spec: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 500Mi </code></pre> <p>How do i do it?</p>
Denn
<p><code>volumeClaimTemplates</code> are used for dynamic volume provisioning. So you're defining one volume claim template which will be used to create a <code>PersistentVolumeClaim</code> for each pod.</p> <blockquote> <p>The <code>volumeClaimTemplates</code> will provide stable storage using <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolumes</a> provisioned by a PersistentVolume Provisioner</p> </blockquote> <p>So for your use case you would need to create a <code>StorageClass</code> with an NFS provisioner. <a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner" rel="nofollow noreferrer">NFS Subdir external provisioner</a> is an automatic provisioner that uses your <em>existing and already configured</em> NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as <code>${namespace}-${pvcName}-${pvName}</code>.</p> <p>Here's an example of how to define the storage class:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: managed-nfs-storage provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME' parameters: pathPattern: &quot;${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}&quot; # waits for nfs.io/storage-path annotation, if not specified will accept as empty string. onDelete: delete </code></pre>
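<p>Once the provisioner is running and the storage class above exists, your StatefulSet only has to reference that class in its claim template. A minimal sketch, reusing the claim name from your manifest and the <code>managed-nfs-storage</code> class from the example (adjust the class name if you pick a different one):</p>
<pre class="lang-yaml prettyprint-override"><code>  volumeClaimTemplates:
    - metadata:
        name: pvc-nfs-pv1
      spec:
        accessModes: [ &quot;ReadWriteOnce&quot; ]
        storageClassName: managed-nfs-storage
        resources:
          requests:
            storage: 500Mi
</code></pre>
<p>The StatefulSet controller will then create one claim per replica, named <code>pvc-nfs-pv1-mongod-0</code> and so on, and the provisioner will carve out a directory on the NFS share for each of them.</p>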
acid_fuji
<p><strong>GOAL</strong> I want to get access to kubernetes dashboard with a standalone nginx service and a microk8s nodeport service.</p> <p><strong>CONTEXT</strong> I have a linux server. On this server, there are several running services such as:</p> <ul> <li>microk8s</li> <li>nginx (note: I am not using ingress, nginx service works independently from microk8s).</li> </ul> <p>Here is the workflow that I am looking for:</p> <ol> <li>http:// URL /dashboard</li> <li>NGINX service (FROM http:// URL /dashboard TO nodeIpAddress:nodeport)</li> <li>nodePort service</li> <li>kubernetes dashboard service</li> </ol> <p><strong>ISSUE:</strong> However, each time I request http:// URL /dashboard I receive a <strong>502 bad request</strong> answer, what I am missing?</p> <p><strong>CONFIGURATION</strong> Please find below, nginx configuration, node port service configuration and the status of microk8s cluster:</p> <p><a href="https://i.stack.imgur.com/s2i6p.png" rel="nofollow noreferrer">nginx configuration: /etc/nginx/site-availables/default</a></p> <p><a href="https://i.stack.imgur.com/exeDh.png" rel="nofollow noreferrer">node-port-service configuration</a></p> <p><a href="https://i.stack.imgur.com/qp27a.png" rel="nofollow noreferrer">node ip address</a></p> <p><a href="https://i.stack.imgur.com/qT5eb.png" rel="nofollow noreferrer">microk8s namespaces</a></p> <p>Thank you very much for your helps.</p>
itbob
<p>I'll summarize the whole problem and solutions here.</p> <p>First, the service which needs to expose the Kubernetes Dashboard needs to point at the right target port, and also needs to select the right Pod (the kubernetes-dashboard Pod).</p> <p>If you check your service with:</p> <pre><code>kubectl describe service &lt;service-name&gt; </code></pre> <p>you can easily see if it's selecting a Pod (or more than one) or nothing, by looking at the Endpoints section. In general, your service should have the same selector, port, targetPort and so on as the standard kubernetes-dashboard service (which exposes the dashboard, but only internally to the cluster).</p> <p>Second, your NGINX configuration proxies the location /dashboard to the service, but the problem is that the kubernetes-dashboard Pod is expecting requests to reach / directly, so the path /dashboard means nothing to it.</p> <p>To solve the second problem, there are a few ways, but they all lie in the NGINX configuration. If you read the documentation of the proxy module (aka <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass" rel="nofollow noreferrer">http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass</a>) you can see that the solution is to add a URI in the configuration, something like this:</p> <pre><code>proxy_pass https://51.68.123.169:30000/ </code></pre> <p>Notice the trailing slash, that is the URI, which means that the location matching the proxy rule is rewritten into /. This means that your_url/dashboard will just become your_url/</p> <p>Without the trailing slash, your location is passed to the target as it is, since the target is only an endpoint.</p> <p>If you need more complex URI changes, what you're looking for is a rewrite rule (they support regex and lots more), but adding the trailing slash should solve your second problem.</p>
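<p>To illustrate the first point, a NodePort service fronting the dashboard could look roughly like the sketch below. It assumes the standard kubernetes-dashboard deployment (label <code>k8s-app: kubernetes-dashboard</code>, container listening on 8443 over HTTPS) and the nodePort 30000 that your NGINX already proxies to; double check those values against your own cluster before using it:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  selector:
    # must match the labels on the dashboard Pod
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      # port the dashboard container actually listens on
      targetPort: 8443
      # the port your NGINX proxy_pass points to
      nodePort: 30000
</code></pre>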
AndD
<p>I am using Kubernetes to exec into a pod like this:</p> <pre class="lang-sh prettyprint-override"><code>kubectl exec myPod bash -i </code></pre> <p>which works fine, except I don't get a prompt. So then I do:</p> <pre class="lang-sh prettyprint-override"><code>export PS1=&quot;myPrompt &quot; </code></pre> <p>Which I would expect to give me a prompt, but doesn't. Is there some workaround for this?</p>
egilchri
<p>Exec'ing into a pod in an interactive way requires specifying the <code>-ti</code> option, where <code>-i</code> passes stdin to the container and <code>-t</code> connects your terminal to this stdin.</p> <p>Take a look at the following example:</p> <pre><code>kubectl exec -it myPod -- bash </code></pre>
Matt
<p><strong>Goal</strong></p> <p>I want to use keycloak as an oauth/oidc provider for my minikube cluster.</p> <p><strong>Problem</strong></p> <p>I am confused by the available documentation.</p> <p>According to this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#external-authentication" rel="noreferrer">documentation</a>, nginx-ingress can handle external authentication with the annotations</p> <ul> <li>nginx.ingress.kubernetes.io/auth-method</li> <li>nginx.ingress.kubernetes.io/auth-signin</li> </ul> <p>But it is not clear from the doc what kind of authentication is used here. Is it OAUTH/BASIC/SAML?</p> <p>I have not found any variables to provide an oauth CLIENTID to the ingress, for example.</p> <p><strong>Additional findings</strong></p> <p>I also found this project <a href="https://github.com/oauth2-proxy/oauth2-proxy" rel="noreferrer">https://github.com/oauth2-proxy/oauth2-proxy</a>, which seems to be what I need and provides the following design:</p> <p>user -&gt; nginx-ingress -&gt; oauth2-proxy -&gt; keycloak</p> <p><strong>Questions:</strong></p> <ol> <li>Do I have to use oauth2-proxy to achieve keycloak oauth?</li> <li>Am I right that nginx-ingress does not have functionality for a direct connection to keycloak?</li> <li>Is there any clear documentation about what exactly nginx.ingress.kubernetes.io/auth-method and nginx.ingress.kubernetes.io/auth-signin are doing?</li> <li>Is there any right way/documentation for building the <strong>user -&gt; nginx-ingress -&gt; oauth2-proxy -&gt; keycloak</strong> integration?</li> </ol>
Ivan
<p>The nginx ingress controller documents provide an <a href="https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/" rel="noreferrer">example</a> of <code>auth-url</code> and <code>auth-signin</code>:</p> <pre class="lang-yaml prettyprint-override"><code>... metadata: name: application annotations: nginx.ingress.kubernetes.io/auth-url: &quot;https://$host/oauth2/auth&quot; nginx.ingress.kubernetes.io/auth-signin: &quot;https://$host/oauth2/start?rd=$escaped_request_uri&quot; ... </code></pre> <p>Please be aware that this functionality works only with two ingress objects:</p> <blockquote> <p>This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.</p> <p>Other Ingress objects can then be annotated in such a way that require the user to authenticate against the first Ingress's endpoint, and can redirect <code>401</code>s to the same endpoint.</p> </blockquote> <p>This <a href="https://docs.syseleven.de/metakube/de/tutorials/setup-ingress-auth-to-use-keycloak-oauth" rel="noreferrer">document</a> shows a good example of how those two ingress objects are used in order to have this functionality.</p> <p>So the first ingress here points to the <code>/oauth2</code> path, which is then defined in a separate ingress object, since this one does not have auth configured for itself.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/auth-url: &quot;https://$host/oauth2/auth&quot; nginx.ingress.kubernetes.io/auth-signin: &quot;https://$host/oauth2/start?rd=$escaped_request_uri&quot; name: external-auth-oauth2 namespace: MYNAMESPACE spec: rules: - host: foo.bar.com </code></pre> <p>The second ingress object, as mentioned earlier, defines the <em>/oauth2</em> path under the same domain and points to your oauth2-proxy deployment (which also answers your first question: this setup does use oauth2-proxy in front of keycloak):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: oauth2-proxy namespace: MYNAMESPACE annotations: cert-manager.io/cluster-issuer: designate-clusterissuer-prod kubernetes.io/ingress.class: nginx spec: rules: - host: foo.bar.com http: paths: - backend: serviceName: oauth2-proxy servicePort: 80 path: /oauth2 </code></pre> <blockquote> <p>Is there any clear documentation about what exactly nginx.ingress.kubernetes.io/auth-method and nginx.ingress.kubernetes.io/auth-signin are doing?</p> </blockquote> <p>The <code>auth-method</code> annotation specifies the HTTP method to use while <code>auth-signin</code> specifies the location of the error page.
Please have a look at valid nginx controllers methods <a href="https://github.com/kubernetes/ingress-nginx/blob/14345ebcfe82521437580c5d63a1e5b4cadedd73/internal/ingress/annotations/authreq/main.go#L96" rel="noreferrer">here</a>.</p> <p><strong>Couple of points to know/consider:</strong></p> <ol> <li><p>What is the main goal:</p> <p>-- <a href="https://medium.com/@mrbobbytables/kubernetes-day-2-operations-authn-authz-with-oidc-and-a-little-help-from-keycloak-de4ea1bdbbe" rel="noreferrer">authentication to kubernetes</a> cluster using OIDC and keycloak?</p> <p>-- using dex: <a href="https://dexidp.io/docs/kubernetes/" rel="noreferrer">https://dexidp.io/docs/kubernetes/</a></p> <p>-- minikube <a href="https://minikube.sigs.k8s.io/docs/tutorials/openid_connect_auth/" rel="noreferrer">openid authentication</a>:</p> </li> <li><p><a href="https://www.keycloak.org/docs/latest/securing_apps/" rel="noreferrer">Securing</a> Applications and Services using keycloak</p> <p>Keycloak supports both OpenID Connect (an extension to OAuth 2.0) and SAML 2.0. When securing clients and services the first thing you need to decide is which of the two you are going to use. If you want you can also choose to secure some with OpenID Connect and others with SAML.</p> <p>To secure clients and services you are also going to need an adapter or library for the protocol you’ve selected. Keycloak comes with its own adapters for selected platforms, but it is also possible to use generic <a href="https://www.keycloak.org/docs/latest/securing_apps/#openid-connect-2" rel="noreferrer">OpenID</a> Connect Relying Party and SAML Service Provider libraries.</p> <p>In most cases Keycloak recommends using OIDC. For example, OIDC is also more suited for HTML5/JavaScript applications because it is easier to implement on the client side than SAML.</p> <p>Please also have look at the <a href="https://www.openshift.com/blog/adding-authentication-to-your-kubernetes-web-applications-with-keycloak" rel="noreferrer">adding authentication</a> to your Kubernetes Web applications with Keycloak document.</p> </li> </ol>
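<p>If it helps, here is a rough sketch of what the oauth2-proxy deployment behind that <code>/oauth2</code> ingress could look like when pointed at Keycloak. Treat it as a starting point only: the image tag, issuer URL, realm, client id/secret and cookie secret are placeholders you must replace with your own values (the flags themselves are standard oauth2-proxy flags):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: MYNAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
        - name: oauth2-proxy
          # placeholder tag, pin the version you actually want to run
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.1.3
          args:
            - --provider=oidc
            # placeholder: your Keycloak realm issuer URL
            - --oidc-issuer-url=https://KEYCLOAK_HOST/auth/realms/MYREALM
            - --client-id=MYCLIENT                         # placeholder
            - --client-secret=MYSECRET                     # placeholder
            - --cookie-secret=REPLACE_WITH_RANDOM_SECRET   # placeholder
            - --email-domain=*
            - --http-address=0.0.0.0:4180
            - --upstream=file:///dev/null
          ports:
            - containerPort: 4180
</code></pre>
<p>The <code>oauth2-proxy</code> service referenced by the second ingress then just needs to expose port 80 and target port 4180 on these pods.</p>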
acid_fuji
<p>I am getting the following error on deployment:</p> <pre><code> Warning FailedScheduling 0s (x2 over 0s) default-scheduler 0/1 nodes are available: 1 node(s) had volume node affinity conflict. </code></pre> <p>Here are the yaml files.</p> <pre><code>dev-storage-class.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer dev-volume-pv-claim.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: dev-volume-pv-claim spec: # storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 1Gi dev-volume.yaml apiVersion: v1 kind: PersistentVolume metadata: name: dev-volume labels: type: local spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete # storageClassName: local-storage local: path: /home/code nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - dev-volume docs.yaml apiVersion: v1 kind: Pod metadata: name: docs spec: volumes: - name: dev-volume-storage persistentVolumeClaim: claimName: dev-volume-pv-claim containers: - image: our/docs name: docs resources: {} volumeMounts: - mountPath: &quot;/app&quot; name: dev-volume-storage dnsPolicy: ClusterFirst restartPolicy: Always status: {} </code></pre> <p><strong>UPDATE:</strong> The following seems to work. I had to create a label clustername (or whatever you choose) and use that in the affinity filter</p> <pre><code>dev-storage-class.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer dev-volume-pv-claim.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: dev-volume-pv-claim spec: # storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 1Gi dev-volume.yaml apiVersion: v1 kind: PersistentVolume metadata: name: dev-volume labels: clustername: local spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete # storageClassName: local-storage local: path: /home/myproj nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: clustername operator: In values: - local docs.yaml apiVersion: v1 kind: Pod metadata: name: docs spec: volumes: - name: dev-volume-storage persistentVolumeClaim: claimName: dev-volume-pv-claim containers: - image: our/docs:local name: docs resources: {} volumeMounts: - mountPath: &quot;/app/code&quot; name: dev-volume-storage ports: - containerPort: 8080 dnsPolicy: ClusterFirst restartPolicy: Always nodeSelector: clustername: local status: {} </code></pre>
mathtick
<p>Your PV define a nodeAffinity:</p> <pre><code>nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - dev-volume </code></pre> <p>This is basically asking for the PV to get created in the node of the Kubernetes cluster with hostname equal to dev-volume. And since the Pod uses that PV, the Pod will be scheduled only on nodes selected by the affinity specified in the PV. ( <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#node-affinity" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#node-affinity</a>)</p> <p>You can see labels of nodes with the command:</p> <pre><code>kubectl get nodes --show-labels </code></pre> <p>And you can add (and also remove) labels from nodes with:</p> <pre><code>kubectl label nodes &lt;your-node-name&gt; disktype=ssd </code></pre> <p>With this said, if I understood correctly, you want to schedule a Pod which mounts a local filesystem path on one of your nodes. If you have <strong>only one node</strong>, you could simply remove the nodeAffinity part of the PV definition.</p> <p>If you have more than one node and you want the Pod to be scheduled on a specific node of the cluster, so that the local filesystem used is on a certain node, you can edit the nodeAffinity of the PV to match the hostname of the specific node you want it to get scheduled on.</p> <p>You can also give the Pod a nodeSelector, with something like:</p> <pre><code>nodeSelector: disktype: ssd </code></pre> <p>This node selector will match nodes with label disktype=ssd</p> <p>For more info on this, you could check Kubernetes documentation: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/</a></p>
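<p>As a concrete sketch (placeholders only, replace the hostname with a real value shown by <code>kubectl get nodes --show-labels</code>), the PV affinity and the Pod selector would end up looking like this:</p>
<pre class="lang-yaml prettyprint-override"><code># in the PersistentVolume spec
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node-name        # placeholder: an actual node hostname

# in the Pod spec
  nodeSelector:
    kubernetes.io/hostname: my-node-name   # same placeholder
</code></pre>
<p>This way the PV, and therefore the Pod that claims it, are pinned to the node where the local path actually exists.</p>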
AndD
<p>This might look like a <a href="https://stackoverflow.com/questions/62496338/app-on-path-instead-of-root-not-working-for-kubernetes-ingress">duplicate</a> but it isn't as the solution in the linked thread doesn't work for me.</p> <p>I have Ingress configured to dispatch requests to different pods based on the path</p> <p>Desired behavior:</p> <p>public_ip/app1 -&gt; pod1_ip:container1_port/<br /> public_ip/app2 -&gt; pod2_ip:container2_port/<br /> public_ip/app3 -&gt; pod3_ip:container3_port/</p> <p>Actual behavior:</p> <p>public_ip/app1 -&gt; pod1_ip:container1_port/app1<br /> public_ip/app2 -&gt; pod2_ip:container2_port/app2<br /> public_ip/app3 -&gt; pod3_ip:container3_port/app3</p> <p>So we get 404's on app1, app2, app3</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: some_name annotations: cert-manager.io/cluster-issuer: letsencrypt-prod acme.cert-manager.io/http01-edit-in-place: &quot;true&quot; nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: ingressClassName: &quot;nginx&quot; tls: - hosts: - some.host secretName: tls-cafe-ingress rules: - host: some.host http: paths: - path: /app1(/|$)(.*) backend: serviceName: app1 servicePort: 1234 - path: /app2(/|$)(.*) backend: serviceName: app2 servicePort: 2345 - path: /app3(/|$)(.*) backend: serviceName: app3 servicePort: 3456 </code></pre> <p>The problem is that Ingress igores the path specifications once there are regular expressions in it. This can be seen by checking the logs of Ingress:</p> <pre><code>k logs -n nginx-ingress ingress-pod-name </code></pre> <p>Here we can see hat nginx has requests to /appX in the log and tries to serve them form the local html folder, in other words, the path defined in the yaml are ignored.</p> <p>If regexes are removed from the path it works but then the path is sent downstream to the target pod which breaks the application</p>
Romeo Kienzler
<p>There are two popular K8s Ingress controllers that use Nginx:</p> <ul> <li>One is maintained by the open source community (<a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">kubernetes/ingress-nginx</a>) and is usually called the <code>community ingress controller</code></li> <li>The second is maintained by NGINX and is called <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">nginxinc/kubernetes-ingress</a></li> </ul> <p>The key difference that applies in this case is the use of annotations. The community ingress controller uses:</p> <p><code>nginx.ingress.kubernetes.io/&lt;annotation_type&gt;</code></p> <p>while the nginxinc one uses:</p> <pre><code>nginx.org/&lt;annotation_type&gt; </code></pre> <p>To learn more about the differences, please visit <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md" rel="nofollow noreferrer">this</a> document.</p>
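<p>If it turns out you are running the NGINX Inc controller, it simply ignores the <code>nginx.ingress.kubernetes.io/rewrite-target</code> and <code>use-regex</code> annotations from your manifest, which matches the behaviour you are seeing. Its own rewrite syntax looks roughly like the sketch below; this is written from memory of the nginxinc docs, so check the <code>nginx.org/rewrites</code> annotation against the documentation for your controller version before relying on it:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: some_name
  annotations:
    nginx.org/rewrites: &quot;serviceName=app1 rewrite=/;serviceName=app2 rewrite=/;serviceName=app3 rewrite=/&quot;
spec:
  rules:
    - host: some.host
      http:
        paths:
          - path: /app1
            backend:
              serviceName: app1
              servicePort: 1234
          - path: /app2
            backend:
              serviceName: app2
              servicePort: 2345
          - path: /app3
            backend:
              serviceName: app3
              servicePort: 3456
</code></pre>
<p>With the community controller instead, your original regex-based rewrite should work as long as that controller actually picks the Ingress up.</p>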
acid_fuji
<p>I am trying to use Kubernetes Ingress Nginx Controller and running a simple nginx server in AWS EKS.</p> <p>Browser (https) --&gt; Route 53 (DNS) --&gt; CLB --&gt; nginx Ingress (Terminate TLS) --&gt; Service --&gt; POD</p> <p>But I am receiving 404 error in browser (url used: <a href="https://example.com/my-nginx" rel="nofollow noreferrer">https://example.com/my-nginx</a>):</p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;404 Not Found&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx/1.19.10&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>and in ingress logs (kubectl logs -n nginx-ingress nginx-ingress-nginx-controller-6db6f85bc4-mfpwx), I can see below:</p> <p>192.168.134.181 - - [24/Apr/2021:19:02:01 +0000] &quot;GET /my-nginx HTTP/2.0&quot; 404 154 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0&quot; 219 0.002 [eshop-dev-my-nginx-9443] [] 192.168.168.105:80 154 0.000 404 42fbe692a032bb40bf193954526369cd</p> <p>Here is my deployment yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-nginx namespace: eshop-dev spec: selector: matchLabels: run: my-nginx replicas: 2 template: metadata: labels: run: my-nginx spec: containers: - name: my-nginx image: nginx ports: - containerPort: 80 </code></pre> <p>Service yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: namespace: eshop-dev name: my-nginx spec: selector: run: my-nginx ports: - name: server port: 9443 targetPort: 80 protocol: TCP </code></pre> <p>and ingress yaml:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress namespace: eshop-dev annotations: kubernetes.io/ingress.class: &quot;nginx&quot; spec: rules: - host: example.com http: paths: - path: /my-nginx pathType: ImplementationSpecific backend: service: name: my-nginx port: number: 9443 tls: - hosts: - example.com secretName: externaluicerts </code></pre> <p>I have verified that service returns the desired output, when used with port forwarding:</p> <pre><code>kubectl -n eshop-dev port-forward service/my-nginx 9443:9443 </code></pre> <p>I'm not sure if the ingress is incorrectly configured or if it is another problem.Thanks in advance for the help!</p> <p><a href="https://i.stack.imgur.com/SJllT.png" rel="nofollow noreferrer">nginx-port-forward</a></p> <p>Here is the output of kubectl get ingress -n eshop-dev test-ingress -o yaml</p> <pre><code>kubectl get ingress -n eshop-dev test-ingress -o yaml Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | 
{&quot;apiVersion&quot;:&quot;networking.k8s.io/v1&quot;,&quot;kind&quot;:&quot;Ingress&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;kubernetes.io/ingress.class&quot;:&quot;nginx&quot;},&quot;name&quot;:&quot;test-ingress&quot;,&quot;namespace&quot;:&quot;eshop-dev&quot;},&quot;spec&quot;:{&quot;rules&quot;:[{&quot;host&quot;:&quot;example.com&quot;,&quot;http&quot;:{&quot;paths&quot;:[{&quot;backend&quot;:{&quot;service&quot;:{&quot;name&quot;:&quot;my-nginx&quot;,&quot;port&quot;:{&quot;number&quot;:9443}}},&quot;path&quot;:&quot;/my-nginx&quot;,&quot;pathType&quot;:&quot;ImplementationSpecific&quot;}]}}],&quot;tls&quot;:[{&quot;hosts&quot;:[&quot;example.com&quot;],&quot;secretName&quot;:&quot;externaluicerts&quot;}]}} kubernetes.io/ingress.class: nginx creationTimestamp: &quot;2021-04-24T13:16:21Z&quot; generation: 13 managedFields: - apiVersion: networking.k8s.io/v1beta1 fieldsType: FieldsV1 fieldsV1: f:status: f:loadBalancer: f:ingress: {} manager: nginx-ingress-controller operation: Update time: &quot;2021-04-24T13:16:40Z&quot; - apiVersion: extensions/v1beta1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: {} manager: kubectl-client-side-apply operation: Update time: &quot;2021-04-24T13:18:36Z&quot; - apiVersion: networking.k8s.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:kubectl.kubernetes.io/last-applied-configuration: {} f:kubernetes.io/ingress.class: {} f:spec: f:rules: {} f:tls: {} manager: kubectl-client-side-apply operation: Update time: &quot;2021-04-24T16:33:47Z&quot; name: test-ingress namespace: eshop-dev resourceVersion: &quot;7555944&quot; selfLink: /apis/extensions/v1beta1/namespaces/eshop-dev/ingresses/test-ingress uid: a7694655-20c6-48c7-8adc-cf3a53cf2ffe spec: rules: - host: example.com http: paths: - backend: serviceName: my-nginx servicePort: 9443 path: /my-nginx pathType: ImplementationSpecific tls: - hosts: - example.com secretName: externaluicerts status: loadBalancer: ingress: - hostname: xxxxxxxxxxxxxxxxdc75878b2-433872486.eu-west-1.elb.amazonaws.com </code></pre>
user2725290
<p>From the image you posted of the nginx-port-forward, I see you went on <code>localhost:9443</code> directly, which means the Nginx server you are trying to access serve its content under <code>/</code></p> <p>But in the ingress definition, you define that the service will be served with <code>path: /my-nginx</code>. This could be the problem, as you are requesting <code>https://example.com/my-nginx</code> which will basically go to <code>my-nginx:9443/my-nginx</code> and, depending on the Pod behind this service, it could return a 404 if there's nothing at that path.</p> <p>To test if the problem is what I specified above, you have a few options:</p> <ul> <li>easiest one, remove <code>path: /my-nginx</code> an, instead, go with <code>path: /</code>. You could also specify <code>pathType: Prefix</code> which means that everything matching the subPath specified will be served by the service.</li> <li>Add a rewrite target, which is necessary if you want to serve a service at a different path from the one expected by the application.</li> </ul> <p>Add an annotation similar to the following:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress namespace: eshop-dev annotations: kubernetes.io/ingress.class: &quot;nginx&quot; # this will rewrite request under / + second capture group nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: rules: - host: example.com http: paths: # this will serve all paths under /my-nginx and capture groups for regex annotations - path: /my-nginx(/|$)(.*) pathType: ImplementationSpecific backend: service: name: my-nginx port: number: 9443 tls: - hosts: - example.com secretName: externaluicerts </code></pre> <ul> <li>Configure your application to know that it will be served under the path you desire. This is often the better approach, as frontend applications should almost always be served under the path that they expect to be.</li> </ul> <p>From the info you posted, I think this is the problem an once fixed, your setup <strong>should</strong> work.</p> <hr /> <p>If you are curious about rewrite targets or how paths work in an Ingress, here is some documentation:</p> <p>Rewrites ( <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite</a> )</p> <p>Path types ( <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types</a> )</p> <hr /> <p><strong>Update</strong></p> <p>About why configuring the application to directly serve its content at the path specified in the Ingress (basically to know at which path it will served) is the best solution:</p> <p>Let's say you serve a complex application in your Pod which will serve its content under /. The main page will try to load several other resources like css, js code and so on, everything from the root directory. 
Basically, if I open <code>/</code>, the app will load also:</p> <pre><code>/example.js /my-beautiful.css </code></pre> <p>Now, if I serve this app behind an ingress at another path, let's say under <code>/test/</code> <strong>with a rewrite target</strong>, the main page will work, because:</p> <pre><code>/test/ --&gt; / # this is my rewrite rule </code></pre> <p>but then, the page will request <code>/example.js</code>, and the rewrite works in one direction only, so the browser will request a resource which will go in 404, because the request should have been <code>/test/example.js</code> (as that would rewrite to remove the /test part of the path)</p> <p>So, with frontend applications, rewrite targets may not be enough, mostly if the applications request resources with absolute paths. With just REST API or single requests instead, rewrites usually works great.</p>
AndD
<p>I have a list of properties defined in <code>values.yaml</code> as follows:</p> <pre><code>files: - &quot;file1&quot; - &quot;file2&quot; </code></pre> <p>Then in my template I want to create config maps out of my values.</p> <p>I came up with the following template:</p> <pre><code>{{- range $value := .Values.files }} --- apiVersion: v1 kind: ConfigMap metadata: name: {{ $value }} data: {{ $value }}: {{ .Files.Get (printf &quot;%s/%s&quot; &quot;files&quot; $value) | indent 4 }} {{- end }} </code></pre> <p>As you can see I want to have configmaps with same name as files. I mixed up several parts of documentation, however my template does not work as expected.</p> <p>How can I achieve configmap creation through templates?</p> <p><strong>//EDIT</strong> I expect to have the following ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: file1 data: file1: &lt;file1 content&gt; --- apiVersion: v1 kind: ConfigMap metadata: name: file2 data: file2: &lt;file2 content&gt; </code></pre>
Forin
<p>Try the following:</p> <pre><code>{{- $files := .Files }} {{- range $value := .Values.files }} --- apiVersion: v1 kind: ConfigMap metadata: name: {{ $value }} data: {{ $value }}: | {{ $files.Get (printf &quot;%s/%s&quot; &quot;files&quot; $value) | indent 6 }} {{- end }} </code></pre> <p>Your problem seems to be incorrect indentation. Make sure the line with <code>$files.Get</code> starts with no spaces.</p> <p>I also added <code>{{- $files := .Files }}</code> to access <code>.Files</code>, because <code>range</code> changes the <code>.</code> scope inside the loop.</p> <p>I have found <a href="https://helm.sh/docs/chart_template_guide/accessing_files/#basic-example" rel="nofollow noreferrer">this example</a> in the documentation, but it seems to work only if the file's content is a single line. So if your files are one line only, then you should check the example.</p> <p>Also notice the <code>|</code> after <code>{{ $value }}:</code>. You need it because the file content is a multiline string. Check <a href="https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-over-multiple-lines">this Stack Overflow question</a> on how to use multiline strings in yaml.</p>
Matt
<p>I created a new service account and a rolebinding giving it the cluster-admin role as follows. I applied a new CRD resource with it and I expected it to fail, as the default cluster-admin role cannot manage CRDs unless a new ClusterRole is created with the aggregate-to-admin label, but the CRD was created and I do not understand why.</p> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles</a></p> <p>kubectl create -f new_crd.yaml --as=system:serviceaccount:test-ns:test</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: test-rolebinding subjects: - kind: ServiceAccount name: test namespace: test-ns roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io </code></pre>
Revital Eres
<p>Addressing the part of the last comment:</p> <blockquote> <p>I do not understand the purpose of using aggregate-to-admin label -- I thought its purpose is to add rules to cluster-admin but if cluster-admin can do anything in the first place then why it is used?</p> </blockquote> <p><code>aggregate-to-admin</code> is a <code>label</code> used to aggregate <code>ClusterRoles</code>. This exact is used to aggregate <code>ClusterRoles</code> to an <strong><code>admin</code></strong> <code>ClusterRole</code>.</p> <blockquote> <p>A side note!</p> <p><code>cluster-admin</code> and <code>admin</code> are two separate <code>ClusterRoles</code>.</p> </blockquote> <p>I will include the example of aggregating <code>ClusterRoles</code> with an explanation below.</p> <p>You can read in the official Kubernetes documentation:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Default ClusterRole</th> <th>Default ClusterRoleBinding</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>cluster-admin</td> <td>system:masters group</td> <td>Allows super-user access to perform any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the role binding's namespace, including the namespace itself.</td> </tr> <tr> <td>admin</td> <td>None</td> <td>Allows admin access, intended to be granted within a namespace using a RoleBinding. If used in a RoleBinding, allows read/write access to most resources in a namespace, including the ability to create roles and role bindings within the namespace. This role does not allow write access to resource quota or to the namespace itself.</td> </tr> <tr> <td></td> <td></td> <td></td> </tr> </tbody> </table> </div> <ul> <li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Access authn authz: RBAC: User facing roles</a></em></li> </ul> <hr /> <h3>Using aggregated <code>ClusterRoles</code></h3> <p>The principle behind aggregated <code>Clusterroles</code> is to have one <code>ClusterRole</code> that have multiple other <code>ClusterRoles</code> aggregated to it.</p> <p>Let's assume that:</p> <ul> <li>A <code>ClusterRole</code>: <code>aggregated-clusterrole</code> will be aggregating two other <code>ClusterRoles</code> that will have needed permissions on some actions.</li> <li>A <code>ClusterRole</code>: <code>clusterrole-one</code> will be used to add some permissions to <code>aggregated-clusterrole</code></li> <li>A <code>ClusterRole</code>: <code>clusterrole-two</code> will be used to add some permissions to <code>aggregated-clusterrole</code></li> </ul> <p>An example of such setup could be implemented by <code>YAML</code> definitions like below:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: aggregated-clusterrole aggregationRule: clusterRoleSelectors: - matchLabels: rbac.example.com/put-here-any-label-name: &quot;true&quot; # &lt;-- IMPORTANT rules: [] </code></pre> <p>Above definition will be aggregating <code>ClusterRoles</code> created with a <code>label</code>:</p> <ul> <li><code>rbac.example.com/put-here-any-label-name: &quot;true&quot;</code></li> </ul> <p>Describing this <code>ClusterRole</code> without aggregating any <code>ClusterRoles</code> with previously mentioned <code>label</code>:</p> <ul> 
<li><code>$ kubectl describe clusterrole aggregated-clusterrole</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>Name: aggregated-clusterrole Labels: &lt;none&gt; Annotations: &lt;none&gt; PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- </code></pre> <p>Two <code>ClusterRoles</code> that will be used are the following:</p> <p><code>clusterrole-one.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterrole-one labels: rbac.example.com/put-here-any-label-name: &quot;true&quot; # &lt;-- IMPORTANT rules: - apiGroups: [&quot;&quot;] resources: [&quot;pods&quot;] verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] </code></pre> <p><code>clusterrole-two.yaml</code>:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterrole-two labels: rbac.example.com/put-here-any-label-name: &quot;true&quot; # &lt;-- IMPORTANT rules: - apiGroups: [&quot;&quot;] resources: [&quot;services&quot;] verbs: [&quot;create&quot;, &quot;delete&quot;] </code></pre> <p>After applying above definitions, you can check if <code>aggregated-clusterrole</code> have permissions used in <code>clusterrole-one</code> and <code>clusterrole-two</code>:</p> <ul> <li><code>$ kubectl describe clusterrole aggregated-clusterrole</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>Name: aggregated-clusterrole Labels: &lt;none&gt; Annotations: &lt;none&gt; PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- services [] [] [create delete] pods [] [] [get list watch] </code></pre> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Access authn authz: RBAC</a></em></li> </ul>
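<p>To tie this back to the question: if you wanted the <code>test</code> service account to receive only the aggregated permissions instead of full <code>cluster-admin</code>, you would bind it to the aggregated role rather than to <code>cluster-admin</code>. A minimal sketch, reusing the names from the examples above:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-aggregated-rolebinding
subjects:
  - kind: ServiceAccount
    name: test
    namespace: test-ns
roleRef:
  kind: ClusterRole
  name: aggregated-clusterrole
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>With that binding, the service account would only be able to do what <code>clusterrole-one</code> and <code>clusterrole-two</code> allow, and creating CRDs would be rejected.</p>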
Dawid Kruk
<p>I'm trying to create a simple solution on my local server running Windows Server 2016 in which I want to deploy a dotnet core API with 5 replicas into k8s. I want to have an Ingress service set up such that it acts as a load balancer between the replicas of my app.</p> <p>So far I got nothing.</p> <p>Could anyone provide me a template that could accomplish that?</p> <p>Thank you all in advance.</p>
Mehmet Can Erim
<p>Do you have a running Kubernetes cluster?</p> <p>If not, then I would start from that. There are many ways to achieve that. If you are a beginner you might want to try <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">minikube</a> or bootstrap your cluster with <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a>.</p> <p>Once you achieve that you will need to use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments</a> to have your .net app running in Kubernetes.</p> <p>To expose your deployment you use <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress</a>. In minikube this is very easy as you just have to enable it via <code>minikube addons</code>. With <code>kubeadm</code> you will have to <a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal" rel="nofollow noreferrer">deploy</a> an ingress controller that will support your ingress object.</p> <p>There are many documents on the web about how to get started with deploying a .NET application, such as:</p> <ul> <li><p><a href="https://andrewlock.net/deploying-asp-net-core-applications-to-kubernetes-part-1-an-introduction-to-kubernetes/" rel="nofollow noreferrer">Andrew Lock | Net escapades</a></p> </li> <li><p><a href="https://medium.com/@bterkaly/running-asp-net-applications-in-kubernetes-a-detailed-step-by-step-approach-96c98f273d1a" rel="nofollow noreferrer">Running ASP.NET Applications in Kubernetes — A Detailed Step By Step Approach</a></p> </li> </ul> <p>Once you try something out please come back with some more specific issues. Stack Overflow is not the right place for such general questions. That said, a bare-bones template is sketched below to get you started.</p>
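<p>Here is that sketch: a Deployment with 5 replicas, a Service in front of them, and an Ingress that load-balances the traffic. The image name, container port and hostname are placeholders you need to replace with your own; it also assumes an nginx ingress controller is installed in the cluster:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: dotnet-api
  template:
    metadata:
      labels:
        app: dotnet-api
    spec:
      containers:
        - name: dotnet-api
          # placeholder: your own image
          image: myregistry/my-dotnet-api:latest
          ports:
            # adjust to the port Kestrel listens on
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dotnet-api
spec:
  selector:
    app: dotnet-api
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dotnet-api
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    # placeholder hostname
    - host: api.example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dotnet-api
                port:
                  number: 80
</code></pre>
<p>The Service already spreads requests across the 5 pods, and the Ingress gives you a single external entry point on top of it.</p>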
acid_fuji
<p>I am learning to deploy applications on private clusters. The application is up and running in a pod and is reachable from the node itself. I have created an ingress controller service as well, but I am not sure what's going wrong. The external IP of the nginx-ingress service always returns 404. Any ideas on the fix ?</p> <p>Services running :</p> <p><a href="https://i.stack.imgur.com/7lRiv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7lRiv.png" alt="enter image description here" /></a></p> <p>Application service :</p> <p><a href="https://i.stack.imgur.com/vbN7e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vbN7e.png" alt="enter image description here" /></a></p> <p>Nginx service :</p> <p><a href="https://i.stack.imgur.com/PeHmM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PeHmM.png" alt="enter image description here" /></a></p> <p>Application ingress :</p> <p><a href="https://i.stack.imgur.com/sJ72i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sJ72i.png" alt="enter image description here" /></a></p> <p>Ingress yaml : <a href="https://i.stack.imgur.com/hQ3it.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hQ3it.png" alt="enter image description here" /></a></p>
Kshitij Karandikar
<p>Looks like the Ingress is not being served by your Nginx Ingress controller at the moment. If the Ingress is served by a controller, it should have at least one IP address under its <code>status.loadBalancer</code> (which should be the external IP used by the Ingress Controller which is serving it), while in your case it looks empty, like this:</p> <pre><code>status: loadBalancer: {} </code></pre> <p>The most common problem in this regard is that the Ingress does not define an Ingress Class, or there is no default Ingress Class in the cluster.</p> <p>First of all, do a <code>k get IngressClass</code> and see if there's any Ingress Class defined in your cluster. <strong>Depending on the Kubernetes version and Ingress Controller version</strong>, it could make use of IngressClass objects or simply use annotations (or both).</p> <p>I would try simply adding the annotation <code>kubernetes.io/ingress.class: nginx</code> under the Ingress <code>metadata</code>, as the nginx class is usually the one defined by the Nginx Ingress Controller. Or, if your Ingress Controller is using a different Ingress Class, I'd try specifying that in the annotation; then your setup <strong>should</strong> work.</p> <hr /> <p>If you are curious about the purpose of an Ingress Class: it is mostly used to associate an Ingress resource definition with an ingress controller. On a Kubernetes cluster, there may be more than one Ingress Controller, each one with its own ingress class, and Ingress resources are associated to one of them by matching the requested ingress class.</p> <p>If an ingress class is not specified, the Ingress uses the default one, which means that the IngressClass annotated to be the default one of the cluster is automatically used.</p> <p>For more info, check the documentation here ( <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class</a> )</p>
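<p>For reference, the change is just a one-line annotation on the Ingress from your screenshot; everything else stays as it is. The names and rule below are placeholders for your real values, and the class assumes your controller was deployed with the default <code>nginx</code> class:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # placeholder: keep your existing Ingress name
  name: my-ingress
  annotations:
    # or whatever class your controller was deployed with
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    # placeholder: keep your existing rules here
    - host: your.host.name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: your-service
                port:
                  number: 80
</code></pre>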
AndD
<p>I have a problem deploying a docker image via kubernetes.</p> <p>One issue is that we cannot use any docker image registry service, e.g. docker hub or any cloud services. But yes, I have docker images as .tar files.</p> <p>However, it always fails with the following message</p> <pre><code>Warning Failed 1s kubelet, dell20 Failed to pull image "test:latest": rpc error: code = Unknown desc = failed to resolve image "docker.io/library/test:latest": failed to do request: Head https://registry-1.docker.io/v2/library/test/manifests/latest: dial tcp i/o timeout </code></pre> <p>I also changed the deployment description to use IfNotPresent or Never. In this case it will fail anyway with ErrImageNeverPull.</p> <p>My guess is: kubernetes tries to use Docker Hub anyway, since it contacts <a href="https://registry-1.docker.io" rel="noreferrer">https://registry-1.docker.io</a> in order to pull the image. I just want to use the tar docker image on local disk, rather than pulling from some services.</p> <p>And yes, the image is in docker:</p> <pre><code>docker images REPOSITORY TAG IMAGE ID CREATED SIZE test latest 9f4916a0780c 6 days ago 1.72GB </code></pre> <p>Can anyone give me any advice on this problem?</p>
zedoul
<p>I was successful in using a local image with a Kubernetes cluster. I provided the explanation with an example below: </p> <p>The only prerequisite is that you need to make sure you have access to upload this image directly to the nodes. </p> <h3>Create the image</h3> <p>Pull the default nginx image from the docker registry with the below command: </p> <p><code>$ docker pull nginx:1.17.5</code></p> <p>The nginx image is used only for demonstration purposes. </p> <p>Tag this image with the new name <code>nginx-local</code> with the command: </p> <p><code>$ docker tag nginx:1.17.5 nginx-local:1.17.5</code></p> <p>Save this image as nginx-local.tar by executing the command: </p> <p><code>$ docker save nginx-local:1.17.5 &gt; nginx-local.tar</code></p> <p>Link to documentation: <a href="https://docs.docker.com/engine/reference/commandline/save/" rel="noreferrer">docker save</a></p> <p><strong>File <code>nginx-local.tar</code> is used as your image.</strong></p> <h3>Copy the image to all of the nodes</h3> <p><strong>The problem with this technique is that you need to ensure all of the nodes have this image.</strong><br> Lack of the image will result in failed pod creation. </p> <p>To copy it you can use <code>scp</code>. It's a secure way to transfer files between machines.<br> Example command for scp: </p> <pre><code>$ scp /path/to/your/file/nginx-local.tar user@ip_address:/where/you/want/it/nginx-local.tar </code></pre> <p>If the image is already on the node, you will need to load it into the local docker image repository with the command:</p> <p><code>$ docker load -i nginx-local.tar</code></p> <p>To ensure that the image is loaded, invoke the command </p> <p><code>$ docker images | grep nginx-local</code></p> <p>Link to documentation: <a href="https://docs.docker.com/engine/reference/commandline/load/" rel="noreferrer">docker load</a>: </p> <p>It should show something like this: </p> <pre><code>docker images | grep nginx nginx-local 1.17.5 540a289bab6c 3 weeks ago 126MB </code></pre> <h3>Creating a deployment with the local image</h3> <p>The last part is to create a deployment that uses the nginx-local image. </p> <p><strong>Please note that:</strong></p> <ul> <li>The image version is <strong>explicitly typed inside</strong> the yaml file. </li> <li>ImagePullPolicy is set to <strong>Never</strong>. <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="noreferrer">ImagePullPolicy</a></li> </ul> <p><strong>Without these options the pod creation will fail.</strong></p> <p>Below is an example deployment which uses exactly that image: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-local namespace: default spec: selector: matchLabels: run: nginx-local replicas: 5 template: metadata: labels: run: nginx-local spec: containers: - image: nginx-local:1.17.5 imagePullPolicy: Never name: nginx-local ports: - containerPort: 80 </code></pre> <p>Create this deployment with the command: <code>$ kubectl create -f local-test.yaml</code></p> <p>The result was that pods were created successfully, as shown below: </p> <pre><code>NAME READY STATUS RESTARTS AGE nginx-local-84ddb99b55-7vpvd 1/1 Running 0 2m15s nginx-local-84ddb99b55-fgb2n 1/1 Running 0 2m15s nginx-local-84ddb99b55-jlpz8 1/1 Running 0 2m15s nginx-local-84ddb99b55-kzgw5 1/1 Running 0 2m15s nginx-local-84ddb99b55-mc7rw 1/1 Running 0 2m15s </code></pre> <p>This operation was successful, but I would recommend using a local docker registry. It will ease the image management process and it will be inside your infrastructure.
Link to documentation about it: <a href="https://docs.docker.com/registry/" rel="noreferrer">Local Docker Registry </a></p>
Dawid Kruk
<p>As was pointed out here (<a href="https://stackoverflow.com/questions/57496077/istio-queryparams-always-returning-truthy">Istio queryParams always returning truthy</a>), matching on queryParams was only implemented in Istio 1.3. I am running a system that is currently locked to Istio 1.1.6 and have a use case where I need to be able to match on query params. My question is whether there is some workaround with which that can be achieved?</p>
Karl
<p>Unfortunately, support for query param matching was added in v1.3 and there is no way to do this on 1.1.6. The solution here would be to upgrade your Istio to at least 1.3, since version 1.1.6 seems to be a bit old.</p> <p>Github pull ref: <a href="https://github.com/istio/istio/pull/13730" rel="nofollow noreferrer">Add support for HTTP query param matching #13730</a></p>
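<p>For reference, once you are on 1.3 or newer, a query parameter match in a VirtualService looks roughly like the sketch below; the host, subsets and the parameter name are placeholder assumptions, not taken from your setup:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service            # placeholder
spec:
  hosts:
    - my-service              # placeholder host
  http:
    - match:
        - queryParams:
            version:          # placeholder query parameter name
              exact: "v2"
      route:
        - destination:
            host: my-service
            subset: v2        # placeholder subset
    - route:
        - destination:
            host: my-service
            subset: v1
</code></pre>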
acid_fuji
<p>I am using a kube config file to fetch the pod CPU and MEM data using go-lang. I am stuck fetching the HPA details, i.e. I am trying to write the equivalent of &quot;kubectl get hpa&quot;, so I can know whether I have applied an HPA to known services or not.</p> <p>Any help on this is highly appreciated.</p> <p>I have tried the below so far.</p> <pre><code>kubeClient &quot;k8s.io/client-go/kubernetes/typed/autoscaling/v1&quot; hpaWatch, err := kubeClient.AutoscalingV1().HorizontalPodAutoscalers(&quot;default&quot;).Watch(metav1.ListOptions{}) </code></pre> <p>But this is not working.</p>
abinash
<p>Here is the line you should have used:</p> <pre><code>hpas, err := clientset.AutoscalingV1().HorizontalPodAutoscalers(&quot;default&quot;).List(context.TODO(), metav1.ListOptions{}) </code></pre> <p>and the following is a complete and working example for listing HPAs. You should be able just copy-paste it and run it.</p> <p>It was tested with <code>[email protected]</code>.</p> <pre><code>package main import ( &quot;context&quot; &quot;flag&quot; &quot;fmt&quot; &quot;os&quot; &quot;path/filepath&quot; &quot;time&quot; metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/client-go/kubernetes&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; ) func main() { var kubeconfig *string if home := homeDir(); home != &quot;&quot; { kubeconfig = flag.String(&quot;kubeconfig&quot;, filepath.Join(home, &quot;.kube&quot;, &quot;config&quot;), &quot;(optional) absolute path to the kubeconfig file&quot;) } else { kubeconfig = flag.String(&quot;kubeconfig&quot;, &quot;&quot;, &quot;absolute path to the kubeconfig file&quot;) } flag.Parse() // use the current context in kubeconfig config, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, *kubeconfig) if err != nil { panic(err.Error()) } // create the clientset clientset, err := kubernetes.NewForConfig(config) if err != nil { panic(err.Error()) } for { hpas, err := clientset.AutoscalingV1().HorizontalPodAutoscalers(&quot;default&quot;).List(context.TODO(), metav1.ListOptions{}) if err != nil { panic(err.Error()) } for _, hpa := range hpas.Items { fmt.Printf(&quot;%q\n&quot;, hpa.GetName()) } time.Sleep(10 * time.Second) } } func homeDir() string { if h := os.Getenv(&quot;HOME&quot;); h != &quot;&quot; { return h } return os.Getenv(&quot;USERPROFILE&quot;) // windows } </code></pre>
Matt
<p>From the k8s docs and other answers I can find, it shows a load balancer (LB) before the ingress. However, I am confused: after matching the ingress rule, there can still be multiple containers backing the selected service. Does load balancing happen again here to select one container to route to?</p> <p><a href="https://i.stack.imgur.com/IKu15.png" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress</a></p>
BRANDON CHEN
<p>As you can see from the picture you posted, the Ingress chooses a Service (based on a Rule) and not directly a Pod. Then, the Service may (or may not) have more than one Pod behind it.</p> <p>The default Service type for Kubernetes is called <code>ClusterIP</code>. It receives a virtual IP, which then redirects requests to one of the Pods served behind it. On each node of the cluster runs a <code>kube-proxy</code>, which is responsible for implementing this virtual IP mechanism.</p> <p>So, yes, load balancing happens again after a Service is selected, if that Service selects more than one Pod. Which backend (Pod) is chosen depends on how <code>kube-proxy</code> is configured and is usually either round robin or just random.</p> <hr /> <p>There is also a way to create a Service without a virtual IP. Such Services, called headless Services, directly use DNS to redirect requests to the different backends, but they are not the default because it is better to use proxies than to try to load balance with DNS, which may have side effects (depending on who makes the requests).</p> <hr /> <p>You can find a lot of info regarding how Services work in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">docs</a>.</p>
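<p>As a small illustration of the headless case, such a Service is simply one with <code>clusterIP</code> set to <code>None</code> (the names below are placeholders); DNS then returns the Pod IPs directly instead of a single virtual IP:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # placeholder
spec:
  clusterIP: None             # this is what makes the Service headless
  selector:
    app: my-app               # placeholder: label on the backing Pods
  ports:
    - port: 80
      targetPort: 8080
</code></pre>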
AndD
<p>I can't connect to my app running with nginx ingress (Docker Desktop win 10).</p> <p>The nginx-ingress controller pod is running, the app is healthy, and I have created an ingress. However, when I try to connect to my app on localhost, I get "connection refused".</p> <p>I see this error in the log:</p> <pre><code>[14:13:13.028][VpnKit ][Info ] vpnkit.exe: Connected Ethernet interface f6:16:36:bc:f9:c6 [14:13:13.028][VpnKit ][Info ] vpnkit.exe: UDP interface connected on 10.96.181.150 [14:13:22.320][GoBackendProcess ][Info ] Adding vpnkit-k8s-controller tcp forward from 0.0.0.0:80 to 10.96.47.183:80 [14:13:22.323][ApiProxy ][Error ] time="2019-12-09T14:13:22-05:00" msg="Port 443 for service ingress-nginx is already opened by another service" </code></pre> <p>I think port 443 is used by another app, possibly zscaler security or skype. Excerpt from <code>netstat -a -b</code>:</p> <pre><code> [svchost.exe] TCP 0.0.0.0:443 0.0.0.0:0 LISTENING 16012 [com.docker.backend.exe] TCP 0.0.0.0:443 0.0.0.0:0 LISTENING 8220 </code></pre> <p>I don't know how to make the ingress work. Please help!</p> <p>My ingress:</p> <pre><code>$ kubectl describe ing kbvalues-deployment-dev-ingress Name: kbvalues-deployment-dev-ingress Namespace: default Address: localhost Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- localhost / kbvalues-deployment-dev-frontend:28000 (10.1.0.174:8080) Annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/cors-allow-headers: X-Forwarded-For, X-app123-XPTO Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 42m nginx-ingress-controller Ingress default/kbvalues-deployment-dev-ingress Normal UPDATE 6s (x5 over 42m) nginx-ingress-controller Ingress default/kbvalues-deployment-dev-ingress </code></pre> <p>My service:</p> <pre><code>$ kubectl describe svc kbvalues-deployment-dev-frontend Name: kbvalues-deployment-dev-frontend Namespace: default Labels: chart=tomcat-sidecar-war-1.0.4 environment=dev name=kbvalues-frontend-dev release=kbvalues-test tier=frontend Annotations: &lt;none&gt; Selector: app=kbvalues-dev Type: ClusterIP IP: 10.98.89.94 Port: &lt;unset&gt; 28000/TCP TargetPort: 8080/TCP Endpoints: 10.1.0.174:8080 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>I am trying to access the app at: <code>http://localhost:28000/health</code>. 
I verified that the <code>/health</code> URL is accessible locally within the web server container.</p> <p>I appreciate any help you can offer.</p> <p><strong>Edit:</strong></p> <p>I tried altering the ingress-nginx service to remove HTTPS, as suggested here: <a href="https://stackoverflow.com/a/56303330/166850">https://stackoverflow.com/a/56303330/166850</a></p> <p>This got rid of the 443 error in the logs, but didn't fix my setup (still getting connection refused).</p> <p><strong>Edit 2:</strong> Here is the Ingress YAML definition (kubectl get -o yaml):</p> <pre><code>$ kubectl get ing -o yaml apiVersion: v1 items: - apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx creationTimestamp: "2019-12-09T18:47:33Z" generation: 5 name: kbvalues-deployment-dev-ingress namespace: default resourceVersion: "20414" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/kbvalues-deployment-dev-ingress uid: 5c34bf7f-1ab4-11ea-80e4-00155d169409 spec: rules: - host: localhost http: paths: - backend: serviceName: kbvalues-deployment-dev-frontend servicePort: 28000 path: / status: loadBalancer: ingress: - hostname: localhost kind: List metadata: resourceVersion: "" selfLink: "" </code></pre> <p><strong>Edit 3:</strong> Output of <code>kubectl get svc -A</code> (ingress line only):</p> <pre><code>ingress-nginx ingress-nginx LoadBalancer 10.96.47.183 localhost 80:30470/TCP 21h </code></pre> <p><strong>Edit 4:</strong> I tried to get the VM's IP address from windows HyperV, but it seems like the VM doesn't have an IP?</p> <pre><code>PS C:\&gt; (Get-VMNetworkAdapter -VMName DockerDesktopVM) Name IsManagementOs VMName SwitchName MacAddress Status IPAddresses ---- -------------- ------ ---------- ---------- ------ ----------- Network Adapter False DockerDesktopVM DockerNAT 00155D169409 {Ok} {} </code></pre> <p><strong>Edit 5:</strong></p> <p>Output of <code>netstat -a -n -o -b</code> for port 80:</p> <pre><code> TCP 0.0.0.0:80 0.0.0.0:0 LISTENING 4 Can not obtain ownership information </code></pre>
RMorrisey
<p>I have managed to create an Ingress resource in Kubernetes on Docker Desktop for Windows.</p> <p><strong>Steps to reproduce</strong>:</p> <ul> <li>Enable Hyper-V</li> <li>Install Docker for Windows and enable Kubernetes</li> <li>Connect kubectl</li> <li>Enable Ingress</li> <li>Create deployment</li> <li>Create service</li> <li>Create ingress resource</li> <li>Add host into local hosts file</li> <li>Test</li> </ul> <h3>Enable <a href="https://learn.microsoft.com/pl-pl/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v" rel="noreferrer">Hyper-V</a></h3> <p>From PowerShell with administrator access, run the below command:</p> <p><code>Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All</code></p> <p>The system could ask you to reboot your machine.</p> <h3>Install Docker for Windows and enable Kubernetes</h3> <p>Install the Docker application with all the default options and enable Kubernetes.</p> <h3>Connect kubectl</h3> <p>Install <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows" rel="noreferrer">kubectl </a>.</p> <h3>Enable Ingress</h3> <p>Run these commands:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml </code></pre> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml </code></pre> <h3><strong>Edit:</strong> Make sure no other service is using port 80</h3> <p>Restart your machine. From a <code>cmd</code> prompt running as admin, do: <code>net stop http</code>. Stop the listed services using <code>services.msc</code>.</p> <p>Use: <code>netstat -a -n -o -b</code> and check for other processes listening on port 80.</p> <h3>Create deployment</h3> <p>Below is a simple deployment with pods that will reply to requests:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello spec: selector: matchLabels: app: hello version: 2.0.0 replicas: 3 template: metadata: labels: app: hello version: 2.0.0 spec: containers: - name: hello image: &quot;gcr.io/google-samples/hello-app:2.0&quot; env: - name: &quot;PORT&quot; value: &quot;50001&quot; </code></pre> <p>Apply it by running the command:</p> <p><code>$ kubectl apply -f file_name.yaml</code></p> <h3>Create service</h3> <p>For you to be able to communicate with the pods, you need to create a service.</p> <p>Example below:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: hello-service spec: type: NodePort selector: app: hello version: 2.0.0 ports: - name: http protocol: TCP port: 80 targetPort: 50001 </code></pre> <p>Apply this service definition by running the command:</p> <p><code>$ kubectl apply -f file_name.yaml</code></p> <h3>Create Ingress resource</h3> <p>Below is a simple Ingress resource using the service created above:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: hello-ingress spec: rules: - host: hello-test.internal http: paths: - path: / backend: serviceName: hello-service servicePort: http </code></pre> <p>Take a look at:</p> <pre class="lang-yaml prettyprint-override"><code>spec: rules: - host: hello-test.internal </code></pre> <p><code>hello-test.internal </code> will be used as the <code>hostname</code> to connect to your pods.</p> <p>Apply your Ingress resource by invoking the command:</p> <p><code>$ kubectl apply -f file_name.yaml</code></p> <h3>Add host into local hosts file</h3> <p>I found this <a href="https://github.com/docker/for-win/issues/1901" rel="noreferrer">Github link </a> that will allow you to connect to your Ingress resource by <code>hostname</code>.</p> <p>To achieve that, add a line <code>127.0.0.1 hello-test.internal</code> to your <code>C:\Windows\System32\drivers\etc\hosts</code> file and save it. You will need Administrator privileges to do that.</p> <p><strong>Edit:</strong> The newest version of Docker Desktop for Windows already adds a hosts file entry: <code>127.0.0.1 kubernetes.docker.internal</code></p> <h3>Test</h3> <p>Display the information about Ingress resources by invoking the command: <code>kubectl get ingress</code></p> <p>It should show:</p> <pre><code>NAME HOSTS ADDRESS PORTS AGE hello-ingress hello-test.internal localhost 80 6m2s </code></pre> <p>Now you can access your Ingress resource by opening your web browser and typing</p> <p><code>http://hello-test.internal/</code></p> <p>The browser should output:</p> <pre><code>Hello, world! Version: 2.0.0 Hostname: hello-84d554cbdf-2lr76 </code></pre> <p><code>Hostname: hello-84d554cbdf-2lr76</code> is the name of the pod that replied.</p> <p>If this solution is not working, please check connections with the command: <code>netstat -a -n -o </code> (<strong>with Administrator privileges</strong>) to see if something else is using port 80.</p>
Dawid Kruk
<p>I would like to see the changes made to the helm chart compared to its previous release - running <code>helm list</code> I see there were xx revisions - is there any way to see the differences? I know about rollback <code>helm rollback &lt;RELEASE&gt; 0</code>, but I just wanted to know what's changed.</p>
potatopotato
<p>On the <a href="https://helm.sh/docs/community/related/" rel="noreferrer">helm website</a> you can find some plugins. One of them, called <a href="https://github.com/databus23/helm-diff" rel="noreferrer">helm-diff</a>, can generate diffs between releases.</p> <p>Here is how to use this plugin:</p> <pre><code>$ helm diff release -h This command compares the manifests details of a different releases created from the same chart It can be used to compare the manifests of - release1 with release2 $ helm diff release [flags] release1 release2 Example: $ helm diff release my-prod my-stage </code></pre> <p><a href="https://github.com/databus23/helm-diff#install" rel="noreferrer">Here is explained how to install the plugin</a>. TLDR: if you are using helm version &gt; 2.3.x just run:</p> <pre><code>helm plugin install https://github.com/databus23/helm-diff </code></pre> <p>Let me know if this solves your problem. If you have any further questions I'd be happy to answer them.</p>
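<p>One more note: since you want to compare revisions of the same release rather than two different releases, the plugin also has a <code>revision</code> sub-command. Assuming your release is called <code>my-release</code> and you want to compare revisions 2 and 3:</p> <pre><code>$ helm history my-release          # list the revision numbers
$ helm diff revision my-release 2 3
</code></pre>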
Matt
<p>On the host, everything in the mounted directory (<code>/opt/testpod</code>) is owned by uid=0 gid=0. I need those files to be owned by whatever the container decides, i.e. a different gid, to be able to write there. Resources I'm testing with:</p> <pre><code>--- apiVersion: v1 kind: PersistentVolume metadata: name: pv labels: name: pv spec: storageClassName: manual capacity: storage: 10Mi accessModes: - ReadWriteOnce hostPath: path: &quot;/opt/testpod&quot; --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc spec: storageClassName: manual selector: matchLabels: name: pv accessModes: - ReadWriteOnce resources: requests: storage: 10Mi --- apiVersion: v1 kind: Pod metadata: name: testpod spec: nodeSelector: foo: bar securityContext: runAsUser: 500 runAsGroup: 500 fsGroup: 500 volumes: - name: vol persistentVolumeClaim: claimName: pvc containers: - name: testpod image: busybox command: [ &quot;sh&quot;, &quot;-c&quot;, &quot;sleep 1h&quot; ] volumeMounts: - name: vol mountPath: /data </code></pre> <p>After the pod is running, I <code>kubectl exec</code> into it and <code>ls -la /data</code> shows everything still owned by gid=0. According to some Kuber docs, <code>fsGroup</code> is supposed to chown everything on the pod start but it doesn't happen. What am I doing wrong please?</p>
Yuri Geinish
<p>The <code>hostpath</code> type PV doesn't support security context. You have to be root to write to the volume. It is described well in this <a href="https://github.com/kubernetes/minikube/issues/1990" rel="noreferrer">github issue</a> and in the docs about <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/storage/volumes/#hostpath" rel="noreferrer">hostPath</a>:</p> <blockquote> <p>The directories created on the underlying hosts are only writable by root. You either need to run your process as root in a <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/docs/user-guide/security-context" rel="noreferrer">privileged container</a> or modify the file permissions on the host to be able to write to a <code>hostPath</code> volume</p> </blockquote> <p>You may also want to check this <a href="https://github.com/kubernetes/kubernetes/pull/39438" rel="noreferrer">github request</a> describing why changing permissions of a host directory is dangerous.</p> <p>The workaround people describe as working is to grant your user sudo privileges, but that actually defeats the idea of running the container as a non-root user.</p> <p>Security context appears to be working well with an emptyDir volume (described well in the k8s docs <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">here</a>).</p>
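<p>If you have to stay on <code>hostPath</code>, one workaround that is often used in practice (a sketch below, reusing the names from your manifests; it is not an officially supported mechanism) is an init container that runs as root only to fix the ownership of the mount before the main container starts:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  securityContext:
    runAsUser: 500
    runAsGroup: 500
    fsGroup: 500
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: pvc
  initContainers:
    - name: fix-permissions
      image: busybox
      command: [ "sh", "-c", "chown -R 500:500 /data" ]
      securityContext:
        runAsUser: 0          # root only in the init container, only to chown the mount
      volumeMounts:
        - name: vol
          mountPath: /data
  containers:
    - name: testpod
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: vol
          mountPath: /data
</code></pre>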
acid_fuji
<p>I was curious if it is possible to move completely off hostpaths and use local volumes (persistent volumes) instead. However, there doesn't seem to be a way to include volumeClaimTemplates into a daemonset.</p> <p>Statefulsets provide volumeClaimTemplates, but they require the replicas to be manually defined, as opposed to auto scaling to as many nodes as can be found.</p> <p>Is there a way to automatically scale to all nodes, and also create a pvc per replica?</p> <p>Related question where they decided to use statefulsets and give up autoscaling: <a href="https://stackoverflow.com/questions/55165517/handling-persistentvolumeclaim-in-daemonset">Handling PersistentVolumeClaim in DaemonSet</a></p>
kittydoor
<p>For now, there is no way to use volumeClaimTemplates (per-Pod PersistentVolumeClaims) in a DaemonSet. </p> <p>There was a feature request for it on <a href="https://github.com/" rel="nofollow noreferrer">Github</a> but unfortunately it was closed. Link to this request: <a href="https://github.com/kubernetes/kubernetes/issues/78902" rel="nofollow noreferrer">volumeClaimTemplates available for Daemon Sets</a></p> <p>There is a <a href="https://github.com/netdata/helmchart/issues/61#issuecomment-550764504" rel="nofollow noreferrer">comment</a> in the link above which describes this topic a bit more. </p>
Dawid Kruk
<p>I need to install a helm with a dynamic &quot;project_id&quot; flag on deploy time inside the <code>rawConfig</code> multi-line string</p> <p>Example <code>values.yaml</code></p> <pre><code>sinks: console: type: &quot;console&quot; inputs: [&quot;kafka&quot;] rawConfig: | target = &quot;stdout&quot; encoding.codec = &quot;json&quot; stackdriver: type: &quot;gcp_stackdriver_logs&quot; inputs: [&quot;kafka&quot;] rawConfig: | healthcheck = true log_id = &quot;mirth-channels-log&quot; project_id = &quot;my_project&quot; resource.type = &quot;k8s_cluster&quot; </code></pre> <p>How do I override this value of <code>project_id</code> in rawConfig? I'm trying to do this:</p> <pre><code> helm install vector helm/vector --values helm/vector-agent/values.yaml -n my-namespace --set sinks.stackdriver.rawConfig='\nlog_id\ =\ &quot;mirth-channels-log&quot;\nproject_id\ =\ &quot;my_project_test&quot;\nresource.type\ =\ &quot;k8s_cluster&quot;\n' </code></pre> <p>But it does not work</p>
Igor Corradi
<p>Use a second <code>--values</code> file like the following:</p> <pre><code># values_patch.yaml sinks: stackdriver: rawConfig: | healthcheck = true log_id = &quot;mirth-channels-log&quot; project_id = &quot;SOME_OTHER_PROJECT_ID&quot; resource.type = &quot;k8s_cluster&quot; </code></pre> <pre><code>$ helm install [...] --values values_patch.yaml </code></pre>
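<p>Putting it together with the command from your question, the full install would look like this; note that for overlapping keys the values file passed last wins:</p> <pre><code>$ helm install vector helm/vector \
    --values helm/vector-agent/values.yaml \
    --values values_patch.yaml \
    -n my-namespace
</code></pre>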
Matt
<p>I deployed an EKS cluster and a fargate profile. Then I deployed a few applications to this cluster. I can see these fargate instances are launched.</p> <p><a href="https://i.stack.imgur.com/gYzMv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gYzMv.png" alt="enter image description here" /></a></p> <p>When I click each of these instances, it shows me some information like <code>os</code>, <code>image</code> etc. But it doesn't tell me the CPU and memory. When I look at fargate pricing (<a href="https://aws.amazon.com/fargate/pricing/" rel="nofollow noreferrer">https://aws.amazon.com/fargate/pricing/</a>), it is calculated based on CPU and memory.</p> <p>I have used ECS and it is very clear that I need to provision CPU/memory at the service/task level. But I can't find anything in EKS.</p> <p>How do I know how many resources they are consuming?</p>
Joey Yi Zhao
<p>With Fargate you don't have to provision, configure or scale virtual machines to run your containers; the containers themselves become the fundamental compute primitive.</p> <p>This solution model is called <code>serverless</code>, where you are charged only for the compute resources and storage that are needed to execute some piece of your code. It does not mean that there are no servers involved in this; it's just that you don't need to care about them.</p> <p>To monitor them you can use <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/viewing_cloudwatch_metrics.html#viewing_service_metrics" rel="nofollow noreferrer">CloudWatch</a>. The documents below describe how this can be achieved:</p> <ul> <li><p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-high-cpu-utilization/" rel="nofollow noreferrer">How do I troubleshoot high CPU utilization on an Amazon ECS task on Fargate?</a></p> </li> <li><p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/ecs-tasks-fargate-memory-utilization/" rel="nofollow noreferrer">How can I monitor high memory utilization for Amazon ECS tasks on Fargate?</a></p> </li> </ul> <p>It is worth mentioning that Fargate is just a launch type for ECS (another one is EC2). Please have a look at the <a href="https://epsagon.com/development/key-metrics-for-monitoring-amazon-ecs-and-aws-fargate/" rel="nofollow noreferrer">diagram</a> in this document for a clear picture of how these are connected. The CloudWatch metrics are collected automatically for Fargate. If you are using EKS with Fargate you can monitor the pods with a metrics add-on or Prometheus inside your Kubernetes cluster.</p> <p>Here's an <a href="https://sysdig.com/blog/monitor-aws-fargate-prometheus/" rel="nofollow noreferrer">example</a> of monitoring Fargate with Prometheus. Notice that it scrapes the metrics from CloudWatch.</p>
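<p>For a quick in-cluster view, assuming you have the Kubernetes metrics-server (or an equivalent metrics add-on) installed, you can also check what each Fargate-backed pod is actually consuming:</p> <pre><code>$ kubectl top pods -n my-namespace    # namespace name is a placeholder
</code></pre> <p>Also keep in mind that on EKS with Fargate the compute a pod receives is derived from the resource requests/limits in its spec, rounded up to the nearest supported Fargate CPU/memory configuration, so setting those values explicitly is what effectively controls (and prices) the capacity.</p>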
acid_fuji
<p>I'm trying to figure out how to use Ansible with Vagrant the proper way. By default, it seems Vagrant is isolating Ansible execution per box and executes playbooks after each box partially as it applies to that single box in the loop. I find this VERY counterproductive and I have tried tricking Vagrant into executing a playbook across all of the hosts AFTER all of them booted, but it seems Ansible, when started from Vagrant never sees more than a single box at a time.</p> <p>Edit: these are the version I am working with:</p> <p>Vagrant: 2.2.6 Ansible: 2.5.1 Virtualbox: 6.1</p> <p>The playbook (with the hosts.ini) by itsef executes without issues when I run it stand-alone with the ansible-playbook executable after the hosts come up, so the problem is with my Vagrant file. I just cannot figure it out.</p> <p>This is the Vagrantfile:</p> <pre class="lang-rb prettyprint-override"><code># -*- mode: ruby -*- # vi: set ft=ruby : IMAGE_NAME = "ubuntu/bionic64" Vagrant.configure("2") do |config| config.ssh.insert_key = false config.vm.box = IMAGE_NAME # Virtualbox configuration config.vm.provider "virtualbox" do |v| v.memory = 4096 v.cpus = 2 #v.linked_clone = true end # master and node definition boxes = [ { :name =&gt; "k8s-master", :ip =&gt; "192.168.50.10" }, { :name =&gt; "k8s-node-1", :ip =&gt; "192.168.50.11" } ] boxes.each do |opts| config.vm.define opts[:name] do |config| config.vm.hostname = opts[:name] config.vm.network :private_network, ip: opts[:ip] if opts[:name] == "k8s-node-1" config.vm.provision "ansible_local" do |ansible| ansible.compatibility_mode = "2.0" ansible.limit = "all" ansible.config_file = "ansible.cfg" ansible.become = true ansible.playbook = "playbook.yml" ansible.groups = { "masters" =&gt; ["k8s-master"], "nodes" =&gt; ["k8s-node-1"] } end end end end end </code></pre> <p>ansible.cfg</p> <pre><code>[defaults] connection = smart timeout = 60 deprecation_warnings = False host_key_checking = False inventory = hosts.ini [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes </code></pre> <p>hosts.ini</p> <pre><code>[masters] k8s-master ansible_host=192.168.50.10 ansible_user=vagrant [nodes] k8s-node-1 ansible_host=192.168.50.11 ansible_user=vagrant [all:vars] ansible_python_interpreter=/usr/bin/python3 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key </code></pre> <p>playbook.yml</p> <pre><code>- hosts: all become: yes tasks: - name: Update apt cache. apt: update_cache=yes cache_valid_time=3600 when: ansible_os_family == 'Debian' - name: Ensure swap is disabled. mount: name: swap fstype: swap state: absent - name: Disable swap. 
command: swapoff -a when: ansible_swaptotal_mb &gt; 0 - name: create the 'mobile' user user: name=mobile append=yes state=present createhome=yes shell=/bin/bash - name: allow 'mobile' to have passwordless sudo lineinfile: dest: /etc/sudoers line: 'mobile ALL=(ALL) NOPASSWD: ALL' validate: 'visudo -cf %s' - name: set up authorized keys for the mobile user authorized_key: user: mobile key: "{{ lookup('pipe','cat ssh_keys/*.pub') }}" state: present exclusive: yes - hosts: all become: yes tasks: - name: install Docker apt: name: docker.io state: present update_cache: true - name: install APT Transport HTTPS apt: name: apt-transport-https state: present - name: add Kubernetes apt-key apt_key: url: https://packages.cloud.google.com/apt/doc/apt-key.gpg state: present - name: add Kubernetes' APT repository apt_repository: repo: deb http://apt.kubernetes.io/ kubernetes-xenial main state: present filename: 'kubernetes' - name: install kubelet apt: name: kubelet=1.17.0-00 state: present update_cache: true - name: install kubeadm apt: name: kubeadm=1.17.0-00 state: present - hosts: masters become: yes tasks: - name: install kubectl apt: name: kubectl=1.17.0-00 state: present force: yes - hosts: k8s-master become: yes tasks: - name: check docker status systemd: state: started name: docker - name: initialize the cluster shell: kubeadm init --apiserver-advertise-address 192.168.50.10 --pod-network-cidr=10.244.0.0/16 &gt;&gt; cluster_initialized.txt args: chdir: $HOME creates: cluster_initialized.txt - name: create .kube directory become: yes become_user: mobile file: path: $HOME/.kube state: directory mode: 0755 - name: copy admin.conf to user's kube config copy: src: /etc/kubernetes/admin.conf dest: /home/mobile/.kube/config remote_src: yes owner: mobile - name: install Pod network become: yes become_user: mobile shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml &gt;&gt; pod_network_setup.txt args: chdir: $HOME creates: pod_network_setup.txt - hosts: k8s-master become: yes gather_facts: false tasks: - name: get join command shell: kubeadm token create --print-join-command 2&gt;/dev/null register: join_command_raw - name: set join command set_fact: join_command: "{{ join_command_raw.stdout_lines[0] }}" - hosts: nodes become: yes tasks: - name: check docker status systemd: state: started name: docker - name: join cluster shell: "{{ hostvars['k8s-master'].join_command }} &gt;&gt; node_joined.txt" args: chdir: $HOME creates: node_joined.txt </code></pre> <p>The moment the playbook tries to execute against k8s-master, it fails like this:</p> <pre><code>fatal: [k8s-master]: UNREACHABLE! =&gt; {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname k8s-master: Temporary failure in name resolution", "unreachable": true} </code></pre> <p>The host is up. SSH works.</p> <p>Who can help me sort this out?</p> <p>Thanks!</p>
Tobias
<p>I have managed to use Ansible inside of Vagrant. </p> <p>Here is what I did to make it work: </p> <p><strong>Steps to reproduce:</strong></p> <ul> <li>Install Vagrant, Virtualbox</li> <li>Create all the necessary files and directories <ul> <li>ansible.cfg</li> <li>playbook.yml</li> <li>hosts</li> <li>insecure_private_key</li> <li>Vagrant file</li> </ul></li> <li>Test</li> </ul> <h2>Install Vagrant, Virtualbox</h2> <p>Follow installation guides at appropriate sites:</p> <ul> <li><a href="https://www.vagrantup.com/docs/installation/" rel="nofollow noreferrer">Vagrant</a></li> <li><a href="https://www.virtualbox.org/wiki/Downloads" rel="nofollow noreferrer">Virtualbox</a></li> </ul> <h2>Create all the necessary files and directories</h2> <p><strong>This example bases on original poster files</strong>.</p> <p>Create <code>vagrant</code> and <code>ansible</code> folders to store all the configuration files and directories. The structure of it could look like that:</p> <ul> <li><code>vagrant</code> - directory <ul> <li>Vagrantfile - file with main configuration </li> </ul></li> <li><code>ansible</code> - directory <ul> <li>ansible.cfg - configuration file of Ansible</li> <li>playbook.yml - file with steps for Ansible to execute </li> <li>hosts - file with information about hosts </li> <li>insecure_private_key - private key of created machines </li> </ul></li> </ul> <p><code>Ansible</code> folder is a seperate directory that will be copied to <code>k8s-node-1</code>. </p> <p>By default Vagrant shares a <code>vagrant</code> folder with permissions of <code>777</code>. It allows owner, group and others to have full access on everything that is inside of it. </p> <p>Logging to virtual machine manualy and running <code>ansible-playbook</code> command inside <code>vagrant</code> directory will output errors connected with permissions. It will render <code>ansible.cfg</code> and <code>insecure_private_key</code> useless. </p> <h3>Ansible.cfg</h3> <p><code>Ansible.cfg</code> is configuration file of Ansible. Example used below:</p> <pre><code>[defaults] connection = smart timeout = 60 deprecation_warnings = False host_key_checking = False [ssh_connection] ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes </code></pre> <p>Create <code>ansible.cfg</code> inside <code>ansible</code> directory.</p> <h3>Playbook.yml</h3> <p>Example <code>playbook.yml</code> is a file with steps for Ansible to execute. It will check connections and test if groups are configured correctly: </p> <pre><code>- name: Check all connections hosts: all tasks: - name: Ping ping: - name: Check specific connection to masters hosts: masters tasks: - name: Ping ping: - name: Check specific connection to nodes hosts: nodes tasks: - name: Ping ping: </code></pre> <p>Create <code>playbook.yml</code> inside <code>ansible</code> directory.</p> <h3>Insecure_private_key</h3> <p>To successfully connect to virtual machines you will need <code>insecure_private_key</code>. You can create it by invoking command:<code>$ vagrant init</code> inside <code>vagrant</code> directory. It will create <code>insecure_private_key</code> inside your physical machine in <strong><code>HOME_DIRECTORY/.vagrant.d</code></strong>. Copy it to <code>ansible</code> folder. 
</p> <h3>Hosts</h3> <p>Below <code>hosts</code> file is responsible for passing the information about hosts to Ansible:</p> <pre><code>[masters] k8s-master ansible_host=192.168.50.10 ansible_user=vagrant [nodes] k8s-node-1 ansible_host=192.168.50.11 ansible_user=vagrant [all:vars] ansible_python_interpreter=/usr/bin/python3 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/ansible/insecure_private_key </code></pre> <p>Create <code>hosts</code> file inside <code>ansible</code> directory.</p> <p><strong>Please take a specific look on:</strong> <code>ansible_ssh_private_key_file=/ansible/insecure_private_key</code></p> <p>This is declaration for Ansible to use earlier mentioned key. </p> <h3>Vagrant</h3> <p><code>Vagrant</code> file is the main configuration file:</p> <pre class="lang-rb prettyprint-override"><code># -*- mode: ruby -*- # vi: set ft=ruby : IMAGE_NAME = "ubuntu/bionic64" Vagrant.configure("2") do |config| config.ssh.insert_key = false config.vm.box = IMAGE_NAME # Virtualbox configuration config.vm.provider "virtualbox" do |v| v.memory = 4096 v.cpus = 2 #v.linked_clone = true end # master and node definition boxes = [ { :name =&gt; "k8s-master", :ip =&gt; "192.168.50.10" }, { :name =&gt; "k8s-node-1", :ip =&gt; "192.168.50.11" } ] boxes.each do |opts| config.vm.define opts[:name] do |config| config.vm.hostname = opts[:name] config.vm.network :private_network, ip: opts[:ip] if opts[:name] == "k8s-node-1" config.vm.synced_folder "../ansible", "/ansible", :mount_options =&gt; ["dmode=700", "fmode=700"] config.vm.provision "ansible_local" do |ansible| ansible.compatibility_mode = "2.0" ansible.limit = "all" ansible.config_file = "/ansible/ansible.cfg" ansible.become = true ansible.playbook = "/ansible/playbook.yml" ansible.inventory_path = "/ansible/hosts" end end end end end </code></pre> <p><strong>Please take a specific look on:</strong></p> <pre class="lang-rb prettyprint-override"><code>config.vm.synced_folder "../ansible", "/ansible", :mount_options =&gt; ["dmode=700", "fmode=700"] </code></pre> <p><code>config.vm.synced_folder</code> will copy <code>ansible</code> directory to <code>k8s-node-1</code> with all the files inside. </p> <p>It will set permissions for full access only to owner (vagrant user). </p> <pre class="lang-rb prettyprint-override"><code>ansible.inventory_path = "/ansible/hosts" </code></pre> <p><code>ansible.inventory_path</code> will tell Vagrant to provide <code>hosts</code> file for Ansible. </p> <h2>Test</h2> <p>To check run the following command from the <code>vagrant</code> directory: <code>$ vagrant up</code></p> <p>The part of the output responsible for Ansible should look like that: </p> <pre><code>==&gt; k8s-node-1: Running provisioner: ansible_local... k8s-node-1: Installing Ansible... k8s-node-1: Running ansible-playbook... 
PLAY [Check all connections] *************************************************** TASK [Gathering Facts] ********************************************************* ok: [k8s-master] ok: [k8s-node-1] TASK [Ping] ******************************************************************** ok: [k8s-master] ok: [k8s-node-1] PLAY [Check specific connection to masters] ************************************ TASK [Gathering Facts] ********************************************************* ok: [k8s-master] TASK [Ping] ******************************************************************** ok: [k8s-master] PLAY [Check specific connection to nodes] ************************************** TASK [Gathering Facts] ********************************************************* ok: [k8s-node-1] TASK [Ping] ******************************************************************** ok: [k8s-node-1] PLAY RECAP ********************************************************************* k8s-master : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 k8s-node-1 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 </code></pre>
Dawid Kruk
<p>I followed this article on how to setup RBAC for users in Kubernetes cluster: <a href="https://medium.com/better-programming/k8s-tips-give-access-to-your-clusterwith-a-client-certificate-dfb3b71a76fe" rel="noreferrer">https://medium.com/better-programming/k8s-tips-give-access-to-your-clusterwith-a-client-certificate-dfb3b71a76fe</a>.</p> <p>This is working fine but after I have signed, approved and configured everything as explained in article, how do I revoke access to some user. For example, if the user leaves the company, I would like to remove the user from the cluster as well.</p> <p>I have setup the RoleBinding like this:</p> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: $NAMESPACE-role_binding namespace: $NAMESPACE subjects: - kind: Group name: $GROUP apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: $NAMESPACE-role apiGroup: rbac.authorization.k8s.io </code></pre> <p>Where the user is part of the Group in RoleBinding</p>
Mika Riekkinen
<p>The already provided answers are correct: at the moment of writing this answer you cannot revoke certificates.</p> <p>Somebody already mentioned using serviceaccount tokens and assigning RBAC Roles to users and not groups, and I just want to add some details on why this approach works.</p> <p>Let's start with some theory. The process of user verification consists of:</p> <ul> <li><p>authentication - the process of verifying who a user is. In your case a client certificate is used, but other methods like bearer tokens or an authentication proxy can also serve the same purpose. When using certificates, the user is defined by the certificate itself. Whoever holds the certificate can act as that user.</p> </li> <li><p>authorization - the process of verifying what a user has access to. In the case of Kubernetes it is done using RBAC roles. A RoleBinding is used to add specific permissions (represented by an RBAC role) to a user or group (represented by a certificate).</p> </li> </ul> <p>We know now that you can't make changes on the authentication level, since a signed certificate cannot be revoked. You can, however, make changes on the authorization level, which is done by removing permissions from a user (by either removing the rolebinding, or removing/altering the RBAC role; be careful, the same RBAC Role can be assigned to different users/groups). An example command for your setup is included at the end of this answer.</p> <p>This approach, even though it's correct, can lead to some security issues that are worth mentioning. When signing certificates for new users you need to remember to never sign certificates with the same username. Once permissions are revoked you should not use the same username for new certificates and the associated rolebinding (at least until the old cert expires) to make sure the old one, when used, won't be allowed to access the cluster.</p> <hr /> <p>Alternatively, I would like to suggest another solution to the already proposed ones: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="noreferrer">OpenID Connect Tokens</a></p> <p>Kubernetes itself does not provide an OpenID Connect Identity Provider, but you can use an existing public OpenID Connect Identity Provider (such as Google). Or, you can run your own Identity Provider, such as Dex or Keycloak:</p> <ul> <li><a href="https://dexidp.io/docs/kubernetes/" rel="noreferrer">dex</a></li> <li><a href="https://www.keycloak.org/docs/latest/getting_started/" rel="noreferrer">keycloak</a></li> </ul> <p>OpenID tokens are very short-lived tokens (e.g. 1 minute) and once your id_token expires, kubectl will attempt to refresh your id_token using your refresh_token and client_secret, storing the new values for the refresh_token and id_token in your .kube/config.</p> <p>Even though this solution is more complicated to configure, it is worth considering when you have a lot of users to manage.</p> <hr /> <p>Additionally, integrations with other authentication protocols (LDAP, SAML, Kerberos, alternate x509 schemes, etc) can be accomplished using an <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authenticating-proxy" rel="noreferrer">authenticating proxy</a> or the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication" rel="noreferrer">authentication webhook</a></p>
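<p>As mentioned above, here is what revoking on the authorization level could look like with the setup from the article you followed (using the variables from your RoleBinding):</p> <pre><code>$ kubectl delete rolebinding $NAMESPACE-role_binding -n $NAMESPACE
# or, instead of deleting it, remove only the offending subject from it:
$ kubectl edit rolebinding $NAMESPACE-role_binding -n $NAMESPACE
</code></pre>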
Matt
<p>I am new to Istio and I have following problem. I am trying to set up configuration of egress gateway for external service communicating through tls/443 like for the following example: <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-https-traffic" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-https-traffic</a>.</p> <p>Everything seems to work correctly. For outbound traffic for 'externalapi' service, I am getting istio_tcp_connections_closed_total metric. And here's my question:</p> <p>Is there any way to replace istio_tcp_connections_closed_total metric with istio_requests_total for outbound traffic going through egress gateway? I would like to get some additional information like response codes for outgoing traffic.</p> <p>Here's my configuration:</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: ServiceEntry metadata: name: externalapi-egress spec: hosts: - externalapi.mydomain.com ports: - number: 443 name: tls protocol: TLS resolution: DNS location: MESH_EXTERNAL --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: externalapi-egress spec: selector: istio: egressgateway servers: - port: number: 443 name: tls protocol: TLS hosts: - externalapi.mydomain.com tls: mode: PASSTHROUGH --- apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: externalapi-egress spec: host: istio-egressgateway.istio-system.svc.cluster.local subsets: - name: externalapi-egress --- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: externalapi spec: hosts: - externalapi.mydomain.com gateways: - externalapi-egress - mesh tls: - match: - gateways: - mesh port: 443 sniHosts: - externalapi.mydomain.com route: - destination: host: istio-egressgateway.istio-system.svc.cluster.local subset: externalapi-egress port: number: 443 weight: 100 - match: - gateways: - externalapi-egress port: 443 sniHosts: - externalapi.mydomain.com route: - destination: host: externalapi.mydomain.com port: number: 443 weight: 100 </code></pre> <p>Other configuration information:</p> <ol> <li>Istio: 1.8.0 installed through IstioOperator.</li> </ol> <p>Thank you for helping me with this, Robert</p> <p>Edit: I would like to find a way to have istio_requests_total metric generated for the traffic going to externalapi instead of istio_tcp_connections_closed_total.</p>
FlashGordon
<p><strong>tldr:</strong> you cannot do this.</p> <hr /> <p>Now the long answer.</p> <p>From the <a href="https://istio.io/latest/docs/reference/config/metrics/#metrics" rel="nofollow noreferrer">istio documentation about metrics</a>:</p> <blockquote> <p><strong>For HTTP, HTTP/2, and GRPC traffic, Istio generates the following metrics:</strong></p> <p>Request Count (istio_requests_total): This is a COUNTER <strong>incremented for every request</strong> handled by an Istio proxy.</p> <p><strong>. . .</strong></p> <p><strong>For TCP traffic, Istio generates the following metrics</strong>:</p> <p>Tcp Byte Sent (istio_tcp_sent_bytes_total): This is a COUNTER which measures the size of total bytes sent during response in case of a TCP connection.</p> <p>Tcp Byte Received (istio_tcp_received_bytes_total): This is a COUNTER which measures the size of total bytes received during request in case of a TCP connection.</p> <p>Tcp Connections Opened (istio_tcp_connections_opened_total): This is a COUNTER incremented for every opened connection.</p> <p>Tcp Connections Closed (istio_tcp_connections_closed_total): This is a COUNTER incremented for every closed connection.</p> <p><strong>. . .</strong></p> </blockquote> <p>Notice that istio_requests_total (according to the documentation) counts the <strong>number of requests</strong> and this metric is available only for HTTP, HTTP/2, and GRPC traffic.</p> <p>For TCP traffic there is no requests_total metric because it would be hard to say what to define as a request. That is why for TCP you can only count bytes and the number of connections.</p> <p>Now you may say: &quot;hey, I am not using tcp, I am using https (http over tls) so it should be able to count the requests, right?&quot; - and you would be wrong.</p> <p>Before I go further, let me first mention &quot;HTTP persistent connection&quot;, which is defined by <a href="https://en.wikipedia.org/wiki/HTTP_persistent_connection" rel="nofollow noreferrer">Wikipedia</a> as:</p> <blockquote> <p>HTTP persistent connection, also called HTTP keep-alive, or HTTP connection reuse, is <strong>the idea of using a single TCP connection to send and receive multiple HTTP requests/responses</strong>, as opposed to opening a new connection for every single request/response pair. The newer <strong>HTTP/2 protocol uses the same idea and takes it further to allow multiple concurrent requests/responses to be multiplexed over a single connection.</strong></p> </blockquote> <p>Now, why am I mentioning this?</p> <p>TLS is encrypted traffic. Nothing can peek inside. If your application is sending/receiving multiple requests/responses over a single TLS connection (using HTTP persistent connection), it's impossible to count every consecutive request because it is end-to-end encrypted.</p>
Matt
<pre><code> api.name: spark-history-server file.upload.path: x gcp.server.property.file.path: x git.files.update.path: x onprem.server.property.file.path: x preferred.id.deployment.file.path: x preferred.id.file.path: x server.error.whitelabel.enabled: &quot;false&quot; server.port: &quot;18080&quot; server.property.file.path: x server.servlet.context-path: / spark.history.fs.cleaner.enabled: &quot;true&quot; spark.history.fs.cleaner.interval: &quot;1h&quot; spark.history.fs.cleaner.maxAge: &quot;12h&quot; spring.thymeleaf.prefix: classpath:/templates/dev/ spring.thymeleaf.view-names: index,devForm,error temp.repo.location: x </code></pre> <p>I am trying to clear my Spark history server logs, which I have deployed in Kubernetes using the three cleaner parameters shown above. I found the answer here: <a href="https://stackoverflow.com/questions/42817924/cleaning-up-spark-history-logs">Cleaning up Spark history logs</a></p> <p>It works when I restart the pods manually and deletes logs older than 12 hours, but with time it starts picking up old logs again, and the Spark history server takes 1-2 hours to restart. Is there another way I can do this so I don't have to manually restart the pods over time?</p> <p>I asked around and found that it may be because I am using shared storage like NFS.</p>
Raj Singh
<p>The problem was that I was trying to add these parameters in the Configmap.yaml file instead of the Deployment.yaml file. Just add these parameters to SPARK_HISTORY_OPTS.</p> <h1 id="example-ipk8">Example</h1> <ul> <li><p>name: SPARK_HISTORY_OPTS</p> <p>value: &quot;-Dspark.history.fs.logDirectory=/FS/YOU/CREATED/ABOVE -Dspark.history.fs.cleaner.enabled=true -Dspark.history.fs.cleaner.interval=1d -Dspark.history.fs.cleaner.maxAge=7d&quot;</p> </li> </ul> <p>This article helped me: <a href="https://wbassler23.medium.com/spark-history-server-on-dc-os-516fb71523a5" rel="nofollow noreferrer">https://wbassler23.medium.com/spark-history-server-on-dc-os-516fb71523a5</a></p>
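<p>For context, in the Deployment manifest those options go into the container's <code>env</code> section, roughly like this (the image name is a placeholder and the options string is the one from the example above):</p> <pre class="lang-yaml prettyprint-override"><code>spec:
  template:
    spec:
      containers:
        - name: spark-history-server
          image: your-spark-history-image   # placeholder
          env:
            - name: SPARK_HISTORY_OPTS
              value: "-Dspark.history.fs.logDirectory=/FS/YOU/CREATED/ABOVE -Dspark.history.fs.cleaner.enabled=true -Dspark.history.fs.cleaner.interval=1d -Dspark.history.fs.cleaner.maxAge=7d"
</code></pre>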
Raj Singh
<p>I am trying to run an application that has around 40 microservices. How do I pass 40 different docker images from the values.yml file to the template.yml file?</p> <h2>template file</h2> <pre><code> name:{{ .values.name }} spec: containers: - image: {{ .values.container.image }} </code></pre> <p><strong>values file</strong></p> <pre><code>name:A container: image:A name :B container : image:B </code></pre> <p>I have 40 more docker images like that; how do I pass all those images to the template? And will passing them like that create <strong>40 different</strong> pods? Because we would need 40 different pods. Any guidance is highly appreciated.</p>
vikas mp
<p><strong>Focusing only on images and templates</strong> you can create a helm template that will spawn any number of pods by:</p> <ul> <li>Creating a <code>Chart.yaml</code> file </li> <li>Creating a <code>values.yaml</code> file with the variable that stores all image names</li> <li>Creating a template with a <code>{{ range }}</code> directive </li> <li>Testing</li> </ul> <p>Below is the structure of files and directories: </p> <pre class="lang-sh prettyprint-override"><code>❯ tree helm-dir helm-dir ├── Chart.yaml ├── templates │ └── pod.yaml └── values.yaml 1 directory, 3 files </code></pre> <hr> <h2>Create <code>Chart.yaml</code> file</h2> <p>Below is the <code>Chart.yaml</code> file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v2 name: helm-templates description: A Helm chart for spawning pods from images version: 0.1.0 </code></pre> <hr> <h2>Create a <code>values.yaml</code> file with the variable that stores all image names</h2> <p>Below is a simple <code>values.yaml</code> file with the different image names that will be used with the template: </p> <pre class="lang-yaml prettyprint-override"><code>different_images: - ubuntu - nginx </code></pre> <hr> <h2>Create a template with a <code>{{ range }}</code> directive</h2> <p>This template is stored in the <code>templates</code> directory with the name <code>pod.yaml</code></p> <p>The below <code>YAML</code> definition will be a template for all the pods: </p> <pre class="lang-yaml prettyprint-override"><code>{{- range .Values.different_images }} apiVersion: v1 kind: Pod metadata: name: {{ . }} labels: app: {{ . }} spec: restartPolicy: Never containers: - name: {{ . }} image: {{ . }} imagePullPolicy: Always command: - sleep - infinity --- {{- end }} </code></pre> <p><code>{{- range .Values.different_images }}</code> will iterate over the <code>different_images</code> variable and replace <code>{{ . }}</code> with an image name. </p> <hr> <h2>Test</h2> <p>Run the below command from the <code>helm-dir</code> directory to check if the helm <code>YAML</code> pod definitions are correctly created: </p> <p><code>$ helm install NAME . --dry-run --debug</code></p> <p>You should get an output with multiple pod definitions that look similar to the one below:</p> <pre class="lang-yaml prettyprint-override"><code># Source: helm-templates/templates/pod.yaml apiVersion: v1 kind: Pod metadata: name: ubuntu labels: app: ubuntu spec: restartPolicy: Never containers: - name: ubuntu image: ubuntu imagePullPolicy: Always command: - sleep - infinity </code></pre> <p>You can now run: <code>$ helm install NAME .</code> </p> <p>and check if the pods spawned correctly with <code>$ kubectl get pods</code>:</p> <pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE nginx 1/1 Running 0 8s ubuntu 1/1 Running 0 8s </code></pre> <p>Please take a look at these additional resources: </p> <ul> <li><a href="https://helm.sh/docs/chart_best_practices/" rel="noreferrer">Helm.sh: Chart best practices</a></li> <li><a href="https://helm.sh/docs/chart_template_guide/control_structures/" rel="noreferrer">Helm.sh: Chart template guide: Control structures</a></li> <li><a href="https://stackoverflow.com/questions/51024074/how-can-i-iteratively-create-pods-from-list-using-helm">Stackoverflow.com: How can I iteratively create pods from list using helm</a></li> </ul>
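<p>If, as in your values file, every microservice needs both its own name and its own image (and possibly more fields later), a variation of the same idea is to range over a list of maps instead of plain strings. The names and images below are placeholders, just a sketch:</p> <pre class="lang-yaml prettyprint-override"><code>services:
  - name: service-a        # placeholder
    image: image-a:1.0     # placeholder
  - name: service-b
    image: image-b:1.0
</code></pre> <pre class="lang-yaml prettyprint-override"><code>{{- range .Values.services }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ .name }}
  labels:
    app: {{ .name }}
spec:
  containers:
    - name: {{ .name }}
      image: {{ .image }}
---
{{- end }}
</code></pre> <p>Inside the <code>range</code> block, <code>.name</code> and <code>.image</code> refer to the fields of the current list entry. Each entry still renders its own Pod definition, so 40 entries will give you <strong>40 different</strong> pods.</p>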
Dawid Kruk
<p>Assuming deployment, replicaSet and pod are all 1:1:1 mapping.</p> <pre><code>deployment ==&gt; replicaSet ==&gt; Pod </code></pre> <p>When we do deployment, replicaSet adds <code>pod-template-hash</code> label to pods. So, this looks enough for a replicaSet to check if enough pods are running. Then what is the significance of replicaSet <code>matchLabels</code> selector? Why is it mandatory?</p> <p>To explain for better understanding</p> <p>For ex: I deploy an app with these labels. 2 pods are running</p> <pre><code>spec: replicas: 2 selector: matchLabels: app: nginx-app </code></pre> <p>Now change label value of pod-template-hash to something else for one of the pods (changing to <code>testing</code> here). Now we immediately see another pod started. So replicaSet does not seem to care about <code>selector.matchLabels</code></p> <pre><code>NAME READY STATUS RESTARTS AGE LABELS pod/nginx-app-b8b875889-cpnnr 1/1 Running 0 53s app=nginx-app,pod-template-hash=testing pod/nginx-app-b8b875889-jlk6m 1/1 Running 0 53s app=nginx-app,pod-template-hash=b8b875889 pod/nginx-app-b8b875889-xblqr 1/1 Running 0 11s app=nginx-app,pod-template-hash=b8b875889 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 151d component=apiserver,provider=kubernetes NAME READY UP-TO-DATE AVAILABLE AGE LABELS deployment.apps/nginx-app 2/2 2 2 53s app=nginx-app NAME DESIRED CURRENT READY AGE LABELS replicaset.apps/nginx-app-b8b875889 2 2 2 53s app=nginx-app,pod-template-hash=b8b875889 </code></pre>
RamPrakash
<p>Let me summarize it. The whole discussion is about: why does the deployment force me to set a matchLabels selector even though it could easily live without it, since it is adding pod-template-hash and would be totally fine using only that?</p> <p>After reading all the comments and all the discussion I decided to look in the kubernetes documentation.</p> <p>I will allow myself to quote the k8s documentation about replicasets: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#how-a-replicaset-works" rel="noreferrer">How a ReplicaSet works</a></p> <blockquote> <h3>How a ReplicaSet works:</h3> <p>[...]</p> <p><strong>A ReplicaSet is linked to its Pods via the Pods'</strong> <strong>metadata.ownerReferences field</strong>, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly.</p> </blockquote> <p>So does this mean that it's not using labels at all? Well, not exactly. Let's keep reading:</p> <blockquote> <p>A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the OwnerReference is not a Controller and it matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet</p> </blockquote> <p>Oh, so it looks like it is using the selector only as an alternative to the first method.</p> <p>Let's keep reading. Here is a quote from the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#pod-selector" rel="noreferrer">Pod Selector</a> section:</p> <blockquote> <h3>Pod Selector</h3> <p>The .spec.selector field is a label selector. As discussed earlier <strong>these are the labels used to identify potential Pods</strong> <strong>to acquire</strong></p> </blockquote> <p>It looks like these labels are not used as the primary method to keep track of the pods owned by the ReplicaSet; they are used to <em>&quot;identify potential Pods to acquire&quot;</em>. But what does that mean?</p> <p>Why would a ReplicaSet acquire pods it does not own? There is a section in the documentation that tries to answer this very question: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#non-template-pod-acquisitions" rel="noreferrer">Non-Template Pod acquisition</a></p> <blockquote> <h3>Non-Template Pod acquisitions</h3> <p>While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have labels which match the selector of one of your ReplicaSets. The reason for this is because a ReplicaSet is not limited to owning Pods specified by its template-- it can acquire other Pods in the manner specified in the previous sections.</p> <p>[...]</p> <p>As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the [...] ReplicaSet, they will immediately be acquired by it.</p> </blockquote> <p>Great, but this still does not answer the question: why do I need to provide the selector? Couldn't it just use that hash?</p> <p>Back in the past there was a bug in k8s: <a href="https://github.com/kubernetes/kubernetes/issues/23170" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/23170</a>, so someone suggested that validation was needed: <a href="https://github.com/kubernetes/kubernetes/issues/23218" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/23218</a>, and so the validation appeared: <a href="https://github.com/kubernetes/kubernetes/pull/23530" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/23530</a></p> <p>And it stayed with us to this day, even if today we probably could live without it.</p> <p>Although I think it's better that it's there, because it minimizes the chances of overlapping labels in case of a pod-template-hash collision for different RSs.</p>
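<p>As a side note, you can see this ownership link for yourself. A quick illustrative check (using one of the pod names from the question) prints which controller a pod belongs to via its ownerReferences:</p> <pre><code>kubectl get pod nginx-app-b8b875889-jlk6m -o jsonpath='{.metadata.ownerReferences[0].kind}{&quot;/&quot;}{.metadata.ownerReferences[0].name}{&quot;\n&quot;}'
# prints: ReplicaSet/nginx-app-b8b875889
</code></pre>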
Matt
<p>How can I setup a single gateway in Istio 1.9 and multiple VirtualServices (each one in a different namespace). I can't set one gateway to each virtualservice because browsers leverage HTTP/2 <a href="https://httpwg.org/specs/rfc7540.html#reuse" rel="nofollow noreferrer">connection reuse</a> to produce 404 errors.</p> <p>If I follow <a href="https://istio.io/latest/docs/ops/common-problems/network-issues/#404-errors-occur-when-multiple-gateways-configured-with-same-tls-certificate" rel="nofollow noreferrer">these instructions</a> it won't work because gateway and virtualservice can't be in different namespaces.</p> <p>These are the manifest files:</p> <p>APP1:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: app1-gateway namespace: app1 spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: http protocol: HTTP hosts: - &quot;app1.example.com&quot; tls: httpsRedirect: true # sends 301 redirect for http requests - port: number: 443 name: https-app1 protocol: HTTPS tls: mode: SIMPLE credentialName: sslcertificate hosts: - &quot;app1.example.com&quot; --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: app1 namespace: app1 spec: hosts: - &quot;app1.example.com&quot; gateways: - app1-gateway http: - match: - uri: prefix: / route: - destination: host: app1 port: number: 80 </code></pre> <p>APP2:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: app2-gateway namespace: app2 spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: http protocol: HTTP hosts: - &quot;app2.example.com&quot; tls: httpsRedirect: true # sends 301 redirect for http requests - port: number: 443 name: https-app2 protocol: HTTPS tls: mode: SIMPLE credentialName: sslcertificate hosts: - &quot;app2.example.com&quot; --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: app2 namespace: app2 spec: hosts: - &quot;app2.example.com&quot; gateways: - app2-gateway http: - match: - uri: prefix: / route: - destination: host: app2 port: number: 80 </code></pre>
brgsousa
<p>To answer your question, <code>because gateway and virtualservice can't be in different namespaces</code>: actually, they can be in different namespaces.</p> <p>If the gateway is not in the same namespace as the virtual service, you just have to specify that namespace in your virtual service's <code>spec.gateways</code>.</p> <p>Check the <code>spec.gateways</code> section</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo-Mongo namespace: bookinfo-namespace spec: gateways: - some-config-namespace/my-gateway # can omit the namespace if gateway is in same namespace as virtual service. </code></pre> <hr /> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: my-gateway namespace: some-config-namespace </code></pre> <p>There is related <a href="https://istio.io/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">istio documentation</a> about that.</p>
Jakub
<p>I am trying to learn Kubernetes and so I installed Minikube on my local Windows 10 Home machine and then I tried installing the kubectl. However so far I have been unsuccessful in getting it right. So this what I have done so far: Downloaded the kubectl.exe file from <a href="https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/windows/amd64/kubectl.exe" rel="nofollow noreferrer">https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/windows/amd64/kubectl.exe</a></p> <p>Then I added the path of this exe in the <code>path environment variable</code> as shown below: <a href="https://i.stack.imgur.com/pkNSX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pkNSX.png" alt="enter image description here"></a></p> <p>However this didn't work when I executed <code>kubectl version</code> on the command prompt or even on pwoershell (in admin mode)</p> <p>Next I tried using the curl command as given in the docs - <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-with-curl-on-windows" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-with-curl-on-windows</a></p> <p>However that too didn't work as shown below: <a href="https://i.stack.imgur.com/CH5UH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CH5UH.png" alt="enter image description here"></a></p> <p>Upon searching for answers to fix the issue, I stumbled upon this <a href="https://stackoverflow.com/questions/39922430/cant-connect-to-container-cluster-environment-variable-home-or-kubeconfig-must">StackOverfow question</a> which explained how to create a <code>.kube</code> config folder because it didn't exist on my local machine. I followed the instructions, but that too failed.</p> <p><a href="https://i.stack.imgur.com/s5Vw5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s5Vw5.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/ZUlE1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZUlE1.png" alt="enter image description here"></a></p> <p>So right now I am completely out of ideas and not sure whats the issue here. FYI, I was able to install everything in a breeze on my Mac, however Windows is just acting crazy.</p> <p>Any help would be really helpful.</p>
RDM
<p>As user @paltaa mentioned:</p> <blockquote> <p>did you do a <code>minikube start</code> ? – <a href="https://stackoverflow.com/users/6855531/paltaa" title="405 reputation">paltaa</a> <a href="https://stackoverflow.com/questions/61144233/issue-in-setting-up-kubectl-on-windows-10-home#comment108172290_61144233">2 days ago</a></p> </blockquote> <p><strong>The fact that you did not start <code>minikube</code> is the most probable cause of the error you are getting.</strong></p> <hr /> <p>Additionally, this error message shows up when <code>minikube</code> is stopped, as stopping it will change the <code>current-context</code> inside the <code>config</code> file.</p> <hr /> <p>There is no need to create a <code>config</code> file inside of a <code>.kube</code> directory, as <code>minikube start</code> will create the appropriate files and directories for you automatically.</p> <p>If you run the <code>minikube start</code> command successfully you should get the below message at the end of the configuration process, which indicates that <code>kubectl</code> is set up for <code>minikube</code> automatically.</p> <blockquote> <p>Done! kubectl is now configured to use &quot;minikube&quot;</p> </blockquote> <p>Additionally, if you invoke the command <code>$ kubectl config</code> you will get more information on how <code>kubectl</code> looks for configuration files:</p> <pre><code> The loading order follows these rules: 1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place. 2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. 3. Otherwise, ${HOME}/.kube/config is used and no merging takes place. </code></pre> <p>Please take a special look at this part:</p> <blockquote> <ol start="3"> <li>Otherwise, ${HOME}/.kube/config is used</li> </ol> </blockquote> <p>Even if you do not set the <code>KUBECONFIG</code> environment variable, <code>kubectl</code> will default to <code>$USER_DIRECTORY</code> (for example <code>C:\Users\yoda\</code>).</p> <p>If for some reason your cluster is running and the files got deleted/corrupted you can:</p> <ul> <li><code>minikube stop</code></li> <li><code>minikube start</code></li> </ul> <p>which will recreate the <code>.kube/config</code></p> <hr /> <p>Steps for running <code>minikube</code> on Windows in this case could be:</p> <ul> <li>Download and install <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/#install-minikube-using-an-installer-executable" rel="nofollow noreferrer">Kubernetes.io: Install minikube using an installer executable</a></li> <li>Download, install and configure a Hypervisor (for example <a href="https://www.virtualbox.org/" rel="nofollow noreferrer">Virtualbox</a>)</li> <li>Download <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">kubectl</a> <ul> <li>OPTIONAL: Add the <code>kubectl</code> directory to the Windows environment variables</li> </ul> </li> <li>Run from the command line or PowerShell as the current user: <code>$ minikube start --vm-driver=virtualbox</code></li> <li>Wait for the configuration to finish and invoke a command like <code>$ kubectl get nodes</code>.</li> </ul>
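<p>For reference, a minimal verification flow in PowerShell could look like the sketch below (assuming <code>minikube</code> and <code>kubectl</code> are already on your <code>PATH</code>; the exact output differs per version):</p> <pre><code>PS C:\&gt; minikube start --vm-driver=virtualbox
PS C:\&gt; kubectl config current-context   # should print: minikube
PS C:\&gt; kubectl get nodes                # should list a single node in Ready state
</code></pre>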
Dawid Kruk
<p><a href="https://i.stack.imgur.com/o5Czz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o5Czz.png" alt="enter image description here" /></a></p> <p>I have a <a href="https://k3s.io/" rel="nofollow noreferrer">k3s</a> cluster with some pods running. I want to get all pods with status of <em>Running</em> containing <em>grafana</em> in its name.</p> <p>From the docs, looks like I can use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">field selector</a> flag to achieve it (<a href="https://github.com/kubernetes/kubernetes/pull/50140?w=1" rel="nofollow noreferrer">was introduced in v1.9 release</a>). But when I give it a try, it didn't work.</p> <p>I know I can force delete the pod with <em>Terminating</em> status to get what I want. What I want to know here is, why my command didn't work as expected? Did i miss something?</p> <p>Btw, the <em>Terminating</em> pod was there for some time. I believe it's stuck. But again, that's not the interest of this question.</p>
Zulhilmi Zainudin
<p>If you take a look into <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">k8s documentation on pod-lifecycle</a> you can see that status.phase can have only 5 different values and none of them is <code>Terminating</code>. This means that termination state in not reflected in this field and therefore filtering by the phase field is useless.</p> <p>Terminating status is reflected under <code>.status.containerStatuses.state</code> (<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states" rel="nofollow noreferrer">container states in docs</a>) although this field label doesn't seem to be supported by the filter.</p> <hr /> <p>So what can you do?</p> <p>The easiest thing you can do is to use grep twice:</p> <pre><code>kubectl get pods | grep grafana | grep Running </code></pre>
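<p>Just as a sketch of an alternative (assuming the default <code>kubectl get pods</code> column layout of NAME READY STATUS RESTARTS AGE), you can match the STATUS column exactly instead of grepping twice:</p> <pre><code>kubectl get pods --no-headers | awk '$3==&quot;Running&quot; &amp;&amp; $1 ~ /grafana/ {print $1}'
</code></pre>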
Matt
<p>I am developing a simple web app with web service and persistent layer. Web persistent layer is Cockroach db. I am trying to deploy my app with single command:</p> <pre><code>kubectl apply -f my-app.yaml </code></pre> <p>App is deployed successfully. However when backend has to store something in db the following error appears:</p> <pre><code>dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host </code></pre> <p>When I start my app I provide the following connection string to cockroach db and connection is successful but when I try to store something in db the above error appears:</p> <pre><code>postgresql://root@web-service-db:26257/defaultdb?sslmode=disable </code></pre> <p>For some reason web pod can not talk with db pod. My whole configuration is:</p> <pre><code> # Service for web application apiVersion: v1 kind: Service metadata: name: web-service spec: selector: app: web-service type: NodePort ports: - protocol: TCP port: 8080 targetPort: http nodePort: 30103 externalIPs: - 192.168.1.9 # &lt; - my local ip --- # Deployment of web app apiVersion: apps/v1 kind: Deployment metadata: name: web-service spec: selector: matchLabels: app: web-service replicas: 1 template: metadata: labels: app: web-service spec: hostNetwork: true containers: - name: web-service image: my-local-img:latest imagePullPolicy: IfNotPresent ports: - name: http containerPort: 8080 hostPort: 8080 env: - name: DB_CONNECT_STRING value: &quot;postgresql://root@web-service-db:26257/defaultdb?sslmode=disable&quot; --- ### Kubernetes official doc PersistentVolume apiVersion: v1 kind: PersistentVolume metadata: name: cockroach-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/tmp/my-local-volueme&quot; --- ### Kubernetes official doc PersistentVolumeClaim apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cockroach-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 4Gi --- # Cockroach used by web-service apiVersion: v1 kind: Service metadata: name: web-service-cockroach labels: app: web-service-cockroach spec: selector: app: web-service-cockroach type: NodePort ports: - protocol: TCP port: 26257 targetPort: 26257 nodePort: 30104 --- # Cockroach stateful set used to deploy locally apiVersion: apps/v1 kind: StatefulSet metadata: name: web-service-cockroach spec: serviceName: web-service-cockroach replicas: 1 selector: matchLabels: app: web-service-cockroach template: metadata: labels: app: web-service-cockroach spec: volumes: - name: cockroach-pv-storage persistentVolumeClaim: claimName: cockroach-pv-claim containers: - name: web-service-cockroach image: cockroachdb/cockroach:latest command: - /cockroach/cockroach.sh - start - --insecure volumeMounts: - mountPath: &quot;/tmp/my-local-volume&quot; name: cockroach-pv-storage ports: - containerPort: 26257 </code></pre> <p>After deployment everything looks good.</p> <pre><code>kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 50m web-service NodePort 10.111.85.64 192.168.1.9 8080:30103/TCP 6m17s webs-service-cockroach NodePort 10.96.42.121 &lt;none&gt; 26257:30104/TCP 6m8s </code></pre> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE web-service-6cc74b5f54-jlvd6 1/1 Running 0 24m web-service-cockroach-0 1/1 Running 0 24m </code></pre> <p>Thanks in advance!</p>
user2739823
<p>Looks like you have a problem with DNS.</p> <pre><code>dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host </code></pre> <p>The address <code>192.168.65.1</code> does not look like a kube-dns service IP.</p> <p>This could be explained if you were using the host network, and surprisingly you are. When using <code>hostNetwork: true</code>, the default DNS server used is the one that the host uses, and that is never the kube-dns.</p> <hr /> <p>To solve it set:</p> <pre><code>spec: dnsPolicy: ClusterFirstWithHostNet </code></pre> <p>It sets the DNS server to the k8s one for the pod.</p> <p>Have a look at the kubernetes documentation for more information about <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer">Pod's DNS Policy</a>.</p>
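<p>Applied to the deployment from the question, the relevant part of the pod template would become:</p> <pre><code>  template:
    metadata:
      labels:
        app: web-service
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet   # &lt;- added line
      containers:
      - name: web-service
        image: my-local-img:latest
</code></pre>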
Matt
<p>POD consist: <code>my-app-container</code> and <code>envoy (istio-proxy)</code> container</p> <p>I want to get <code>cpu_usage</code> of <code>envoy-container</code> from <code>may-app-container</code>.</p> <p>Info from: <code>http://localhost:1500/stats</code> and <code>http://localhost:1500/stats/prometheus</code> doesn't contain <code>CPU_usage</code>.</p> <p>Thanks</p>
srg321
<p>Below are 2 ways I could find to get the information about istio-proxy cpu usage.</p> <hr /> <p>You can use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top" rel="nofollow noreferrer">kubectl top</a>.</p> <pre><code>kubectl top pods -A --containers | grep istio-proxy | grep Mi default details-v1-66b6955995-g98g6 istio-proxy 3m 42Mi default productpage-v1-5d9b4c9849-v4pwv istio-proxy 3m 40Mi default ratings-v1-fd78f799f-qcpwn istio-proxy 3m 38Mi default reviews-v1-6549ddccc5-hg4sw istio-proxy 4m 64Mi default reviews-v2-76c4865449-kzknx istio-proxy 4m 55Mi default reviews-v3-6b554c875-7txzl istio-proxy 3m 53Mi istio-system istio-ingressgateway-5656d66f9f-2f25t istio-proxy 3m 36Mi </code></pre> <hr /> <p>You can also follow the Istio <a href="https://github.com/istio/istio/wiki/Analyzing-Istio-Performance" rel="nofollow noreferrer">Wiki</a> to analyze Istio performance, including the <a href="https://github.com/istio/istio/wiki/Analyzing-Istio-Performance#cpu" rel="nofollow noreferrer">cpu</a>.</p>
Jakub
<p>I'm working on EKS (AWS) and I'm stuck: whenever I want to deploy the same code again after some changes, I have to delete the previous deployment and create it back with the new image after pulling the new image from ECR (kubectl delete abcproj.json), which destroys the old pods (load balancer) and creates new ones, and as a result it always gives me a new external IP. I want to prevent this problem and cannot find a proper solution on the internet.</p> <p>Thanks in advance!</p>
Fahad
<p>From Kubernetes point of view you could try to do the following: </p> <ul> <li>Create a deployment </li> <li>Create a <strong>separate</strong> service object type of <code>LoadBalancer</code> that will point to your application </li> <li>Test it </li> <li>Create a new deployment in place of old one </li> <li>Test it again </li> </ul> <hr> <p>Example with <code>YAML</code>'s: </p> <h3>Create a deployment</h3> <p>Below is example deployment of hello application:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: hello spec: selector: matchLabels: app: hello version: 1.0.0 replicas: 1 template: metadata: labels: app: hello version: 1.0.0 spec: containers: - name: hello image: "gcr.io/google-samples/hello-app:1.0" env: - name: "PORT" value: "50001" </code></pre> <p>Take a specific look on parts of <code>matchLabels</code>.</p> <h3>Create a service object type of <code>LoadBalancer</code> that will point to your application</h3> <p>Below is example service that will give access to hello application:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: hello-service spec: selector: app: hello ports: - port: 50001 targetPort: 50001 type: LoadBalancer </code></pre> <p>Take once again a specific look on <code>selector</code>. It will match the pods by label named <code>app</code> with value of <code>hello</code>.</p> <p>You could refer to official documentation: <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/" rel="nofollow noreferrer">HERE!</a></p> <h3>Test it</h3> <p>Apply both <code>YAML</code> definitions and wait for <code>ExternalIP</code> to be assigned. After that check if application works. </p> <p>Output of web browser with old application version: </p> <pre class="lang-sh prettyprint-override"><code>Hello, world! Version: 1.0.0 Hostname: hello-549db57dfd-g746m </code></pre> <h3>Create a new deployment in place of old one</h3> <p>On this step you could try to:</p> <ul> <li>run <code>kubectl apply -f DEPLOYMENT.yaml</code> on new version to apply the differences </li> <li>try first to delete the <code>Deployment</code> and create a new one in place of old one. </li> </ul> <p><strong>In this step do not delete your existing <code>LoadBalancer</code>.</strong> </p> <p>Using example <code>Deployment</code> above we could simulate a version change of your image by changing the: </p> <pre><code> image: "gcr.io/google-samples/hello-app:1.0" </code></pre> <p>to: </p> <pre><code> image: "gcr.io/google-samples/hello-app:2.0" </code></pre> <p>After that you can apply it or recreate your deployment. </p> <p><code>LoadBalancer</code> should not have changed IP address as it's not getting recreated. </p> <h3>Test it again</h3> <p>Output of web browser with new application version: </p> <pre class="lang-sh prettyprint-override"><code>Hello, world! Version: 2.0.0 Hostname: hello-84d554cbdf-rbwgx </code></pre> <p>Let me know if this solution helped you. </p>
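<p>Side note: instead of deleting and re-applying the whole manifest on every change, you could also just bump the image of the existing deployment in place; for example, with the sample <code>hello</code> deployment above:</p> <pre><code>kubectl set image deployment/hello hello=gcr.io/google-samples/hello-app:2.0
kubectl rollout status deployment/hello
</code></pre>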
Dawid Kruk
<p>My ASP.NET Core web application uses basepath in startup like,</p> <pre><code>app.UsePathBase(&quot;/app1&quot;); </code></pre> <p>So the application will run on basepath. For example if the application is running on localhost:5000, then the <strong>app1</strong> will be accessible on '<strong>localhost:5000/app1</strong>'.</p> <p>So in nginx ingress or any ingress we can expose the entire container through service to outside of the kubernetes cluster.</p> <p>My kubernetes deployment YAML file looks like below,</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: app1-deployment spec: selector: matchLabels: app: app1 replicas: 1 template: metadata: labels: app: app1 spec: containers: - name: app1-container image: app1:latest ports: - containerPort: 5001 --- apiVersion: v1 kind: Service metadata: name: app1-service labels: app: app1 spec: type: NodePort ports: - name: app1-port port: 5001 targetPort: 5001 protocol: TCP selector: app: app1 --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: app1-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: rules: - http: paths: - path: /app1(/|$)(.*) backend: serviceName: server-service servicePort: 5001 </code></pre> <p>The above ingress will expose the entire container in '<strong>localhost/app1</strong>' URL. So the application will run on '/app1' virtual path. But the <strong>app1</strong> application will be accessible on '<strong>localhost/app1/app1</strong>'.</p> <p>So I want to know if there is any way to route '<strong>localhost/app1</strong>' request to basepath in container application '<strong>localhost:5001/app1</strong>' in ingress.</p>
Sesha3
<p>If I understand correctly, now the app is accessible on <code>/app1/app1</code> and you would like it to be accessible on <code>/app1</code></p> <p>To do this, don't use rewrite:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: app1-ingress annotations: kubernetes.io/ingress.class: nginx spec: rules: - http: paths: - path: /app1 backend: serviceName: server-service servicePort: 5001 </code></pre>
Matt
<p>I'm trying to migrate from ingress to istio gateway + virtual service routing, but I keep receiving a <code>404 Not Found</code> error.</p> <p>The only link that the app should be accessed to is using <code>my-todos.com</code>, configured locally.</p> <p>What am I missing here?</p> <p>Note: the ingress controller works just fine. Initially, <code>todo-lb.default.svc.cluster.local</code> in the <code>istio.yaml</code> file was just set to <code>todo-lb</code>, representing the configured load balancer, still with no success.</p> <p>Here is the <code>ingress.yaml</code> file (to migrate from):</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: todo-ingress spec: rules: - host: my-todos.com http: paths: - path: / pathType: Prefix backend: service: name: todo-lb port: number: 3001 - path: /api pathType: Prefix backend: service: name: {{ .Values.api.apiName }} port: number: {{ .Values.api.apiPort }} </code></pre> <p>Here is the <code>istio.yaml</code> file (to migrate TO):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: todo-istio-gateway namespace: istio-system spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: http protocol: HTTP hosts: - my-todos.com # - &quot;*&quot; tls: httpsRedirect: true - port: number: 443 name: https protocol: HTTPS tls: mode: SIMPLE credentialName: tls-secret hosts: - my-todos.com --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: todo-lb spec: hosts: - my-todos.com # - &quot;*&quot; gateways: - todo-istio-gateway http: - match: - uri: prefix: / route: - destination: host: todo-lb.default.svc.cluster.local port: number: 3001 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: todo-api spec: hosts: - my-todos.com # - &quot;*&quot; gateways: - todo-istio-gateway http: - match: - uri: prefix: /api route: - destination: host: {{ .Values.api.apiName }} port: number: {{ .Values.api.apiPort }} </code></pre>
gusti
<p>From what I see you have a wrong gateway configuration in your virtual service; that's why it might not work.</p> <hr /> <p>If the gateway is not in the same namespace as the virtual service, you have to specify that namespace in the virtual service</p> <p>Check the <code>spec.gateways</code> section</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo-Mongo spec: gateways: - some-config-namespace/my-gateway # can omit the namespace if gateway is in same namespace as virtual service. </code></pre> <hr /> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: my-gateway namespace: some-config-namespace </code></pre> <p>There is related <a href="https://istio.io/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">istio documentation</a> about that.</p> <hr /> <p>So please move your todo-istio-gateway to the default namespace.</p> <p>or use</p> <pre><code>gateways: - istio-system/todo-istio-gateway </code></pre> <hr /> <p>A few things to check if that doesn't help:</p> <ul> <li>Is your app deployed in the default namespace?</li> <li>Is your app <a href="https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/#injection" rel="nofollow noreferrer">injected</a>?</li> </ul>
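<p>To quickly surface this kind of misconfiguration (for example a virtual service pointing at a gateway that lives in another namespace), you can also run the analyzer that ships with istioctl:</p> <pre><code>istioctl analyze --all-namespaces
</code></pre>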
Jakub
<p>Is it possible to have a common config file (e.g. ConfigMap) which is shared across multiple environments and applications? I know it's simple to do across multiple environment overlays, but how about a level above it for apps? If I have the following structure:</p> <pre><code>Base App1 Configmaps Kustomization.yaml Global Configmaps Overlays Env1 App1 Configmaps Deployments Kustomization.yaml App2 Configmaps Deployments Kustomization.yaml Env2.. (same as above) App1.. App2.. </code></pre> <p>Is it possible to somehow have a static set of common config values which is referenced across all applications? In the above structure I can only refer to resources within the same folder or below; if I try to refer to a resource in a parent folder outside of the App level then I normally get an error such as &quot; Error: AccumulateTarget: rawResources failed to read Resources: Load from path ../../configmaps/base-config.yaml failed: security; file '../../configmaps/base-config.yaml' is not in or below 'C:\Code\BUILD-ARTEFACTS\deployment-manifests\base\apps\app-ui' &quot;</p> <p>Is there any way to share common configs at the parent folder level, not in the child folders? Otherwise I end up repeating some of the settings across multiple apps, which is not ideal.</p>
Rubans
<p>You are seeing this error because it is there to protect users from phishing attack. Check out <a href="https://github.com/kubernetes-sigs/kustomize/issues/693" rel="nofollow noreferrer">this kustomize issue</a>.</p> <p>From kustomize faq: <a href="https://kubectl.docs.kubernetes.io/faq/kustomize/#security-file-foo-is-not-in-or-below-bar" rel="nofollow noreferrer">security: file ‘foo’ is not in or below ‘bar</a>:</p> <blockquote> <p>v2.0 added a security check that prevents kustomizations from reading files outside their own directory root.</p> <p>This was meant to help protect the person inclined to download kustomization directories from the web and use them without inspection to control their production cluster (see <a href="https://github.com/kubernetes-sigs/kustomize/issues/693" rel="nofollow noreferrer">#693</a>, <a href="https://github.com/kubernetes-sigs/kustomize/pull/700" rel="nofollow noreferrer">#700</a>, <a href="https://github.com/kubernetes-sigs/kustomize/pull/995" rel="nofollow noreferrer">#995</a> and <a href="https://github.com/kubernetes-sigs/kustomize/pull/998" rel="nofollow noreferrer">#998</a>).</p> <p>Resources (including configmap and secret generators) can still be shared via the recommended best practice of placing them in a directory with their own kustomization file, and referring to this directory as a base from any kustomization that wants to use it. This encourages modularity and relocatability.</p> <p>To disable this, use v3, and the load_restrictor flag:</p> <pre><code>kustomize build --load_restrictor none $target </code></pre> </blockquote>
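<p>If you prefer to keep the check enabled, the layout suggested above can be sketched like this: give the shared configmaps their own directory with their own kustomization file, and reference that directory from each app (the paths below are just an example mapped onto your structure):</p> <pre><code># overlays/env1/app1/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../base/global        # directory holding the shared configmaps plus its own kustomization.yaml
- deployment.yaml
</code></pre> <p>Referencing another kustomization directory as a resource/base this way is allowed even when it sits above the current directory; the load restriction only applies to plain files.</p>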
Matt
<p>I am new to Kubernetes. I just set up the stack with one master and one slave (two EC2 instances). When trying to deploy my first pod on the slave I got the below error. Could you help me out? The error file is attached.</p> <p><strong>Error:</strong></p> <blockquote> <p>Warning FailedCreatePodSandBox 34m kubelet, slave-node Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container &quot;26cdaf3170806455a4731218d20c482bb2a41ded6ef85c90b560058e332df684&quot; network for pod &quot;label-demo&quot;: networkPlugin cni failed to set up pod &quot;label-demo_default&quot; network: open /run/flannel/subnet.env: no such file or directory</p> </blockquote>
ramya ranjan
<p>As the state of this cluster, how exactly it was deployed as well as the output messages of <code>kubectl</code>, <code>kubelet</code> are unknown, I will try to give some troubleshooting steps and circumstances that can lead to resolving some of the problems that are encountered here. </p> <ul> <li>Kubeadm</li> <li>Flannel </li> <li>Kubelet </li> <li>Reproduction </li> <li>Additional links</li> </ul> <h3>Kubeadm</h3> <p>The first thing is provisioning process of the Kubernetes cluster with <code>kubeadm</code>: </p> <p>Check all the requirements and steps with official documentation: </p> <ul> <li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">Kubernetes.io: Install kubeadm</a>.</li> <li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">Kubernetes.io: Create cluster kubeadm</a></li> </ul> <p>When invoking <code>$ kubeadm init</code> command please make sure to add parameters like:</p> <ul> <li><code>--apiserver-advertise-address=IP_ADDRESS_OF_MASTER_NODE</code></li> <li><code>--pod-network-cidr=POD_NETWORK_CIDR_WITH_ACCORDANCE_TO_CNI</code></li> </ul> <p><strong>Provisioning cluster without <code>--pod-network-cidr</code> parameter could lead to CNI related issues.</strong></p> <p>For Flannel default pod network cidr is <code>10.244.0.0/16</code>. </p> <p>After <code>kubeadm init</code> you need to apply one of many CNI's (like Flannel or Calico) as the <code>kubeadm</code> tool does not do it automatically. </p> <p>Please check if all of the nodes are in <code>Ready</code> status with command:</p> <p><code>$ kubectl get nodes -o wide</code></p> <p>Output of this command should look like that:</p> <pre class="lang-sh prettyprint-override"><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME k1 Ready master 69m v1.17.3 10.156.0.29 &lt;none&gt; Ubuntu 18.04.4 LTS 5.0.0-1031-gcp docker://19.3.7 k2 Ready &lt;none&gt; 65m v1.17.3 10.156.0.30 &lt;none&gt; Ubuntu 18.04.4 LTS 5.0.0-1031-gcp docker://19.3.7 k3 Ready &lt;none&gt; 63m v1.17.3 10.156.0.35 &lt;none&gt; Ubuntu 18.04.4 LTS 5.0.0-1031-gcp docker://19.3.7 </code></pre> <p>Additionally you can <code>$ kubectl describe node NAME_OF_THE_NODE</code> to get more detailed information about each of the nodes. </p> <h3>Flannel</h3> <p>There is official documentation about troubleshooting Flannel related issues: <a href="https://github.com/coreos/flannel/blob/master/Documentation/troubleshooting.md" rel="nofollow noreferrer">Github.com: flannel troubleshooting</a> </p> <p>The message that you received from <code>kubelet</code>:</p> <blockquote> <p>Warning FailedCreatePodSandBox 34m kubelet, slave-node Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "26cdaf3170806455a4731218d20c482bb2a41ded6ef85c90b560058e332df684" network for pod "label-demo": networkPlugin cni failed to set up pod "label-demo_default" network: open /run/flannel/subnet.env: no such file or directory</p> </blockquote> <p>Is telling that there is <code>subnet.env</code> file missing on a node that was supposed to schedule a <code>sandbox</code> pod.</p> <p>Furthermore please check if Flannel pods are running correctly. 
You can check it with either: </p> <ul> <li><code>$ kubectl get pods -A</code></li> <li><code>$ kubectl get pods -n kube-system</code></li> </ul> <p>Additionally you can check logs of this pods by running command: <code>$ kubectl logs NAME_OF_FLANNEL_CONTAINER</code></p> <h3>Kubelet</h3> <p>You can check <code>kubelet</code> logs by (on systemd os) with:</p> <ul> <li><code>$ systemctl status kubelet</code></li> <li><code>$ journalctl -u kubelet</code></li> </ul> <h3>Reproduction</h3> <p>I've managed to reproduce error that you encountered and it was happening when: </p> <ul> <li><code>--pod-network-cidr=CIDR</code> was not used with <code>$ kubeadm init</code> </li> <li>Flannel CNI was applied after <code>$ kubeadm init</code> without <code>--pod-network-cidr</code></li> </ul> <h3>Additional links:</h3> <p>There is an article talking about troubleshooting networking: <a href="https://kubernetes.feisky.xyz/v/en/index/network" rel="nofollow noreferrer">Kubernetes.feisky.xyz: Network</a>.</p> <p>Please let me know if you have any other questions </p>
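<p>For completeness, a typical sequence that avoids this class of problems with Flannel looks roughly like below (replace MASTER_PRIVATE_IP with your master's IP; the Flannel manifest URL is the one from the Flannel repo at the time of writing):</p> <pre><code># on the master node
sudo kubeadm init --apiserver-advertise-address=MASTER_PRIVATE_IP --pod-network-cidr=10.244.0.0/16

# after configuring kubectl for your user, apply the CNI and verify
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system | grep flannel
kubectl get nodes -o wide
</code></pre>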
Dawid Kruk
<p>I was trying to use <code>minikube</code> for local deployment. But I couldn't figure out where exactly minikube deploy container. <br> From what I understand <code>minikube</code> is a virtual machine that is completely different machine from host except they share resources. So when we deploy any container it should be deployed in <code>minikube vm</code> and not in host machine. But when I started minikube with</p> <pre><code>sudo minikube start --driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost </code></pre> <p>Then I deployed my container through kubernetes. After executing my <code>docker ps</code> command I can see them in my local machine.</p> <pre><code>fb41c2836a70 2f26ec35a739 &quot;/bin/bash -ce someApp&quot; 23 minutes ago Up 23 minutes k8s_someapp-7cbc9c4c87-lrrpm_default_d50b97bc-5200-404d-ad33-0d235d647614_0 a5faa3ec7539 k8s.gcr.io/pause:3.2 &quot;/pause&quot; 23 minutes ago Up 23 minutes k8s_app-7cbc9c4c87-lrrpm_default_d50b97bc-5200-404d-ad33-0d235d647614_0 e175dbd7b6ea 207456039af0 &quot;/bin/bash -ce cmd&quot; 53 minutes ago Up 53 minutes k8s_appd-69585dc55c-f7k4d_default_a6619c13-7341-4ca3-9dee-ee26b8fd0b2a_1 </code></pre> <p>What is going on here I couldn't understand.</p>
Pranjal Doshi
<p>minikube is NOT a virtual machine; minikube is a CLI tool.</p> <p>minikube starts a VM with k8s installed and running. By specifying <code>--driver=none</code> you are saying: &quot;I don't want to use any vm driver&quot;, and so minikube won't use any driver and just starts k8s on your host.</p> <p>If you want to start k8s in a VM you need to use a VM driver. From <code>minikube start --help</code>:</p> <blockquote> <p>--driver='': Driver is one of: virtualbox, vmwarefusion, kvm2, vmware, none,docker, podman (experimental) (defaults to auto-detect)</p> </blockquote> <p>If you want to use virtualization, use one of the supported drivers.</p> <p>Which one? Virtualbox is free and available for most platforms. kvm2 is Linux-only. Docker is not a VM but a container isolation platform.</p> <p>A quick Google search showed me this link: <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/drivers/</a>. Check it out to learn more about the supported drivers.</p>
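<p>So if the goal is to have the cluster isolated in a VM, drop <code>--driver=none</code> and pick a VM driver, e.g.:</p> <pre><code>sudo minikube delete              # removes the old --driver=none cluster (it was created with sudo)
minikube start --driver=virtualbox
minikube status
</code></pre>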
Matt
<p>I have Istio1.6 running in my k8 cluster. In the cluster I have also deployed sharded mongodb cluster <strong>with istio-injection disabled</strong>.</p> <p>And I have a different namespace for my app <strong>with istio-injection enabled</strong>. And from the pod if I try to connect to the mongo I get this <strong>connection reset by peer error</strong>:</p> <pre class="lang-sh prettyprint-override"><code>root@mongo:/# mongo "mongodb://mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017,mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017/?ssl=false" MongoDB shell version v4.2.8 connecting to: mongodb://mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017,mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017/?compressors=disabled&amp;gssapiServiceName=mongodb&amp;ssl=false 2020-06-18T19:59:14.342+0000 I NETWORK [js] DBClientConnection failed to receive message from mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017 - HostUnreachable: Connection reset by peer 2020-06-18T19:59:14.358+0000 I NETWORK [js] DBClientConnection failed to receive message from mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017 - HostUnreachable: Connection reset by peer 2020-06-18T19:59:14.358+0000 E QUERY [js] Error: network error while attempting to run command 'isMaster' on host 'mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017' : connect@src/mongo/shell/mongo.js:341:17 @(connect):2:6 2020-06-18T19:59:14.362+0000 F - [main] exception: connect failed 2020-06-18T19:59:14.362+0000 E - [main] exiting with code 1 </code></pre> <p>But if I <strong>disable the istio-injection</strong> to my app(pod) then I can successfully connect and use mongo as expected. </p> <p><strong>Is there a work around for this, I would like to have istio-proxy injected to my app/pod and use mongodb?</strong></p>
Prata
<p>Injecting Databases with istio is complicated.</p> <hr> <p>I would start with checking your mtls, if it´s STRICT, I would change it to permissive and check if it works. It´s well described <a href="https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode" rel="nofollow noreferrer">here</a>. </p> <blockquote> <p>You see requests still succeed, except for <strong>those from the client that doesn’t have proxy, sleep.legacy, to the server with a proxy, httpbin.foo or httpbin.bar. This is expected because mutual TLS is now strictly required</strong>, but the workload without sidecar cannot comply.</p> </blockquote> <hr> <blockquote> <p>Is there a work around for this, I would like to have istio-proxy injected to my app/pod and use mongodb?</p> </blockquote> <p>If changing mtls won´t work, then in istio You can set up database without injecting and then add it to istio registry using ServiceEntry object so it would be able to communicate with the rest of istio services.</p> <p>To add your mongodb database to istio you can use <a href="https://istio.io/latest/docs/reference/config/networking/service-entry/" rel="nofollow noreferrer">ServiceEntry</a>.</p> <blockquote> <p>ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.</p> </blockquote> <p>Example of ServiceEntry</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: external-svc-mongocluster spec: hosts: - mymongodb.somedomain # not used addresses: - 192.192.192.192/24 # VIPs ports: - number: 27018 name: mongodb protocol: MONGO location: MESH_INTERNAL resolution: STATIC endpoints: - address: 2.2.2.2 - address: 3.3.3.3 </code></pre> <p>If You have mtls enabled You will also need DestinationRule that will define how to communicate with the external service.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: mtls-mongocluster spec: host: mymongodb.somedomain trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem </code></pre> <hr> <p>Additionally take a look at this documentation</p> <ul> <li><a href="https://istiobyexample.dev/databases/" rel="nofollow noreferrer">https://istiobyexample.dev/databases/</a></li> <li><a href="https://istio.io/latest/blog/2018/egress-mongo/" rel="nofollow noreferrer">https://istio.io/latest/blog/2018/egress-mongo/</a></li> </ul>
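<p>Just to illustrate the first suggestion: switching the mongodb namespace to permissive mTLS can be sketched with a PeerAuthentication object like below (the namespace name is an example, use the one your mongodb actually runs in):</p> <pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: mongodb
spec:
  mtls:
    mode: PERMISSIVE
</code></pre>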
Jakub
<p>I accidentally deleted the core DNS using kubectl delete deployment coredns -n kube-system how to do I get the coredns pods to run again....</p> <pre><code>I recreated the coredns using kubectl create deployment coredns -n kube-system --image=k8s.gcr.io/coredns:1.7.0 but nslookup is failing is there any way to correct it. root@km1:~# kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default dnsutils 2/2 Running 9 22h 10.10.159.72 kw1 &lt;none&gt; &lt;none&gt; kube-system calico-kube-controllers-5c6f6b67db-mp9jg 1/1 Running 2 26h 10.10.132.201 km1 &lt;none&gt; &lt;none&gt; kube-system calico-node-fscgw 1/1 Running 2 26h 192.168.0.215 km1 &lt;none&gt; &lt;none&gt; kube-system calico-node-m78kx 1/1 Running 1 25h 192.168.0.204 kw1 &lt;none&gt; &lt;none&gt; kube-system coredns-5646d5bb85-77pxl 1/1 Running 0 12m 10.10.159.74 kw1 &lt;none&gt; &lt;none&gt; kube-system etcd-km1 1/1 Running 2 26h 192.168.0.215 km1 &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-km1 1/1 Running 2 26h 192.168.0.215 km1 &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-km1 1/1 Running 3 26h 192.168.0.215 km1 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-h5pz5 1/1 Running 1 25h 192.168.0.204 kw1 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-qdp4w 1/1 Running 2 26h 192.168.0.215 km1 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-km1 1/1 Running 3 26h 192.168.0.215 km1 &lt;none&gt; &lt;none&gt; </code></pre>
jana
<p>The official coredns repo on GitHub provides a deploy.sh script that generates the coredns yaml file. <a href="https://github.com/coredns/deployment/tree/master/kubernetes#deploysh-and-corednsyamlsed" rel="nofollow noreferrer">Check out this link</a>.</p> <p>All you need to do is clone the repo, go to the <em>deployment/kubernetes</em> directory and run:</p> <pre><code>$ ./deploy.sh | kubectl apply -f - </code></pre>
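<p>Spelled out, that is roughly the following (the <code>-i</code> value is your cluster DNS service IP, usually <code>10.96.0.10</code> on a default kubeadm setup; note that deploy.sh expects <code>jq</code> to be installed):</p> <pre><code>git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
./deploy.sh -i 10.96.0.10 | kubectl apply -f -
kubectl -n kube-system get pods | grep coredns
</code></pre>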
Matt
<p>I'm trying to set up RabbitMQ on Minikube using the <a href="https://www.rabbitmq.com/kubernetes/operator/operator-overview.html" rel="nofollow noreferrer">RabbitMQ Cluster Operator</a>:</p> <p>When I try to attach a persistent volume, I get the following error:</p> <pre><code>$ kubectl logs -f rabbitmq-rabbitmq-server-0 Configuring logger redirection 20:04:40.081 [warning] Failed to write PID file &quot;/var/lib/rabbitmq/mnesia/rabbit@rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default.pid&quot;: permission denied 20:04:40.264 [error] Failed to create Ra data directory at '/var/lib/rabbitmq/mnesia/rabbit@rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default/quorum/rabbit@rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default', file system operation error: enoent 20:04:40.265 [error] Supervisor ra_sup had child ra_system_sup started with ra_system_sup:start_link() at undefined exit with reason {error,&quot;Ra could not create its data directory. See the log for details.&quot;} in context start_error 20:04:40.266 [error] CRASH REPORT Process &lt;0.247.0&gt; with 0 neighbours exited with reason: {error,&quot;Ra could not create its data directory. See the log for details.&quot;} in ra_system_sup:init/1 line 43 20:04:40.267 [error] CRASH REPORT Process &lt;0.241.0&gt; with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,ra_system_sup,{error,&quot;Ra could not create its data directory. See the log for details.&quot;}}},{ra_app,start,[normal,[]]}} in application_master:init/4 line 138 {&quot;Kernel pid terminated&quot;,application_controller,&quot;{application_start_failure,ra,{{shutdown,{failed_to_start_child,ra_system_sup,{error,\&quot;Ra could not create its data directory. See the log for details.\&quot;}}},{ra_app,start,[normal,[]]}}}&quot;} Kernel pid terminated (application_controller) ({application_start_failure,ra,{{shutdown,{failed_to_start_child,ra_system_sup,{error,&quot;Ra could not create its data directory. See the log for details.&quot;} Crash dump is being written to: erl_crash.dump... </code></pre> <p>The issue is that RabbitMQ is not able to set up its data files in the data directory <code>/var/lib/rabbitmq/mnesia</code> due to a lack of permission.</p> <p>My initial guess was that I needed to specify the data directory as a volumeMount, but that doesn't seem to be configurable according to the <a href="https://www.rabbitmq.com/kubernetes/operator/using-operator.html" rel="nofollow noreferrer">documentation</a>.</p> <p>RabbitMQ's <a href="https://www.rabbitmq.com/kubernetes/operator/troubleshooting-operator.html" rel="nofollow noreferrer">troubleshooting</a> documentation on persistence results in a <a href="https://www.rabbitmq.com/using-cluster-operator.html#persistence" rel="nofollow noreferrer">404</a>.</p> <p>I tried to find other resources online with the same problem but none of them were using the RabbitMQ Cluster Operator. 
I plan on following that route if I'm not able to find a solution but I really would like to solve this issue somehow.</p> <p>Does anyone have any ideas?</p> <p>The setup that I have is as follows:</p> <pre><code>apiVersion: rabbitmq.com/v1beta1 kind: RabbitmqCluster metadata: name: rabbitmq spec: replicas: 1 service: type: NodePort persistence: storageClassName: local-storage storage: 20Gi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: rabbitmq-pvc spec: storageClassName: local-storage accessModes: - ReadWriteOnce resources: requests: storage: 20Gi --- apiVersion: v1 kind: PersistentVolume metadata: name: rabbitmq-pv spec: storageClassName: local-storage accessModes: - ReadWriteOnce capacity: storage: 20Gi hostPath: path: /mnt/app/rabbitmq type: DirectoryOrCreate </code></pre> <p>To reproduce this issue on minikube:</p> <ol> <li>Install the rabbitmq operator:</li> </ol> <pre class="lang-sh prettyprint-override"><code>kubectl apply -f &quot;https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml&quot; </code></pre> <ol start="2"> <li>Apply the manifest file above</li> </ol> <pre><code>kubectl apply -f rabbitmq.yml </code></pre> <ol start="3"> <li><p>Running <code>kubectl get po</code> displays a pod named <code>rabbitmq-rabbitmq-server-0</code>.</p> </li> <li><p>Running <code>kubectl logs -f rabbitmq-rabbitmq-server-0</code> to view the logs displays the above error.</p> </li> </ol>
W.K.S
<p>As I alread suggested in comments, you can solve it running:</p> <pre><code>minikube ssh -- sudo chmod g+w /mnt/app/rabbitmq/ </code></pre> <hr /> <p>Answering to your question:</p> <blockquote> <p>Is there a way I can add that to my manifest file rather than having to do it manually?</p> </blockquote> <p>You can override the rabbitmq statefulset manifest fields to change last line in initContainer command script from <code> chgrp 999 /var/lib/rabbitmq/mnesia/</code> to this: <code>chown 999:999 /var/lib/rabbitmq/mnesia/</code>.</p> <p>Complete RabbitmqCluster yaml manifest looks like following:</p> <pre><code>apiVersion: rabbitmq.com/v1beta1 kind: RabbitmqCluster metadata: name: rabbitmq spec: replicas: 1 service: type: NodePort persistence: storageClassName: local-storage storage: 20Gi override: statefulSet: spec: template: spec: containers: [] initContainers: - name: setup-container command: - sh - -c - cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf &amp;&amp; chown 999:999 /etc/rabbitmq/rabbitmq.conf &amp;&amp; echo '' &gt;&gt; /etc/rabbitmq/rabbitmq.conf ; cp /tmp/rabbitmq/advanced.config /etc/rabbitmq/advanced.config &amp;&amp; chown 999:999 /etc/rabbitmq/advanced.config ; cp /tmp/rabbitmq/rabbitmq-env.conf /etc/rabbitmq/rabbitmq-env.conf &amp;&amp; chown 999:999 /etc/rabbitmq/rabbitmq-env.conf ; cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie &amp;&amp; chown 999:999 /var/lib/rabbitmq/.erlang.cookie &amp;&amp; chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /etc/rabbitmq/enabled_plugins &amp;&amp; chown 999:999 /etc/rabbitmq/enabled_plugins ; chown 999:999 /var/lib/rabbitmq/mnesia/ # &lt;- CHANGED THIS LINE </code></pre>
Matt
<p>I hosted Zalenium in Azure Kubernates, and I need to enable SSL. I see that in the helm charts, there is ingress.yaml with TLS setting, I tried to enable it but apparently nothing happens, does anyone knows what should done ?</p>
Bahram
<p>As I mentioned in the comments, SSL/HTTPS is not just about enabling something in the helm chart; it's more complicated.</p> <hr /> <p><strong>For example</strong></p> <p>You could set up the <a href="https://github.com/bitnami/charts/tree/master/bitnami/nginx-ingress-controller" rel="nofollow noreferrer">nginx ingress controller</a> with <a href="https://letsencrypt.org/" rel="nofollow noreferrer">Let's Encrypt</a>, which is a free TLS Certificate Authority, and you can use it to automatically request and renew Let's Encrypt certificates for public domain names.</p> <p>Additionally you need <a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer">cert-manager</a>, which is a Kubernetes tool that issues certificates from various certificate providers, including Let's Encrypt.</p> <hr /> <p>There are several tutorials about this, for example take a look at these:</p> <ul> <li><a href="https://cert-manager.io/docs/tutorials/acme/ingress/" rel="nofollow noreferrer">https://cert-manager.io/docs/tutorials/acme/ingress/</a></li> <li><a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></li> <li><a href="https://docs.bitnami.com/tutorials/secure-kubernetes-services-with-ingress-tls-letsencrypt/" rel="nofollow noreferrer">https://docs.bitnami.com/tutorials/secure-kubernetes-services-with-ingress-tls-letsencrypt/</a></li> </ul> <p>Additionally take a look at this <a href="https://stackoverflow.com/a/58436097/11977760">stackoverflow answer</a> provided by @Tushar Mahajan.</p>
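<p>Once an nginx ingress controller and cert-manager are running, exposing Zalenium over HTTPS boils down to an Ingress similar to the sketch below (hostname, issuer name, service name and port are placeholders you have to adapt to your release):</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: zalenium
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - zalenium.example.com
    secretName: zalenium-tls
  rules:
  - host: zalenium.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: zalenium
          servicePort: 80
</code></pre>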
Jakub
<p>I'm trying to rotate logs inside kubernetes by the way of using following code snippets:</p> <pre><code>containers: - name: logrotate image: docker.io/kicm/logrotate securityContext: runAsUser: 0 volumeMounts: - name: logrotate-conf mountPath: /etc/logrotate.d volumes: - name: logrotate-conf configMap: name: logrotation-config - name: app-logs persistentVolumeClaim: claimName: my-var-pvc restartPolicy: OnFailure </code></pre> <p>When the pod running stage, I've encountered following error:</p> <pre><code>Potentially dangerous mode on /etc/logrotate.conf: 0664 error: Ignoring /etc/logrotate.conf because it is writable by group or others. Reading state from file: /tmp/logrotate.status Allocating hash table for state file, size 64 entries Handling 0 logs </code></pre> <p>Please let me know how to fix that issue, thanks.</p>
PPShein
<p>If this is your image, rebuild it with proper permissions, push it to the registry and use it.</p> <p>If this is not your image, think twice before using someone else's image (for security reasons). If you still think it is a good idea to use it, build a new image with the permissions fixed, using that image as the base image, then push it to your registry and use it.</p>
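<p>A minimal sketch of such a rebuild could look like this (assuming the config really lives at the path shown in your error output):</p> <pre><code>FROM docker.io/kicm/logrotate
# the pod in the question already runs as root (runAsUser: 0)
USER root
# drop the group/other write bits that make logrotate ignore its own config
RUN chmod 0644 /etc/logrotate.conf
</code></pre> <p>The error in your output is about <code>/etc/logrotate.conf</code> baked into the image, which is why fixing it at build time in your own image is the clean way out.</p>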
Matt
<p>I have installed a helm chart with subcharts and I want to find out which version of the subchart is installed. Is there any possible way in helm 3?</p>
Darshil Shah
<p>Following official Helm documentation: </p> <ul> <li><a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="nofollow noreferrer">Helm.sh: Subcharts and globals</a></li> <li><a href="https://helm.sh/docs/topics/charts/" rel="nofollow noreferrer">Helm.sh: Charts</a></li> <li><a href="https://helm.sh/docs/helm/helm_dependency/" rel="nofollow noreferrer">Helm.sh: Helm dependency</a></li> </ul> <p>You can get the version of a subchart used by a chart by following below <strong>example</strong>: </p> <ul> <li>Download the chart with <code>$ helm pull repo/name --untar</code> </li> <li>Go inside the chart directory </li> <li>Invoke command: <code>$ helm dependency list</code> </li> </ul> <p>You can get a message that there are no dependencies: </p> <p><code>WARNING: no dependencies at gce-ingress/charts</code></p> <p>You can also get a message with dependencies and their versions: </p> <pre class="lang-sh prettyprint-override"><code>NAME VERSION REPOSITORY STATUS kube-state-metrics 2.7.* https://kubernetes-charts.storage.googleapis.com/ unpacked </code></pre> <p>Additionally you can check the content of the <code>prometheus/charts/kube-state-metrics/Chart.yaml</code> for additional information. </p> <p>Please let me know if that helped you. </p>
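<p>As an additional sketch (the chart and repo names below are made up), the pinned subchart versions can also be read straight from the pulled chart:</p> <pre><code>$ helm pull example-repo/my-chart --untar
$ cat my-chart/Chart.lock     # exact dependency versions, if the chart ships a lock file
$ ls my-chart/charts/         # packaged subcharts usually carry the version in the file name
</code></pre>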
Dawid Kruk
<p>I have two Kubernetes clusters in AWS, each in it's own VPC.</p> <ul> <li>Cluster1 in VPC1</li> <li>Cluster2 in VPC2</li> </ul> <p><a href="https://i.stack.imgur.com/MfdKQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MfdKQ.png" alt="enter image description here" /></a></p> <p>I want to do http(s) requests from cluster1 into cluster2 through a VPC peering. The VPC peering is setup and I can ping hosts from Cluster1 to hosts in Cluster2 currently.</p> <p>How can I create a service that I can connect to from Cluster1 in Cluster2. I have experience setting up services using external ELBs and the like, but not for traffic internally in this above scenario.</p>
silverdagger
<p>You can create an <a href="https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html" rel="nofollow noreferrer">internal LoadBalancer</a>.</p> <p>All you need to do is to create a regular service of type LoadBalancer and annotate it with the following annotation:</p> <pre><code>service.beta.kubernetes.io/aws-load-balancer-internal: &quot;true&quot; </code></pre>
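<p>For example, a complete service manifest could look like this (name, selector and ports are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre> <p>The load balancer created this way gets private addresses from VPC2, so traffic from Cluster1 can reach it over the peering (assuming routes and security groups allow it).</p>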
Matt
<p>If a <code>virtualservice</code> <code>A</code> is defined in <code>namespace</code> <code>A</code> using <code>networking.istio.io</code>, how can it use a <code>gateway</code> <code>B</code> defined in another namespace, <code>namespace</code> <code>B</code>?</p> <p>Thanks</p>
imriss
<p>If the gateway is not in the same namespace as the virtual service, you have to specify that in the virtual service.</p> <p>Check the <code>spec.gateways</code> section:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo-Mongo namespace: bookinfo-namespace spec: gateways: - some-config-namespace/my-gateway # can omit the namespace if gateway is in same namespace as virtual service. </code></pre> <hr> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: my-gateway namespace: some-config-namespace </code></pre> <p>There is related <a href="https://istio.io/docs/reference/config/networking/gateway/" rel="noreferrer">istio documentation</a> about that.</p>
Jakub
<p>Yesterday I created a new kubernetes cluster (v1.20.1, on prem) and I wanted to add NFS provisioning. The only NFS provisioner available (and still maintained) seems to be <a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner</a>.</p> <p>It does say to use your own provisioner and the default (quay.io/external_storage/nfs-client-provisioner:latest) is two years old, but I don't have my own provisioner.</p> <p>When I follow the deployment guide without the helm chart and check the nfs-client-provisioner log I see the following:</p> <pre><code>I1220 22:20:44.160099 1 leaderelection.go:185] attempting to acquire leader lease default/fuseim.pri-ifs... E1220 22:21:01.598029 1 event.go:259] Could not construct reference to: '&amp;v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:&quot;&quot;, APIVersion:&quot;&quot;}, ObjectMeta:v1.ObjectMeta{Name:&quot;fuseim.pri-ifs&quot;, GenerateName:&quot;&quot;, Namespace:&quot;default&quot;, SelfLink:&quot;&quot;, UID:&quot;c852ca40-471f-4019-a099-d72d32555022&quot;, ResourceVersion:&quot;134579&quot;, Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63744022156, loc:(*time.Location)(0x1956800)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{&quot;control-plane.alpha.kubernetes.io/leader&quot;:&quot;{\&quot;holderIdentity\&quot;:\&quot;nfs-client-provisioner-5999484954-n4tj7_94db294f-4261-11eb-9b30-c64536689731\&quot;,\&quot;leaseDurationSeconds\&quot;:15,\&quot;acquireTime\&quot;:\&quot;2020-12-20T01:21:01Z\&quot;,\&quot;renewTime\&quot;:\&quot;2020-12-20T01:21:01Z\&quot;,\&quot;leaderTransitions\&quot;:2}&quot;}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:&quot;&quot;}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-5999484954-n4tj7_94db294f-4261-11eb-9b30-c64536689731 became leader' I1220 22:21:01.598123 1 leaderelection.go:194] successfully acquired lease default/fuseim.pri-ifs I1220 22:21:01.598198 1 controller.go:631] Starting provisioner controller fuseim.pri/ifs_nfs-client-provisioner-5999484954-n4tj7_94db294f-4261-11eb-9b30-c64536689731! I1220 22:21:01.709535 1 controller.go:680] Started provisioner controller fuseim.pri/ifs_nfs-client-provisioner-5999484954-n4tj7_94db294f-4261-11eb-9b30-c64536689731! I1220 22:21:01.717419 1 controller.go:987] provision &quot;default/test-claim&quot; class &quot;managed-nfs-storage&quot;: started E1220 22:21:01.720318 1 controller.go:1004] provision &quot;default/test-claim&quot; class &quot;managed-nfs-storage&quot;: unexpected error getting claim reference: selfLink was empty, can't make reference I1220 22:36:01.615073 1 controller.go:987] provision &quot;default/test-claim&quot; class &quot;managed-nfs-storage&quot;: started E1220 22:36:01.618195 1 controller.go:1004] provision &quot;default/test-claim&quot; class &quot;managed-nfs-storage&quot;: unexpected error getting claim reference: selfLink was empty, can't make reference </code></pre> <p>Is this a problem with my cluster or with this provisioner? I really have no clue.</p> <p>I can also just ditch this provisioner, or even NFS, for something else. I just need storage per pod instance to work, e.g. 
volumeClaimTemplates, stored outside of my kubernetes cluster. If anyone has any suggestions, please let me know!</p> <p>Thanks in advance, Hendrik</p>
Hendrik
<p>You are seeing this error because of this: <a href="https://github.com/kubernetes/enhancements/issues/1164" rel="noreferrer">KEP-1164: Deprecate and Remove SelfLink</a></p> <p>Quote from the mentioned KEP:</p> <blockquote> <p>In v1.16, we will deprecate the SelfLink field in both ObjectMeta and ListMeta objects by: documenting in field definition that it is deprecated and is going to be removed adding a release-note about field deprecation We will also introduce a feature gate to allow disabling setting SelfLink fields and opaque the logic setting it behind this feature gate.</p> <p><strong>In v1.20 (12 months and 4 release from v1.16) we will switch off the feature gate</strong> <strong>which will automatically disable setting SelfLinks. However it will</strong> <strong>still be possible to revert the behavior by changing value of a</strong> <strong>feature gate</strong>.</p> <p>In v1.21, we will get rid of the whole code propagating those fields and fields themselves. In the meantime, we will go over places referencing that field (see below) and get rid of those too.</p> </blockquote> <p>As you can see, you can enable it back with the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="noreferrer">feature gate</a> <code>RemoveSelfLink=false</code>, although it's not recommended, and SelfLink will be permanently removed in v1.21.</p> <p>Also check this issue on github: <a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25" rel="noreferrer">Using Kubernetes v1.20.0, getting &quot;unexpected error getting claim reference: selfLink was empty, can't make reference&quot;</a></p>
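<p>For reference, on a cluster where you control the API server flags (e.g. one set up with kubeadm), re-enabling the old behaviour is just a matter of adding the feature gate to the kube-apiserver static pod manifest, roughly like this:</p> <pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (usual kubeadm location; adjust for your setup)
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false
    # ...all the existing flags stay as they are
</code></pre> <p>Treat this as a temporary workaround; the provisioner itself needs to be updated before v1.21.</p>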
Matt
<p>env:</p> <pre><code>kubernetes provider: gke kubernetes version: v1.13.12-gke.25 grafana version: 6.6.2 (official image) </code></pre> <p>grafana deployment manifest:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: grafana namespace: monitoring spec: replicas: 1 selector: matchLabels: app: grafana template: metadata: name: grafana labels: app: grafana spec: containers: - name: grafana image: grafana/grafana:6.6.2 ports: - name: grafana containerPort: 3000 # securityContext: # runAsUser: 104 # allowPrivilegeEscalation: true resources: limits: memory: "1Gi" cpu: "500m" requests: memory: "500Mi" cpu: "100m" volumeMounts: - mountPath: /var/lib/grafana name: grafana-storage volumes: - name: grafana-storage persistentVolumeClaim: claimName: grafana-pvc </code></pre> <p><strong>Problem</strong></p> <p>when I deployed this grafana dashboard first time, its working fine. after sometime I restarted the pod to check whether volume mount is working or not. after restarting, I getting below error.</p> <pre><code>mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied GF_PATHS_DATA='/var/lib/grafana' is not writable. You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later </code></pre> <p>what I understand from this error, user could create these files. How can I give this user appropriate permission to start grafana successfully?</p>
Abu Hanifa
<p>I recreated your deployment with appropriate PVC and noticed that <code>grafana</code> pod was failing. </p> <p>Output of command: <code>$ kubectl get pods -n monitoring</code></p> <pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE grafana-6466cd95b5-4g95f 0/1 Error 2 65s </code></pre> <p>Further investigation pointed the same errors as yours: </p> <pre class="lang-sh prettyprint-override"><code>mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied GF_PATHS_DATA='/var/lib/grafana' is not writable. You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later </code></pre> <p>This error showed on first creation of a pod and the deployment. There was no need to recreate any pods.</p> <p>What I did to make it work was to edit your deployment: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: grafana namespace: monitoring spec: replicas: 1 selector: matchLabels: app: grafana template: metadata: name: grafana labels: app: grafana spec: securityContext: runAsUser: 472 fsGroup: 472 containers: - name: grafana image: grafana/grafana:6.6.2 ports: - name: grafana containerPort: 3000 resources: limits: memory: "1Gi" cpu: "500m" requests: memory: "500Mi" cpu: "100m" volumeMounts: - mountPath: /var/lib/grafana name: grafana-storage volumes: - name: grafana-storage persistentVolumeClaim: claimName: grafana-pvc </code></pre> <p>Please take a specific look on part: </p> <pre class="lang-yaml prettyprint-override"><code> securityContext: runAsUser: 472 fsGroup: 472 </code></pre> <p>It is a setting described in official documentation: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="noreferrer">Kubernetes.io: set the security context for a pod</a></p> <p>Please take a look on this Github issue which is similar to yours and pointed me to solution that allowed pod to spawn correctly: </p> <ul> <li><a href="https://github.com/grafana/grafana-docker/issues/167" rel="noreferrer">https://github.com/grafana/grafana-docker/issues/167</a> </li> </ul> <p>Grafana had some major updates starting from version 5.1. Please take a look: <a href="https://grafana.com/docs/grafana/latest/installation/docker/#migrate-to-v5-1-or-later" rel="noreferrer">Grafana.com: Docs: Migrate to v5.1 or later</a></p> <p>Please let me know if this helps. </p>
Dawid Kruk
<p>Let's say that we have a Node app running inside a Docker container and listening on port 3000. We want to expose this app to be accessible in the browser, also in port 3000. So we do port forwarding like this:</p> <pre><code>docker run -p &lt;port&gt;:&lt;targetPort&gt; my-image // for example: 3000:3000 </code></pre> <p>So docker knows on which port to listen and to which process inside the container to forward the network.</p> <p>But in k8s NodePort service, its enough to provide the NodePort service port and the target port of the Pod, but not the target port inside the Pod, so we have random outside port, 30,000-32,000 (approximately) that listens to outside traffic, forward this to the port of the NodePort service and then forward it to the target port of a Pod, but.. hey, we didn't mention the target port <strong>inside the Pod</strong>.</p> <p>So how the Pod object knows to which process inside it to forward the traffic? We usually assume we have only single container in a Pod, but if we have multiple?</p> <p>Also note that in the example of exposing image in Docker - if we don't mention port forwarding the whole exposition won't work so it won't be accessible via browser (outside the container)..</p>
Raz Buchnik
<p>There is no such thing as <em>&quot;target port inside the Pod&quot;</em> in kubernetes. At least not in the way it works in Docker. In docker you have the <code>&lt;port&gt;:&lt;targetPort&gt;</code> mapping, and so you can run several containers that can be exposed and accessed with localhost. Docker opens <code>&lt;port&gt;</code> on localhost and forwards traffic to the <code>&lt;targetPort&gt;</code> of a container. How else would you be able to access a container?</p> <p>k8s works differently, so forget about docker (and don't think that if they are called the same they must work the same). Every pod in k8s has its own IP address that can be accessed, and there is no need for port:targetPort in the form you know from docker.</p> <p>So to answer your question:</p> <blockquote> <p>So how the Pod object knows to which process inside it to forward the traffic?</p> </blockquote> <p>In k8s, a process is accessible by the port it opens, from inside and from outside. There is no port translation on a pod level.</p> <p>So how do service's port and targetPort in the k8s service object relate to that? A Service acts as a load balancer in front of pod replicas and holds a static IP address. So when thinking about a service, think of it as a load balancer, because that's its primary function.</p> <p>There are 3 fields in a service of type NodePort:</p> <pre><code>port - a port on which the service (loadbalancer) is serving traffic. targetPort - a port that is opened by the process in the pod (where to forward the traffic). nodePort - a port that is open on every node. </code></pre> <p>When you don't specify targetPort, k8s assumes that port and targetPort are the same.</p> <hr /> <blockquote> <p>We usually assume we have only single container in a Pod, but if we have multiple?</p> </blockquote> <p>In case of multiple containers in one pod, all these containers share the same network interface, so if one container has already opened e.g. port 80, another container trying to open port 80 won't be able to do it.</p> <p>In <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model" rel="nofollow noreferrer">k8s docs</a> you can read:</p> <blockquote> <p>Every Pod gets its own IP address. This means you do not need to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports. This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.</p> </blockquote> <p>To summarize, think of a pod as a VM: when you open a port it's automatically accessible from the outside, and every container in a pod is nothing more than a process on that VM, so you can't have several processes running on a VM with the same port open.</p>
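<p>For completeness, here is what a NodePort service with all three port fields could look like for a pod whose process listens on 3000 (names and numbers are just an example):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # port the service (loadbalancer) serves on
    targetPort: 3000  # port the process inside the pod listens on
    nodePort: 30080   # port opened on every node (30000-32767 range)
</code></pre>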
Matt
<p>I have a self hosted gitlab and I want to deploy applications. I am trying to deploy a runner so that gitlab can connect to the kubernetes cluster. My kubernetes cluster is RBAC enabled.</p> <p>Here is my gitlab-ci.yml for the runner:</p> <pre class="lang-yaml prettyprint-override"><code>services: - docker:18-dind stages: - deploy deploy: image: name: thorstenhans/helm3:latest entrypoint: [&quot;/bin/sh&quot;, &quot;-c&quot;] stage: deploy environment: name: staging kubernetes: namespace: runners script: - echo ${CI_JOB_NAME} - helm version - kubectl version - helm repo add gitlab https://charts.gitlab.io/ - helm install gitlab/gitlab-runner --version 0.20.0 -f values.yml </code></pre> <p>Here is the values.yml file:</p> <pre class="lang-yaml prettyprint-override"><code>## The GitLab Server URL (with protocol) that want to register the runner against ## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-register ## gitlabUrl: https://gitlab.mydomain.com/ ## The registration token for adding new Runners to the GitLab server. This must ## be retrieved from your GitLab instance. ## ref: https://docs.gitlab.com/ee/ci/runners/ ## runnerRegistrationToken: &quot;registration code here&quot; ## Set the certsSecretName in order to pass custom certificates for GitLab Runner to use ## Provide resource name for a Kubernetes Secret Object in the same namespace, ## this is used to populate the /etc/gitlab-runner/certs directory ## ref: https://docs.gitlab.com/runner/configuration/tls-self-signed.html#supported-options-for-self-signed-certificates ## #certsSecretName: ## Configure the maximum number of concurrent jobs ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section ## concurrent: 10 ## Defines in seconds how often to check GitLab for a new builds ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section ## checkInterval: 30 ## For RBAC support: rbac: create: false ## Run the gitlab-bastion container with the ability to deploy/manage containers of jobs ## cluster-wide or only within namespace clusterWideAccess: true ## If RBAC is disabled in this Helm chart, use the following Kubernetes Service Account name. ## # serviceAccountName: default ## Configuration for the Pods that the runner launches for each new job ## runners: ## Default container image to use for builds when none is specified ## image: ubuntu:18.04 ## Run all containers with the privileged flag enabled ## This will allow the docker:stable-dind image to run if you need to run Docker ## commands. Please read the docs before turning this on: ## ref: https://docs.gitlab.com/runner/executors/kubernetes.html#using-docker-dind ## privileged: false ## Namespace to run Kubernetes jobs in (defaults to 'default') ## # namespace: ## Build Container specific configuration ## builds: # cpuLimit: 200m # memoryLimit: 256Mi cpuRequests: 100m memoryRequests: 128Mi ## Service Container specific configuration ## services: # cpuLimit: 200m # memoryLimit: 256Mi cpuRequests: 100m memoryRequests: 128Mi ## Helper Container specific configuration ## helpers: # cpuLimit: 200m # memoryLimit: 256Mi cpuRequests: 100m memoryRequests: 128Mi </code></pre> <p>How can I create a service account for gitlab so that it can deploy applications cluster wide?</p>
Jacob
<p>You can find this information in the gitlab <a href="https://docs.gitlab.com/13.3/runner/install/kubernetes.html" rel="nofollow noreferrer">documentation</a>.</p> <blockquote> <p><strong>Enabling RBAC support</strong></p> <p>If your cluster has RBAC enabled, you can choose to either have the chart create its own service account or provide one on your own.</p> <p>To have the chart create the service account for you, set rbac.create to true:</p> </blockquote> <pre><code>rbac: create: true </code></pre> <blockquote> <p>To use an already existing service account, use:</p> </blockquote> <pre><code>rbac: create: false serviceAccountName: your-service-account </code></pre> <hr /> <p>So you would have to change your values.yaml to:</p> <pre><code>rbac: create: false clusterWideAccess: true serviceAccountName: your-service-account </code></pre> <hr /> <p>There is related documentation about RBAC and service accounts:</p> <ul> <li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions</a></li> <li><a href="https://unofficial-kubernetes.readthedocs.io/en/latest/admin/authorization/rbac/" rel="nofollow noreferrer">https://unofficial-kubernetes.readthedocs.io/en/latest/admin/authorization/rbac/</a></li> <li><a href="https://www.magalix.com/blog/kubernetes-authorization" rel="nofollow noreferrer">https://www.magalix.com/blog/kubernetes-authorization</a></li> </ul> <p>There is another stackoverflow post and a gitlab issue which should help with creating the service account for your use case.</p> <ul> <li><a href="https://gitlab.com/gitlab-org/gitlab-runner/-/issues/3841" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab-runner/-/issues/3841</a></li> <li><a href="https://stackoverflow.com/questions/63283438">how can I create a service account for all namespaces in a kubernetes cluster?</a></li> </ul>
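<p>If you end up creating the service account yourself, a minimal sketch could look like the one below. Binding to <code>cluster-admin</code> is the quick way to get cluster-wide access for the runner, but it is deliberately broad, so consider a narrower ClusterRole for production:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-runner
  namespace: runners
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-runner
  namespace: runners
</code></pre> <p>Then set <code>create: false</code> and <code>serviceAccountName: gitlab-runner</code> under the <code>rbac:</code> block of your values.yml, as shown above.</p>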
Jakub
<p>At my company the kubernetes cluster is managed by a team, we must provision a namespace and then create our resources. We cannot use features such as <code>hostPath</code> volumes and we cannot create new roles or namespaces, etc.</p> <p>So looking at an example implementation of the <code>fluentd-elasticsearch</code> container as a <code>DaemonSet</code>, they all appear to be using hostPath volume mounting but I'm not sure why.</p> <p>For example, I ran through this: <a href="https://www.howtoforge.com/create-a-daemonset-in-kubernetes/" rel="nofollow noreferrer">https://www.howtoforge.com/create-a-daemonset-in-kubernetes/</a></p> <p>And created this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: DaemonSet metadata: name: my-fluentd-elasticsearch-daemonset namespace: kube-system labels: k8s-app: fluentd-logging spec: selector: matchLabels: name: fluentd-elasticsearch template: metadata: labels: name: fluentd-elasticsearch spec: tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule containers: - name: fluentd-elasticsearch image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2 resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi volumeMounts: - name: varlog mountPath: /var/log - name: varlibdockercontainers mountPath: /var/lib/docker/containers readOnly: true terminationGracePeriodSeconds: 30 volumes: - name: varlog hostPath: path: /var/log - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers </code></pre> <p>But got this error:</p> <pre><code>Error creating: pods &quot;fluentd-elasticsearch-&quot; is forbidden: unable to validate against any pod security policy: [spec.volumes[0]: Invalid value: &quot;hostPath&quot;: hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: &quot;hostPath&quot;: hostPath volumes are not allowed to be used] </code></pre> <p>So I have a couple of questions:</p> <ol> <li>Is fluentd mounting volumes then reading files in those volumes that get pushed out to elasticsearch?</li> <li>Can I just remove the volume mounting or is that essential to it functioning?</li> <li>Is fluentd using the kubernetes API at all?</li> <li>Are there any non-daemonset containers which would just use the kubernetes API to get the pods then use log api to forward to a log db?</li> </ol>
justin.m.chase
<blockquote> <p>Is fluentd mounting volumes then reading files in those volumes that get pushed out to elasticsearch?</p> </blockquote> <p>Docker is storing logs on the node's disk. Fluentd needs to access these log files somehow; this is why it's running as a daemonset: you need it to run on every node, with a hostPath volume, to access the log files.</p> <blockquote> <p>Can I just remove the volume mounting or is that essential to it functioning?</p> </blockquote> <p>No, you can't &quot;just remove&quot; the volume mounting (hostPath) because fluentd will lose access to the log files that docker keeps on the nodes.</p> <blockquote> <p>Is fluentd using the kubernetes API at all?</p> </blockquote> <p>There is no straightforward answer to this question. There are plugins I have found that can access k8s metadata using the k8s api, but I haven't found any plugin that would use the k8s api to pull logs.</p> <blockquote> <p>Are there any non-daemonset containers which would just use the kubernetes API to get the pods then use log api to forward to a log db?</p> </blockquote> <p>Something similar to this is described in the k8s documentation: <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-a-logging-agent" rel="nofollow noreferrer">sidecar container with a logging agent</a></p> <p>So yes, you could possibly deploy fluentd as a sidecar to gather and forward logs to the db. Check the docs for more details.</p>
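<p>A rough sketch of that sidecar pattern is below; the app image, log path and config names are assumptions for illustration, not something required by fluentd itself:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest              # assumed to write its logs under /var/log/app
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-forwarder
    image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2   # same image family as in your daemonset
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
    - name: fluentd-config
      mountPath: /etc/fluent/config.d  # config path used by this image; verify for the image you pick
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: fluentd-config
    configMap:
      name: fluentd-sidecar-config     # holds a fluentd config tailing /var/log/app
</code></pre> <p>Note this only works for apps that write their logs to files; it does not give you the node-level docker logs, which is exactly why the daemonset + hostPath approach exists.</p>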
Matt
<p>Using minikube, when running the following command:</p> <pre><code>kubectl -v=11 --kubeconfig /dev/null --insecure-skip-tls-verify -s http://localhost:8001 --token &quot;invalid&quot; -n namespace get pods </code></pre> <p>I have an answer when I don't want one. And I don't know how it was authorized. Moreover, if I use a valid token with specific rights, these are not used.</p> <p><a href="https://stackoverflow.com/questions/60083889/kubectl-token-token-doesnt-run-with-the-permissions-of-the-token">kubectl --token=$TOKEN doesn&#39;t run with the permissions of the token</a> doesn't answer my question as I specified to used <strong>/dev/null</strong> as a config file.</p> <p>Any idea ?</p>
Neok
<p>I will try to summarize the answer I provided in the comments.</p> <p>The question was: <em>Why does running <code>kubectl -s http://localhost:8001 --kubeconfig /dev/null --token &lt;invalid_token&gt;</code> (where :8001 is a port opened by kubectl proxy) respond as if I was authorized, when it shouldn't because I set all possible authorization options to null or incorrect values?</em></p> <p>The answer is that <code>kubectl proxy</code> opens a port and handles all authorization for you, so you don't have to. Now, to access the REST api of kubernetes, all you need to do is to use <code>curl localhost:8001/...</code>. No tokens or certificates needed.</p> <p>Because you are already authorized through <code>kubectl proxy</code>, pointing kubectl at localhost:8001 means it doesn't need to authorize again, so you won't need any tokens to access k8s.</p> <hr /> <p>As an alternative, you can check what happens when you run the same command but, instead of connecting through <code>kubectl proxy</code>, you use the kubernetes API port directly.</p> <p>You mentioned that you are using minikube, so by default that would be port 8443:</p> <pre><code>$ kubectl --kubeconfig /dev/null -s https://$(minikube ip):8443 --token &quot;invalid&quot; --insecure-skip-tls-verify get pods error: You must be logged in to the server (Unauthorized) </code></pre> <p>As you can see, now it works as expected.</p>
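<p>You can also see the proxy's behaviour without kubectl at all; once the proxy is running, plain curl against the REST API works without any credentials from your machine:</p> <pre><code>$ kubectl proxy &amp;
$ curl http://localhost:8001/api/v1/namespaces/default/pods
</code></pre>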
Matt
<p>I have a GKE cluster I'm trying to switch the default node machine type on.</p> <p>I have already tried:</p> <ol> <li>Creating a new node pool with the machine type I want</li> <li>Deleting the default-pool. GKE will process for a bit, then not remove the default-pool. I assume this is some undocumented behavior where you cannot delete the default-pool.</li> </ol> <p>I'd prefer to not re-create the cluster and re-apply all of my deployments/secrets/configs/etc.</p> <p>k8s version: <code>1.14.10-gke.24</code> (Stable channel)</p> <p>Cluster Type: Regional</p>
Jacque006
<p>The best approach to change/increase/decrease your <code>node pool</code> specification would be with: </p> <ul> <li><strong>Migration</strong> </li> </ul> <p>To migrate your workloads without incurring downtime, you need to:</p> <ul> <li>Create a new <code>node pool</code>. </li> <li>Mark the existing <code>node pool</code> as unschedulable.</li> <li>Drain the workloads running on the existing <code>node pool</code>.</li> <li><strong>Check if the workload is running correctly on a new <code>node pool</code>.</strong></li> <li>Delete the existing <code>node pool</code>.</li> </ul> <p>Your workload will be scheduled automatically onto a new <code>node pool</code>. </p> <blockquote> <p><a href="https://kubernetes.io/" rel="noreferrer">Kubernetes</a>, which is the cluster orchestration system of GKE clusters, automatically reschedules the evicted <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="noreferrer">Pods</a> to the new node pool as it drains the existing node pool.</p> </blockquote> <p>There is official documentation about migrating your workload: </p> <blockquote> <p>This tutorial demonstrates how to migrate workloads running on a GKE cluster to a new set of nodes within the same cluster without incurring downtime for your application. Such a migration can be useful if you want to migrate your workloads to nodes with a different machine type.</p> <p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool" rel="noreferrer">GKE: Migrating workloads to different machine types</a></em> </p> </blockquote> <p>Please take a look at above guide and let me know if you have any questions in that topic. </p>
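<p>As a quick sketch of those steps with gcloud/kubectl (cluster, pool and machine type names are placeholders; add the --region/--zone flag matching your cluster location):</p> <pre><code># 1. create the new node pool with the machine type you want
gcloud container node-pools create new-pool --cluster=my-cluster --machine-type=n1-standard-4

# 2. stop scheduling onto the old pool, then drain it node by node
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl cordon "$node"
done
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl drain --ignore-daemonsets --delete-local-data "$node"
done

# 3. once the workloads are healthy on the new pool
gcloud container node-pools delete default-pool --cluster=my-cluster
</code></pre>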
Dawid Kruk
<p>Getting error on Kubernetes container, No module named 'requests' even though I installed it using pip and also test multiple Docker images.</p> <p>Docker file:- </p> <pre><code>FROM jfloff/alpine-python:2.7 MAINTAINER "Gaurav Agnihotri" #choosing /usr/src/app as working directory WORKDIR /usr/src/app # Mentioned python module name to run application COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt RUN pip install requests==2.7.0 # Exposing applicaiton on 80 so that it can be accessible on 80 EXPOSE 80 #Copying code to working directory COPY . . #Making default entry as python will launch api.py CMD [ "python", "app-1.py" ] </code></pre> <p>app-1.py </p> <pre><code>#!/usr/bin/env python import random import requests from flask import Flask, request, jsonify app = Flask(__name__) @app.route('/api', methods=['POST']) def api(): user_data = request.get_json() data = user_data['message'] r = requests.post('http://localhost:5000/reverse', json={'message': data }) json_resp = r.json() a = random.uniform(0, 10) return jsonify({"rand": a, "message": json_resp.get("message")}) if __name__ == "__main__": </code></pre>
gaurav agnihotri
<p>Try this, I hope this may help you.</p> <p>Dockerfile:</p> <pre><code>FROM ubuntu:18.10 RUN apt-get update -y &amp;&amp; \ apt-get install -y python-pip python-dev # Set the working directory to /usr/src/app WORKDIR /usr/src/app # Copy the current directory contents into the container at /usr/src/app ADD . /usr/src/app RUN pip install -r requirements.txt ENTRYPOINT [ "python" ] CMD [ "app-1.py" ] </code></pre>
Neda Peyrone
<p>Below is the output from <code>ip route</code> command in one of the worker nodes of Kubernetes cluster (aws based):</p> <pre><code>$ip route default via 10.6.16.1 dev eth0 10.6.16.0/21 dev eth0 proto kernel scope link src 10.6.22.111 111.97.95.0/26 via 10.6.145.224 dev tunl0 proto bird onlink 111.98.108.64/26 via 10.6.144.128 dev tunl0 proto bird onlink 111.98.163.0/26 via 10.6.147.100 dev tunl0 proto bird onlink 111.101.172.128/26 via 10.6.86.141 dev tunl0 proto bird onlink 111.103.57.192/26 via 10.6.17.44 dev eth0 proto bird 111.103.80.128/26 via 10.6.85.178 dev tunl0 proto bird onlink 111.105.231.0/26 via 10.6.23.120 dev eth0 proto bird 111.115.208.128/26 via 10.6.80.11 dev tunl0 proto bird onlink blackhole 111.126.117.128/26 proto bird 111.126.117.129 dev cali8934275ty scope link 111.126.117.132 dev cali983hfsdf4 scope link 111.126.117.140 dev cali443gfby45 scope link </code></pre> <p>I am quite new to Kubernetes and would like to understand a couple of things related to this output and Calico networking in general:</p> <ol> <li>what kind of ip address is 10.6.16.1 if eth0 has IP of 10.6.22.111/21 - is it Internet Gateway ?</li> <li>Another worker node has two pods with the same IP=10.6.145.224 (pods calico-node-74hde и kube-proxy-internal) - how this is working/possible?</li> <li>Why do we need blackhole route?</li> </ol>
Viji
<blockquote> <p>what kind of ip address is 10.6.16.1 if eth0 has IP of 10.6.22.111/21 - is it Internet Gateway ?</p> </blockquote> <p>Yes, you are correct, this is indeed the default (internet) gateway. So, for example, on your local computer the default route would hold the IP of your home router.</p> <hr /> <blockquote> <p>Another worker node has two pods with the same IP=10.6.145.224 (pods calico-node-74hde и kube-proxy-internal) - how this is working/possible?</p> </blockquote> <p>This is possible because they have set <code>hostNetwork: true</code>. Check it yourself by running e.g.:</p> <pre><code>kubectl get po -n kube-system calico-node-74hde -o yaml </code></pre> <p>and look for the <code>hostNetwork</code> field. If this field is set to <code>true</code>, the pod (more specifically the containers within the pod) will not be <em>network isolated</em> and will have access to the host network interface, and this is why these pods have the host IP.</p> <hr /> <blockquote> <p>Why do we need blackhole route?</p> </blockquote> <p>I believe <a href="https://github.com/projectcalico/calico/issues/3498" rel="nofollow noreferrer">this calico issue</a> may give us some answers.</p> <p>I will try to explain it. Imagine a situation where there are 2 pods running and sending data over the network to each other.</p> <p>When one of these pods gets deleted, the other pod may not recognise it and keep sending data to an IP address that does not exist (and because there is no pod, there is also no interface with that address).</p> <p>So what should the node do if it receives a packet with a destination address that no longer exists?</p> <p>Normally it would forward the packet according to the route rules. Now that there is no route rule associated with the pod (that just got deleted), the packet will get sent according to the best match rule. If the blackhole rule exists, the packet will be dropped, but if there is no blackhole, the packet will get forwarded (according to the best match rule) through the default gateway, and you don't usually want this.</p> <hr /> <p>Let me know if it answers your questions.</p>
Matt
<p>I have an issue, I am deploying an application on [hostname]/product/console, but the .css .js files are being requested from [hostname]/product/static, hence they are not being loaded and I get 404.</p> <p>I have tried <code>nginx.ingress.kubernetes.io/rewrite-target:</code> to no avail.</p> <p>I also tried using: <code>nginx.ingress.kubernetes.io/location-snippet: | location = /product/console/ { proxy_pass http://[hostname]/product/static/; }</code></p> <p>But the latter does not seem to be picked up by the nginx controller at all. This is my ingress.yaml</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-resource annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/enable-rewrite-log: "true" # nginx.ingress.kubernetes.io/rewrite-target: /$1 nginx.ingress.kubernetes.io/location-snippet: | location = /product/console/ { proxy_pass http://[hostname]/product/static/; } spec: rules: - host: {{.Values.HOSTNAME}} http: paths: - path: /product/console backend: serviceName: product-svc servicePort: prod ##25022 - path: /product/ backend: serviceName: product-svc servicePort: prod #25022 </code></pre> <p>-- Can I ask for some pointers? I have been trying to google this out and tried some different variations, but I seem to be doing something wrong. Thanks!</p>
voidcraft
<p><strong>TL;DR</strong></p> <p>To diagnose the reason why you get error 404 you can check in <code>nginx-ingress</code> controller pod logs. You can do it with below command: </p> <p><code>kubectl logs -n ingress-nginx INGRESS_NGINX_CONTROLLER_POD_NAME</code></p> <p>You should get output similar to this (depending on your use case): </p> <pre class="lang-sh prettyprint-override"><code>CLIENT_IP - - [12/May/2020:11:06:56 +0000] "GET / HTTP/1.1" 200 238 "-" "REDACTED" 430 0.003 [default-ubuntu-service-ubuntu-port] [] 10.48.0.13:8080 276 0.003 200 CLIENT_IP - - [12/May/2020:11:06:56 +0000] "GET /assets/styles/style.css HTTP/1.1" 200 22 "http://SERVER_IP/" "REDACTED" 348 0.002 [default-ubuntu-service-ubuntu-port] [] 10.48.0.13:8080 22 0.002 200 </code></pre> <p>With above logs you can check if the requests are handled properly by <code>nginx-ingress</code> controller and where they are sent. </p> <p>Also you can check the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="noreferrer">Kubernetes.github.io: ingress-nginx: Ingress-path-matching</a>. It's a document describing how <code>Ingress</code> matches paths with regular expressions. </p> <hr> <p>You can experiment with <code>Ingress</code>, by following below example: </p> <ul> <li>Deploy <code>nginx-ingress</code> controller</li> <li>Create a <code>pod</code> and a <code>service</code></li> <li>Run example application </li> <li>Create an <code>Ingress</code> resource </li> <li>Test</li> <li>Rewrite example </li> </ul> <h3>Deploy <code>nginx-ingress</code> controller</h3> <p>You can deploy your <code>nginx-ingress</code> controller by following official documentation: </p> <p><a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">Kubernetes.github.io: Ingress-nginx</a></p> <h3>Create a <code>pod</code> and a <code>service</code></h3> <p>Below is an example definition of a pod and a service attached to it which will be used for testing purposes: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: ubuntu-deployment spec: selector: matchLabels: app: ubuntu replicas: 1 template: metadata: labels: app: ubuntu spec: containers: - name: ubuntu image: ubuntu command: - sleep - "infinity" --- apiVersion: v1 kind: Service metadata: name: ubuntu-service spec: selector: app: ubuntu ports: - name: ubuntu-port port: 8080 targetPort: 8080 nodePort: 30080 type: NodePort </code></pre> <h3>Example page</h3> <p>I created a basic <code>index.html</code> with one <code>css</code> to simulate the request process. You need to create this files inside of a pod (manually or copy them to pod). 
</p> <p>The file tree looks like this: </p> <ul> <li><strong>index.html</strong></li> <li>assets/styles/<strong>style.css</strong></li> </ul> <p><strong>index.html</strong>: </p> <pre class="lang-html prettyprint-override"><code>&lt;!DOCTYPE html&gt; &lt;html lang="en"&gt; &lt;head&gt; &lt;meta charset="UTF-8"&gt; &lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt; &lt;link rel="stylesheet" href="assets/styles/style.css"&gt; &lt;title&gt;Document&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Hi&lt;/h1&gt; &lt;/body&gt; </code></pre> <p>Please take a specific look on a line: </p> <pre class="lang-html prettyprint-override"><code> &lt;link rel="stylesheet" href="assets/styles/style.css"&gt; </code></pre> <p><strong>style.css</strong>:</p> <pre class="lang-css prettyprint-override"><code>h1 { color: red; } </code></pre> <p>You can run above page with <code>python</code>: </p> <ul> <li><code>$ apt update &amp;&amp; apt install -y python3</code></li> <li><code>$ python3 -m http.server 8080</code> where the <code>index.html</code> and <code>assets</code> folder is stored. </li> </ul> <h2>Create an <code>Ingress</code> resource</h2> <p>Below is an example <code>Ingress</code> resource configured to use <code>nginx-ingress</code> controller: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress-example annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: http: paths: - path: / backend: serviceName: ubuntu-service servicePort: ubuntu-port </code></pre> <p>After applying above resource you can start to test. </p> <h3>Test</h3> <p>You can go to your browser and enter the external IP address associated with your <code>Ingress</code> resource. </p> <p><strong>As I said above you can check the logs of <code>nginx-ingress</code> controller pod to check how your controller is handling request.</strong></p> <p>If you run command mentioned earlier <code>python3 -m http.server 8080</code> you will get logs too: </p> <pre class="lang-sh prettyprint-override"><code>$ python3 -m http.server 8080 Serving HTTP on 0.0.0.0 port 8080 (http://0.0.0.0:8080/) ... 10.48.0.16 - - [12/May/2020 11:06:56] "GET / HTTP/1.1" 200 - 10.48.0.16 - - [12/May/2020 11:06:56] "GET /assets/styles/style.css HTTP/1.1" 200 - </code></pre> <h3>Rewrite example</h3> <p>I've edited the <code>Ingress</code> resource to show you an example of a path rewrite: </p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress-example annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - host: http: paths: - path: /product/(.*) backend: serviceName: ubuntu-service servicePort: ubuntu-port </code></pre> <p>Changes were made to lines: </p> <pre class="lang-yaml prettyprint-override"><code> nginx.ingress.kubernetes.io/rewrite-target: /$1 </code></pre> <p>and: </p> <pre class="lang-yaml prettyprint-override"><code> - path: /product/(.*) </code></pre> <p>Steps: </p> <ul> <li>The browser sent: <code>/product/</code></li> <li>Controller got <code>/product/</code> and had it rewritten to <code>/</code></li> <li>Pod got <code>/</code> from a controller. 
</li> </ul> <p>Logs from the<code>nginx-ingress</code> controller: </p> <pre class="lang-sh prettyprint-override"><code>CLIENT_IP - - [12/May/2020:11:33:23 +0000] "GET /product/ HTTP/1.1" 200 228 "-" "REDACTED" 438 0.002 [default-ubuntu-service-ubuntu-port] [] 10.48.0.13:8080 276 0.001 200 fb0d95e7253335fc82cc84f70348683a CLIENT_IP - - [12/May/2020:11:33:23 +0000] "GET /product/assets/styles/style.css HTTP/1.1" 200 22 "http://SERVER_IP/product/" "REDACTED" 364 0.002 [default-ubuntu-service-ubuntu-port] [] 10.48.0.13:8080 22 0.002 200 </code></pre> <p>Logs from the pod: </p> <pre class="lang-sh prettyprint-override"><code>10.48.0.16 - - [12/May/2020 11:33:23] "GET / HTTP/1.1" 200 - 10.48.0.16 - - [12/May/2020 11:33:23] "GET /assets/styles/style.css HTTP/1.1" 200 - </code></pre> <p>Please let me know if you have any questions in that. </p>
Dawid Kruk
<p>If I only want to use K8s master to manage daemonsets running in worker nodes (no load balancing, no HTTP request processing, each worker node runs the same pods), is the kube-proxy installation necessary? I only want to use kubernetes to make sure that each worker node is running one copy of the container specified in the daemonset manifest.</p> <p>I am hoping to save disk space and not install images onto worker nodes unnecessarily.</p>
Isabel
<p>As mentioned on <a href="https://medium.com/faun/kubernetes-without-kube-proxy-1c5d25786e18" rel="nofollow noreferrer">medium</a></p> <blockquote> <p>One of the most critical (if not the most) is kubernetes networking. There are many layers for kubernetes networking — pod networking, service IP, external IP cluster IP etc. Somewhere along this, kube-proxy plays an important role.</p> <p>What is eBPF ? A fully deep and technical understanding is beyond the scope of this experiment and even beyond the scope of my own skillset but in simplistic terms eBPF (extended Berkeley Packet Filter) is a virtual machine which runs in the kernel of a linux machine. It is capable of running natively just-in-time compiled “bpf programs” which have access to certain kernel functions. In other words a user can inject these programs to run in the kernel on demand in runtime . These programs follow a specific instruction set offered by bpf and have certain rules which they need to follow and it will run only programs which are safe to run . This is unlike linux modules which also run in the kernel but can potential cause issues to the kernel if not properly written . I will defer the details of these to the plethora of articles on BPF. But this virtual machine can be attached to any kernel subsystem like a network device and the BPF program is executed in response to events on those subsystems.One of the oldest and most popular linux tools — tcpdump utilizes BPF. I am tempted to say that new technologies like smart nics etc utilize BPF but its just a wild guess on my part. Replacing kube-proxy with CNI drivers utilizing eBPF</p> <p><strong>The cilium project utilizes eBPF for its network policy enforcement and also offers a kube-proxy replacement</strong> . Project Calico also has a tech preview using eBPF but for this experiment we will just use Cilium.</p> </blockquote> <p>So AFAIK it´s neccesary for kubernetes to work, if you don´t want to use kube-proxy maybe you could try an alernative like <a href="https://docs.cilium.io/en/latest/gettingstarted/kubeproxy-free/" rel="nofollow noreferrer">cilium</a>, take a look at above medium tutorial about it. Worth mentioning it´s not lighter than kube-proxy, it´s 147 MB.</p>
Jakub
<p>Install Rancher 2.x HA cluster flow the <a href="https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/" rel="nofollow noreferrer">offical document</a>. But I can't install it without public DNS - hostname. Is there any way to avoid this? I try to use /etc/hosts file but it seems like there an issue with agent docker doesn't get config from custom DNS.<br> I want to access the load-balancing cluster via IP, not via public DNS.</p>
Đinh Anh Huy
<p>Indeed, in a standard installation of Kubernetes, access to the API is done over HTTPS, and you need a certificate.</p> <p>You can have a look at this doc: <a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/" rel="nofollow noreferrer">Controlling Access to the Kubernetes API</a>.</p> <p>If your goal is just running a lab, and you do not have a DNS server that you control, maybe you can use <a href="http://xip.io/" rel="nofollow noreferrer">xip.io</a>.</p>
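<p>For example, with a load balancer IP of <code>10.0.0.1</code>, a name like <code>rancher.10.0.0.1.xip.io</code> resolves to <code>10.0.0.1</code>, so you can give Rancher a hostname that simply encodes the IP you wanted to use, without owning a public DNS name.</p>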
Stéphane Beuret
<p>I am trying to set up a metalLB external load balancer with the intention to access an nginx pod from outside the cluster using a publicly browseable IP address. I have follwed all the steps provided in <a href="https://www.youtube.com/watch?v=xYiYIjlAgHY" rel="nofollow noreferrer">here</a>. I have managed to get the External-IP for service/nginx type:LoadBalancer, but when I try to browse the IP address, I get nothing and it says &quot;This site can’t be reached&quot;.</p> <p><a href="https://i.stack.imgur.com/SltRK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SltRK.png" alt="running kubectl get nodes -o wide shows" /></a></p> <p><a href="https://i.stack.imgur.com/CwsnY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CwsnY.png" alt="enter image description here" /></a></p> <p>I wonder whether is it even allowed on the docker-desktop win10 environment to access the k8s cluster resources from outside with a public IP address?</p>
Muhammad Arslan Akhtar
<p>Kubernetes provided by Docker Desktop is running in a VM and all network traffic is NATed to that virtual machine. Even if you had everything properly configured on your network for using layer 2 and DHCP, your work or home router would not even know how to reach Docker, which makes any services available only on localhost.</p> <p>To make it work you could try using minikube with the VirtualBox driver and set the VM's network interface to bridged mode, so that the minikube VM is visible to your router as a standalone instance and therefore ARP requests can reach minikube.</p>
Matt
<p>I configure Ingress on google Kubernetes engine. I am new on ingress but as i understood ingress can serve different Loadbalancers and different LBs should be differently configured. </p> <p>I have started with a simple ingress config on GKE :</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: basic-ingress spec: rules: - http: paths: - path: /* backend: serviceName: web-np servicePort: 8080 - path: /v2/keys backend: serviceName: etcd-np servicePort: 2379 </code></pre> <p>And it works fine so I have 2 different NodePort services web-np and <code>etcd-np</code> . But now I need to extend this logic with some rewrite rules so that request that points to <code>/service1</code> - will be redirected to the other <code>service1-np</code> service but before <code>/service1/hello.html</code> must be replaced to <code>/hello.html</code>. That's why I have the following questions: </p> <ul> <li>How can I configure rewrite in ingress and if it is possible with default load balancer.</li> <li>What is default load balancer on GKE.</li> <li>Where can I find a list of all annotations to it. I have thought that the full list is on <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/</a> but there is a completly different list and there is no <code>kubernetes.io/ingress.global-static-ip-name</code> annotation that is widely used in google examples. </li> </ul>
Oleg
<blockquote> <p><code>Ingress</code> - API object that manages external access to the services in a cluster, typically HTTP.</p> <p>Ingress may provide load balancing, SSL termination and name-based virtual hosting.</p> <p> <em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Kubernetes.io: Ingress</a></em> </p> </blockquote> <p>Kubernetes can have multiple <code>Ingress</code> controllers. This controllers are different from each other. The <code>Ingress</code> controllers mentioned by you in this particular question are:</p> <ul> <li><code>Ingress-GCE</code> - a default <code>Ingress</code> resource for <code>GKE</code> cluster: <ul> <li><a href="https://github.com/kubernetes/ingress-gce" rel="noreferrer">Github.com: Kubernetes: Ingress GCE</a></li> </ul></li> <li><code>Ingress-nginx</code> - an alternative <code>Ingress</code> controller which can be deployed to your <code>GKE</code> cluster: <ul> <li><a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">Github.com: Kubernetes: Ingress-nginx</a></li> </ul></li> </ul> <p><code>Ingress</code> configuration you pasted will use the <code>Ingress-GCE</code> controller. If you want to switch to <code>Ingress-nginx</code> one, you will need to deploy it and set an annotation like: </p> <ul> <li><code>kubernetes.io/ingress.class: "nginx"</code></li> </ul> <hr> <blockquote> <p>How can I configure rewrite in ingress and if it is possible with default load balancer.</p> </blockquote> <p>There is an ongoing feature request to support rewrites with <code>Ingress-GCE</code> here: <a href="https://github.com/kubernetes/ingress-gce/issues/109" rel="noreferrer">Github.com: Ingress-GCE: Rewrite</a>.</p> <p><strong>You can use <code>Ingress-nginx</code> to have support for rewrites.</strong> There is an official documentation about deploying it: <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">Kubernetes.github.io: Ingress-nginx: Deploy</a> </p> <p>For more resources about rewrites you can use: </p> <ul> <li><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">Kubernetes.github.io: Ingress nginx: Examples: Rewrite</a></li> <li><a href="https://stackoverflow.com/questions/61541812/ingress-nginx-how-to-serve-assets-to-application/61751019#61751019">Stackoverflow.com: Ingress nginx how to serve assests to application</a> - this is an answer which shows an example on how to configure a playground for experimenting with rewrites </li> </ul> <hr> <blockquote> <p>What is default load balancer on GKE.</p> </blockquote> <p>If you create an <code>Ingress</code> resource with a default <code>Ingress-GCE</code> option you will create a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="noreferrer">L7 HTTP&amp;HTTPS LoadBalancer</a>. </p> <p>If you create a service of type <code>LoadBalancer</code> in <code>GKE</code> you will create an <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#creating_a_service_of_type_loadbalancer" rel="noreferrer">L4 Network Load Balancer</a></p> <p>If you deploy an <code>Ingress-nginx</code> controller in <code>GKE</code> cluster you will create a L4 Network Loadbalancer pointing to the <code>Ingress-nginx</code> controller which after that will route the traffic accordingly to your <code>Ingress</code> definition. 
If you are willing to use <code>Ingress-nginx</code> you will need to specify:</p> <ul> <li><code>kubernetes.io/ingress.class: "nginx"</code></li> </ul> <p>in your <code>Ingress</code> definition. </p> <p>Please take a look on this article: <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="noreferrer">Medium.com: Google Cloud: Kubernetes Nodeport vs Loadbalancer vs Ingress</a></p> <hr> <blockquote> <p>Where can I find a list of all annotations to it. I have thought that the full list is on <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/</a> but there is a completly different list and there is no kubernetes.io/ingress.global-static-ip-name annotation that is widely used in google examples.</p> </blockquote> <p><strong>The link that you provided with annotations is specifically for <code>Ingress-nginx</code>. This annotations will not work with <code>Ingress-GCE</code></strong>. </p> <p>The annotations used in <code>GCP</code> examples are specific to <code>Ingress-GCE</code>. </p> <p>You can create a Feature Request for a list of available annotations for <code>Ingress-GCE</code> on <a href="https://issuetracker.google.com" rel="noreferrer">Issuetracker.google.com</a>. </p>
Dawid Kruk
<p>I'm trying to run percona xtradb cluster. The output from the percona server is as followings:</p> <pre><code>mysqld: [Warning] World-writable config file '/etc/mysql/my.cnf' is ignored. 2019-06-10T07:24:28.000875Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). 2019-06-10T07:24:28.000942Z 0 [Warning] WSREP: Node is running in bootstrap/initialize mode. Disabling pxc_strict_mode checks 2019-06-10T07:24:28.187210Z 0 [Warning] InnoDB: New log files created, LSN=45790 2019-06-10T07:24:28.218407Z 0 [Warning] InnoDB: Creating foreign key constraint system tables. 2019-06-10T07:24:28.273235Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: c8208edd-8b50-11e9-92e6-0242ac110005. 2019-06-10T07:24:28.274902Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened. 2019-06-10T07:24:28.421386Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option. Finished --initialize-insecure + echo 'Finished --initialize-insecure' + pid=60 + mysql=(mysql --protocol=socket -uroot) + for i in '{30..0}' + echo 'SELECT 1' + mysql --protocol=socket -uroot + mysqld --user=mysql --datadir=/var/lib/mysql/ --skip-networking + echo 'MySQL init process in progress...' + sleep 1 MySQL init process in progress... mysqld: [Warning] World-writable config file '/etc/mysql/my.cnf' is ignored. 2019-06-10T07:24:35.939976Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). 2019-06-10T07:24:35.941003Z 0 [Warning] WSREP: Node is not a cluster node. Disabling pxc_strict_mode 2019-06-10T07:24:35.941026Z 0 [Note] mysqld (mysqld 5.7.25-28-57) starting as process 60 ... 2019-06-10T07:24:35.943891Z 0 [Note] InnoDB: PUNCH HOLE support available 2019-06-10T07:24:35.943931Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2019-06-10T07:24:35.943934Z 0 [Note] InnoDB: Uses event mutexes 2019-06-10T07:24:35.943937Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier 2019-06-10T07:24:35.943939Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8 2019-06-10T07:24:35.943941Z 0 [Note] InnoDB: Using Linux native AIO 2019-06-10T07:24:35.944094Z 0 [Note] InnoDB: Number of pools: 1 2019-06-10T07:24:35.944167Z 0 [Note] InnoDB: Using CPU crc32 instructions 2019-06-10T07:24:35.945273Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M 2019-06-10T07:24:35.949427Z 0 [Note] InnoDB: Completed initialization of buffer pool 2019-06-10T07:24:35.950944Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority(). 2019-06-10T07:24:35.961258Z 0 [ERROR] InnoDB: The innodb_system data file 'ibdata1' must be writable 2019-06-10T07:24:35.961301Z 0 [ERROR] InnoDB: The innodb_system data file 'ibdata1' must be writable 2019-06-10T07:24:35.961308Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error MySQL init process in progress... 2019-06-10T07:24:37.066708Z 0 [ERROR] Plugin 'InnoDB' init function returned error. 2019-06-10T07:24:37.067259Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 
2019-06-10T07:24:37.067993Z 0 [ERROR] Failed to initialize builtin plugins. 2019-06-10T07:24:37.068107Z 0 [ERROR] Aborting 2019-06-10T07:24:37.068125Z 0 [Note] Binlog end 2019-06-10T07:24:37.068278Z 0 [Note] Shutting down plugin 'CSV' 2019-06-10T07:24:37.068293Z 0 [Note] Shutting down plugin 'MyISAM' 2019-06-10T07:24:37.071449Z 0 [Note] mysqld: Shutdown complete + for i in '{30..0}' + mysql --protocol=socket -uroot + echo 'SELECT 1' + echo 'MySQL init process in progress...' + sleep 1 &lt;30 times same for the above for loop&gt; + exit 1 </code></pre> <p>The solutions i got are not helpful.</p> <p>Oh, the docker file is:</p> <pre><code>FROM debian:jessie MAINTAINER Percona Development &lt;[email protected]&gt; RUN groupadd -g 1001 mysql RUN useradd -u 1001 -r -g 1001 -s /sbin/nologin \ -c "Default Application User" mysql RUN apt-get update -qq &amp;&amp; apt-get install -qqy --no-install-recommends \ apt-transport-https ca-certificates \ pwgen wget \ &amp;&amp; rm -rf /var/lib/apt/lists/* RUN wget https://repo.percona.com/apt/percona-release_0.1-6.jessie_all.deb \ &amp;&amp; dpkg -i percona-release_0.1-6.jessie_all.deb # the "/var/lib/mysql" stuff here is because the mysql-server postinst doesn't have an explicit way to disable the mysql_install_db codepath besides having a database already "configured" (ie, stuff in /var/lib/mysql/mysql) # also, we set debconf keys to make APT a little quieter ENV DEBIAN_FRONTEND noninteractive RUN apt-get update -qq \ &amp;&amp; apt-get install -qqy --force-yes \ percona-xtradb-cluster-57 curl \ &amp;&amp; rm -rf /var/lib/apt/lists/* \ # comment out any "user" entires in the MySQL config ("docker-entrypoint.sh" or "--user" will handle user switching) &amp;&amp; sed -ri 's/^user\s/#&amp;/' /etc/mysql/my.cnf \ # purge and re-create /var/lib/mysql with appropriate ownership &amp;&amp; rm -rf /var/lib/mysql \ &amp;&amp; mkdir -p /var/lib/mysql /var/log/mysql /var/run/mysqld \ # &amp;&amp; chown -R mysql:mysql /var/lib/mysql /var/run/mysqld \ # &amp;&amp; chown -R 1001:1001 /etc/mysql/ /var/log/mysql /var/lib/mysql /var/run/mysqld \ &amp;&amp; chown -R mysql:mysql /etc/mysql/ /var/log/mysql /var/lib/mysql /var/run/mysqld \ # &amp;&amp; chmod -R g=u /etc/mysql/ /var/log/mysql /var/lib/mysql # ensure that /var/run/mysqld (used for socket and lock files) is writable regardless of the UID our mysqld instance ends up having at runtime &amp;&amp; chmod -R 777 /etc/mysql/ /var/log/mysql /var/lib/mysql /var/run/mysqld RUN sed -ri 's/^bind-address/#&amp;/' /etc/mysql/my.cnf # &amp;&amp; echo 'skip-host-cache\nskip-name-resolve' | awk '{ print } $1 == "[mysqld]" &amp;&amp; c == 0 { c = 1; system("cat") }' /etc/mysql/my.cnf &gt; /tmp/my.cnf \ # &amp;&amp; mv /tmp/my.cnf /etc/mysql/my.cnf VOLUME ["/var/lib/mysql", "/var/log/mysql"] RUN sed -ri 's/^log_error/#&amp;/' /etc/mysql/my.cnf ADD node.cnf /etc/mysql/conf.d/node.cnf RUN echo '!include /etc/mysql/conf.d/node.cnf' &gt;&gt; /etc/mysql/my.cnf COPY entrypoint.sh /entrypoint.sh COPY dockerdir / #COPY jq /usr/bin/jq #COPY clustercheckcron /usr/bin/clustercheckcron #RUN chmod a+x /usr/bin/jq #RUN chmod a+x /usr/bin/clustercheckcron EXPOSE 3306 4567 4568 LABEL vendor=Percona LABEL com.percona.package="Percona XtraDB Cluster" LABEL com.percona.version="5.7" ENTRYPOINT ["/entrypoint.sh"] EXPOSE 3306 USER 1001 CMD [""] </code></pre>
Shudipta Sharma
<p>That means your MySQL service is already running</p>
AJ Mndz