Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>As you all know, if we use Docker to build an image inside a container, we have to mount <code>-v /var/run/docker.sock:/var/run/docker.sock</code>. How does nerdctl handle that with containerd?
I am planning to use nerdctl instead of kaniko, because the changes to my workflows would be heavy if I used kaniko.</p>
| Rajendar Talatam | <p>Though it's not recommended, you can do the same thing by mounting containerd's socket.</p>
<pre><code>-v /var/run/containerd/containerd.sock:/var/run/containerd/containerd.sock
</code></pre>
<p>You also need to install nerdctl in the container in some way (copying the binaries in the Dockerfile, or mounting a directory that contains nerdctl.tar.gz and extracting it just as you would on the host).</p>
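<p>A rough sketch of the idea (the image name is a placeholder, and the commands assume nerdctl is already present in the image):</p>
<pre><code># hypothetical image name; it only needs to contain the nerdctl binary
nerdctl run -it --rm \
  -v /var/run/containerd/containerd.sock:/var/run/containerd/containerd.sock \
  my-image-with-nerdctl:latest \
  nerdctl --namespace k8s.io ps
# building images this way additionally requires a reachable buildkitd socket,
# which would have to be mounted into the container as well
</code></pre>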
| Daigo |
<p>I'm running an app on Kubernetes / GKE.</p>
<p>I have a bunch of devices without a public IP. I need to access SSH and VNC of those devices from the app.</p>
<p>The initial thought was to run an OpenVPN server within the cluster and have the devices connect, but then I hit the problem:</p>
<p>There doesn't seem to be any elegant / idiomatic way to route traffic from the app to the VPN clients.</p>
<p>Basically, all I need is to be able to tell <code>route 10.8.0.0/24 via vpn-pod</code></p>
<p>Possible solutions I've found:</p>
<ul>
<li><p>Modifying routes on the nodes. I'd like to keep nodes ephemeral and have everything in K8s manifests only.</p></li>
<li><p><code>DaemonSet</code> to add the routes on nodes with K8s manifests. It's not clear how to keep track of OpenVPN pod IP changes, however.</p></li>
<li><p>Istio. Seems like an overkill, and I wasn't able to find a solution to my problem in the documentation. L3 routing doesn't seem to be supported, so it would have to involve port mapping.</p></li>
<li><p>Calico. It is natively supported at GKE and it does support L3 routing, but I would like to avoid introducing such far-reaching changes for something that could have been solved with a single custom route.</p></li>
<li><p>OpenVPN client sidecar. Would work quite elegantly and it wouldn't matter where and how the VPN server is hosted, as long as the clients are allowed to communicate with each other. However, I'd like to isolate the clients and I might need to access the clients from different pods, meaning having to place the sidecar in multiple places, polluting the deployments. The isolation could be achieved by separating clients into classes in different IP ranges.</p></li>
<li><p>Routes within GCP / GKE itself. They only allow to specify a node as the next hop. This also means that both the app and the VPN server must run within GCP.</p></li>
</ul>
<p>I'm currently leaning towards running the OpenVPN server on a bare-bones VM and using the GCP routes. It works, I can ping the VPN clients from the K8s app, but it still seems brittle and hard-wired.</p>
<p>However, only the sidecar solution provides a way to fully separate the concerns.</p>
<p>Is there an idiomatic solution to accessing the pod-private network from other pods?</p>
| amq | <p>The solution you devised - with the OpenVPN server acting as a gateway for multiple devices (I assume there will be dozens or even hundreds of simultaneous connections) - is the best way to do it.</p>
<p>GCP's VPN unfortunately doesn't offer the needed functionality (just site-to-site connections), so we can't use it.</p>
<p>You could simplify your solution by putting OpenVPN in GCP (in the same VPC network as your application) so your app can talk directly to the server and then to the clients. I believe that by doing this you would get rid of the "brittle and hardwired" part.</p>
<p>You will have to decide which solution works best for you - OpenVPN inside or outside of GCP.</p>
<p>In my opinion, hosting the OpenVPN server in GCP will be more elegant and simple, but not necessarily cheaper.</p>
<p>Regardless of the solution, you can put the clients in different IP ranges, but I would go for configuring some iptables rules (on the OpenVPN server) to block communication and allow clients to reach only a few IPs in the network. That way, if in the future you need some clients to communicate, it is just a matter of iptables configuration.</p>
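<p>A minimal sketch of such rules, assuming the VPN interface is <code>tun0</code>, the clients live in <code>10.8.0.0/24</code> (as in the question) and the app's pod range is <code>10.4.0.0/14</code> (an assumption):</p>
<pre><code># allow return traffic for connections the app has already opened
iptables -A FORWARD -i tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow the app's range to reach the VPN clients
iptables -A FORWARD -s 10.4.0.0/14 -d 10.8.0.0/24 -j ACCEPT
# block everything else between VPN clients
iptables -A FORWARD -i tun0 -o tun0 -j DROP
</code></pre>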
| Wojtek_B |
<p>I am trying to host a simple static HTML site on Kubernetes and was able to get everything working as expected, but when I used a <strong>load balancer</strong> in the service it is exposed to the public.</p>
<p>I tried to use the <strong>nodeport</strong> option, but our nodes don't have a public IP to access it.</p>
<p>With just the cluster IP I am unable to access it as well.</p>
<p>How do we host the site only internally, and which approach is the right one?</p>
<p>I started to look into ingress controllers but could not find a way to install and use them.</p>
<p>Any help is appreciated.</p>
| mamidala86 | <p>Posting this answer as a community wiki to give more of a baseline approach and to point to the possible solutions.</p>
<p>Feel free to edit/expand.</p>
<hr />
<p>Answering the question from the title:</p>
<blockquote>
<p>How do I host a simple html code on kubernetes within our network?</p>
</blockquote>
<p>This question is mainly related to <code>Services</code> and how they work. Assuming that you already have a <code>Deployment</code> with it, the best course of action would be to check the documentation of your cloud provider on the support of various <code>Services</code> and how you can connect to them. In general you would connect to this <code>Deployment</code> (with some <code>html</code> code) either by:</p>
<ul>
<li><code>Service</code> of type <code>NodePort</code> - port (<code>30000</code>-<code>32767</code>) on each <code>Node</code> will be opened for accessing the <code>Deployment</code></li>
<li><code>Service</code> of type <code>LoadBalancer</code> - IP address will be requested for the <code>Service</code> which you can access to get to the <code>Deployment</code> (can be <strong>internal</strong> or external)</li>
</ul>
<hr />
<p>As this topic is related to cloud-managed solutions and its requirement is to connect to the <code>Service</code> from the network hosted by a cloud provider, I reckon one of the solutions would be to look for objects like:</p>
<ul>
<li><em>Internal Ingress</em></li>
<li><em>Internal LoadBalancer</em></li>
</ul>
<p>These objects will be created in a way that you can access them only from the internal network (which I'm assuming you are connected to with your <code>VPN</code>).</p>
<p>Examples of such implementations across some cloud providers:</p>
<ul>
<li><code>GKE</code>: <em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: How to: Internal load balancing</a></em></li>
<li><code>EKS</code>: <em><a href="https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html" rel="nofollow noreferrer">Docs.aws.amazon.com: Latest: Userguide: Network load balancing</a></em></li>
<li><code>AKS</code>: <em><a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb" rel="nofollow noreferrer">Docs.microsoft.com: Azure: AKS: Internal lb</a></em></li>
</ul>
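<p>For example, on <code>GKE</code> an internal <code>LoadBalancer</code> <code>Service</code> could look roughly like the sketch below (the selector is an assumption, and the exact annotation name depends on your GKE version, so check the linked docs):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: html-site-internal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # older clusters use cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: html-site    # must match the labels of your Deployment's Pods
  ports:
    - port: 80
      targetPort: 80
</code></pre>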
<blockquote>
<p>A side note!</p>
<p>You can use <code>Service</code> of type <code>Loadbalancer</code> (internal one) to be the entrypoint for your <code>Ingress</code> controller like <code>ingress-nginx</code>:</p>
<ul>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: Deploy</a></em></li>
</ul>
</blockquote>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Service networking: Service</a></em></li>
</ul>
| Dawid Kruk |
<p>Folks,
when trying to increase a GKE cluster from 1 to 3 nodes running in separate zones (us-central1-a, b, c), the following seems apparent:</p>
<p>Pods scheduled on new nodes cannot access resources on the internet, i.e. they are not able to connect to Stripe APIs, etc. (potentially kube-dns related; I have not tested traffic attempting to leave without a DNS lookup).</p>
<p>Similarly, I am not able to route between pods in K8s as expected, i.e. it seems cross-AZ calls could be failing. When testing with OpenVPN, I am unable to connect to pods scheduled on new nodes.</p>
<p>A separate issue I noticed is that the metrics server seems wonky: <code>kubectl top nodes</code> shows unknown for the new nodes.</p>
<p>At the time of writing, the master k8s version is <code>1.15.11-gke.9</code>.</p>
<p>The settings I am paying attention to:</p>
<pre><code>VPC-native (alias IP) - disabled
Intranode visibility - disabled
</code></pre>
<p>gcloud container clusters describe cluster-1 --zone us-central1-a</p>
<pre><code>clusterIpv4Cidr: 10.8.0.0/14
createTime: '2017-10-14T23:44:43+00:00'
currentMasterVersion: 1.15.11-gke.9
currentNodeCount: 1
currentNodeVersion: 1.15.11-gke.9
endpoint: 35.192.211.67
initialClusterVersion: 1.7.8
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/skilful-frame-180217/zones/us-central1-a/instanceGroupManagers/gke-cluster-1-default-pool-ff24932a-grp
ipAllocationPolicy: {}
labelFingerprint: a9dc16a7
legacyAbac:
enabled: true
location: us-central1-a
locations:
- us-central1-a
loggingService: none
....
masterAuthorizedNetworksConfig: {}
monitoringService: none
name: cluster-1
network: default
networkConfig:
network: .../global/networks/default
subnetwork: .../regions/us-central1/subnetworks/default
networkPolicy:
provider: CALICO
nodeConfig:
diskSizeGb: 100
diskType: pd-standard
imageType: COS
machineType: n1-standard-2
...
nodeIpv4CidrSize: 24
nodePools:
- autoscaling: {}
config:
diskSizeGb: 100
diskType: pd-standard
imageType: COS
machineType: n1-standard-2
...
initialNodeCount: 1
locations:
- us-central1-a
management:
autoRepair: true
autoUpgrade: true
name: default-pool
podIpv4CidrSize: 24
status: RUNNING
version: 1.15.11-gke.9
servicesIpv4Cidr: 10.11.240.0/20
status: RUNNING
subnetwork: default
zone: us-central1-a
</code></pre>
<p>The next troubleshooting step is creating a new pool and migrating to it. Maybe the answer is staring me right in the face... could it be <code>nodeIpv4CidrSize</code> being a /24?</p>
<p>Thanks!</p>
| Cmag | <ul>
<li>In your question, the description of your cluster has the following Network Policy:</li>
</ul>
<pre><code>name: cluster-1
network: default
networkConfig:
network: .../global/networks/default
subnetwork: .../regions/us-central1/subnetworks/default
networkPolicy:
provider: CALICO
</code></pre>
<ul>
<li>I deployed a cluster as similar as I could:</li>
</ul>
<pre><code>gcloud beta container --project "PROJECT_NAME" clusters create "cluster-1" \
--zone "us-central1-a" \
--no-enable-basic-auth \
--cluster-version "1.15.11-gke.9" \
--machine-type "n1-standard-1" \
--image-type "COS" \
--disk-type "pd-standard" \
--disk-size "100" \
--metadata disable-legacy-endpoints=true \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--num-nodes "1" \
--no-enable-ip-alias \
--network "projects/owilliam/global/networks/default" \
--subnetwork "projects/owilliam/regions/us-central1/subnetworks/default" \
--enable-network-policy \
--no-enable-master-authorized-networks \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--enable-autoupgrade \
--enable-autorepair
</code></pre>
<ul>
<li>After that I got the same configuration as yours; I'll point out two parts:</li>
</ul>
<pre><code>addonsConfig:
networkPolicyConfig: {}
...
name: cluster-1
network: default
networkConfig:
network: projects/owilliam/global/networks/default
subnetwork: projects/owilliam/regions/us-central1/subnetworks/default
networkPolicy:
enabled: true
provider: CALICO
...
</code></pre>
<ul>
<li>In the comments you mention "in the UI, it says network policy is disabled...is there a command to drop calico?". Then I gave you the command, for which you got the error stating that <code>Network Policy Addon is not Enabled</code>.</li>
</ul>
<p>Which is weird, because it's applied but not enabled. I <code>DISABLED</code> it on my cluster and look:</p>
<pre><code>addonsConfig:
networkPolicyConfig:
disabled: true
...
name: cluster-1
network: default
networkConfig:
network: projects/owilliam/global/networks/default
subnetwork: projects/owilliam/regions/us-central1/subnetworks/default
nodeConfig:
...
</code></pre>
<ul>
<li><p><code>NetworkPolicyConfig</code> went from <code>{}</code> to <code>disabled: true</code> and the section <code>NetworkPolicy</code> above <code>nodeConfig</code> is now gone. So, I suggest you enable and disable it again to see if it updates the proper resources and fixes your network policy issue. Here is what we will do:</p></li>
<li><p>If your cluster is not in production, I'd suggest you resize it back to 1, make the changes and then scale up again; the update will be quicker. If it is in production, leave it as it is, but it might take longer depending on your pod disruption budget. (<code>default-pool</code> is the name of my cluster's pool.) I'll resize it in my example:</p></li>
</ul>
<pre><code>$ gcloud container clusters resize cluster-1 --node-pool default-pool --num-nodes 1
Do you want to continue (Y/n)? y
Resizing cluster-1...done.
</code></pre>
<ul>
<li>Then enable the network policy addon itself (this will not activate it, only make it available):</li>
</ul>
<pre><code>$ gcloud container clusters update cluster-1 --update-addons=NetworkPolicy=ENABLED
Updating cluster-1...done.
</code></pre>
<ul>
<li>and we enable (activate) the network policy:</li>
</ul>
<pre><code>$ gcloud container clusters update cluster-1 --enable-network-policy
Do you want to continue (Y/n)? y
Updating cluster-1...done.
</code></pre>
<ul>
<li>Now let's undo it:</li>
</ul>
<pre><code>$ gcloud container clusters update cluster-1 --no-enable-network-policy
Do you want to continue (Y/n)? y
Updating cluster-1...done.
</code></pre>
<ul>
<li>After disabling it, wait until the pool is ready and run the last command:</li>
</ul>
<pre><code>$ gcloud container clusters update cluster-1 --update-addons=NetworkPolicy=DISABLED
Updating cluster-1...done.
</code></pre>
<ul>
<li>Scale it back to 3 if you had downscaled:</li>
</ul>
<pre><code>$ gcloud container clusters resize cluster-1 --node-pool default-pool --num-nodes 3
Do you want to continue (Y/n)? y
Resizing cluster-1...done.
</code></pre>
<ul>
<li>Finally, check the description again to see if it matches the right configuration, and test the communication between the pods.</li>
</ul>
<p>Here is the reference for this configuration:
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy" rel="nofollow noreferrer">Creating a Cluster Network Policy</a></p>
<p>If you still have the issue after that, update your question with the latest cluster description and we will dig further.</p>
| Will R.O.F. |
<p>I am trying to pass JVM args to the Docker image of a Spring Boot app on Kubernetes. Specifically, I want to pass these three arguments:</p>
<pre><code>-Djavax.net.ssl.trustStore=/certs/truststore/cacerts
-Djavax.net.ssl.trustStorePassword=password
-Djavax.net.debug=ssl
</code></pre>
<p>I tried adding them to the "env" section with the names "JAVA_OPTS", "JDK_JAVA_OPTIONS" and "JAVA_TOOL_OPTIONS", none of which seemed to work.</p>
<p>I also tried adding them under the "args" section, and that did not work either. At best I get no change in behaviour at all; at worst my pods won't start at all, with this error:</p>
<blockquote>
<p>Error: failed to create containerd task: OCI runtime create failed:
container_linux.go:380: starting container process caused:
process_linux.go:545: container init caused: setenv: invalid argument:
unknown</p>
</blockquote>
<p>Entry point in Dockerfile is defined as such:</p>
<pre><code>ENTRYPOINT ["java","-jar","/app/appname-exec.jar"]
</code></pre>
<p>Any ideas?</p>
| lovrodoe | <p>To override the container's default <code>ENTRYPOINT</code> setting, I sometimes do the following:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: my-container
image: mycontainer:latest
command: ["java"]
args: ["-Djavax...", "-Djavax...", "-jar", "myapp.jar"]
</code></pre>
<p>You can define in the manifest the content you would otherwise describe in a Dockerfile. In the <code>args</code> section, you can list as many settings as you want.</p>
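<p>Applied to the flags and jar path from the question, that could look roughly like this (the container name and image are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: my-container
    image: mycontainer:latest
    command: ["java"]
    args:
      - "-Djavax.net.ssl.trustStore=/certs/truststore/cacerts"
      - "-Djavax.net.ssl.trustStorePassword=password"
      - "-Djavax.net.debug=ssl"
      - "-jar"
      - "/app/appname-exec.jar"
</code></pre>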
| Daigo |
<h1>What I have</h1>
<p>I have a Kubernetes cluster as follows:</p>
<ul>
<li>A single control plane (but I plan to extend to 3 control planes for HA)</li>
<li>2 worker nodes</li>
</ul>
<p><br><br>
On this cluster I deployed (following this doc from Traefik: <a href="https://docs.traefik.io/user-guides/crd-acme/" rel="nofollow noreferrer">https://docs.traefik.io/user-guides/crd-acme/</a>):</p>
<ul>
<li><p>A deployment that create two pods :</p>
<ul>
<li>Traefik itself, which will be in charge of routing, with ports 80 and 8080 exposed</li>
<li>whoami: a simple HTTP server that responds to HTTP requests</li>
</ul>
</li>
<li><p>two services</p>
<ul>
<li>traefik service: <a href="https://i.stack.imgur.com/U1Zub.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U1Zub.png" alt="" /></a></li>
<li>whoami service: <a href="https://i.stack.imgur.com/hoIQt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hoIQt.png" alt="" /></a></li>
</ul>
</li>
<li><p>One traefik IngressRoute:
<a href="https://i.stack.imgur.com/x5OIW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x5OIW.png" alt="" /></a></p>
</li>
</ul>
<h1>What I want</h1>
<p>I have multiple services running in the cluster and I want to expose them to the outside using an Ingress.
More precisely, I want to use the new <strong>Traefik 2.x</strong> CRD ingress methods.</p>
<p>My ultimate goal is to use the new Traefik 2.x CRDs to expose resources on ports 80, 443 and 8080 using the <code>IngressRoute</code> custom resource definition.</p>
<h1>What's the problem</h1>
<p>If I understand correctly, classic Ingress controllers allow exposing any ports we want to the outside world (including 80, 8080 and 443).</p>
<p>But with the new Traefik CRD ingress approach on its own, it does not expose anything at all.
One solution is to define the Traefik service as a LoadBalancer-typed service and then expose some ports. But you are forced to use the 30000-32767 port range (same as NodePort), and I don't want to add a reverse proxy in front of the reverse proxy just to be able to expose ports 80 and 443...</p>
<p>Also, I've seen from the doc of the new ingress CRD (<a href="https://docs.traefik.io/user-guides/crd-acme/" rel="nofollow noreferrer">https://docs.traefik.io/user-guides/crd-acme/</a>) that:</p>
<p><code>kubectl port-forward --address 0.0.0.0 service/traefik 8000:8000 8080:8080 443:4443 -n default</code></p>
<p>is required, and I understand that now. You need to map the host port to the service port.
But mapping the ports that way feels clunky and counter-intuitive. I don't want to have part of the service description in YAML and at the same time have to remember that I need to map ports with <code>kubectl</code>.</p>
<p>I'm pretty sure there is a neat and simple solution to this problem, but I can't figure out how to keep things simple. Do you have any experience in Kubernetes with the new Traefik 2.x CRD config?</p>
| Anthony Raymond | <pre><code>apiVersion: v1
kind: Service
metadata:
name: traefik
spec:
ports:
- protocol: TCP
name: web
port: 80
targetPort: 8000
- protocol: TCP
name: admin
port: 8080
targetPort: 8080
- protocol: TCP
name: websecure
port: 443
targetPort: 4443
selector:
app: traefik
</code></pre>
<p>Have you tried using <code>targetPort</code>, as above, so that every request coming in on port 80 is redirected to 8000? Also, when you use port-forward you always need to target the service instead of the pod.</p>
| Bhavya Jain |
<p>I've deployed <a href="https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka" rel="nofollow noreferrer">https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka</a> on my on-prem k8s cluster.
I'm trying to expose it using a TCP controller with nginx.</p>
<p>My TCP nginx ConfigMap looks like this:</p>
<pre><code>data:
"<zookeper-tcp-port>": <namespace>/cp-zookeeper:2181
"<kafka-tcp-port>": <namespace>/cp-kafka:9092
</code></pre>
<p>And I've made the corresponding entry in my nginx ingress controller:</p>
<pre><code> - name: <zookeper-tcp-port>-tcp
port: <zookeper-tcp-port>
protocol: TCP
targetPort: <zookeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
port: <kafka-tcp-port>
protocol: TCP
targetPort: <kafka-tcp-port>-tcp
</code></pre>
<p>Now I'm trying to connect to my Kafka instance.
When I just try to connect to the IP and port using Kafka tools, I get the error message:</p>
<pre><code>Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
</code></pre>
<p>When I enter what I assume are the correct broker addresses (I've tried them all...) I get a timeout. There are no logs coming from the nginx controller except:</p>
<pre><code>[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:12 +0000]TCP200000.000
[08/Apr/2020:15:51:14 +0000]TCP200000.001
</code></pre>
<p>From the pod <code>kafka-zookeeper-0</code> I'm getting loads of:</p>
<pre><code>[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
</code></pre>
<p>Though I'm not sure these have anything to do with it?</p>
<p>Any ideas on what I'm doing wrong?
Thanks in advance. </p>
| t3ng1l | <p><strong>TL;DR:</strong></p>
<ul>
<li>Change the value <code>nodeport.enabled</code> to <code>true</code> inside <code>cp-kafka/values.yaml</code> before deploying.</li>
<li>Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.</li>
<li>Set <code>bootstrap-server</code> on your kafka tools to <code><Cluster_External_IP>:31090</code></li>
</ul>
<hr>
<p><strong>Explanation:</strong></p>
<blockquote>
<p>The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless Service</a> was created alongside the StatefulSet. The created service will <strong>not</strong> be given a <code>clusterIP</code>, but will instead simply include a list of <code>Endpoints</code>.
These <code>Endpoints</code> are then used to generate instance-specific DNS records in the form of:
<code><StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local</code></p>
</blockquote>
<p>It creates a DNS name for each pod, e.g:</p>
<pre><code>[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7 my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
</code></pre>
<ul>
<li>This is what makes these services connect to each other inside the cluster.</li>
</ul>
<hr>
<p>I went through a lot of trial and error until I realized how it was supposed to work. Based on your TCP Nginx ConfigMap, I believe you faced the same issue.</p>
<ul>
<li>The <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">Nginx ConfigMap</a> asks for: <code><PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>"</code>.</li>
<li>I realized that you don't need to expose Zookeeper, since it's an internal service handled by the Kafka brokers.</li>
<li>I also realized that you are trying to expose <code>cp-kafka:9092</code>, which is the headless service, also only used internally, as I explained above.</li>
<li>In order to get outside access <strong>you have to set the parameter <code>nodeport.enabled</code> to <code>true</code></strong> as stated here: <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/README.md#external-access" rel="nofollow noreferrer">External Access Parameters</a>.</li>
<li>It adds one service for each kafka-N pod during chart deployment.</li>
<li>Then you change your configmap to map to one of them:</li>
</ul>
<pre><code>data:
"31090": default/demo-cp-kafka-0-nodeport:31090
</code></pre>
<p>Note that the service created has the selector <code>statefulset.kubernetes.io/pod-name: demo-cp-kafka-0</code> this is how the service identifies the pod it is intended to connect to.</p>
<ul>
<li>Edit the nginx-ingress-controller:</li>
</ul>
<pre><code>- containerPort: 31090
hostPort: 31090
protocol: TCP
</code></pre>
<ul>
<li>Set your kafka tools to <code><Cluster_External_IP>:31090</code></li>
</ul>
<hr>
<p><strong>Reproduction:</strong></p>
<ul>
<li>Snippet edited in <code>cp-kafka/values.yaml</code>:</li>
</ul>
<pre><code>nodeport:
enabled: true
servicePort: 19092
firstListenerPort: 31090
</code></pre>
<ul>
<li>Deploy the chart:</li>
</ul>
<pre><code>$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
</code></pre>
<ul>
<li>Create the TCP configmap:</li>
</ul>
<pre><code>$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: kube-system
data:
  "31090": "default/demo-cp-kafka-0-nodeport:31090"
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
</code></pre>
<ul>
<li>Edit the Nginx Ingress Controller:</li>
</ul>
<pre><code>$ kubectl edit deploy nginx-ingress-controller -n kube-system
$kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
hostPort: 31090
protocol: TCP
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
</code></pre>
<ul>
<li>My ingress is on IP <code>35.226.189.123</code>; now let's try to connect from outside the cluster. For that I'll connect to another VM where I have minikube, so I can use a <code>kafka-client</code> pod to test:</li>
</ul>
<pre><code>user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-client 1/1 Running 0 17h
user@minikube:~$ kubectl exec kafka-client -it -- bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
</code></pre>
<p>As you can see, I was able to access Kafka from outside.</p>
<ul>
<li>If you need external access to Zookeeper as well, here is a service model for you:</li>
</ul>
<p><code>zookeeper-external-0.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: cp-zookeeper
pod: demo-cp-zookeeper-0
name: demo-cp-zookeeper-0-nodeport
namespace: default
spec:
externalTrafficPolicy: Cluster
ports:
- name: external-broker
nodePort: 31181
port: 12181
protocol: TCP
targetPort: 31181
selector:
app: cp-zookeeper
statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
sessionAffinity: None
type: NodePort
</code></pre>
<ul>
<li>It will create a service for it:</li>
</ul>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
</code></pre>
<ul>
<li>Patch your configmap:</li>
</ul>
<pre><code>data:
"31090": default/demo-cp-kafka-0-nodeport:31090
"31181": default/demo-cp-zookeeper-0-nodeport:31181
</code></pre>
<ul>
<li>Add the Ingress rule:</li>
</ul>
<pre><code> ports:
- containerPort: 31181
hostPort: 31181
protocol: TCP
</code></pre>
<ul>
<li>Test it with your external IP:</li>
</ul>
<pre><code>pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
</code></pre>
<p>If you have any doubts, let me know in the comments!</p>
| Will R.O.F. |
<p>It sounds like a silly question, but I see Mi in YAML files while what I'm familiar with is MiB. Are the two the same?</p>
| james pow | <p>As described in the official reference, <code>Mi</code> is just a prefix so the actual unit will be <code>MiB</code>.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#memory-units" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#memory-units</a></p>
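<p>For example, in a Pod spec (a minimal illustrative snippet):</p>
<pre><code>resources:
  requests:
    memory: "64Mi"   # 64 mebibytes (MiB)
  limits:
    memory: "128Mi"  # 128 mebibytes (MiB)
</code></pre>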
| Daigo |
<p>In the Python k8s client, I use the code below.</p>
<p>YAML file:</p>
<pre><code>apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: test-snapshot
namespace: test
spec:
volumeSnapshotClassName: snapshotclass
source:
persistentVolumeClaimName: test-pvc
</code></pre>
<p>Python code:</p>
<pre><code>res = utils.create_from_dict(k8s_client, yaml_file)
</code></pre>
<p>However, I got this message:</p>
<pre><code>AttributeError: module 'kubernetes.client' has no attribute 'SnapshotStorageV1Api'
</code></pre>
<p>I want to take a VolumeSnapshot in k8s.
How can I do that?</p>
<p>Please give me some advice!</p>
| sun | <p>As I pointed in part of the comment I made under the question:</p>
<blockquote>
<p>Have you seen this github issue comment: <a href="https://github.com/kubernetes-client/python/issues/1195#issuecomment-699510580" rel="nofollow noreferrer">github.com/kubernetes-client/python/issues/…</a>?</p>
</blockquote>
<p>The link posted in the comments is a github issue for:</p>
<ul>
<li><code>VolumeSnapshot</code>,</li>
<li><code>VolumeSnapshotClass</code>,</li>
<li><code>VolumeSnapshotContent</code></li>
</ul>
<p>support in the Kubernetes python client.</p>
<p>Citing the comment made in this github issue by user @roycaihw that explains a way how you can make a snapshot:</p>
<blockquote>
<p>It looks like those APIs are CRDs: <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/#the-volumesnapshotclass-resource" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/#the-volumesnapshotclass-resource</a></p>
<p>If that's the case, you could use the CustomObject API to send requests to those APIs once they are installed. Example: <a href="https://github.com/kubernetes-client/python/blob/master/examples/custom_object.py" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/blob/master/examples/custom_object.py</a></p>
<p>-- <em><a href="https://github.com/kubernetes-client/python/issues/1195#issuecomment-662190528" rel="nofollow noreferrer">Github.com: Kubernetes client: Python: Issues: 1995: Issue comment: 2</a></em></p>
</blockquote>
<hr />
<h3>Example</h3>
<p>An example of a Python code that would make a <code>VolumeSnapshot</code> is following:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
def main():
config.load_kube_config()
api = client.CustomObjectsApi()
# it's my custom resource defined as Dict
my_resource = {
"apiVersion": "snapshot.storage.k8s.io/v1beta1",
"kind": "VolumeSnapshot",
"metadata": {"name": "python-snapshot"},
"spec": {
"volumeSnapshotClassName": "example-snapshot-class",
"source": {"persistentVolumeClaimName": "example-pvc"}
}
}
# create the resource
api.create_namespaced_custom_object(
group="snapshot.storage.k8s.io",
version="v1beta1",
namespace="default",
plural="volumesnapshots",
body=my_resource,
)
if __name__ == "__main__":
main()
</code></pre>
<p><strong>Please change the values inside of this code to support your particular setup (i.e. <code>apiVersion</code>, <code>.metadata.name</code>, <code>.metadata.namespace</code>, etc.).</strong></p>
<blockquote>
<p>A side note!</p>
<p>This Python code was tested with <code>GKE</code> and its <code>gce-pd-csi-driver</code>.</p>
</blockquote>
<p>After running this code the <code>VolumeSnapshot</code> should be created:</p>
<ul>
<li><code>$ kubectl get volumesnapshots</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME AGE
python-snapshot 19m
</code></pre>
<ul>
<li><code>$ kubectl get volumesnapshotcontents</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME AGE
snapcontent-71380980-6d91-45dc-ab13-4b9f42f7e7f2 19m
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Github.com: Kubernetes client: Python</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Storage: Volume snapshots</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm working on a SaaS app that will be running in Kubernetes. We're using a Helm chart that deploys all the components into the cluster (for simplicity's sake let's assume it's a frontend service, a backend and a database). App architecture is multi-tenant (we have a single instance of each service that are being shared by all tenants) <strong>and we would like to keep it that way</strong>. What I'm currently struggling with and would like to ask for advice/best practice on is how does one go about automating the provisioning of custom sub-domains for the tenants?</p>
<p>Imagine the app is hosted at <code>exampleapp.com</code>.
A brand new customer comes and registers a new organisation <code>some-company</code>. At that moment, in addition to creating a new tenant in the system, I would also like to provision a new subdomain <code>some-company.exampleapp.com</code>. I would like this provisioning to be done automatically, without requiring any manual intervention.</p>
<ul>
<li>What options do I have for implementing automated sub-domain provisioning in Kubernetes? </li>
<li>How does our (<code>exampleapp.com</code>) domain registrar/nameserver provider fit into the solution?
Does it have to provide an API for dynamic DNS record creation/modification?</li>
</ul>
<p>I appreciate that the questions I'm asking are quite broad so I'm not expecting anything more than a high-level conceptual answer or pointers to some services/libraries/tools that might help me achieve this.</p>
| IvanR | <p><strong>Note:</strong> Since this is more of a theoretical question, I'll give you some points from a Kubernetes Engineer, I divided your question in blocks to ease the understanding.</p>
<ul>
<li>About your multi-tenancy architecture:
Keeping "it that way" is achievable. It simplifies the Kubernetes structure; on the other hand, it relies more on your app.</li>
</ul>
<p><strong>Question 1:</strong></p>
<blockquote>
<p>Imagine the app is hosted at <code>exampleapp.com</code>. A brand new customer comes and registers a new organisation <code>some-company</code>. At that moment, in addition to creating new tenant in the system, I would also like to provision a new subdomain <code>some-company.exampleapp.com</code>. I would like this provisioning to be done automatically and not require any manual intervention.</p>
</blockquote>
<p><strong>Suggestion:</strong></p>
<ul>
<li>For that, you will have to give your app admin privileges and the tools required for it to add Ingress rule entries to your Ingress when a new client is added. A script using <code>kubectl patch</code> is the simplest solution from my viewpoint.</li>
</ul>
<p>For this approach I suggest installing the <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">Nginx Ingress Controller</a> for its versatility.</p>
<p><strong>Here is an Example:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: <ingress-name>
spec:
rules:
- host: client1.exampleapp.com
http:
paths:
- path: /client1
backend:
serviceName: <main-service>
servicePort: <main-service-port>
- host: client2.exampleapp.com
http:
paths:
- path: /client2
backend:
serviceName: <main-service>
servicePort: <main-service-port>
</code></pre>
<ul>
<li>And here is the one-liner command using <code>kubectl patch</code> on how to add new rules:</li>
</ul>
<pre><code>kubectl patch ingress demo-ingress --type "json" -p '[{"op":"add","path":"/spec/rules/-","value":{"host":"client3.exampleapp.com","http":{"paths":[{"path":"/client3","backend":{"serviceName":"main-service","servicePort":80}}]}}}]'
</code></pre>
<p>POC:</p>
<pre><code>$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
demo-ingress client1.exampleapp.com,client2.exampleapp.com 192.168.39.39 80 15m
$ kubectl patch ingress demo-ingress --type "json" -p '[{"op":"add","path":"/spec/rules/-","value":{"host":"client3.exampleapp.com","http":{"paths":[{"path":"/client3","backend":{"serviceName":"main-service","servicePort":80}}]}}}]'
ingress.extensions/demo-ingress patched
$ kubectl describe ingress demo-ingress
Rules:
Host Path Backends
---- ---- --------
client1.exampleapp.com
/client1 main-service:80 (<none>)
client2.exampleapp.com
/client2 main-service:80 (<none>)
client3.exampleapp.com
/client3 main-service:80 (<none>)
</code></pre>
<p>This rule redirects the traffic incoming from the subdomains to subpaths inside your main app.</p>
<ul>
<li>Also, to add TLS handling, refer to: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="noreferrer">Nginx Ingress TLS Guide</a></li>
</ul>
<hr>
<p><strong>Question2 :</strong></p>
<blockquote>
<p>How does our (<code>exampleapp.com</code>) domain registrar/nameserver provider fit into the solution? Does it have to provide an API for dynamic DNS record creation/modification?</p>
</blockquote>
<p><strong>Suggestion:</strong></p>
<ul>
<li>I believe you already have something similar, but you need a wildcard record in your DNS to point <code>*.exampleapp.com</code> to the IP of the ingress. I don't believe you need anything more than that, because it sends everything to the ingress and the ingress forwards it internally.</li>
</ul>
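<p>In zone-file notation such a wildcard record could look like this (the TTL and IP are placeholders):</p>
<pre><code>*.exampleapp.com.    300    IN    A    203.0.113.10
</code></pre>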
<p><strong>Question 3:</strong></p>
<blockquote>
<p>If there are some strong arguments why multi-tenancy + Kubernetes don't go along very well, those are welcome as well. </p>
</blockquote>
<p><strong>Opinion:</strong></p>
<ul>
<li>I don't see any major reason why it would be a problem. You just have, once again, to adapt your app to handle scaling, because I believe in the long run you will want your app to be able to scale to a multi-pod structure to provide elastic availability.</li>
</ul>
<p>These are my 2 cents on your question; I hope it helps!</p>
| Will R.O.F. |
<p>I have a web app that I am trying to deploy with <code>Kubernetes</code>. It's working correctly, but when I try to add <strong>resource limits</strong>, <code>ElasticSearch</code> will not deploy.</p>
<h3>elasticsearch-deployment.yaml:</h3>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: elasticsearch-service
spec:
type: NodePort
selector:
app: elasticsearch
ports:
- port: 9200
targetPort: 9200
name: serving
- port: 9300
targetPort: 9300
name: node-to-node
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch-deployment
labels:
app: elasticsearch
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: elasticsearch:7.9.0
ports:
- containerPort: 9200
- containerPort: 9300
env:
- name: discovery.type
value: single-node
# resources:
# limits:
# memory: 8Gi
# cpu: "4"
# requests:
# memory: 4Gi
# cpu: "2"
</code></pre>
<p>If I uncomment the resources section of the file, the pod is stuck in pending:</p>
<pre><code>> kubectl get pods
NAME READY STATUS RESTARTS AGE
backend-deployment-bd4f98697-rxsz8 1/1 Running 1 (6m9s ago) 6m40s
elasticsearch-deployment-644475545b-t75pp 0/1 Pending 0 6m40s
frontend-deployment-8bc989f89-4g6v7 1/1 Running 0 6m40s
mysql-0 1/1 Running 0 6m40s
</code></pre>
<p>If I check the events:</p>
<pre><code>> kubectl get events
...
Warning FailedScheduling pod/elasticsearch-deployment-54d9cdd879-k69js 0/1 nodes are available: 1 Insufficient cpu.
Warning FailedScheduling pod/elasticsearch-deployment-54d9cdd879-rjj24 0/1 nodes are available: 1 Insufficient cpu.
...
</code></pre>
<p>The events say that the pod has <strong>Insufficient cpu</strong>, but I tried to change the resource limits to:</p>
<pre><code>resources:
limits:
memory: 8Gi
cpu: "18"
requests:
memory: 4Gi
cpu: "18"
</code></pre>
<p>It still doesn't work; the only way for it to work is to remove the resource limits, but why?</p>
| Hamza Ince | <p>It is because of the request, not the limit. It means your node doesn't have enough allocatable CPU to schedule a pod that requests 2 CPUs. You need to set the request to a lower value (e.g. 500m).</p>
<p>You can check your node's allocatable CPUs. The sum of all Pods' CPU requests should be lower than this.</p>
<pre class="lang-sh prettyprint-override"><code># kubectl describe nodes
...
Allocatable:
cpu: 28
...
</code></pre>
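<p>For example, a resources block with a lower CPU request that should fit on such a node (the numbers are only illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
  requests:
    memory: 4Gi
    cpu: "500m"   # scheduling is based on this value
  limits:
    memory: 8Gi
    cpu: "2"      # limits do not affect scheduling
</code></pre>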
| Daigo |
<p>I am making a test cluster following these instructions:
<a href="https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/</a>
and </p>
<p><a href="https://kubernetes.io/docs/getting-started-guides/fedora/flannel_multi_node_cluster/" rel="noreferrer">https://kubernetes.io/docs/getting-started-guides/fedora/flannel_multi_node_cluster/</a>
Unfortunately, when I check my nodes the following occurs:</p>
<pre><code>kubectl get no
NAME STATUS ROLES AGE VERSION
pccshost2.lan.proficom.de NotReady <none> 19h v1.10.3
pccshost3.lan.proficom.de NotReady <none> 19h v1.10.3
</code></pre>
<p>As far as I can tell, this problem is connected with kubelet.service not working on the master node:</p>
<pre><code>systemctl status kubelet.service
kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2019-03-06 10:38:30 CET; 32min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 14057 ExecStart=/usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KU>
Main PID: 14057 (code=exited, status=255)
CPU: 271ms
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Consumed 271ms CPU time
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Service RestartSec=100ms expired, scheduling restart.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: Stopped Kubernetes Kubelet Server.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Consumed 271ms CPU time
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Start request repeated too quickly.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 06 10:38:30 pccshost1.lan.proficom.de systemd[1]: Failed to start Kubernetes Kubelet Server.
</code></pre>
<pre><code>~kubectl describe node
Normal Starting 9s kubelet, pccshost2.lan.proficom.de Starting kubelet.
Normal NodeHasSufficientDisk 9s kubelet, pccshost2.lan.proficom.de Node pccshost2.lan.proficom.de status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 9s kubelet, pccshost2.lan.proficom.de Node pccshost2.lan.proficom.de status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9s kubelet, pccshost2.lan.proficom.de Node pccshost2.lan.proficom.de status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9s kubelet, pccshost2.lan.proficom.de Node pccshost2.lan.proficom.de status is now: NodeHasSufficientPID
</code></pre>
<p>Can somebody give me advice on what is happening here and how I can fix it? Thanks.</p>
| Roger | <p>When you install a k8s cluster using <code>kubeadm</code> and install the kubelet on the master (Ubuntu), it creates the file <em>"10-kubeadm.conf"</em> located at <code>/etc/systemd/system/kubelet.service.d</code>.</p>
<pre><code>### kubelet contents
ExecStart=/usr/bin/kubelet
$KUBELET_KUBECONFIG_ARGS
$KUBELET_CONFIG_ARGS
$KUBELET_KUBEADM_ARGS
$KUBELET_EXTRA_ARGS
</code></pre>
<p>The value of the variable <code>$KUBELET_KUBECONFIG_ARGS</code> is <code>/etc/kubernetes/kubelet.conf</code>, which contains the certificate signed by the CA. Now, you need to verify the validity of the certificate. If the certificate has expired, create a new certificate using <strong>openssl</strong> and sign it with your CA.</p>
<h3>Steps to verify the certificate</h3>
<ol>
<li>Copy the value <code>client-certificate-data</code>.</li>
<li>Decode the certificate ( echo -n "copied_certificate_value" | base64 --decode)</li>
<li>Save the output in a file (vi kubelet.crt)</li>
<li>Verify the validity (openssl x509 -in kubelet.crt -text -noout)</li>
</ol>
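<p>Condensed into a couple of commands, the check could look like this (a sketch; it assumes the certificate is embedded in <code>/etc/kubernetes/kubelet.conf</code> as <code>client-certificate-data</code>):</p>
<pre><code># extract and decode the embedded client certificate
grep 'client-certificate-data' /etc/kubernetes/kubelet.conf | awk '{print $2}' | base64 --decode > kubelet.crt
# print the certificate details and check the Validity section
openssl x509 -in kubelet.crt -text -noout | grep -A 2 'Validity'
</code></pre>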
<p>If the validity has expired, then create a new certificate.</p>
<p>Note: It is always safe to take a backup before you start making any changes:
<code>cp -a /etc/kubernetes/ /root/</code></p>
<h3>Steps to generate a new certificate</h3>
<pre><code>openssl genrsa -out kubelet.key 2048
openssl req -new -key kubelet.key -subj "/CN=kubelet" -out kubelet.csr
openssl x509 -req -in kubelet.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -out kubelet.crt -days 300
</code></pre>
<p>Encode the certificate files</p>
<pre><code>cat kubelet.crt | base64
cat kubelet.key | base64
</code></pre>
<p>Copy the encoded content and update it in <code>/etc/kubernetes/kubelet.conf</code>.</p>
<h4>Now, check the status of kubelet on master node</h4>
<pre><code>systemctl status kubelet
systemctl restart kubelet #restart kubelet
</code></pre>
| Israrul Haque |
<p>I'm using the pre-packaged Kubernetes cluster that comes with Docker Desktop. I'm on a Windows machine, running Kubernetes on an Ubuntu 18.04 VM using WSL 2. On my Kubernetes cluster I run:</p>
<pre><code>istioctl install --set profile=demo --set values.global.jwtPolicy=third-party-jwt
</code></pre>
<p>But I get the message:</p>
<pre><code>Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/v1.9/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
</code></pre>
<p>After that, it freezes on this until it times out:</p>
<pre><code>Processing resources for Istiod. Waiting for Deployment/istio-system/istiod
</code></pre>
<p>Is there a way of enabling third party JWT in my cluster?</p>
| João Areias | <p>In the error message that you've received there is a link that points to the documentation on that specific issue:</p>
<ul>
<li><em><a href="https://istio.io/latest/docs/ops/best-practices/security/#configure-third-party-service-account-tokens" rel="nofollow noreferrer">Istio.io: Latest: Docs: Ops: Best practices: Security: Configure third party service account tokens</a></em></li>
</ul>
<p>Citing the official documentation:</p>
<blockquote>
<h3>Configure third party service account tokens</h3>
<p>To authenticate with the Istio control plane, the Istio proxy will use a Service Account token. Kubernetes supports two forms of these tokens:</p>
<p>Third party tokens, which have a scoped audience and expiration.
First party tokens, which have no expiration and are mounted into all pods.
Because the properties of the first party token are less secure, Istio will default to using third party tokens. However, this feature is not enabled on all Kubernetes platforms.</p>
<p>If you are using istioctl to install, support will be automatically detected. This can be done manually as well, and configured by passing <code>--set values.global.jwtPolicy=third-party-jwt</code> or <code>--set values.global.jwtPolicy=first-party-jwt</code>.</p>
<p>To determine if your cluster supports third party tokens, look for the TokenRequest API. If this returns no response, then the feature is not supported:</p>
<p><code>$ kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'</code></p>
<pre><code>{
"name": "serviceaccounts/token",
"singularName": "",
"namespaced": true,
"group": "authentication.k8s.io",
"version": "v1",
"kind": "TokenRequest",
"verbs": [
"create"
]
}
</code></pre>
<p>While most cloud providers support this feature now, many local development tools and custom installations may not prior to Kubernetes 1.20. To enable this feature, please refer to the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection" rel="nofollow noreferrer">Kubernetes documentation</a>.</p>
</blockquote>
<hr />
<p>I'm not sure if this feature is supported by the Kubernetes (<code>1.19.7</code>) cluster created with Docker Desktop, but the Kubernetes documentation shows how you could enable it:</p>
<blockquote>
<h3>Service Account Token Volume Projection</h3>
<p><strong>FEATURE STATE:</strong> Kubernetes v1.20 [stable]</p>
<p><strong>Note:</strong></p>
<p>To enable and use token request projection, you must specify each of the following command line arguments to kube-apiserver:</p>
<pre><code>--service-account-issuer
--service-account-key-file
--service-account-signing-key-file
--api-audiences
</code></pre>
</blockquote>
<p>You can edit your <code>kube-apiserver</code> manifest in Docker Desktop with Kubernetes in a somewhat hacky way. You can read more about it in this answer (which I am the author of):</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/64758012/location-of-kubernetes-config-directory-with-docker-desktop-on-windows/64774052#64774052">Stackoverflow.com: Questions: Location of Kubernetes config directory with Docker Desktop on Windows</a></em></li>
</ul>
<hr />
<h3>Workaround</h3>
<p>As you are using Docker on your host, you could also consider using <code>minikube</code> with its <code>--driver=docker</code>. It supports third party JWT tokens as described in the Istio documentation (it produces the response from the <code>$ kubectl get --raw ...</code> command).</p>
<blockquote>
<p>A side note!</p>
<p>I've tested this on a <code>minikube</code> version <code>1.19.0</code> and Kubernetes version <code>1.20.2</code>.</p>
</blockquote>
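<p>A rough sketch of that workaround (versions taken from the side note above):</p>
<pre><code># start a local cluster with the docker driver
minikube start --driver=docker --kubernetes-version=v1.20.2

# verify that the TokenRequest API is available
kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'

# then retry the Istio installation with third-party JWT
istioctl install --set profile=demo --set values.global.jwtPolicy=third-party-jwt
</code></pre>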
| Dawid Kruk |
<p>I'm operating a simple Kubernetes cluster (the CRI is containerd).
In the cluster, the "crictl ps" command doesn't return a "kubelet" container.</p>
<p>In another Kubernetes cluster (the CRI is Docker), the "docker ps" command returns a "kubelet" container.</p>
<p>What is the difference between these two commands (docker ps / crictl ps)?</p>
<p>Is there any way to see the "kubelet" container with the "crictl ps" command?</p>
| cook | <p><code>crictl</code> isn't a replacement for <code>docker</code>. The result of <code>crictl ps</code> doesn't include some containers (e.g. <code>pause</code>).</p>
<p>Try <code>ctr -n k8s.io c ls</code> to see all the containers running on k8s with containerd.</p>
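<p>For comparison, the same command in its long form (equivalent to the abbreviation above):</p>
<pre><code># containers the kubelet created through the CRI
crictl ps

# all containerd containers in the k8s.io namespace (includes pause containers, etc.)
ctr -n k8s.io containers list
</code></pre>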
| Daigo |
<p>I have installed Grafana in a Kubernetes cluster and I am trying to add the Sysdig datasource.
But it shows that the Sysdig plugin is not found. I tried setting up Grafana with the Sysdig plugin using the command below:</p>
<pre><code>docker run -d -p 3000:3000 --name grafana sysdiglabs/grafana:latest
</code></pre>
<p>But I am unable to open the Grafana dashboard in the browser using:</p>
<pre><code>http://localhost:3000
</code></pre>
<p>I also installed Grafana in the Kubernetes cluster as below:</p>
<pre><code>kubectl get services -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 179.9.17.16 192.168.1.23 3000:32001/TCP 96m
prometheus-service NodePort 172.29.3.43 <none> 8080:30000/TCP 6d21h
</code></pre>
<p>I used the sysdiglabs/grafana:latest image above, but I am still unable to find the Sysdig plugin in the Grafana datasources.</p>
<p>The local laptop setup of Grafana works and shows the Sysdig plugin, but I want to use the Grafana installed in the cluster with the Sysdig plugin. Please help.</p>
| witty_minds | <p>I did it using a LoadBalancer. The cluster's firewall settings were causing the connectivity issue.</p>
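<p>For reference, one way to switch the existing Service to a LoadBalancer (a sketch using the name and namespace from the question):</p>
<pre><code>kubectl patch svc grafana -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'
</code></pre>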
| witty_minds |
<p>I'm trying to expose my backend API service using the nginx Ingress controller. Here is the Ingress service that I have defined:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: plant-simulator-ingress
namespace: plant-simulator-ns
annotations:
ingress.kubernetes.io/enable-cors: "true"
kubernetes.io/ingress.class: nginx
# nginx.ingress.kubernetes.io/rewrite-target: /
prometheus.io/scrape: 'true'
prometheus.io/path: /metrics
prometheus.io/port: '80'
spec:
rules:
- host: grafana.local
http:
paths:
- backend:
serviceName: grafana-ip-service
servicePort: 8080
- host: prometheus.local
http:
paths:
- backend:
serviceName: prometheus-ip-service
servicePort: 8080
- host: plant-simulator.local
http:
paths:
- backend:
serviceName: plant-simulator-service
servicePort: 9000
</code></pre>
<p>The plant-simulator-service is defined as a service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: plant-simulator-service
namespace: plant-simulator-ns
labels:
name: plant-simulator-service
spec:
ports:
- port: 9000
targetPort: 9000
protocol: TCP
name: plant-simulator-service-port
selector:
app: plant-simulator
type: LoadBalancer
</code></pre>
<p>I successfully deployed this on my Minikube and here is the set of pods running:</p>
<pre><code>Joes-MacBook-Pro:~ joesan$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-cvblh 1/1 Running 0 39m
kube-system coredns-6955765f44-xh2wg 1/1 Running 0 39m
kube-system etcd-minikube 1/1 Running 0 39m
kube-system kube-apiserver-minikube 1/1 Running 0 39m
kube-system kube-controller-manager-minikube 1/1 Running 0 39m
kube-system kube-proxy-n6scg 1/1 Running 0 39m
kube-system kube-scheduler-minikube 1/1 Running 0 39m
kube-system storage-provisioner 1/1 Running 0 39m
plant-simulator-ns flux-5476b788b9-g7xtn 1/1 Running 0 20m
plant-simulator-ns memcached-86bdf9f56b-zgshx 1/1 Running 0 20m
plant-simulator-ns plant-simulator-6d46dc89cb-xsjgv 1/1 Running 0 65s
</code></pre>
<p>Here is the list of services:</p>
<pre><code>Joes-MacBook-Pro:~ joesan$ minikube service list
|--------------------|-------------------------|-----------------------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|--------------------|-------------------------|-----------------------------|-----|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| plant-simulator-ns | memcached | No node port |
| plant-simulator-ns | plant-simulator-service | http://192.168.99.103:32638 |
|--------------------|-------------------------|-----------------------------|-----|
</code></pre>
<p>What I wanted to achieve is that my application backend is reachable via the DNS entry that I have configured in my Ingress - </p>
<blockquote>
<p>plant-simulator.local</p>
</blockquote>
<p>Any ideas as to what I'm missing?</p>
| joesan | <p>OP reported the case was solved by adding the IP and hostname to <code>/etc/hosts</code>:</p>
<pre><code>$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
192.168.99.103 plant-simulator.local
</code></pre>
| Will R.O.F. |
<p>Inside the virtual service, I have routed 2 paths to my service as follows -</p>
<pre class="lang-yaml prettyprint-override"><code>- match:
- uri:
prefix: /jaeger/
- uri:
regex: \/oauth2\/.*jaeger.*
route:
- destination:
host: oauth2-proxy
port:
number: 80
</code></pre>
<p>But the gateway returns 404 when I send request on the path <code>/oauth2/callback?code=3QxQLUqCwxVtH_GS6mWteICfisIe32yE7RE6wQIZZVw&state=wmZSZ0BHMHq3vmS_1YBWIn72pG6FkChFQbNUNipGotQ%3A%2Fjaeger%2F</code></p>
<p>Now, I could also just use the prefix <code>/oauth2/</code> to handle such URLs, but currently I have multiple applications that are being authenticated by their own oauth2-proxies and this regex would match all of them. So I need to use a regex which contains the application name inside it, such as jaeger in this case.</p>
<p>I even checked on <a href="https://regex101.com/" rel="nofollow noreferrer">regex101</a> that this path is indeed matching the regex I used.</p>
<p>Also, the gateway routes the request successfully when I use the regex <code>\/oauth2\/.*</code>. But as I explained I can't use this regex either. Am I missing something here?</p>
<p>Edit:
After further testing, I found out that if I remove the "?" from the path, Istio accepts it as valid and forwards the request to the service.
I also tried the regex <code>/oauth2\/callback\?code=.*jaeger.*</code> but that isn't working either.</p>
| Hemabh | <p>I did not realize that "?" marks the end of the URI path and that everything after it belongs to the query string. Istio provides matching of query parameters as well. The following code worked in my case:</p>
<pre><code> - match:
- uri:
prefix: /jaeger/
- uri:
regex: \/oauth2\/callback?.*
queryParams:
state:
regex: .*jaeger.*
route:
- destination:
host: oauth2-proxy
port:
number: 80
</code></pre>
| Hemabh |
<p>I am trying to use <strong>Oracle Object Storage</strong> as a persistent Volume in <strong>Oracle Kubernetes Engine</strong>.</p>
<p>I have created a Kubernetes cluster and created a public bucket named <code>test-bucket</code>.</p>
<p>My yaml files are:</p>
<p><strong>storage-class.yaml</strong></p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: object-storage
provisioner: oci.oraclecloud.com/object-storage
parameters:
compartment-id:
bucket-name: test-bucket
access-key:
secret-key:
</code></pre>
<p><strong>pvc.yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: object-storage-pvc
spec:
storageClassName: object-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
</code></pre>
<p><strong>pod.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: object-storage-pod
spec:
containers:
- name: object-storage-container
image: busybox
command: ["sleep", "infinity"]
volumeMounts:
- name: object-storage
mountPath: /home/user/data
volumes:
- name: object-storage
persistentVolumeClaim:
claimName: object-storage-pvc
</code></pre>
<p>I have applied all of the files but I am receiving this error when I create the PVC.</p>
<pre><code>Normal ExternalProvisioning 3m2s (x26 over 8m53s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "oci.oraclecloud.com/object-storage" or manually created by system administrator
</code></pre>
<p>I also tried to create the volume by myself but that also didn't work.</p>
<p>I have tried many different things but I am not sure If I am missing anything here.</p>
<p>Any help would be appreciated.</p>
<p>Thanks</p>
| Zunnurain Badar | <p>I don't think that would work, as using object storage as a PVC is currently not supported by OCI.</p>
<p>Here's an alternative that would work, but currently involves manual steps:</p>
<p>Install s3fs on each of your nodes. This will allow you to mount an S3 bucket (including OCI's S3-compatible Object Storage) as a local folder. Then you can mount that folder into your container using a hostPath volume. If you restart your nodes, you will need to remount the bucket, or set up a service that mounts it on boot. If you have multiple nodes, or constantly delete/add nodes, it could be a pain.</p>
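<p>A rough sketch of that approach (the bucket name comes from the question; the endpoint format, mount point and key file are assumptions that have to be adapted to your tenancy, and the keys are OCI "Customer Secret Keys"):</p>
<pre><code># On every node, as root (one-time setup)
echo "ACCESS_KEY:SECRET_KEY" > /etc/passwd-s3fs && chmod 600 /etc/passwd-s3fs
mkdir -p /mnt/oci-bucket
s3fs test-bucket /mnt/oci-bucket \
  -o passwd_file=/etc/passwd-s3fs \
  -o url=https://<tenancy-namespace>.compat.objectstorage.<region>.oraclecloud.com \
  -o use_path_request_style
</code></pre>
<p>The Pod would then reference the mounted folder with a <code>hostPath</code> volume instead of the PVC, for example:</p>
<pre><code>  volumes:
    - name: object-storage
      hostPath:
        path: /mnt/oci-bucket
        type: Directory
</code></pre>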
| Phong Phuong |
<p>I'm trying to wrap my head around exposing internal loadbalancing to outside world on bare metal k8s cluster.</p>
<p>Let's say we have a basic cluster:</p>
<ol>
<li><p>Some master nodes and some worker nodes, that has two interfaces, one public facing (eth0) and one local(eth1) with ip within 192.168.0.0/16 network</p>
</li>
<li><p>Deployed MetalLB and configured 192.168.200.200-192.168.200.254 range for its internal ips</p>
</li>
<li><p>Ingress controller with its service with type LoadBalancer</p>
</li>
</ol>
<p>MetalLB now should assign one of the ips from 192.168.200.200-192.168.200.254 to ingress service, as of my current understanding.</p>
<p>But I have some following questions:</p>
<p>On every node I could curl the ingress controller's externalIP (as long as it is reachable on eth1) with a host header attached and get a response from the service that's configured in the corresponding ingress resource, or is it valid only on the node where the Ingress pods are currently placed?</p>
<p>What are my options for passing incoming <strong>external</strong> traffic arriving on eth0 to an ingress listening on the eth1 network?</p>
<p>Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?</p>
| Alex Smith | <p>Assuming that we are talking about <code>Metallb</code> using <code>Layer2</code>.</p>
<p>Addressing the following questions:</p>
<blockquote>
<p>On every node I could curl the ingress controller's externalIP (as long as it is reachable on eth1) with a host header attached and get a response from the service that's configured in the corresponding ingress resource, or is it valid only on the node where the Ingress pods are currently placed?</p>
</blockquote>
<blockquote>
<p>Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?</p>
</blockquote>
<p>Dividing the solution on the premise of preserving the source IP, this question could go both ways:</p>
<hr />
<h3>Preserve the source IP address</h3>
<p>To do that you would need to set the <code>Service of type LoadBalancer</code> of your <code>Ingress controller</code> to support "Local traffic policy" by setting (in your <code>YAML</code> manifest):</p>
<ul>
<li><code>.spec.externalTrafficPolicy: Local</code></li>
</ul>
<p>This setup will be valid as long as on each <code>Node</code> there is replica of your <code>Ingress controller</code> as all of the networking coming to your controller will be contained in a single <code>Node</code>.</p>
<p>Citing the official docs:</p>
<blockquote>
<p>With the <code>Local</code> traffic policy, <code>kube-proxy</code> on the node that received the traffic sends it only to the service’s pod(s) that are on the same node. There is no “horizontal” traffic flow between nodes.</p>
<p>Because <code>kube-proxy</code> doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.</p>
<p>The downside of this policy is that incoming traffic only goes to some pods in the service. Pods that aren’t on the current leader node receive no traffic, they are just there as replicas in case a failover is needed.</p>
<p><em><a href="https://metallb.universe.tf/usage/#local-traffic-policy" rel="nofollow noreferrer">Metallb.universe.tf: Usage: Local traffic policy</a></em></p>
</blockquote>
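<p>For illustration, a minimal <code>Service</code> sketch for an ingress-nginx controller with the local traffic policy; the name, namespace and selector are assumptions and depend on how your controller was installed:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumption: adjust to your controller's labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
</code></pre>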
<hr />
<h3>Do not preserve the source IP address</h3>
<p>If your use case does not require you to preserve the source IP address, you could go with the:</p>
<ul>
<li><code>.spec.externalTrafficPolicy: Cluster</code></li>
</ul>
<p>This setup won't require that the replicas of your <code>Ingress controller</code> will be present on each <code>Node</code>.</p>
<p>Citing the official docs:</p>
<blockquote>
<p>With the default <code>Cluster</code> traffic policy, <code>kube-proxy</code> on the node that received the traffic does load-balancing, and distributes the traffic to all the pods in your service.</p>
<p>This policy results in uniform traffic distribution across all pods in the service. However, <code>kube-proxy</code> will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the service’s leader node.</p>
<p><em><a href="https://metallb.universe.tf/usage/#cluster-traffic-policy" rel="nofollow noreferrer">Metallb.universe.tf: Usage: Cluster traffic policy</a></em></p>
</blockquote>
<hr />
<p>Addressing the 2nd question:</p>
<blockquote>
<p>What are my options for passing incoming external traffic arriving on eth0 to an ingress listening on the eth1 network?</p>
</blockquote>
<p>Metallb listen by default on all interfaces, all you need to do is to specify the address pool from this <code>eth</code> within Metallb config.</p>
<p>You can find more reference on this topic by following:</p>
<ul>
<li><em><a href="https://metallb.universe.tf/faq/#in-layer-2-mode-how-to-specify-the-host-interface-for-an-address-pool" rel="nofollow noreferrer">Metallb.universe.tf: FAQ: In layer 2 mode how to specify the host interface for an address pool</a></em></li>
</ul>
<p>An example of such configuration, could be following:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools: # HERE
- name: my-ip-space
protocol: layer2
addresses:
- 192.168.1.240/28
</code></pre>
| Dawid Kruk |
<p>I have a Kubernetes Deployment, in whose Pods I need a command run periodically.
The Kubernetes <code>CronJob</code> object creates a new Pod.
I would prefer to specify a cron job that runs inside the container of the Pod.
Is there any way I can specify this in the deployment YAML?</p>
<p>I have no access to the Dockerfile, but am using pre-built Images.</p>
| sekthor | <p>You can try something like the <code>deploy.yaml</code> below, where this job runs on the 25th of every month and copies an <code>app.xml</code> file to the <code>/mount</code> directory of the <code>PersistentVolume</code> on the <code>Kubernetes Cluster</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: app-feed-fio
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 100Gi
storageClassName: fileshare
---
apiVersion: v1
kind: ConfigMap
metadata:
name: app-fio-scripts
data:
appupdater.sh: |
#!/bin/bash
set -e
set -x
curl https://example.com/xyz.xml -o /mount/app.xml -v
ls -la /mount
cat /mount/app.xml
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: app-feed-job
spec:
schedule: "00 00 25 * *"
jobTemplate:
spec:
template:
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1500
fsGroup: 1500
# supplementalGroups: [0]
restartPolicy: OnFailure
containers:
- name: appupdater
image: {container registry path}/app-feed-fio:v0.0.0
command: ["/scripts/appupdater.sh"]
resources:
requests:
memory: 100Mi
cpu: 0.1
limits:
memory: 100Mi
cpu: 0.1
volumeMounts:
- mountPath: /mount
name: fileshare
- mountPath: /scripts
name: scripts
volumes:
- name: fileshare
persistentVolumeClaim:
            claimName: app-feed-fio
- name: scripts
configMap:
name: app-fio-scripts
defaultMode: 0777
</code></pre>
<p>And the <code>Dockerfile</code> is as below:</p>
<pre><code>from ubuntu:focal
run apt-get -y update && \
apt-get install -y fio curl && \
apt-get autoremove --purge && \
apt-get clean && \
mkdir /mount && \
groupadd -g 1500 fio && \
useradd -u 1500 -g 1500 fio
user 1500:1500
volume /mount
</code></pre>
| N K Shukla |
<p>I am running a Kubernetes cluster with multiple masters (3 master nodes) behind HAProxy, and I am also using external etcd in this project. For SSL certificate generation I'm using cfssl (CloudFlare).</p>
<p>I created the etcd service on each master node:</p>
<pre><code>[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
--name 192.168.1.21 \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://192.168.1.21:2380 \
--listen-peer-urls https://192.168.1.21:2380 \
--listen-client-urls https://192.168.1.21:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.1.21:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster 192.168.1.21=https://192.168.1.21:2380,192.168.1.22=https://192.168.1.22:2380,192.168.1.23=https://192.168.1.23:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
</code></pre>
<p>and ran <code>kubeadm init</code> with a config file:</p>
<pre><code>kubeadm init --config config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "192.168.1.20:6443"
etcd:
external:
endpoints:
- https://192.168.1.21:2379
- https://192.168.1.22:2379
- https://192.168.1.23:2379
caFile: /etc/etcd/ca.pem
certFile: /etc/etcd/kubernetes.pem
keyFile: /etc/etcd/kubernetes-key.pem
</code></pre>
<p>After that my cluster was ready:</p>
<pre><code>kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master1 Ready master 25h v1.17.2 192.168.1.21 <none> Ubuntu 16.04.6 LTS 4.4.0-173-generic docker://19.3.5
master2 Ready master 25h v1.17.2 192.168.1.22 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://19.3.5
master3 Ready master 25h v1.17.2 192.168.1.23 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://19.3.5
worker1 Ready worker 25h v1.17.2 192.168.1.27 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://19.3.5
worker2 Ready worker 25h v1.17.2 192.168.1.28 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://19.3.5
worker3 Ready worker 25h v1.17.2 192.168.1.29 <none> Ubuntu 16.04.6 LTS 4.4.0-142-generic docker://19.3.5
</code></pre>
<p>After that I tried to apply flannel with this command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
<p>Now you can see my problem below, please help:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-246cj 0/1 ContainerCreating 0 51m
kube-system coredns-6955765f44-xrwh4 0/1 ContainerCreating 0 24h
kube-system coredns-7f85fdfc6b-t7jdr 0/1 ContainerCreating 0 48m
kube-system kube-apiserver-master1 1/1 Running 0 25h
kube-system kube-apiserver-master2 1/1 Running 1 25h
kube-system kube-apiserver-master3 1/1 Running 0 25h
kube-system kube-controller-manager-master1 1/1 Running 0 56m
kube-system kube-controller-manager-master2 1/1 Running 0 25h
kube-system kube-controller-manager-master3 1/1 Running 0 25h
kube-system kube-flannel-ds-amd64-6j6lb 0/1 Error 285 25h
kube-system kube-flannel-ds-amd64-fdbxg 0/1 CrashLoopBackOff 14 25h
kube-system kube-flannel-ds-amd64-mjfjf 0/1 CrashLoopBackOff 286 25h
kube-system kube-flannel-ds-amd64-r46fk 0/1 CrashLoopBackOff 285 25h
kube-system kube-flannel-ds-amd64-t8tfg 0/1 CrashLoopBackOff 284 25h
kube-system kube-proxy-6h6k9 1/1 Running 0 25h
kube-system kube-proxy-cjgmv 1/1 Running 0 25h
kube-system kube-proxy-hblk8 1/1 Running 0 25h
kube-system kube-proxy-wdvc9 1/1 Running 0 25h
kube-system kube-proxy-z48zn 1/1 Running 0 25h
kube-system kube-scheduler-master1 1/1 Running 0 25h
kube-system kube-scheduler-master2 1/1 Running 0 25h
kube-system kube-scheduler-master3 1/1 Running 0 25h
</code></pre>
| Mahdi Khosravi | <p>I understood the mistake: I should add the Pod network range (podSubnet) to my config.yaml.</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- 192.168.1.20
controlPlaneEndpoint: "192.168.1.20:6443"
etcd:
external:
endpoints:
- https://192.168.1.21:2379
- https://192.168.1.22:2379
- https://192.168.1.23:2379
caFile: /etc/etcd/ca.pem
certFile: /etc/etcd/kubernetes.pem
keyFile: /etc/etcd/kubernetes-key.pem
networking:
podSubnet: 10.244.0.0/16
apiServerExtraArgs:
apiserver-count: "3"
</code></pre>
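<p>For reference, when a config file is not used, the same Pod network range can be passed on the command line (a sketch; note that the external etcd settings above still require the config file):</p>
<pre><code>kubeadm init --control-plane-endpoint "192.168.1.20:6443" --pod-network-cidr 10.244.0.0/16
</code></pre>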
| Mahdi Khosravi |
<p>Kubernetes creates one PersistentVolume for each VolumeClaimTemplate definition on an statefulset. That makes each statefulset pod have its own storage that is not shared across the replicas. However, I would like to share the same volume across all the statefulset replicas.</p>
<p>It looks like the approach should be the following:</p>
<ol>
<li>Create a PVC in the same namespace.</li>
<li>On the StatefulSet use Volumes to bind the PVC</li>
<li>Ensure that the PVC is ReadOnlyMany or ReadWriteMany</li>
</ol>
<p>Assuming that my application is able to deal with any concurrency on the shared volume, is there any technical problem if I have one PVC to share the same volume across all statefulset replicas?</p>
| yborgess | <p>I wholeheartedly agree with the comments made by @Jonas and @David Maze:</p>
<blockquote>
<p>You can do this, it should work. There is no need to use volumeClaimTemplates unless your app needs it.</p>
</blockquote>
<blockquote>
<p>Two obvious problems are that ReadWriteMany volumes are actually a little tricky to get (things like AWS EBS volumes are only ReadWriteOnce), and that many things you want to run in StatefulSets (like databases) want exclusive use of their filesystem space and use file locking to enforce this.</p>
</blockquote>
<hr />
<p>Answering on the question:</p>
<blockquote>
<p>Is there any technical problem if I have one PVC to share the same volume across all statefulset replicas?</p>
</blockquote>
<p>I'd say that this would mostly depend on the:</p>
<ul>
<li>How the application would handle such scenario where it's having single PVC (writing concurrency).</li>
<li>Which storage solution are supported by your Kubernetes cluster (or could be implemented).</li>
</ul>
<p>Subjectively speaking, I don't think there should be an issue when above points are acknowledged and aligned with the requirements and configuration that the cluster/applications allows.</p>
<hr />
<p>From the application perspective, there is an inherent lack of information about the software we are talking about. Each application could behave differently and could require different tuning (see the David Maze comment).</p>
<p>We also do not know anything about your infrastructure, so it could be hard to point out potential issues. From the hardware perspective (Kubernetes cluster), this would inherently come down to researching the particular storage solution that you would like to use. It could differ from cloud provider to cloud provider as well as for on-premise solutions. You would need to check the requirements of your app to align it to the options you have.</p>
<p>Continuing on the matter of <code>Volumes</code>, I'd reckon one of the important things would be <code>accessModes</code>.</p>
<p>Citing the official docs:</p>
<blockquote>
<h3>Access Modes</h3>
<p>A PersistentVolume can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.</p>
<p>The access modes are:</p>
<ul>
<li>ReadWriteOnce -- the volume can be mounted as read-write by a single node</li>
<li>ReadOnlyMany -- the volume can be mounted read-only by many nodes</li>
<li>ReadWriteMany -- the volume can be mounted as read-write by many nodes</li>
</ul>
<p>In the CLI, the access modes are abbreviated to:</p>
<ul>
<li>RWO - ReadWriteOnce</li>
<li>ROX - ReadOnlyMany</li>
<li>RWX - ReadWriteMany</li>
</ul>
<hr />
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Storage: Persistent Volumes: Access modes</a></em></li>
</ul>
</blockquote>
<p>One of the issues you can run into is with the <code>ReadWriteOnce</code> when the <code>PVC</code> is mounted to the <code>Node</code> and <code>sts-X</code> (<code>Pod</code>) is scheduled onto a different <code>Node</code> but from the question, I'd reckon you already know about it.</p>
<hr />
<blockquote>
<p>However, I would like to share the same volume across all the statefulset replicas.</p>
</blockquote>
<p>An example of a <code>StatefulSet</code> with a <code>Volume</code> that would be shared across all of the replicas could be following (modified example from Kubernetes documentation):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 3 # by default is 1
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: web
# VOLUME START
volumeMounts:
- name: example-pvc
mountPath: /usr/share/nginx/html
volumes:
- name: example-pvc
persistentVolumeClaim:
claimName: pvc-for-sts
# VOLUME END
</code></pre>
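<p>For completeness, a matching <code>PersistentVolumeClaim</code> sketch for the <code>claimName: pvc-for-sts</code> used above; the <code>storageClassName</code> is an assumption and has to be a class in your cluster that supports <code>ReadWriteMany</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-for-sts
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # assumption: any RWX-capable StorageClass available in your cluster
  resources:
    requests:
      storage: 5Gi
</code></pre>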
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Controllers: Statefulset</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Storage: Persistent Volumes</a></em></li>
</ul>
| Dawid Kruk |
<p>I am using kops for my Kubernetes deployment.</p>
<p>I noticed that whenever an image with the same tag is entered in the deployment file, the system uses the previously pulled image if <code>imagePullPolicy</code> is not set to <code>Always</code>.</p>
<p>Is there any way in which I can see all the cached images of a container in a Kubernetes environment?</p>
<p>Like suppose I have an image <code>test:56</code> currently running in a deployment and <code>test:1</code> to <code>test:55</code> were used previously, so does Kubernetes cache those images? And if yes, where can they be found?</p>
| Shahid ali Khan | <ul>
<li><strong>Comments on your environment:</strong>
<blockquote>
<p>I noticed that whenever an image with the same tag is entered in the deployment file, the system uses the previously pulled image if <code>imagePullPolicy</code> is not set to <code>Always</code>.</p>
</blockquote></li>
</ul>
<p>A <a href="https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images" rel="nofollow noreferrer">pre-pulled image</a> can be used to preload certain images for speed or as an alternative to authenticating to a private registry, optimizing performance.</p>
<p>Docker will always cache all images that were used locally on that node.</p>
<p>Since you are running on AWS with kops, keep in mind that if you have node health management (meaning a node will be replaced if it fails) the new node won't have the images cached from the old one, so it's always a good idea to store your images on a Registry like <a href="https://kubernetes.io/docs/concepts/containers/images/#using-amazon-elastic-container-registry" rel="nofollow noreferrer">your Cloud Provider Registry</a> or a local registry.</p>
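<p>If you want to keep certain images warm on every node instead of relying only on the local cache, a common pattern is a small <code>DaemonSet</code> that pulls them; a sketch (the image name is a placeholder):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull-images
spec:
  selector:
    matchLabels:
      name: prepull-images
  template:
    metadata:
      labels:
        name: prepull-images
    spec:
      initContainers:
        - name: prepull-test
          image: myregistry/test:56      # placeholder: the image you want cached on every node
          command: ["sh", "-c", "true"]  # exit immediately (assumes a shell in the image); the pull is the point
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.2    # tiny container that just keeps the pod alive
</code></pre>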
<ul>
<li><strong>Let's address your first question:</strong>
<blockquote>
<p>Is there any way in which I can see all the cached images of a container in a Kubernetes environment?</p>
</blockquote></li>
</ul>
<p>Yes, you must use <code>docker images</code> to list the images stored in your environment.</p>
<ul>
<li><strong>Second question:</strong>
<blockquote>
<p>Like suppose I have an image <code>test:56</code> currently running in a deployment and <code>test:1</code> to <code>test:55</code> were used previously, so does Kubernetes cache those images? And if yes, where can they be found?</p>
</blockquote></li>
</ul>
<p>I prepared an example for you:</p>
<ul>
<li>I deployed several pods based on the official busybox image:</li>
</ul>
<pre><code>$ kubectl run busy284 --generator=run-pod/v1 --image=busybox:1.28.4
pod/busy284 created
$ kubectl run busy293 --generator=run-pod/v1 --image=busybox:1.29.3
pod/busy293 created
$ kubectl run busy28 --generator=run-pod/v1 --image=busybox:1.28
pod/busy28 created
$ kubectl run busy29 --generator=run-pod/v1 --image=busybox:1.29
pod/busy29 created
$ kubectl run busy30 --generator=run-pod/v1 --image=busybox:1.30
pod/busy30 created
$ kubectl run busybox --generator=run-pod/v1 --image=busybox
pod/busybox created
</code></pre>
<p>Now let's check the images stored in <code>docker images</code></p>
<pre><code>$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.3 ae853e93800d 5 weeks ago 116MB
k8s.gcr.io/kube-controller-manager v1.17.3 b0f1517c1f4b 5 weeks ago 161MB
k8s.gcr.io/kube-apiserver v1.17.3 90d27391b780 5 weeks ago 171MB
k8s.gcr.io/kube-scheduler v1.17.3 d109c0821a2b 5 weeks ago 94.4MB
kubernetesui/dashboard v2.0.0-beta8 eb51a3597525 3 months ago 90.8MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 4 months ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 4 months ago 288MB
kubernetesui/metrics-scraper v1.0.2 3b08661dc379 4 months ago 40.1MB
busybox latest 83aa35aa1c79 10 days ago 1.22MB
busybox 1.30 64f5d945efcc 10 months ago 1.2MB
busybox 1.29 758ec7f3a1ee 15 months ago 1.15MB
busybox 1.29.3 758ec7f3a1ee 15 months ago 1.15MB
busybox 1.28 8c811b4aec35 22 months ago 1.15MB
busybox 1.28.4 8c811b4aec35 22 months ago 1.15MB
</code></pre>
<p>You can see all the pulled images listed.</p>
<p>It's good to clean old resources from your system using the command <code>docker system prune</code> to free space on your server from time to time. </p>
<p>If you have any doubt, let me know in the comments.</p>
| Will R.O.F. |
<p>I have deployed a Prometheus server (2.13.1) on Kubernetes (1.17.3), and I am able to access it at <code>http://my.prom.com:9090</code></p>
<p>But I want to access it at <code>http://my.prom.com:9090/prometheus</code>, so I added the following ingress rules, but it's not working</p>
<p><strong>First Try:</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/app-root: /prometheus
name: approot
namespace: default
spec:
rules:
- host: my.prom.com
http:
paths:
- backend:
serviceName: prometheus-svc
servicePort: 9090
path: /
</code></pre>
<p>This results in a 404 error.</p>
<p><strong>Second Try:</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: rewrite
namespace: default
spec:
rules:
- host: my.prom.com
http:
paths:
- backend:
serviceName: prometheus-svc
servicePort: 9090
path: /prometheus(/|$)(.*)
</code></pre>
<p>Now when I access the URL <code>http://my.prom.com:9090/prometheus</code> in the browser it gets changed to <code>http://my.prom.com:9090/graph</code> and shows a 404 error</p>
| ImranRazaKhan | <p>Prometheus is not aware of what you are trying to achieve and that's why it's redirecting to an unknown destination.</p>
<p>You have to tell Prometheus to accept traffic on the new path, as can be seen <a href="https://www.robustperception.io/external-urls-and-path-prefixes" rel="noreferrer">here</a> and <a href="http://elatov.github.io/2020/02/nginx-ingress-with-alertmanager-and-prometheus/" rel="noreferrer">here</a>.</p>
<p>As highlighted in the second link, you have to include <code>- "--web.route-prefix=/"</code> and <code>- "--web.external-url=http://my.prom.com:9090/prometheus"</code> in your Prometheus deployment.</p>
<blockquote>
<p>Then I had to modify the <strong>prometheus</strong> deployment to accept traffic
on the new <em>path</em> (<strong>/prom</strong>). This was covered in the <a href="https://prometheus.io/docs/guides/basic-auth/#nginx-configuration" rel="noreferrer">Securing
Prometheus API and UI Endpoints Using Basic
Auth</a>
documentation:</p>
</blockquote>
<p>In your env it should look like this:</p>
<pre><code>> grep web deploy.yaml
- "--web.enable-lifecycle"
- "--web.route-prefix=/"
- "--web.external-url=http://my.prom.com:9090/prometheus"
</code></pre>
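<p>For completeness, a sketch of a matching nginx Ingress for that setup; because <code>--web.route-prefix=/</code> is used, the <code>/prometheus</code> prefix is stripped before proxying. The service name is an assumption and the details may need adjusting:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus-prefix
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: my.prom.com
      http:
        paths:
          - path: /prometheus(/|$)(.*)
            backend:
              serviceName: prometheus-svc   # assumption: your Prometheus Service name
              servicePort: 9090
</code></pre>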
| Mark Watney |
<p><strong>General Cluster Information:</strong></p>
<ul>
<li>Kubernetes version: 1.19.13</li>
<li>Cloud being used: private</li>
<li>Installation method: kubeadm init</li>
<li>Host OS: Ubuntu 20.04.1 LTS</li>
<li>CNI and version: Weave Net: 2.7.0</li>
<li>CRI and version: Docker: 19.3.13</li>
</ul>
<p>I am trying to get the <code>kube-prometheus-stack</code> helm chart to work. This seems to work for most targets; however, some targets stay down, as shown in the screenshot below.
<a href="https://i.stack.imgur.com/vznkY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vznkY.png" alt="enter image description here" /></a></p>
<p>Are there any suggestions on how I can get <code>kube-etcd</code>, <code>kube-controller-manager</code> and <code>kube-scheduler</code> monitored by <code>Prometheus</code>?</p>
<p>I deployed the helm chart as mentioned <a href="https://docs.nvidia.com/datacenter/cloud-native/kubernetes/dcgme2e.html#setting-up-dcgm" rel="noreferrer">here</a> and applied the suggestion <a href="https://github.com/helm/charts/issues/16476#issuecomment-528681476" rel="noreferrer">here</a> to get the kube-proxy monitored by <code>Prometheus</code>.</p>
<p><strong>Thanks in advance for any help!</strong></p>
<p>EDIT 1:</p>
<pre class="lang-yaml prettyprint-override"><code>- job_name: monitoring/my-stack-kube-prometheus-s-kube-controller-manager/0
honor_timestamps: true
scrape_interval: 30s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_label_app]
separator: ;
regex: kube-prometheus-stack-kube-controller-manager
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_service_label_release]
separator: ;
regex: my-stack
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: http-metrics
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Node;(.*)
target_label: node
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Pod;(.*)
target_label: pod
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: service
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_container_name]
separator: ;
regex: (.*)
target_label: container
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: job
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_service_label_jobLabel]
separator: ;
regex: (.+)
target_label: job
replacement: ${1}
action: replace
- separator: ;
regex: (.*)
target_label: endpoint
replacement: http-metrics
action: replace
- source_labels: [__address__]
separator: ;
regex: (.*)
modulus: 1
target_label: __tmp_hash
replacement: $1
action: hashmod
- source_labels: [__tmp_hash]
separator: ;
regex: "0"
replacement: $1
action: keep
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- kube-system
- job_name: monitoring/my-stack-kube-prometheus-s-kube-etcd/0
honor_timestamps: true
scrape_interval: 30s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_label_app]
separator: ;
regex: kube-prometheus-stack-kube-etcd
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_service_label_release]
separator: ;
regex: my-stack
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: http-metrics
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Node;(.*)
target_label: node
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Pod;(.*)
target_label: pod
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: service
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_container_name]
separator: ;
regex: (.*)
target_label: container
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: job
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_service_label_jobLabel]
separator: ;
regex: (.+)
target_label: job
replacement: ${1}
action: replace
- separator: ;
regex: (.*)
target_label: endpoint
replacement: http-metrics
action: replace
- source_labels: [__address__]
separator: ;
regex: (.*)
modulus: 1
target_label: __tmp_hash
replacement: $1
action: hashmod
- source_labels: [__tmp_hash]
separator: ;
regex: "0"
replacement: $1
action: keep
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- kube-system
- job_name: monitoring/my-stack-kube-prometheus-s-kube-scheduler/0
honor_timestamps: true
scrape_interval: 30s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_label_app]
separator: ;
regex: kube-prometheus-stack-kube-scheduler
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_service_label_release]
separator: ;
regex: my-stack
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: http-metrics
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Node;(.*)
target_label: node
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Pod;(.*)
target_label: pod
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: service
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_container_name]
separator: ;
regex: (.*)
target_label: container
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: job
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_service_label_jobLabel]
separator: ;
regex: (.+)
target_label: job
replacement: ${1}
action: replace
- separator: ;
regex: (.*)
target_label: endpoint
replacement: http-metrics
action: replace
- source_labels: [__address__]
separator: ;
regex: (.*)
modulus: 1
target_label: __tmp_hash
replacement: $1
action: hashmod
- source_labels: [__tmp_hash]
separator: ;
regex: "0"
replacement: $1
action: keep
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- kube-system
</code></pre>
| skynet1010 | <p>This is because <code>Prometheus</code> is scraping the wrong endpoints of those targets and/or the targets don't expose a metrics endpoint.</p>
<p>Take <code>controller-manager</code> for example:</p>
<ol>
<li>Change bind-address (default: 127.0.0.1):</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>$ sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
containers:
- command:
- kube-controller-manager
...
- --bind-address=<your control-plane IP or 0.0.0.0>
...
</code></pre>
<p>If you are using control-plane IP, you need to change <code>livenessProbe</code> and <code>startupProbe</code> host, too.</p>
<ol start="2">
<li>Change endpoint(service) port (default: 10252):</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>$ kubectl edit service prometheus-kube-prometheus-kube-controller-manager -n kube-system
apiVersion: v1
kind: Service
...
spec:
clusterIP: None
ports:
- name: http-metrics
port: 10257
protocol: TCP
targetPort: 10257
...
</code></pre>
<ol start="3">
<li>Change servicemonitor scheme (default: http):</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>$ kubectl edit servicemonitor prometheus-kube-prometheus-kube-controller-manager -n prometheus
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
...
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
jobLabel: jobLabel
...
</code></pre>
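<p>The same pattern applies to <code>kube-scheduler</code> (bind-address plus the https port, 10259 on recent versions). For <code>etcd</code>, one option (a sketch, not the only way) is to expose its plain-HTTP metrics listener, by adding a flag to however etcd is started (static Pod manifest or, for an external etcd, its systemd unit), and point the endpoint at it:</p>
<pre><code># add to the etcd startup flags (static Pod manifest or systemd unit)
--listen-metrics-urls=http://0.0.0.0:2381

# then adjust the endpoint the chart created (the service name depends on your release):
$ kubectl edit service my-stack-kube-prometheus-s-kube-etcd -n kube-system
# change the http-metrics port/targetPort from 2379 to 2381 and keep the scheme http
</code></pre>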
| Cary |
<p>I have created a ClusterIP service according to configuration files below, however I can't seem to get the URL from minikube for that service</p>
<pre class="lang-sh prettyprint-override"><code>k create -f service-cluster-definition.yaml
➜ minikube service myapp-frontend --url
😿 service default/myapp-frontend has no node port
</code></pre>
<p>And if I try to add NodePort into the <strong>ports</strong> section of service-cluster-definition.yaml it complains with an error that such a key is deprecated.</p>
<h3>What am I missing or doing wrong?</h3>
<p><strong>service-cluster-definition.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-frontend
spec:
type: ClusterIP
ports:
- targetPort: 80
port: 80
selector:
app: myapp
type: etl
</code></pre>
<p><strong>deployment-definition.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deployment
labels:
app: myapp
env: experiment
type: etl
spec:
template:
metadata:
name: myapp-pod
labels:
app: myapp
env: experiment
type: etl
spec:
containers:
- name: nginx-container
image: nginx:1.7.1
replicas: 3
selector:
matchLabels:
type: etl
</code></pre>
<pre><code>➜ k get pods --selector="app=myapp,type=etl" -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-deployment-59856c4487-2g9c7 1/1 Running 0 45m 172.17.0.9 minikube <none> <none>
myapp-deployment-59856c4487-mb28z 1/1 Running 0 45m 172.17.0.4 minikube <none> <none>
myapp-deployment-59856c4487-sqxqg 1/1 Running 0 45m 172.17.0.8 minikube <none> <none>
(⎈ |minikube:default)
Projects/experiments/kubernetes
➜ k version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
(⎈ |minikube:default)
</code></pre>
| DmitrySemenov | <h3>First let's clear some concepts from <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="noreferrer">Documentation</a>:</h3>
<blockquote>
<ul>
<li><p>ClusterIP: Exposes the Service on a cluster-internal IP.
Choosing this value <strong>makes the Service only reachable from within</strong> the cluster.</p></li>
<li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer">NodePort</a>: Exposes the Service on each Node’s IP at a static port (the NodePort).
You’ll be able to contact the NodePort Service, <strong>from outside the cluster</strong>, by requesting <code>NodeIP:NodePort</code>.</p></li>
</ul>
</blockquote>
<hr>
<p><strong>Question 1:</strong></p>
<blockquote>
<p>I have created a ClusterIP service according to configuration files below, however I can't seem to get the URL from minikube for that service.</p>
</blockquote>
<ul>
<li>Since Minikube is a virtualized environment on a single host we tend to forget that the cluster is isolated from the host computer. If you set a service as <code>ClusterIP</code>, Minikube will not give external access.</li>
</ul>
<p><strong>Question 2:</strong></p>
<blockquote>
<p>And if I try to add NodePort into the <strong>ports</strong> section of <em>service-cluster-definition.yaml</em> it complains with an error that such a key is deprecated.</p>
</blockquote>
<ul>
<li>Maybe you were pasting it in the wrong position. You should just replace the field <code>type: ClusterIP</code> with <code>type: NodePort</code>. Here is the correct form of your yaml:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: myapp-frontend
spec:
type: NodePort
ports:
- targetPort: 80
port: 80
selector:
app: myapp
type: etl
</code></pre>
<hr>
<p><strong>Reproduction:</strong></p>
<pre><code>user@minikube:~$ kubectl apply -f deployment-definition.yaml
deployment.apps/myapp-deployment created
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-deployment-59856c4487-7dw6x 1/1 Running 0 5m11s
myapp-deployment-59856c4487-th7ff 1/1 Running 0 5m11s
myapp-deployment-59856c4487-zvm5f 1/1 Running 0 5m11s
user@minikube:~$ kubectl apply -f service-cluster-definition.yaml
service/myapp-frontend created
user@minikube:~$ kubectl get service myapp-frontend
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-frontend NodePort 10.101.156.113 <none> 80:32420/TCP 3m43s
user@minikube:~$ minikube service list
|-------------|----------------|-----------------------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|----------------|-----------------------------|-----|
| default | kubernetes | No node port | |
| default | myapp-frontend | http://192.168.39.219:32420 | |
| kube-system | kube-dns | No node port | |
|-------------|----------------|-----------------------------|-----|
user@minikube:~$ minikube service myapp-frontend --url
http://192.168.39.219:32420
user@minikube:~$ curl http://192.168.39.219:32420
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...{{output suppressed}}...
</code></pre>
<ul>
<li>As you can see, with the service set as <code>NodePort</code>, Minikube starts serving the app at <code>MinikubeIP:NodePort</code>, routing the connection to the matching pods.
<ul>
<li>Note that the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer">NodePort</a> will be chosen by default from the range 30000-32767</li>
</ul></li>
</ul>
<p>If you have any question let me know in the comments.</p>
| Will R.O.F. |
<p>I have access to only one namespace inside the cluster and that too is restricted.</p>
<pre><code>kind: Role
kind: ClusterRole
kind: RoleBinding
kind: ClusterRoleBinding
</code></pre>
<p>are forbidden to me, so I'm not able to create the Kubernetes dashboard as per the recommended YAML.</p>
<p>How to get around this?</p>
| ss301 | <p>It's not possible to achieve this unless you ask someone with sufficient rights to create, on your behalf, the objects that you cannot.</p>
<p><a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml" rel="nofollow noreferrer">Here</a> is a sample manifest used to apply the dashboard to a cluster. As you can see you have to be able to manage Role, ClusterRole, RoleBinding and ClusterRoleBinding to apply it. </p>
<p>So it's impossible to create it with the rights you have as they are essential in this case. </p>
<p>Here is the part affected by lack of your rights: </p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
</code></pre>
| Mark Watney |
<p>Is it possible to change the IP of the Ingress?</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
spec:
defaultBackend:
service:
name: test-reg
port:
number: 5000
</code></pre>
<p>Right now the .23 address is assigned automatically, but I would like to change it and keep it static:</p>
<pre><code>[ciuffoly@master-node ~]$ kubectl get ingresses
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress <none> * 192.168.1.23 80 5m9s
</code></pre>
| andrea ciuffoli | <p>The possibility of "changing the IP address" will heavily depend on the Kubernetes solution used and how <code>Ingress</code> is handled within the cluster.</p>
<p>Given the lack of information about the setup, it could be hard to point in the right direction. I've decided to post a community wiki answer to give more of a baseline idea and point to where to look for a possible answer.</p>
<p>Feel free to expand it.</p>
<hr />
<p>If the Kubernetes cluster is managed (<code>GKE</code>, <code>EKS</code>, <code>AKS</code>) and you are using <code>Ingress</code> that is managed by your cloud provider, the best option would be to refer to its documentation on IP allocation. Some providers through <code>Ingress</code> annotations allow you to assign a specific IP address that was previously allocated (in <code>Web UI</code> for example).</p>
<p>If the Kubernetes cluster is provisioned on premise (<code>kubespray</code> or <code>kubeadm</code> or the <code>hard-way</code>) or it's something like <code>minikube</code>, <code>microk8s</code> or a <code>Docker Desktop</code> with Kubernetes it will still be the best course of action to check how it handles <code>Ingress</code> resource. There will be differences for example between the <code>kubespray</code> cluster with <code>metallb</code> and the <code>microk8s</code>.</p>
<p>What I mean by that is for the most part the <code>Ingress</code> resource is handled by an <code>Ingress controller</code> (a <code>Pod</code> or set of <code>Pods</code> inside the cluster). Depending on the solution used, it in most cases work like following:</p>
<ul>
<li>Resources are created for <code>Ingress</code> controller (<code>ClusterRoles</code>, <code>ValidatingWebhookConfiguration</code>, <code>RoleBindings</code>, etc.)</li>
<li><code>Service</code> of specific type (depending on the solution) is created to point to the <code>Deployment</code>/<code>Daemonset</code> that is an <code>Ingress controller</code> (like <code>nginx-ingress</code>, <code>kong</code>, etc).</li>
<li><code>Deployment</code>/<code>Daemonset</code> of an <code>Ingress controller</code> is created to receive the traffic and send it to <code>Services</code> that are specified in the <code>Ingress</code> resource.</li>
</ul>
<p><strong>For changing the IP address you would need to look specifically at how this <code>Ingress controller</code> is exposed to external sources and whether/how you can alter its configuration.</strong></p>
<p>There will also be inherent differences between on-premise clusters created with <code>kubeadm</code>, <code>kubespray</code> and local development solutions like <code>minikube</code> and <code>Docker Desktop</code> with Kubernetes.</p>
<p>From my experience I've encountered <code>Ingress controllers</code> that were exposed either via:</p>
<ul>
<li><code>Service</code> of type <code>LoadBalancer</code> - assuming that your setup allows for such <code>Service</code> you could use <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">metallb</a></li>
<li><code>HostNetwork</code> (binding to the port on a <code>Node</code>, hence using it's IP address)</li>
</ul>
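<p>As a concrete illustration for the first option: with MetalLB (or another load-balancer implementation) in place, the controller's <code>Service</code> can request a fixed address via <code>spec.loadBalancerIP</code>. A sketch, where the names and selector are assumptions and the IP must fall inside the configured address pool:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.23   # must be within the MetalLB address pool
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumption: adjust to your controller's labels
  ports:
    - name: http
      port: 80
      targetPort: 80
</code></pre>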
<hr />
<p>Additional reference:</p>
<ul>
<li><em><a href="https://www.ibm.com/cloud/blog/kubernetes-ingress" rel="nofollow noreferrer">Ibm.com: Cloud: Blog: Kubernetes Ingress</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
</ul>
| Dawid Kruk |
<p>I would like to know if port-forward blocks other requests to the pod, like Redis, Mongo, etc.</p>
<p>If I use the kubectl port-forward command with the Mongo port, for example, will the Mongo server not receive the data?</p>
| BruceOverflow | <p>As per your comment, <code>kubectl port-forward</code> does not block any exposed traffic to the target pod. So adding portforward won't affect others.</p>
<p>What <code>portforward</code> does is simply making a specific request to the API server. (see <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#create-connect-portforward-pod-v1-core" rel="nofollow noreferrer">doc</a>)</p>
<p>Going further, I don't think <code>port-forward</code> makes the pod more "dangerous" (more vulnerable security-wise); however, it is generally used for debugging to reach into the pod, not to expose a service in the pod. Use a <code>NodePort</code> service for production.
Plus, <code>port-forward</code> has a default <a href="https://stackoverflow.com/questions/47484312/kubectl-port-forwarding-timeout-issue">timeout setting</a> in kubelet.</p>
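<p>For illustration, a couple of hedged examples (the pod name <code>mongo-0</code> and the port are assumptions):</p>
<pre><code># Debugging only: forwards local port 27017 to the pod; other clients keep
# talking to the pod through its Service as usual
kubectl port-forward pod/mongo-0 27017:27017

# Production-style exposure instead: a NodePort Service
# (requires the pod to carry labels the Service can select on)
kubectl expose pod mongo-0 --port=27017 --type=NodePort --name=mongo-nodeport
</code></pre>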
| dropyourcoffee |
<p>I am dealing with CRDs and creating Custom resources. I need to keep lots of information about my application in the Custom resource. As per the official doc, etcd works with request up to 1.5MB. I am hitting errors something like</p>
<blockquote>
<p>"error": "Request entity too large: limit is 3145728"</p>
</blockquote>
<p>I believe the specified limit in the error is 3MB. Any thoughts around this? Any way out for this problem?</p>
| Yudi | <ul>
<li>The <code>"error": "Request entity too large: limit is 3145728"</code> is probably the default response from kubernetes handler for objects larger than 3MB, as you can see <a href="https://github.com/kubernetes/kubernetes/blob/db1990f48b92d603f469c1c89e2ad36da1b74846/test/integration/master/synthetic_master_test.go#L315" rel="noreferrer">here at L305</a> of the source code:</li>
</ul>
<pre><code>expectedMsgFor1MB := `etcdserver: request is too large`
expectedMsgFor2MB := `rpc error: code = ResourceExhausted desc = trying to send message larger than max`
expectedMsgFor3MB := `Request entity too large: limit is 3145728`
expectedMsgForLargeAnnotation := `metadata.annotations: Too long: must have at most 262144 bytes`
</code></pre>
<ul>
<li><p>The <a href="https://github.com/etcd-io/etcd/issues/9925" rel="noreferrer">ETCD</a> has indeed a 1.5MB limit for processing a file and you will find on <a href="https://github.com/etcd-io/etcd/blob/master/Documentation/dev-guide/limit.md#request-size-limit" rel="noreferrer">ETCD Documentation</a> a suggestion to try the<code>--max-request-bytes</code> flag but it would have no effect on a GKE cluster because you don't have such permission on master node.</p></li>
<li><p>But even if you did, it would not be ideal because usually this error means that you are <a href="https://github.com/kubeflow/pipelines/issues/3134#issuecomment-591278230" rel="noreferrer">consuming the objects</a> instead of referencing them which would degrade your performance.</p></li>
</ul>
<p>I highly recommend that you consider instead these options:</p>
<ul>
<li><strong>Determine whether your object includes references that aren't used;</strong></li>
<li><strong>Break up your resource;</strong></li>
<li><strong>Consider a volume mount instead;</strong></li>
</ul>
<hr>
<p>There's a request for a <a href="https://github.com/kubernetes/kubernetes/issues/88709" rel="noreferrer">new API Resource: File (orBinaryData)</a> that could apply to your case. It's very fresh but it's good to keep an eye on.</p>
<p>If you still need help let me know.</p>
| Will R.O.F. |
<p>I am mounting a k8s secret as a volume mount, and the files in the pod have the wrong permissions. </p>
<p>In my <code>Deployment</code> I have this entry in the <code>volumes</code> array: </p>
<pre><code> - name: ssh-host-keys
secret:
secretName: ftp-ssh-host-keys
defaultMode: 0600
</code></pre>
<p>which is then mounted like this:</p>
<pre><code> - mountPath: /etc/ssh/ssh_host_rsa_key
name: ssh-host-keys
subPath: ssh_host_rsa_key
readOnly: true
</code></pre>
<p>However, when I look at the files in the <code>Pod</code> the file permissions are incorrect:</p>
<pre><code>rw-r--r-- 1 root root 553122 Aug 21 2018 moduli
-rw-r--r-- 1 root root 1723 Aug 21 2018 ssh_config
-rw-r----- 1 root 1337 410 May 11 10:33 ssh_host_ed25519_key
-rw-r----- 1 root 1337 3242 May 11 10:33 ssh_host_rsa_key
-rw-r--r-- 1 root 1337 465 May 11 10:33 sshd_config
</code></pre>
<p>i.e. the keys have permissions 0644 instead of 0600.</p>
<p>I don't know why this might be happening.</p>
| the_witch_king_of_angmar | <p>According to the <a href="https://kubernetes.io/docs/concepts/configuration/secret/#secret-files-permissions" rel="noreferrer">documentation</a>, owing to JSON limitations, you must specify the mode in decimal notation.</p>
<p>Look to the example provided in the documentation: </p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
volumes:
- name: foo
secret:
secretName: mysecret
defaultMode: 256
</code></pre>
<p>256 in decimal is equivalent to 0400 in octal. In your specific case, you should use <code>defaultMode: 384</code> (0600 in octal) to get the desired permissions.</p>
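<p>For example, applied to the manifest from the question, the volume entry would look something like this (a sketch — only the <code>defaultMode</code> value changes):</p>
<pre class="lang-yaml prettyprint-override"><code>  volumes:
  - name: ssh-host-keys
    secret:
      secretName: ftp-ssh-host-keys
      defaultMode: 384   # decimal notation for octal 0600
</code></pre>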
<p>You can convert octal permissions <a href="https://www.rapidtables.com/convert/number/octal-to-decimal.html" rel="noreferrer">here</a>. </p>
| Mark Watney |
<p>I am using golang lib <a href="https://github.com/kubernetes/client-go" rel="noreferrer">client-go</a> to connect to a running local kubrenets. To start with I took code from the example: <a href="https://github.com/kubernetes/client-go/tree/master/examples/out-of-cluster-client-configuration" rel="noreferrer">out-of-cluster-client-configuration</a>.</p>
<p>Running a code like this:
<code>$ KUBERNETES_SERVICE_HOST=localhost KUBERNETES_SERVICE_PORT=6443 go run ./main.go</code> results in following error:</p>
<pre><code>panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
goroutine 1 [running]:
/var/run/secrets/kubernetes.io/serviceaccount/
</code></pre>
<p>I am not quite sure which part of configuration I am missing. I've researched following links :</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins</a></p></li>
<li><p><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/</a></p></li>
</ul>
<p>But with no luck.
I guess I need to either let the client-go know which token/serviceAccount to use, or configure kubectl in a way that everyone can connect to its api.</p>
<p>Here's status of my kubectl though some commands results:</p>
<p><strong>$ kubectl config view</strong></p>
<pre><code>apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://localhost:6443
name: docker-for-desktop-cluster
contexts:
- context:
cluster: docker-for-desktop-cluster
user: docker-for-desktop
name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
</code></pre>
<p><strong>$ kubectl get serviceAccounts</strong></p>
<pre><code>NAME SECRETS AGE
default 1 3d
test-user 1 1d
</code></pre>
<p><strong>$ kubectl describe serviceaccount test-user</strong></p>
<pre><code>Name: test-user
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: test-user-token-hxcsk
Tokens: test-user-token-hxcsk
Events: <none>
</code></pre>
<p><strong>$ kubectl get secret test-user-token-hxcsk -o yaml</strong></p>
<pre><code>apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0......=
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVXpJMU5pSX......=
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: test-user
kubernetes.io/service-account.uid: 984b359a-6bd3-11e8-8600-XXXXXXX
creationTimestamp: 2018-06-09T10:55:17Z
name: test-user-token-hxcsk
namespace: default
resourceVersion: "110618"
selfLink: /api/v1/namespaces/default/secrets/test-user-token-hxcsk
uid: 98550de5-6bd3-11e8-8600-XXXXXX
type: kubernetes.io/service-account-token
</code></pre>
| shershen | <p>This answer could be a little outdated but I will try to give more perspective/baseline for future readers that encounter the same/similar problem.</p>
<p><strong>TL;DR</strong></p>
<p>The following error:</p>
<pre class="lang-golang prettyprint-override"><code>panic: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
</code></pre>
<p>is most likely connected with the lack of <code>token</code> in the <code>/var/run/secrets/kubernetes.io/serviceaccount</code> location when using <strong><code>in-cluster-client-configuration</code></strong>. Also, it could be related to the fact of using <strong><code>in-cluster-client-configuration</code></strong> code outside of the cluster (for example running this code directly on a laptop or in pure Docker container).</p>
<p>You can check following commands to troubleshoot your issue further (assuming this code is running inside a <code>Pod</code>):</p>
<ul>
<li><code>$ kubectl get serviceaccount X -o yaml</code>:
<ul>
<li>look for: <code>automountServiceAccountToken: false</code></li>
</ul>
</li>
<li><code>$ kubectl describe pod XYZ</code>
<ul>
<li>look for: <code>containers.mounts</code> and <code>volumeMounts</code> where <code>Secret</code> is mounted</li>
</ul>
</li>
</ul>
<p>Citing the official documentation:</p>
<blockquote>
<h3>Authenticating inside the cluster</h3>
<p>This example shows you how to configure a client with client-go to authenticate to the Kubernetes API from an application running <strong>inside the Kubernetes cluster.</strong></p>
<p>client-go uses the <a href="https://kubernetes.io/docs/admin/authentication/#service-account-tokens" rel="nofollow noreferrer">Service Account token</a> <strong>mounted inside the Pod</strong> at the <code>/var/run/secrets/kubernetes.io/serviceaccount</code> path when the <code>rest.InClusterConfig()</code> is used.</p>
<p>-- <em><a href="https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration#authenticating-inside-the-cluster" rel="nofollow noreferrer">Github.com: Kubernetes: client-go: Examples: in cluster client configuration</a></em></p>
</blockquote>
<p>If you are authenticating to the Kubernetes API with <code>~/.kube/config</code> you should be using the <strong><code>out-of-cluster-client-configuration</code></strong>.</p>
<hr />
<h3>Additional information:</h3>
<p>I've added additional information for more reference on further troubleshooting when the code is run inside of a <code>Pod</code>.</p>
<ul>
<li><code>automountServiceAccountToken: false</code></li>
</ul>
<blockquote>
<p>In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: go-serviceaccount
automountServiceAccountToken: false
</code></pre>
<p>In version 1.6+, you can also opt out of automounting API credentials for a particular pod:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: sdk
spec:
serviceAccountName: go-serviceaccount
automountServiceAccountToken: false
</code></pre>
<p>-- <em><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Configure pod container: Configure service account</a></em></p>
</blockquote>
<ul>
<li><code>$ kubectl describe pod XYZ</code>:</li>
</ul>
<p>When the <code>serviceAccount</code> token is mounted, the <code>Pod</code> definition should look like this:</p>
<pre><code><-- OMITTED -->
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from go-serviceaccount-token-4rst8 (ro)
<-- OMITTED -->
Volumes:
go-serviceaccount-token-4rst8:
Type: Secret (a volume populated by a Secret)
SecretName: go-serviceaccount-token-4rst8
Optional: false
</code></pre>
<p>If it's not:</p>
<pre><code><-- OMITTED -->
Mounts: <none>
<-- OMITTED -->
Volumes: <none>
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Access authn authz: Authentication</a></em></li>
</ul>
| Dawid Kruk |
<p>When I try to run any kubectl command including kubectl version, I get a pop-up saying "This app can't run on your PC, To find a version for your PC, check with the software publisher" when this is closed, the terminal shows "access denied"</p>
<p>The weird thing is, when I run the "kubectl version" command in the directory where I have downloaded kubectl.exe, it works fine.</p>
<p>I have even added this path to my PATH variables.</p>
| Vineet Kekatpure | <p>thank you for the answer, <a href="https://stackoverflow.com/a/74862706/12332734">@rally</a></p>
<p>Apparently, on my machine, it was an issue of administrative rights during installation. My workplace's IT team added the required permission and it worked for me.</p>
<p>Adding this answer here so that if anyone else comes across this problem they can try this solution as well.</p>
| Vineet Kekatpure |
<p>Hey I am new to kubernetes and playing around with jenkins deployment. I have deployed jenkins master pod through the <code>deployment.yaml</code> as well service and <code>pvc.yaml</code>. </p>
<p>I set the service as NodePort, but how do I secure and manage the Jenkins secret? Do I need to create some sort of ConfigMap for this? I usually get the Jenkins secret from kubectl logs. Any help or suggestion to make this more secure will be greatly appreciated :) </p>
| jahmedcode | <p><strong>First let's clear some concepts and background:</strong></p>
<p>Since you are new to kubernetes, I'll help you understand the scenario better and give you suggestions to achieve your goal.</p>
<blockquote>
<p>A <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#understanding-configmaps-and-pods" rel="nofollow noreferrer">ConfigMap</a> stores configuration data as key-value pairs. ConfigMap is <strong>similar</strong> to Secrets, but provides a means of working with strings that <strong>don’t contain</strong> sensitive information.</p>
</blockquote>
<p><em>I'm posting the description of ConfigMap to help you understand that it's powerful for handling data, but it's not suitable for storing sensitive information, hence it will not be mentioned below.</em></p>
<blockquote>
<p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Kubernetes Secrets</a> lets you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.</p>
</blockquote>
<p><em>Natively, Kubernetes uses Secrets to handle objects that store sensitive data; access is authenticated by the Kubernetes API, keeping them safe from external access unless the caller has valid credentials for cluster administration.</em></p>
<p>By default Jenkins stores the password in <strong>Secrets</strong>.</p>
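<p>As a rough sketch of what that extraction usually looks like — the secret name follows your Helm release name and the key can differ between chart versions, so treat both as assumptions:</p>
<pre class="lang-sh prettyprint-override"><code># assumes a release named "jenkins" and the chart's default secret layout
$ kubectl get secret jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode
</code></pre>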
<hr>
<p><strong>Deployment:</strong></p>
<ul>
<li><a href="https://opensource.com/article/19/6/jenkins-admin-password-helm-kubernetes" rel="nofollow noreferrer">This is the 101 guide to deploy Jenkins on Kubernetes</a>
<ul>
<li>It will show you the best way to extract the admin password from the secret.</li>
<li>It will show you how to access Jenkins UI.</li>
<li>The deployment is automated with <a href="https://helm.sh/docs/intro/quickstart/" rel="nofollow noreferrer">Helm</a> which is a powerful tool on Kubernetes. </li>
</ul></li>
</ul>
<hr>
<p><strong>Addressing your Questions:</strong></p>
<blockquote>
<p>how do I secure and manage jenkins secret?</p>
</blockquote>
<ul>
<li>The Jenkins secret is secured by Kubernetes credentials; only those who have access to the cluster can extract it, so it's relatively safe by default.
<ul>
<li>You can learn how Kubernetes manages authentication <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Here</a>.</li>
<li>Using this approach you can manage your users and password from Jenkins UI. You can learn about Jenkins Credentials <a href="https://jenkins.io/doc/book/using/using-credentials/" rel="nofollow noreferrer">here</a>.</li>
</ul></li>
</ul>
<blockquote>
<p>Any help or suggestion will be greatly appreciated to make this more secure</p>
</blockquote>
<ul>
<li>After gaining some experience with Jenkins and Kubernetes, and as your setup grows, you can take several steps to enhance your overall security. So far we have relied on the native security tools, but you can consider new approaches for distributed authentication. Here are some guides where you can learn more:
<ul>
<li><a href="https://www.cyberark.com/threat-research-blog/configuring-and-securing-credentials-in-jenkins/" rel="nofollow noreferrer">Configuring and Securing Credentials in Jenkins</a></li>
<li><a href="https://jenkins.io/doc/book/using/using-credentials/" rel="nofollow noreferrer">Official Jenkins Credentials Documentation</a></li>
<li><a href="https://wiki.jenkins.io/display/JENKINS/OAuth+Credentials" rel="nofollow noreferrer">Using OAuth Credentials</a></li>
<li><a href="https://joostvdg.github.io/blogs/kubernetes-sso-keycloak/" rel="nofollow noreferrer">Third-party SSO Login with KeyCloak</a></li>
</ul></li>
</ul>
<p>If I can help you further let me know in the comments!</p>
| Will R.O.F. |
<p>I am new to Kubernetes. I followed <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">Kubernetes the hard way</a> from Kelsey Hightower and also <a href="https://github.com/ivanfioravanti/kubernetes-the-hard-way-on-azure" rel="nofollow noreferrer">this</a> to set up Kubernetes in Azure. Now all the services are up and running fine. But I am not able to expose the traffic using Load balancer. I tried to add a <code>Service</code> object of type <code>LoadBalancer</code> but the external IP is showing as <code><pending></code>. I need to add ingress to expose the traffic.</p>
<p>nginx-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-service
name: nginx-service
spec:
type: LoadBalancer
externalIPs:
- <ip>
ports:
- name: "80"
port: 80
targetPort: 80
- name: "443"
port: 443
targetPort: 443
selector:
app: nginx-service
</code></pre>
<p>Thank you,</p>
| Manoj Kumar Maharana | <p>By default, the solution proposed by Kubernetes The Hard Way doesn't include a solution for LoadBalancer. The fact it's pending forever is expected behavior. You need to use out-of-the box solution for that. A very commonly used is <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>.</p>
<p>MetalLB isn't going to allocate an External IP for you, it will allocate a internal IP inside our VPC and you have to create the necessary routing rules to route traffic to this IP. </p>
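<p>As a rough sketch, a layer 2 address pool in MetalLB's older, ConfigMap-based configuration looks roughly like this — the address range is an assumption and must be a free range inside your own VPC subnet (newer MetalLB releases configure this through CRDs instead):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.10.240-192.168.10.250   # assumed free range in your subnet
</code></pre>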
| Mark Watney |
<p>I have a Vue.js application, and my deployment setup is very standard,</p>
<blockquote>
<p>Pod -> Service -> Ingress</p>
</blockquote>
<p>Here's the related code,</p>
<p><strong>Dockerfile</strong>:</p>
<pre><code>FROM node:lts-alpine AS build
WORKDIR /app
COPY . .
# Default build mode
ARG MODE=production
# Build the dist folder
RUN npm ci
RUN npm run build ${MODE}
# Serve from nginx
FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY --from=build /app/dist /usr/share/nginx/html
</code></pre>
<p><strong>Nginx.conf</strong>:</p>
<pre><code>user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 8080;
location / {
root /usr/share/nginx/html;
index index.html;
try_files $uri $uri/ /index.html;
}
}
}
</code></pre>
<p><strong>Ingress Prod</strong>: (kept only the necessary bits for brevity sakes),</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-redirect: "true"
labels:
app: <my-app>
app.kubernetes.io/instance: <my-instance>
name: <my-name>
namespace: <my-namespace>
spec:
rules:
- host: <my-host>
http:
paths:
- backend:
serviceName: livspace-hub
servicePort: 80
path: /
</code></pre>
<p><strong>Ingress local</strong>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: <my-app-name>
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: <my-host>
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hub-service
port:
number: 9090
</code></pre>
<p>The error I get is,</p>
<blockquote>
<p>Uncaught SyntaxError: Unexpected token '<' chunk-vendors.a727ce10.js:1</p>
<p>Uncaught SyntaxError: Unexpected token '<' app.a68a0468.js:1</p>
</blockquote>
<p>And the content-type for both these resources in the network tab is <code>text/html</code>.</p>
<p><strong>Edit 1:</strong></p>
<p>This is what my folder looks like after deployment,</p>
<pre><code>/usr/share/nginx/html # ls
50x.html assets css favicon.ico fonts index.html js styles
</code></pre>
<p>The path for my js file is,</p>
<p><code>https://<my-domain>/js/app.a68a0468.js</code></p>
<p><strong>Edit 2:</strong></p>
<p>Here are the logs from my local vs deployed application.</p>
<p><strong>Local</strong>:</p>
<blockquote>
<p> - - [12/Apr/2021:10:38:18 +0000] "GET / HTTP/1.1" 200 3213 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36 Edg/89.0.774.75"</p>
<p> - - [12/Apr/2021:10:38:18 +0000] "GET /css/milestone.1c126aff.css HTTP/1.1" 200 1139 "http:///" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36 Edg/89.0.774.75"</p>
<p> - - [12/Apr/2021:10:38:18 +0000] "GET /css/catalogue.5794c500.css HTTP/1.1" 200 156 "http:///" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36 Edg/89.0.774.75"</p>
</blockquote>
<p><strong>Deployed</strong>:</p>
<blockquote>
<p> - - [12/Apr/2021:12:46:28 +0000] "GET / HTTP/1.1" 200 3213 "https:///" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.54"</p>
<p> - - [12/Apr/2021:12:46:28 +0000] "GET / HTTP/1.1" 200 3213 "https:///" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.54"</p>
<p> - - [12/Apr/2021:12:46:28 +0000] "GET / HTTP/1.1" 200 3213 "https:///" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.54"</p>
<p> - - [12/Apr/2021:12:46:28 +0000] "GET / HTTP/1.1" 200 3213 "https:///" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.54"</p>
</blockquote>
<p>My local instance is also run via docker/docker-compose, so the setup is essentially the same.</p>
<p>As you can see, my local setup logs show requests for specific files - <code>GET /<filename></code> - whereas the deployed instance shows only logs for <code>GET /</code>.</p>
| painotpi | <p><strong>TL;DR</strong></p>
<p><strong>Remove/Modify the following annotation from <code>Ingress Prod</code>:</strong></p>
<ul>
<li><code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code></li>
</ul>
<hr />
<h3>Explanation:</h3>
<p>The annotation that you are using (<code>rewrite-target: /$2</code>) is targeting a capture group that does not exist.</p>
<p>Each request that is sent to your application through your <code>Ingress</code> resource is getting rewritten to <code>/</code>.</p>
<p>To fix that you can <strong>either</strong>:</p>
<ul>
<li>Entirely remove this annotation (a sketch of the resulting manifest is shown right after this list).</li>
<li>Modify the annotation so that it supports your rewrite, for example: <code>/</code>.</li>
</ul>
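<p>For reference, a minimal sketch of the production <code>Ingress</code> from the question with the rewrite annotation removed (host and names kept as placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # nginx.ingress.kubernetes.io/rewrite-target removed
  name: <my-name>
  namespace: <my-namespace>
spec:
  rules:
  - host: <my-host>
    http:
      paths:
      - backend:
          serviceName: livspace-hub
          servicePort: 80
        path: /
</code></pre>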
<p>You can read more about rewrites, capture groups and how <code>nginx-ingress</code> handles them by following this documentation:</p>
<ul>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: Examples: Rewrite</a></em></li>
</ul>
<hr />
<h3>Example:</h3>
<p>I've used your exact <code>Ingress</code> manifest with slight tweaks and stumbled upon the same issue as you've described:</p>
<ul>
<li><code>curl IP</code></li>
<li><code>curl IP/hello.html</code></li>
</ul>
<p>To show the requests that came to the <code>Pod</code>, I've used an <code>nginx</code> <code>Pod</code> as a backend:</p>
<pre class="lang-sh prettyprint-override"><code>/docker-entrypoint.sh: Configuration complete; ready for start up
10.88.0.20 - - [13/Apr/2021:15:01:37 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "SOURCE_IP_OF_MINE"
10.88.0.20 - - [13/Apr/2021:15:01:40 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.1" "SOURCE_IP_OF_MINE"
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
</ul>
| Dawid Kruk |
<p>Installed Jenkins using helm </p>
<pre><code>helm install --name jenkins -f values.yaml stable/jenkins
</code></pre>
<p>Jenkins Plugin Installed </p>
<pre><code>- kubernetes:1.12.6
- workflow-job:2.31
- workflow-aggregator:2.5
- credentials-binding:1.16
- git:3.9.3
- docker:1.1.6
</code></pre>
<p>Defined Jenkins pipeline to build docker container </p>
<pre><code>node {
checkout scm
def customImage = docker.build("my-image:${env.BUILD_ID}")
customImage.inside {
sh 'make test'
}
}
</code></pre>
<p>Throws the error : docker not found </p>
<p><a href="https://i.stack.imgur.com/NlOI1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NlOI1.png" alt="enter image description here"></a></p>
| anish | <p>You can define agent pod with containers with required tools(docker, Maven, Helm etc) in the pipeline for that:</p>
<p>First, create agentpod.yaml with following values:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
some-label: pod
spec:
containers:
- name: maven
image: maven:3.3.9-jdk-8-alpine
command:
- cat
tty: true
volumeMounts:
- name: m2
mountPath: /root/.m2
- name: docker
image: docker:19.03
command:
- cat
tty: true
privileged: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: m2
hostPath:
path: /root/.m2
</code></pre>
<p>Then configure the pipeline as:</p>
<pre><code>pipeline {
agent {
kubernetes {
defaultContainer 'jnlp'
yamlFile 'agentpod.yaml'
}
}
stages {
stage('Build') {
steps {
container('maven') {
sh 'mvn package'
}
}
}
stage('Docker Build') {
steps {
container('docker') {
sh "docker build -t dockerimage ."
}
}
}
}
}
</code></pre>
| tibin tomy |
<p>I have an existing kubernetes deployment which is running fine. Now I want to edit it with some new environment variables which I will use in the pod.
Will editing the deployment delete and create a new pod, or will it update the existing pod?
My requirement is I want to create a new pod whenever I edit/update the deployment.</p>
| Farooq Rahman | <p>Kubernetes is always going to recreate your pods in case you change/create env vars. </p>
<p>Let's check this together by creating a deployment without any env var on it:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Let's check and note these pod names so we can compare later: </p>
<pre><code>$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-56db997f77-9mpjx 1/1 Running 0 8s
nginx-deployment-56db997f77-mgdv9 1/1 Running 0 8s
nginx-deployment-56db997f77-zg96f 1/1 Running 0 8s
</code></pre>
<p>Now let's edit this deployment and include one env var making the manifest look like this: </p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
env:
- name: STACK_GREETING
value: "Hello from the MARS"
ports:
- containerPort: 80
</code></pre>
<p>After we finish editing, let's check our pod names and see if they changed: </p>
<pre><code>$ kubectl get pod
nginx-deployment-5b4b68cb55-9ll7p 1/1 Running 0 25s
nginx-deployment-5b4b68cb55-ds9kb 1/1 Running 0 23s
nginx-deployment-5b4b68cb55-wlqgz 1/1 Running 0 21s
</code></pre>
<p>As we can see, all pod names changed. Let's check if our env var got applied: </p>
<pre><code>$ kubectl exec -ti nginx-deployment-5b4b68cb55-9ll7p -- sh -c 'echo $STACK_GREETING'
Hello from the MARS
</code></pre>
<p>The same behavior will occur if you change the var or even remove it. All pods need to be removed and created again for the changes to take place. </p>
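<p>If you prefer not to edit the manifest by hand, here is a quick sketch of the same change done imperatively (names taken from the example above):</p>
<pre class="lang-sh prettyprint-override"><code># changing the value triggers a new rollout, i.e. brand new pods
$ kubectl set env deployment/nginx-deployment STACK_GREETING="Hello from the MOON"
$ kubectl rollout status deployment/nginx-deployment
</code></pre>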
| Mark Watney |
<p>I've set up a <code>minikube</code> cluster for development purpose inside a VM.<br />
I've deployed few services and an ingress controller (the minikube one) to be able to access it without using NodePorts.<br />
Inside my VM, I can access my services as usual with <code>curl http://hello-world.info</code> or another one. Everything works fine.</p>
<p>But when I'm outside of my VM, I can't access it even if I'm on the same network. I tried from my hosting server, from my laptop and outside with a VPN.<br />
The cluster IP is listed among my addresses inside the VM (<code>ip a</code>), but it is not accessible outside of it (e.g. <code>ping xxx</code> fails).<br />
How can I access my cluster services from another machine on the same network?</p>
<p>My VM's IP is set to static (<code>ubuntu-server 20.XX.XX</code>) with netplan at <code>192.168.1.128</code> and my cluster IP is <code>192.168.49.2</code>. My DHCP server is allowed to distribute IPs only between <code>192.168.1.101-254</code>.</p>
<p>Thanks :).</p>
| DataHearth | <p>Personally I haven't found a way to expose <code>minikube</code> instance with <code>--driver=docker</code> on <code>LAN</code>.</p>
<p>As a workaround to expose your <code>minikube</code> instance on <code>LAN</code> you can either <code>--driver</code>:</p>
<ul>
<li><code>--driver=virtualbox</code></li>
<li><code>--driver=none</code></li>
</ul>
<p>Specifically for exposing your Kubernetes cluster to the <code>LAN</code>, I would also consider checking out other Kubernetes solutions (you could run them on bare metal or in a VirtualBox VM with <code>bridged</code> networking), like:</p>
<ul>
<li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a></li>
<li><a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">kubespray</a></li>
<li><a href="https://microk8s.io/" rel="nofollow noreferrer">microk8s</a></li>
</ul>
<hr />
<h3><code>--driver=virtualbox</code>:</h3>
<p>Citing part of my own answer some time ago:</p>
<blockquote>
<p>As I previously mentioned: When you create your <code>minikube</code> instance with Virtualbox you will create below network interfaces:</p>
<ul>
<li><code>NAT</code>- interface which will allow your VM to access the Internet. This connection cannot be used to expose your services</li>
<li><code>Host-only-network-adapter</code> - interface created by your host which allows to communicate within the interface. It means that your host and other vm's with this particular adapter could connect with each other. It's designed for internal usage.</li>
</ul>
<p>You can read more about Virtualbox networking here:</p>
<ul>
<li><a href="https://www.virtualbox.org/manual/ch06.html" rel="nofollow noreferrer">Virtualbox.org: Virtual Networking</a></li>
</ul>
<p>I've managed to find a <strong>workaround</strong> to allow connections outside your laptop/pc to your <code>minikube</code> instance. You will need to change network interface in settings of your <code>minikube</code> instance from <strong><code>Host-only-network-adapter</code></strong> to <strong><code>Bridged Adapter</code></strong> (2nd adapter). This will work as another device was connected to your physical network. Please make sure that this bridged adapter is used with Ethernet NIC. <code>Minikube</code> should change IP address to match the one used in your physical one.</p>
<blockquote>
<p>You will also need to change your <code>.kube/config</code> as it will have the old/wrong IP address!</p>
</blockquote>
<p>After that you should be able to connect to your <code>Ingress</code> resource by IP accessible in your physical network.</p>
<p>-- <em><a href="https://stackoverflow.com/questions/62559281/expose-kubernetes-cluster-to-internet/62697373#62697373">Stackoverflow.com: Answers:: Expose Kubernetes cluster to Internet</a></em></p>
</blockquote>
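<p>As a rough sketch, switching an existing instance to the VirtualBox driver means recreating it (this deletes the current cluster), after which the bridged-adapter change quoted above still applies:</p>
<pre class="lang-sh prettyprint-override"><code>$ minikube delete
$ minikube start --driver=virtualbox
</code></pre>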
<hr />
<h3><code>--driver=none</code></h3>
<p>You can also run <code>minikube</code> with <code>--driver=none</code>, but there are some caveats to this method which you can read more about in these docs (tl;dr you are running your <code>minikube</code> directly on your host):</p>
<ul>
<li><em><a href="https://minikube.sigs.k8s.io/docs/drivers/none/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Drivers: None</a></em></li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">Minikube.sigs.k8s.io: Docs: Drivers</a></em></li>
</ul>
| Dawid Kruk |
<p>I am currently moving from ECS to EKS and I'm confused over the divide between Helm and Terraform.</p>
<p>We currently have a dedicate Terraform/Packer repo for our EKS cluster.</p>
<p>We then have a repo for our app. The app requires an AWS RDS instance and SQS/SNS.</p>
<p>My understanding is that Helm doesn't support SQS or other service setup, in which case I question why I would bother with Helm when it's pretty easy to deploy all required queues/db/app in EKS using pure Terraform. It seems that by introducing Helm all I end up doing is creating an unnecessary split between the K8s and non-K8s parts of the app setup.</p>
<p>I feel like I'm missing the point of Helm, but I'm struggling to see what it is? Help me see what I'm missing!</p>
| user460667 | <p>Helm is for installing applications on your EKS. SQS and RDS are not applications running on your container cluster, they are infrastructure.
For those you can use Terraform, CloudFormation or CDK.</p>
<p>You can find more examples on how to use the different tools here: <a href="https://www.eksworkshop.com/" rel="nofollow noreferrer">https://www.eksworkshop.com/</a></p>
| Shelly Dar Rapaport |
<p>I'm trying to set up Graylog within my Kubernetes cluster as described <a href="https://github.com/mouaadaassou/K8s-Graylog" rel="nofollow noreferrer">here</a>. The problem I'm running into is the definition of the environment variable GRAYLOG_HTTP_EXTERNAL_URI. The documentation tells me to enter "my IP adress" and from what I was able to find out, the variable is ment to tell the browser where to find the Graylog API.</p>
<p>But my cluster is accessed through a NGINX reverse proxy serving as an ingress controller, which means the browser can't access the Graylog pod directly and even less through http, so it don't really know what value I should assing there. I tried the public IP address of the ingress controller but all I'm getting is a 503. <strong>Is there a way to allow access to the Graylog API while still keeping the service protected behind the ingress controller?</strong></p>
| TigersEye120 | <p>It really depends on how are you exposing it. By default it's not exposed to the outside world. We have <code>graylog3</code> service of type <code>NodePort</code>, so we have only an internal IP that can be accessed from another pod or used to expose it with ingress. </p>
<pre><code>$ kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
es6 NodePort 10.23.244.28 <none> 9200:30001/TCP,9300:30002/TCP 54m service=es-deploy
graylog3 NodePort 10.23.242.128 <none> 9000:30003/TCP,12201:30004/TCP 54m service=graylog-deploy
kubernetes ClusterIP 10.23.240.1 <none> 443/TCP 57m <none>
mongo ClusterIP 10.23.243.160 <none> 27017/TCP 54m service=mongo-deploy
</code></pre>
<p>If we curl this service and port from another pod we have the following output: </p>
<pre><code>$ kubectl exec -ti ubuntu -- curl 10.23.242.128:9000
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="robots" content="noindex, nofollow">
<meta charset="UTF-8">
<title>Graylog Web Interface</title>
<link rel="shortcut icon" href="http://your_ip_address:30003/assets/favicon.png">
</head>
<body>
<script src="http://your_ip_address:30003/config.js"></script>
<script src="http://your_ip_address:30003/assets/vendor.4024e2a8db732781a971.js"></script>
<script src="http://your_ip_address:30003/assets/polyfill.a5e2fb591e8fd54ee4ef.js"></script>
<script src="http://your_ip_address:30003/assets/builtins.a5e2fb591e8fd54ee4ef.js"></script>
<script src="http://your_ip_address:30003/assets/plugin/org.graylog.plugins.threatintel.ThreatIntelPlugin/plugin.org.graylog.plugins.threatintel.ThreatIntelPlugin.b864ba54b438ac0bdc48.js"></script>
<script src="http://your_ip_address:30003/assets/plugin/org.graylog.plugins.collector.CollectorPlugin/plugin.org.graylog.plugins.collector.CollectorPlugin.bcc87290018e859a8a9e.js"></script>
<script src="http://your_ip_address:30003/assets/plugin/org.graylog.aws.AWSPlugin/plugin.org.graylog.aws.AWSPlugin.8ae7cb13983ce33eeb5b.js"></script>
<script src="http://your_ip_address:30003/assets/app.a5e2fb591e8fd54ee4ef.js"></script>
</body>
</html>
</code></pre>
<p>As can be seen, there is a reference for <code>http://your_ip_address:30003</code>. If we leave it this way, the application will break because it's referencing something nonexistent. </p>
<p>So I'll change two things: make it visible from the outside world with an ingress, and change <code>GRAYLOG_HTTP_EXTERNAL_URI</code> to the correct IP I'll get:</p>
<p>1 - Creating ingress rule to expose Graylog: </p>
<p>This is how my ingress manifest looks:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: graylog
spec:
backend:
serviceName: graylog3
servicePort: 9000
</code></pre>
<pre><code>$ kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
graylog * 34.107.139.231 80 56s
</code></pre>
<p>2 - Edit our <code>GRAYLOG_HTTP_EXTERNAL_URI</code> and replace <code>http://your_ip_address:30003</code> with <code>http://34.107.139.231:80</code>. </p>
<p>Observe that here I'm changing the port from <code>30003</code> to <code>80</code> since our ingress rule is exposing on port <code>80</code>.</p>
<pre><code>$ kubectl edit deployments graylog-deploy
</code></pre>
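<p>For reference, here is a sketch of what the relevant part of the edited deployment ends up looking like — the container name is an assumption, and the value mirrors the ingress IP and port from above:</p>
<pre class="lang-yaml prettyprint-override"><code>    spec:
      containers:
      - name: graylog                      # assumed container name
        env:
        - name: GRAYLOG_HTTP_EXTERNAL_URI
          value: "http://34.107.139.231:80/"
</code></pre>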
<p>Changes made, now let's curl this port from any console (give it some time for the pods to be recreated): </p>
<pre><code>$ curl 34.107.139.231:80
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="robots" content="noindex, nofollow">
<meta charset="UTF-8">
<title>Graylog Web Interface</title>
<link rel="shortcut icon" href="http://34.107.139.231:80/assets/favicon.png">
</head>
<body>
<script src="http://34.107.139.231:80/config.js"></script>
<script src="http://34.107.139.231:80/assets/vendor.4024e2a8db732781a971.js"></script>
<script src="http://34.107.139.231:80/assets/polyfill.a5e2fb591e8fd54ee4ef.js"></script>
<script src="http://34.107.139.231:80/assets/builtins.a5e2fb591e8fd54ee4ef.js"></script>
<script src="http://34.107.139.231:80/assets/plugin/org.graylog.plugins.threatintel.ThreatIntelPlugin/plugin.org.graylog.plugins.threatintel.ThreatIntelPlugin.b864ba54b438ac0bdc48.js"></script>
<script src="http://34.107.139.231:80/assets/plugin/org.graylog.plugins.collector.CollectorPlugin/plugin.org.graylog.plugins.collector.CollectorPlugin.bcc87290018e859a8a9e.js"></script>
<script src="http://34.107.139.231:80/assets/plugin/org.graylog.aws.AWSPlugin/plugin.org.graylog.aws.AWSPlugin.8ae7cb13983ce33eeb5b.js"></script>
<script src="http://34.107.139.231:80/assets/app.a5e2fb591e8fd54ee4ef.js"></script>
</body>
</html>
</code></pre>
<p>Now we can see <code>http://34.107.139.231:80/</code> as expected and the page can be loaded perfectly. </p>
<p>If you have a domain name that's redirecting to your application IP, put it in this variable. </p>
| Mark Watney |
<p>I've created a Helm test but when it succeeds or fails, it only outputs a simple message about whether it succeeded or failed <em>(see below)</em>. I would like for it to be able to output custom info about what tests it ran and some info about them. So if it failed, I'd see which things failed specifically. And even if it succeeds, it would show me what things it tested.</p>
<pre><code>NAME: my-app
LAST DEPLOYED: Thu Jan 28 17:45:51 2021
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: my-app-test
Last Started: Thu Jan 28 17:46:05 2021
Last Completed: Thu Jan 28 17:46:06 2021
Phase: Succeeded
</code></pre>
<p>I don't see anywhere I can specify this or allow it to add to that output?</p>
| Don Rhummy | <p>As I stated in the comment:</p>
<blockquote>
<p>Hello, <code>$ helm test</code> has a parameter <code>--logs</code> whose description states: "dump the logs from test pods (this runs after all tests are complete, but before any cleanup)".</p>
</blockquote>
<p><code>helm test RELEASE_NAME --logs</code> can be used to produce logs from your scheduled tests:</p>
<ul>
<li><em><a href="https://helm.sh/docs/topics/chart_tests/" rel="nofollow noreferrer">Helm.sh: Docs: Topic: Chart test</a></em></li>
</ul>
<hr />
<p>An example of how <code>$ helm test</code> works could be following:</p>
<ul>
<li>Create an example <code>Helm</code> Chart:
<ul>
<li><code>$ helm create raven</code></li>
</ul>
</li>
<li>Go into directory and install the Chart:
<ul>
<li><code>$ cd raven && helm install raven .</code></li>
</ul>
</li>
<li>After the Chart is deployed run:
<ul>
<li><code>$ helm test raven</code></li>
</ul>
</li>
</ul>
<p>The tests for this example Chart are located in the:</p>
<ul>
<li><code>ROOT_CHART_DIR/templates/tests/</code></li>
</ul>
<p>The file that describes a test (<code>test-connection.yaml</code>) looks like below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: "{{ include "raven.fullname" . }}-test-connection"
labels:
{{- include "raven.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "raven.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never
</code></pre>
<p>These tests could be modified to output specific messages depending on what the tests actually determined (see the sketch at the end of this answer).</p>
<p>The output of the last command should be following:</p>
<pre class="lang-sh prettyprint-override"><code>NAME: raven
<-- OMITTED -->
NOTES:
<-- OMITTED -->
POD LOGS: raven-test-connection
Connecting to raven:80 (10.8.12.208:80)
saving to 'index.html'
index.html 100% |********************************| 612 0:00:00 ETA
'index.html' saved
</code></pre>
<p>As can be seen above, the output of the <code>wget</code> command with its args was displayed.</p>
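<p>For example, a minimal sketch of a test pod that prints its own messages (so that <code>helm test --logs</code> shows exactly what was checked) could look like this — the shell logic and message texts are just placeholders:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "raven.fullname" . }}-test-custom"
  annotations:
    "helm.sh/hook": test
spec:
  containers:
  - name: custom-test
    image: busybox
    command: ['sh', '-c']
    args:
    - |
      # describe what is being tested, then report the outcome explicitly
      echo "TEST: checking HTTP connectivity to {{ include "raven.fullname" . }}:{{ .Values.service.port }}"
      if wget -q -O /dev/null '{{ include "raven.fullname" . }}:{{ .Values.service.port }}'; then
        echo "RESULT: connection test passed"
      else
        echo "RESULT: connection test failed"
        exit 1
      fi
  restartPolicy: Never
</code></pre>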
| Dawid Kruk |
<p>I'm trying to add a custom domain to my AKS cluster. All of the components I'm dealing with are within the same VNET, but the custom DNS Server and AKS Service are in different subnets. I've also like to avoid changing the DNS Server at the VNET level. </p>
<p>I've followed this guide to no avail:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/coredns-custom#use-custom-domains" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/coredns-custom#use-custom-domains</a></p>
<p>I've also found previous answers used a similar setup: </p>
<p><a href="https://stackoverflow.com/questions/55612141/resolve-custom-dns-in-kubernetes-cluster-aks">Resolve custom dns in kubernetes cluster (AKS)</a></p>
<p>but that did not work either. The difference between the two being the coredns plugin that is used to forward the resolving traffic towards a custom resolver.</p>
<p>I've tried both the proxy and forward plugin with the same setup, and both end in the same error</p>
<p><strong>Proxy plugin:</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: coredns-custom
namespace: kube-system
data:
test.server: |
mydomain.com:53 {
log
errors
proxy . [MY DNS SERVER'S IP]
}
</code></pre>
<p><strong>Forward Plugin:</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: coredns-custom
namespace: kube-system
data:
test.server: |
mydomain.com:53 {
log
errors
forward . [MY DNS SERVER'S IP]
}
</code></pre>
<p><strong>Reproduce:</strong></p>
<p>1 VNET</p>
<p>2 Subnets (1 for AKS, 1 for the DNS VM)</p>
<p>Add a name to the DNS VM, and use a configmap to proxy traffic to the custom DNS instead of the node resolvers/VNET DNS</p>
<p><strong>Error:</strong></p>
<p>After applying either of the configmaps above, the coredns pods log this error:</p>
<blockquote>
<p>2019-11-11T18:41:46.224Z [INFO] 172.28.18.104:47434 - 45605 "A IN
mydomain.com. udp 55 false 512" REFUSED qr,rd 55 0.001407305s</p>
</blockquote>
| Zach O'Hearn | <p>Was just missing a few more steps of due diligence. After checking the logs on the DNS VM, I found that the requests were making to the host, but the host was refusing them. The named.conf.options whitelisted a subset of address spaces. After updating those address spaces in named.conf to match the new cloud network we recently moved to, the requests were resolving. </p>
<p>I ended up sticking with the forward plugin as the MS docs outlined.</p>
| Zach O'Hearn |
<p>Assume I have a Kubernetes <code>CronJob</code></p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: cron-job-logging
spec:
schedule: "@hourly"
jobTemplate:
spec:
template:
spec:
containers:
- name: cron-job-logging
image: python:3
args:
- /usr/bin/python3
- -c
- import random; print('a') if random.random() < 0.5 else print('b')
restartPolicy: OnFailure
</code></pre>
<p>which runs on a GKE cluster 1.14.x with Cloud Operation for GKE activated for "System and workload logging and monitoring".</p>
<p>How can I collect the output for period t (let's say a month) so that I can see whether the pod printed <code>a</code> or <code>b</code>.</p>
<p>I've seen some issues about this request, like <a href="https://github.com/kubernetes/kubernetes/issues/27768" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/27768</a>. The logs seem to be available for some users, but not for others, which might be caused by the fact that <code>CronJobs</code> are a beta feature.</p>
| Kalle Richter | <p>I've deployed your cronjob, and just for example purposes I set schedule to run each 1 minute, Below there are a few ways on how to access it:</p>
<ul>
<li><p><strong>GKE console</strong> – In the <a href="https://console.cloud.google.com/kubernetes/workload" rel="nofollow noreferrer">Google Kubernetes Engine</a> section of Google Cloud Console, select the Kubernetes resources listed in <strong>Workloads</strong>, and then the <strong>Container</strong> or <strong>Audit Logs</strong> links; this method is kind of a shortcut for the next option, the <em>Cloud Logging console</em>.</p>
</li>
<li><p><strong>Cloud Logging console</strong> – You can see your logs directly from the <a href="https://console.cloud.google.com/logs" rel="nofollow noreferrer">Cloud Logging console</a> by using the logging filters to select the Kubernetes resources, such as cluster, node, namespace, pod, or container logs. Here are sample <a href="https://cloud.google.com/logging/docs/view/query-library-preview#kubernetes-filters" rel="nofollow noreferrer">Kubernetes-related queries</a> to help get you started.</p>
<ul>
<li>This is the query I used (redacted project details):
<pre><code>resource.type="k8s_container"
resource.labels.project_id="PROJECT_NAME"
resource.labels.location="ZONE"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="NAMESPACE"
labels.k8s-pod/job-name:"cron-job-logging-"
</code></pre>
</li>
<li>Here are the result:
<a href="https://i.stack.imgur.com/HwO0p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HwO0p.png" alt="enter image description here" /></a></li>
</ul>
</li>
<li><p><strong>Cloud Monitoring console</strong> – If you have enabled a Cloud Monitoring Workspace, in the <a href="https://console.cloud.google.com/monitoring/dashboards/resourceList/kubernetes" rel="nofollow noreferrer">Kubernetes Engine</a> section of the Cloud Monitoring console, select your <a href="https://cloud.google.com/stackdriver/docs/solutions/gke/observing#alerting-details" rel="nofollow noreferrer">cluster, nodes, pod, or containers</a> to view your logs.</p>
</li>
<li><p><strong><code>gcloud</code> command-line tool</strong> – Using the <a href="https://cloud.google.com/logging/docs/reference/tools/gcloud-logging" rel="nofollow noreferrer"><code>gcloud logging read</code></a> command, select the appropriate cluster, node, pod, and container logs.</p>
</li>
<li><p>Here an example:</p>
</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>$ gcloud logging read "resource.labels.project_id=PROJECT AND resource.labels.cluster_name=CLUSTER_NAME AND labels.k8s-pod/job-name:cron-job-logging-"
---
insertId: 6vorvd43akuvy8fi3
labels:
k8s-pod/controller-uid: c525bbae-c6c9-11ea-931b-42010a80001f
k8s-pod/job-name: cron-job-logging-1594838040
logName: projects/PROJECT/logs/stdout
receiveTimestamp: '2020-07-15T18:35:14.937645549Z'
resource:
labels:
cluster_name: CLUSTER_NAME
container_name: cron-job-logging
location: ZONE
namespace_name: default
pod_name: cron-job-logging-1594838040-pngsk
project_id: PROJECT
type: k8s_container
severity: INFO
textPayload: |
a
timestamp: '2020-07-15T18:34:09.907735144Z'
</code></pre>
<p>More info here: <a href="https://cloud.google.com/stackdriver/docs/solutions/gke/using-logs" rel="nofollow noreferrer">GKE - Using Logs</a></p>
<p>If you have any question, let me know in the comments.</p>
| Will R.O.F. |
<p>Just installed stable/prometheus chart with below values and I'm able to access the server frontend from pods but not from host's web browser.</p>
<p>My <strong>values.yaml</strong>:</p>
<pre><code>alertmanager:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- localhost/alerts
server:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- localhost/prom
pushgateway:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- localhost/push
</code></pre>
<p>I use the nginx ingress controller and the ingresses get created, but for some unknown reason they don't seem to map to the services.</p>
<p><strong>Some data:</strong></p>
<p>I'm able to access the server from ingress pods (also all others) via default and dns service names:</p>
<pre><code>kubectl exec -it nginx-ingress-controller-5cb489cd48-t4dgv -- sh
/etc/nginx $ curl prometheus-server.default.svc.cluster.local
<a href="/graph">Found</a>
/etc/nginx $ curl prometheus-server
<a href="/graph">Found</a>
</code></pre>
<p>List of active ingresses created by the chart:</p>
<pre><code>kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress localhost localhost 80 37h
prometheus-alertmanager localhost localhost 80 43m
prometheus-pushgateway localhost localhost 80 43m
prometheus-server localhost localhost 80 43m
</code></pre>
<p>List of active service resources:</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 37h
nginx-deployment ClusterIP 10.100.1.167 <none> 80/TCP 37h
nginx-ingress-controller LoadBalancer 10.109.57.131 localhost 80:32382/TCP,443:30669/TCP 36h
nginx-ingress-default-backend ClusterIP 10.107.91.35 <none> 80/TCP 36h
php-deployment ClusterIP 10.105.73.26 <none> 9000/TCP 37h
prometheus-alertmanager ClusterIP 10.97.89.149 <none> 80/TCP 44m
prometheus-kube-state-metrics ClusterIP None <none> 80/TCP,81/TCP 44m
prometheus-node-exporter ClusterIP None <none> 9100/TCP 44m
prometheus-pushgateway ClusterIP 10.105.81.111 <none> 9091/TCP 44m
prometheus-server ClusterIP 10.108.225.187 <none> 80/TCP 44m
</code></pre>
<p>On the other hand, if I declare subdomain as an ingress host, Prometheus is accessible:</p>
<pre><code>alertmanager:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- alerts.localhost
server:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- prom.localhost
pushgateway:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- push.localhost
</code></pre>
<p>Am I doing something wrong, or is there some sort of issue with this?
Any suggestions?</p>
<p>Thanks in advance!</p>
<p>Version of Helm and Kubernetes:
Helm 3.0.3 / Kubernetes 1.15.5 (Docker for Mac, MacOS Catalina)</p>
| dzhi | <p>I reproduced your scenario and by running some tests I understood that this is not going to work in the way you want it to work. This is not the right way to implement it. </p>
<p>Let's dive into it a bit. </p>
<p>You can add <code>nginx.ingress.kubernetes.io/rewrite-target</code> in your ingresses as in this example: </p>
<pre><code>$ kubectl get ingresses. myprom-prometheus-pushgateway -oyaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2020-02-18T09:51:32Z"
generation: 1
labels:
app: prometheus
chart: prometheus-10.4.0
component: pushgateway
heritage: Helm
release: myprom
name: myprom-prometheus-pushgateway
namespace: default
resourceVersion: "3239"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myprom-prometheus-pushgateway
uid: 499372f4-52b1-4b37-982c-b52e70657d37
spec:
rules:
- host: localhost
http:
paths:
- backend:
serviceName: myprom-prometheus-pushgateway
servicePort: 9091
path: /push
status:
loadBalancer:
ingress:
- ip: 192.168.39.251
</code></pre>
<p>After adding this, you are going to be able to access it as you intended. Unfortunately, after doing this you're going to face a new problem. If we inspect the HTML output from a curl command we can see this: </p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="robots" content="noindex,nofollow">
<title>Prometheus Pushgateway</title>
<link rel="shortcut icon" href="/static/favicon.ico?v=793293bdadd51fdaca69de5bb25637b0f93b656b">
<script src="/static/jquery-3.4.1.min.js?v=793293bdadd51fdaca69de5bb25637b0f93b656b"></script>
<script src="/static/bootstrap-4.3.1-dist/js/bootstrap.min.js?v=793293bdadd51fdaca69de5bb25637b0f93b656b"></script>
<script src="/static/functions.js?v=793293bdadd51fdaca69de5bb25637b0f93b656b"></script>
<link type="text/css" rel="stylesheet" href="/static/bootstrap-4.3.1-dist/css/bootstrap.min.css?v=793293bdadd51fdaca69de5bb25637b0f93b656b">
<link type="text/css" rel="stylesheet" href="/static/prometheus.css?v=793293bdadd51fdaca69de5bb25637b0f93b656b">
<link type="text/css" rel="stylesheet" href="/static/bootstrap4-glyphicons/css/bootstrap-glyphicons.min.css?v=793293bdadd51fdaca69de5bb25637b0f93b656b">
</head>
</code></pre>
<p>As can be seen, we have <code>/static/jquery-3.4.1.min.js</code> for example, so this is going to redirect your browser to a different location. The problem is that you don't have this location. </p>
<p>That's why I suggest you avoid using a path and stick to your secondary solution that involves using a sub-domain. </p>
<pre><code>alertmanager:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- alerts.localhost
server:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- prom.localhost
pushgateway:
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
hosts:
- push.localhost
</code></pre>
| Mark Watney |
<p>We have a kubernetes environment(3 EC2 instances). I am trying to access the dashboard from outside the cluster, but its showing site can't be reached. So that i gone to some links and found through nginx-ingress we can access it.</p>
<p><a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/deployment/nginx-ingress.yaml" rel="nofollow noreferrer">I have gone to this url</a> and installed nginx.</p>
<p>And i have created this file to access.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-passthrough: "true"
nginx.org/ssl-backends: "kubernetes-dashboard"
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
name: dashboard-ingress
namespace: kube-system
spec:
rules:
- host: serverdnsname
http:
paths:
- path: /dashboard
backend:
serviceName: kubernetes-dashboard
servicePort: 443
</code></pre>
<p>But still not able to access it.</p>
| horton | <p>We managed it like this</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard
</code></pre>
<p>We just added a ClusterIP service and use an nginx in front of it as a reverse proxy:</p>
<pre><code>server {
listen 443 ssl http2;
server_name kubernetes.dev.xxxxx;
ssl_certificate /etc/letsencrypt/live/kubernetes.dev.xxxx/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/kubernetes.dev.xxxx/privkey.pem;
include ssl.conf;
location / {
deny all;
include headers.conf;
resolver 10.96.0.10 valid=30s; #ip of your dns service inside the cluster
set $upstream kubernetes-dashboard.kube-system.svc.cluster.local;
proxy_pass http://$upstream;
}
}
</code></pre>
<p>but it should also be possible with NodePort.</p>
| Michael |
<p>When deploying a container in kube-system you can see the FQDN of the master node API being used.
When creating a namespace and deploying the same container, the <code>KUBERNETES_SERVICE_HOST</code> environment variable is instead set to the kubernetes service's internal IP.</p>
<p>We cannot use <code>kind: PodPreset</code> in AKS, so I don't know any other way to set this environment variable for new pods.
Pods are being deployed with Helm in this namespace, so you can't set this environment variable in a way that Helm will use in the deployment.</p>
| Devnull | <p>The <a href="https://kubernetes.io/docs/concepts/containers/container-environment/#container-environment" rel="nofollow noreferrer">Kubernetes Container Environment</a> provides several important resources to Containers, one of them is:</p>
<blockquote>
<ul>
<li><strong>A list of all services that were running when a Container was created is available to that Container as environment variables.</strong> Those environment variables match the syntax of Docker links.</li>
</ul>
<p>For a service named <code>foo</code> that maps to a Container named bar, the following variables are defined:</p>
<p><code>FOO_SERVICE_HOST=0.0.0.0</code> (The IP Address of the service <code>foo</code>)</p>
<p><code>FOO_SERVICE_PORT=65535</code> (the port of the service <code>foo</code>)</p>
</blockquote>
<ul>
<li><code>XXX_SERVICE_PORT</code> is generated automatically based on the services available for the container at the moment of its creation.</li>
</ul>
<hr>
<blockquote>
<p><em>When deploying container in kube-system you can see the fqdn of masternode api being used. When creating a namespace and deploy same container it is kubernetes service with internal ip for the KUBERNETES_SERVICE_HOST environment variable.</em></p>
</blockquote>
<ul>
<li>It should work both ways because the <code>kubernetes.default</code> service is a relay agent to the master API, take a look at the description of the service:</li>
</ul>
<pre><code>$ k describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.21.0.1
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: 10.54.240.1:443
Session Affinity: None
Events: <none>
$ kubectl cluster-info
Kubernetes master is running at https://10.54.240.1
</code></pre>
<ul>
<li>The endpoint of the <code>kubernetes.default</code> service is the master API IP, so if your deployment is not working as intended, it might have another issue under the hood.</li>
</ul>
<p>You can also follow the instructions given in @djsly's answer and open an issue on the prom-op GitHub, vote for <code>podPreset</code> to become available on AKS, or even experiment with other cloud providers (like GCP, which offers a free tier so you can try it out).</p>
<p>If you have any further questions let us know.</p>
| Will R.O.F. |
<p>I would like to reserve some worker nodes for a namespace. I see the notes of stackflow and medium</p>
<p><a href="https://stackoverflow.com/questions/52487333/how-to-assign-a-namespace-to-certain-nodes">How to assign a namespace to certain nodes?</a></p>
<p><a href="https://medium.com/@alejandro.ramirez.ch/reserving-a-kubernetes-node-for-specific-nodes-e75dc8297076" rel="nofollow noreferrer">https://medium.com/@alejandro.ramirez.ch/reserving-a-kubernetes-node-for-specific-nodes-e75dc8297076</a></p>
<p>I understand we can use taints and nodeSelector to achieve that.
My question is: if people get to know the details of the nodeSelector or taint, how can we prevent them from deploying pods onto these dedicated worker nodes?</p>
<p>thank you</p>
| Honord | <p>To accomplish what you need, you basically have to use <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">taints</a>.
Let's suppose you have a Kubernetes cluster with one master and 2 worker nodes: </p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
knode01 Ready <none> 8d v1.16.2
knode02 Ready <none> 8d v1.16.2
kubemaster Ready master 8d v1.16.2
</code></pre>
<p>As example I'll setup knode01 as Prod and knode02 as Dev.</p>
<pre><code>$ kubectl taint nodes knode01 key=prod:NoSchedule
</code></pre>
<pre><code>$ kubectl taint nodes knode02 key=dev:NoSchedule
</code></pre>
<p>To run a pod into these nodes, we have to specify a toleration in spec session on you yaml file: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod1
labels:
env: test
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
tolerations:
- key: "key"
operator: "Equal"
value: "dev"
effect: "NoSchedule"
</code></pre>
<p>This pod (pod1) will always run in knode02 because it's setup as dev. If we want to run it on prod, our tolerations should look like that: </p>
<pre><code> tolerations:
- key: "key"
operator: "Equal"
value: "prod"
effect: "NoSchedule"
</code></pre>
<p>Since we have only 2 nodes and both are specified to run only prod or dev, if we try to run a pod without specifying tolerations, the pod will enter on a pending state: </p>
<pre><code>$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod0 1/1 Running 0 21m 192.168.25.156 knode01 <none> <none>
pod1 1/1 Running 0 20m 192.168.32.83 knode02 <none> <none>
pod2 1/1 Running 0 18m 192.168.25.157 knode01 <none> <none>
pod3 1/1 Running 0 17m 192.168.32.84 knode02 <none> <none>
shell-demo 0/1 Pending 0 16m <none> <none> <none> <none>
</code></pre>
<p>To remove a taint: </p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl taint nodes knode02 key:NoSchedule-
</code></pre>
| Mark Watney |
<p>I've created a pod that works as Nginx Proxy. It works well with the default configuration but when I add my custom configuration via ConfigMap it crashes.
This is the only log I have.</p>
<blockquote>
<p>/bin/bash: /etc/nginx/nginx.conf: Read-only file system</p>
</blockquote>
<p>My deployment.yaml</p>
<pre><code>volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
readOnly: false
volumes:
- name: nginx-config
configMap:
name: nginx-config
</code></pre>
<p>If it could help, I've found <a href="https://stackoverflow.com/questions/51884999/chown-var-lib-postgresql-data-postgresql-conf-read-only-file-system">this</a> answer on StackOverflow, but I don't understand how to follow those steps.</p>
| octopi | <p>The config is correct. Reading the Nginx Docker documentation, I found that I had to add <code>command: [ "/bin/bash", "-c", "nginx -g 'daemon off;'" ]</code> to the container spec.</p>
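<p>For context, a minimal sketch of how that could sit in the Deployment's container spec (the container name and image here are assumptions, not taken from the original manifest):</p>
<pre><code>containers:
  - name: nginx-proxy            # assumed container name
    image: nginx                 # assumed image
    # override the entrypoint as described above
    command: [ "/bin/bash", "-c", "nginx -g 'daemon off;'" ]
    volumeMounts:
      - name: nginx-config
        mountPath: /etc/nginx/nginx.conf
        subPath: nginx.conf
</code></pre>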
| octopi |
<p>I am trying to access my ingress-nginx service from a service but it gives connection refused. Here is my ingress</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: ticketing.dev
http:
paths:
- path: /api/users/?(.*)
backend:
serviceName: auth-srv
servicePort: 3000
- path: /api/tickets/?(.*)
backend:
serviceName: tickets-srv
servicePort: 3000
- path: /?(.*)
backend:
serviceName: client-srv
servicePort: 3000
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: http
port: 443
protocol: TCP
targetPort: https
</code></pre>
<pre><code>❯ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.101.124.218 10.101.124.218 80:30634/TCP,443:30179/TCP 15m
</code></pre>
<p>The ingress-nginx is running on namespace ingress-nginx.
So it should be accessible by <code>http://ingress-nginx.ingress-nginx.svc.cluster.local</code>. But when I access it, it says <code>connection refused 10.101.124.218:80</code>. I am able to access the ingress from outside, i.e. from the <code>ingress</code> ip.</p>
<p>I am using minikube and enabled ingress by running <code>minikube addons enable ingress</code>. Yes, and I'm running the tunnel with <code>minikube tunnel</code>.</p>
| potatoxchip | <p>I tested your environment and found the same behavior, external access but internally getting connection refused, this is how I solved:</p>
<ul>
<li>The Minikube Ingress Addon deploys the controller in the <code>kube-system</code> namespace. If you create the service in a different namespace, its selector will not reach the controller deployment in the <code>kube-system</code> namespace. </li>
<li>It's easy to mix those concepts because the default <code>nginx-ingress</code> deployment uses the namespace <code>ingress-nginx</code> as you were trying.</li>
<li><p>Another issue I found is that your service's selector does not match all the labels set on the controller deployment.</p></li>
<li><p>The easiest way to make your deployment work is to run <code>kubectl expose</code> on the nginx controller:</p></li>
</ul>
<pre><code>kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodePort -n kube-system
</code></pre>
<ul>
<li>Using this command to create the nginx-ingress-controller service, all communications were working, both external and internal.</li>
</ul>
<hr>
<p><strong>Reproduction:</strong></p>
<ul>
<li>For this example I'm using only two ingress backends to avoid being much repetitive in my explanation.</li>
<li>Using minikube 1.11.0</li>
<li>Enabled <code>ingress</code> and <code>metallb</code> addons.</li>
<li>Deployed two hello apps: <code>v1</code> and <code>v2</code>, both pods listens on port <code>8080</code> and are exposed as node port as follows:</li>
</ul>
<pre><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello1-svc NodePort 10.110.211.119 <none> 8080:31243/TCP 95m
hello2-svc NodePort 10.96.9.66 <none> 8080:31316/TCP 93m
</code></pre>
<ul>
<li>Here is the ingress file, just like yours, just changed the backend services names and ports to match my deployed ones:</li>
</ul>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: ticketing.dev
http:
paths:
- path: /api/users/?(.*)
backend:
serviceName: hello1-svc
servicePort: 8080
- path: /?(.*)
backend:
serviceName: hello2-svc
servicePort: 8080
</code></pre>
<ul>
<li>Now I'll create the <code>nginx-ingress</code> service by exposing the controller deployment; this way all labels and settings will be inherited:</li>
</ul>
<pre><code>$ kubectl expose deployment ingress-nginx-controller --target-port=80 --type=NodeP
ort -n kube-system
service/ingress-nginx-controller exposed
</code></pre>
<ul>
<li>Now we deploy the ingress object:</li>
</ul>
<pre><code>$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/ingress-service created
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-service <none> ticketing.dev 172.17.0.4 80 56s
$ minikube ip
172.17.0.4
</code></pre>
<ul>
<li>Testing the ingress from the outside:</li>
</ul>
<pre><code>$ tail -n 1 /etc/hosts
172.17.0.4 ticketing.dev
$ curl http://ticketing.dev/?foo
Hello, world!
Version: 2.0.0
Hostname: hello2-67bbbf98bb-s78c4
$ curl http://ticketing.dev/api/users/?foo
Hello, world!
Version: 1.0.0
Hostname: hello-576585fb5f-67ph5
</code></pre>
<ul>
<li>Then I deployed an <code>alpine</code> pod to test the access from inside the cluster:</li>
</ul>
<pre><code>$ kubectl run --generator=run-pod/v1 -it alpine --image=alpine -- /bin/sh
/ # nslookup ingress-nginx-controller.kube-system.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: ingress-nginx-controller.kube-system.svc.cluster.local
Address: 10.98.167.112
/ # apk update
/ # apk add curl
/ # curl -H "Host: ticketing.dev" ingress-nginx-controller.kube-system.svc.cluster.local/?foo
Hello, world!
Version: 2.0.0
Hostname: hello2-67bbbf98bb-s78c4
/ # curl -H "Host: ticketing.dev" ingress-nginx-controller.kube-system.svc.cluster.local/api/users/?foo
Hello, world!
Version: 1.0.0
Hostname: hello-576585fb5f-67ph5
</code></pre>
<p>As you can see, all requests were fulfilled.</p>
<hr>
<p><strong>Note:</strong></p>
<ul>
<li><p>As pointed out by <strong>@suren</strong>, when curling the ingress, I had to specify the host with -H.</p></li>
<li><p>The service name needs to be a fully qualified domain name (FQDN) because we are dealing with a service hosted in another namespace, using the format <code><SVC_NAME>.<NAMESPACE>.svc.cluster.local</code>.</p></li>
<li><p>In your JS app, you will have to pass the <code>Host</code> argument in order to reach the ingress.</p></li>
</ul>
<p>If you have any question let me know in the comments.</p>
| Will R.O.F. |
<p>We want to deploy using ArgoCD from our Jenkinsfile (which is slightly not how this is intended to be done, but close enough), and after doing some experiments we want to try using the official container with the CLI, so we have added this snippet to our pipeline kubernetes yaml:</p>
<pre><code> - name: argocdcli
image: argoproj/argocli
command:
- argo
args:
- version
tty: true
</code></pre>
<p>Unfortunately the usual way to keep these containers alive is to invoke <code>cat</code> in the container, which isn't there, so it fails miserably. Actually the <em>only</em> command in there is the "argo" command which doesn't have a way to sleep infinitely. (We are going to report this upstream so it will be fixed, but while we wait for that....)</p>
<p>My question therefore is, is there a way to indicate to Kubernetes that we <em>know</em> that this pod cannot keep itself up on its own, and therefore not tear it down immediately? </p>
| Thorbjørn Ravn Andersen | <p>Unfortunately it's not possible since as you stated, <code>argo</code> is the only command available on this image. </p>
<p>It can be confirmed <a href="https://github.com/argoproj/argo/blob/master/Dockerfile#L97-L102" rel="nofollow noreferrer">here</a>: </p>
<pre><code>####################################################################################################
# argocli
####################################################################################################
FROM scratch as argocli
COPY --from=argo-build /go/src/github.com/argoproj/argo/dist/argo-linux-amd64 /bin/argo
ENTRYPOINT [ "argo" ]
</code></pre>
<p>As we can see on this output, running argo is all this container is doing:</p>
<pre><code>$ kubectl run -i --tty --image argoproj/argocli argoproj1 --restart=Never
argo is the command line interface to Argo
Usage:
argo [flags]
argo [command]
...
</code></pre>
<p>You can optionally create your own image based on that and include sleep, so it'll be possible to keep it running, as in this example: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- name: busybox
image: busybox:1.28
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
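<p>Another option, if you would rather not add a second container at all, is to build a small custom image that contains both a shell and the <code>argo</code> binary. A rough, untested Dockerfile sketch (the alpine base image is an assumption; the binary path <code>/bin/argo</code> comes from the Dockerfile quoted above):</p>
<pre><code>FROM argoproj/argocli AS cli

FROM alpine:3.11
# copy the argo binary out of the official image
COPY --from=cli /bin/argo /bin/argo
# keep the container alive by default; run argo commands via exec
CMD [ "sleep", "3600" ]
</code></pre>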
| Mark Watney |
<p>I am trying to do Consul setup via Kubernetes, helm chart, <a href="https://www.consul.io/docs/k8s/helm" rel="nofollow noreferrer">https://www.consul.io/docs/k8s/helm</a></p>
<p>Based on my pre-Kubernetes knowledge: services access Consul via a Consul agent that runs on each host and listens on the host's IP.</p>
<p>Now, I deployed it via the Helm chart to a Kubernetes cluster. First, a terminology question: Consul agent vs. client in this setup? I presume they are the same.</p>
<p>Now, set up:</p>
<p>Helm chart config (Terraform fragment), nothing specific to Clients/Agent's and their service:</p>
<pre><code>global:
name: "consul"
datacenter: "${var.consul_config.datacenter}"
server:
storage: "${var.consul_config.storage}"
connect: false
syncCatalog:
enabled: true
default: true
k8sAllowNamespaces: ['*']
k8sDenyNamespaces: [${join(",", var.consul_config.k8sDenyNamespaces)}]
</code></pre>
<p>Pods, client/agent ones are DaemonSet, not in host network mode</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-8l587 1/1 Running 0 11h
consul-cfd8z 1/1 Running 0 11h
consul-server-0 1/1 Running 0 11h
consul-server-1 1/1 Running 0 11h
consul-server-2 1/1 Running 0 11h
consul-sync-catalog-8b688ff9b-klqrv 1/1 Running 0 11h
consul-vrmtp 1/1 Running 0 11h
</code></pre>
<p>Services</p>
<pre><code> kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.consul <none> 11h
consul-dns ClusterIP 172.20.124.238 <none> 53/TCP,53/UDP 11h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 11h
consul-ui ClusterIP 172.20.131.29 <none> 80/TCP 11h
</code></pre>
<p><strong>Question 1</strong> Where is the service that targets the client (agent) pods, but not the server pods? Did I miss it in the Helm chart?</p>
<p>My plan, given that I am not going to use host (Kubernetes node) networking, is:</p>
<ol>
<li>Find the client/agent service or make my own, so it can be used by Consul's consumers. E.g., I would specify this service address for the consul-template init container in the config-consuming application.</li>
</ol>
<pre><code>kubectl get pods --selector app=consul,component=client,release=consul
consul-8l587 1/1 Running 0 11h
consul-cfd8z 1/1 Running 0 11h
consul-vrmtp 1/1 Running 0 11h
</code></pre>
<ol start="2">
<li>Optional: add topologyKeys to the agent service, so each consumer will not cross the host boundary</li>
</ol>
<p><strong>Question 2</strong> Is this the right approach? Or is it different for Consul Kubernetes deployments?</p>
| Vetal | <p>You can use the Kubernetes downward API to inject the host's IP as an environment variable into your pod.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: consul-example
spec:
containers:
- name: example
image: 'consul:latest'
env:
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
command:
- '/bin/sh'
- '-ec'
- |
export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
consul kv put hello world
restartPolicy: Never
</code></pre>
<p>See <a href="https://www.consul.io/docs/k8s/installation/install#accessing-the-consul-http-api" rel="nofollow noreferrer">https://www.consul.io/docs/k8s/installation/install#accessing-the-consul-http-api</a> for more info.</p>
| Blake Covarrubias |
<p>I would like to merge multiple kubeconfig files into one config file. I am using Windows 10 and PowerShell for the command line. I have 3 config files in the <code>$HOME\.kube\config</code> directory and I set a <em>KUBECONFIG</em> environment variable to <code>C:\Users\Username\.kube.\config</code></p>
<p>I tried the command below, but I received an error that says:</p>
<blockquote>
<p>KUBECONFIG=$HOME.kube\config:$HOME.kube\c1.kubeconfig\$HOME.kube\c2.kubeconfig : The module 'KUBECONFIG=$HOME' could not be loaded. For more information, run 'Import-Module KUBECONFIG=$HOME'. At line:1 char:1
+ KUBECONFIG=$HOME.kube\config:$HOME.kube\c1.kubeconfig\$HOME.k ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (KUBECONFIG=$HOM...2.kubeconfig:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CouldNotAutoLoadModule</p>
</blockquote>
<pre><code>KUBECONFIG=$HOME\.kube\config:$HOME\.kube\c1.kubeconfig\$HOME\.kube\c2.kubeconfig kubectl config view --merge --flatten $HOME\.kube\merged_kubeconfig
</code></pre>
<p>My Folder structure as below.</p>
<pre><code>.kube
-c1.kubeconfig
-c2.kubeconfig
-config
</code></pre>
| semural | <p>I resolved the issue of merging kubeconfig files on Windows using the command below:</p>
<pre><code>$Env:KUBECONFIG=("$HOME\.kube\config;$HOME\.kube\c1.kubeconfig;$HOME\.kube\c2.kubeconfig"); kubectl config view --merge --flatten | Out-File "C:\Users\SU\tmp\config"
</code></pre>
| semural |
<ol>
<li>I use openssl to create a wildcard self-signed certificate. I set the certificate validity duration
to ten years (I double-checked the validity duration by inspecting the certificate with openssl)</li>
<li>I create a Kubernetes secret with the private key and certificate prepared in step 1 with following <code>kubectl</code> command:
<code>kubectl create secret tls my-secret -n test --key server.key --cert server.crt</code></li>
<li>We use nginx ingress controller version 0.25.1 running on AWS EKS</li>
<li>I refer to this secret in the Kubernetes ingress of my service</li>
<li>When connecting to my service via browser and inspecting the certificate, I notice it is issued by
"Kubernetes ingress Controller Fake certificate" and <strong>expires in one year instead of ten years</strong></li>
</ol>
<p>This certificate is used for internal traffic only, we expect the validity duration to be ten years. Why is it changed to one year? What can be done to keep the validity duration in the original certificate?</p>
<p><code>kubectl get secret dpaas-secret -n dpaas-prod -o yaml</code>:</p>
<pre><code>apiVersion: v1
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2ekNDQXFlZ0F3SUJBZ0lRSE5tMUQwakYxSHBQYVk2cTlCR2hGekFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVJrd0Z3WURWUVFLRXhCaFkyMWxJRk5sYkdZZ1UybG5ibVZrTVJVd0V3WURWUVFMRXd4aFkyMWxJR1Y0WVcxdwpiR1V4SERBYUJnTlZCQU1URTJGamJXVWdVMlZzWmlCVGFXZHVaV1FnUTBFd0hoY05NVGt4TWpFMk1UUXhOak14CldoY05Namt4TWpFMk1ERXhOak14V2pCWk1Rc3dDUVlEVlFRR0V3SlZVekVaTUJjR0ExVUVDaE1RWVdOdFpTQlQKWld4bUlGTnBaMjVsWkRFUk1BOEdBMVVFQ3hNSVlXTnRaUzVqYjIweEhEQWFCZ05WQkFNTUV5b3VaSEJ6TG0xNQpZMjl0Y0dGdWVTNWpiMjB3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzRjVmtaCndJY1cwS1VpTk5zQ096YjlleFREQU11SENwT3Jia2ExNmJHVHhMd2FmSmtybzF4TGVjYU8yM3p0SkpaRTZEZG8KZlB1UXAyWlpxVjJoL0VqL3ozSmZrSTRKTXdRQXQyTkd2azFwZk9YVlJNM1lUT1pWaFNxMU00Sm01ZC9BUHMvaApqZTdueUo4Y1J1aUpCMUh4SStRRFJpMllBK3Nzak12ZmdGOTUxdVBwYzR6Skppd0huK3JLR0ZUOFU3d2FueEdJCnRGRC9wQU1LaXRzUEFheW9aT2czazk4ZnloZS9ra1Z0TlNMTTdlWEZTODBwUEg2K3lEdGNlUEtMZ3N6Z3BqWFMKWGVsSGZLMHg1TXhneXIrS0dTUWFvU3Q0SVVkYWZHa05meVVYVURTMmtSRUdzVERVcC94R2NnQWo3UGhJeGdvZAovNWVqOXRUWURwNG0zWStQQWdNQkFBR2pnWXN3Z1lnd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXCk1CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1COEdBMVVkSXdRWU1CYUEKRk5VOE90NXBtM0dqU3pPTitNSVJta3k3dVBVRU1DZ0dBMVVkRVFRaE1CK0NIU291WkhCekxuVnpMV1ZoYzNRdApNUzV0ZVdOdmJYQmhibmt1WTI5dE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQlJITlN5MXZoWXRoeUpHRHpkCnpjSi9FR3QxbktXQkEzdFovRmtlRDhpelRiT282bmt0Sys5Rk52YmNxeGFQbzZSTWVtM2VBcEZCQkNFYnFrQncKNEZpRkMzODFRTDFuZy9JL3pGS2lmaVRlK0xwSCtkVVAxd1IzOUdOVm9mdWdNcjZHRmlNUk5BaWw4MVZWLzBEVworVWZ2dFg5ZCtLZU0wRFp4MGxqUkgvRGNubEVNN3Z3a3F5NXA2VklJNXN6Y1F4WTlGWTdZRkdMUms4SllHeXNrCjVGYW8vUFV1V1ZTaUMzRk45eWZ0Y3c1UGZ6MSt4em1PSmphS3FjQWkvcFNlSVZzZ0VTNTFZYm9JTUhLdDRxWGcKSlFqeU9renlKbFhpRzhBa2ZRMVJraS91cmlqZllqaWl6M04yUXBuK2laSFNhUmJ5OUhJcGtmOGVqc3VTb2wrcgpxZ3N4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdUhGWkdjQ0hGdENsSWpUYkFqczIvWHNVd3dETGh3cVRxMjVHdGVteGs4UzhHbnlaCks2TmNTM25HanR0ODdTU1dST2czYUh6N2tLZG1XYWxkb2Z4SS84OXlYNUNPQ1RNRUFMZGpScjVOYVh6bDFVVE4KMkV6bVZZVXF0VE9DWnVYZndEN1A0WTN1NThpZkhFYm9pUWRSOFNQa0EwWXRtQVByTEl6TDM0QmZlZGJqNlhPTQp5U1lzQjUvcXloaFUvRk84R3A4UmlMUlEvNlFEQ29yYkR3R3NxR1RvTjVQZkg4b1h2NUpGYlRVaXpPM2x4VXZOCktUeCt2c2c3WEhqeWk0TE00S1kxMGwzcFIzeXRNZVRNWU1xL2loa2tHcUVyZUNGSFdueHBEWDhsRjFBMHRwRVIKQnJFdzFLZjhSbklBSSt6NFNNWUtIZitYby9iVTJBNmVKdDJQandJREFRQUJBb0lCQUJuWFY2SnlCUHMvVkVPTQpvRHFaelVTS1lBaEtMam5IVTVVcktDRUlrdWFmSTdPYVRXTjl5Y3FSVHk1b3RnSUxwRG9YUnR3TzFyZ1huQkZuCjEwU0Fza0dVOFBOT3IzZStmQXNWcG9VYzJIKzFEZ1pwVTJYQXNHeSs4WkxkbXFHTUIyTko2Wm95Wm94MjRVUDIKODFGdmd4MkQ1OGhGcHRHcml1RjlBSHRaNHdhUXhnNE9EckNnQUVmZU8rTlNsckEvZHB0bERFcDJYUHBVVGg5VQpKMGk2b3VGZjUwTllHZXptMTR5ZkpWMDhOdWJGYjNjWldNOUZYZXAvUDhnZTFXZXBRemJsZWtWcEQ0eGZQa1ZjCjNwam1GREszdUVuSC9qcmRKeDJUb0NjTkJqK21nemN6K1JNZUxTNVBQZ29sc0huSFVNdkg4eG51cis5MURlRXgKWVlSYUtRRUNnWUVBMkVjUjFHcTFmSC9WZEhNU3VHa2pRUXJBZ0QvbEpiL0NGUExZU2xJS0pHdmV5Vi9qbWtKNApRdUFiYWFZRDVOdmtxQ0dDd2N1YXZaU05YTnFkMGp5OHNBWmk0M0xOaCt0S1VyRDhOVlVYM2ZiR2wyQUx0MTFsCmVUM0ZRY1NVZmNreEZONW9jdEFrV3hLeG5hR2hpOHJlelpjVStRZkMxdDF5VTdsOXR0MUgrODhDZ1lFQTJsRjQKRDBnZHpKWHduQnBJM0FJSjNyMDlwalZpMXlScGgyUDRvK1U2a1cvTmE5NnRLcTg5VEg3c0ViQkpSWEVjUDdBawpSYnNHN1p4SStCQkF5cy9YdFhkOWowdXRBODc4c1hsbm5ZVTlUa21xQXRoYVBCaWh3K00wZE9LNnFhRFpueWZBCnE5Z2NoZ0ZvS3pGenBuMzVvVzllNStId2EyYWk2YW8yZnFHSFlFRUNnWUVBcVVaR3dEaWN2MHJXYUlSQVhMRjkKZEVUVUVnendickU5V0dRUndXbWdvbzBESEIyKzZGZXFCTDJlOXZ1SEJMTE9yb0U3OUM1RmVLZ3lWRUNQVWFOVQpFM21NSUhVVVJKTjE0bTYvbDRaNFhiUHVEMENQS3Y4Z2t0b3o3NXZLbFFESk40b3p1ZGtLKzNVUUswMzhRSXVTCkF0dURBTDZBVXVlVHVjL3VneGVDWmFVQ2dZQnFZUlE5YmdpSExmQ21QL0NNczdtWGZXTFM0R1NmTExEM05mRnIKKzBDRXFaUFJJaG9ER0l5bi81aU1MZmdtREMyVm93Q3BzYTU0alpUSXV6SzNJSHVkZ3ZIOXB3UlJQTVRJdmIyTgpkZVVmaHFsKzVXbGlxeVgzeTNnK0ZGU2NYekpyYVBWclJzenZSelE1Qjhtd3NPVzRrZ29PdDN0cytnQWNGOEtpCkJaZHZnUUtCZ1FDdTBSQmwrTkRIZ2RiZC9YUnN0ZmZWWUlDRnpPOFJoWUo0NGpZYWRVc3BtcW5BYUp3T3liL0EKek9FNC9RbE85MGx2WCtYbXFud3FKMzlDOVF0SDFCeTZGdVhhVCtGdVdxeWVlK3MyNEd1Rnp5b3pUVnBZNURROQpSUS9iL2NQbXZuMTZMMnlTaVY2d3N3Nk8xWGdtc2ZCZ0JHUjJsdU5PTjIwNGdnazRZWGgvZ3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
kind: Secret
metadata:
creationTimestamp: "2019-12-16T14:31:59Z"
name: dpaas-secret
namespace: dpaas-prod
resourceVersion: "134564"
selfLink: /api/v1/namespaces/dpaas-prod/secrets/dpaas-secret
uid: d1c692b6-2010-11ea-bce8-1247666f5179
type: kubernetes.io/tls
</code></pre>
<p><code>kubectl describe ingress ingress-test4 -n dpaas-prod</code>:</p>
<pre><code>Name: ingress-test4
Namespace: dpaas-prod
Address: ad6c6ea681f5d11ea91440a6af5c8987-559e0a22f4b3e398.elb.us-east-1.amazonaws.com
Default backend: default-http-backend:80 (<none>)
TLS:
dpaas-secret terminates
Rules:
Host Path Backends
---- ---- --------
test4.dps.mycompany.com
/ cpe-test4:80 (10.0.13.222:8080,10.0.38.178:8080)
Annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: false
nginx.ingress.kubernetes.io/server-alias: test4.dps.us-east-1.mycompany.com
nginx.ingress.kubernetes.io/ssl-redirect: true
Events: <none>
</code></pre>
| Assaf | <p>In general, "Kubernetes Ingress Controller Fake Certificate" indicates a problem with the certificate itself or with your setup. You can read more about it <a href="https://github.com/kubernetes/ingress-nginx/issues/1044" rel="nofollow noreferrer">here</a>, <a href="https://rancher.com/docs/rancher/v2.x/en/installation/options/troubleshooting/#cert-cn-is-kubernetes-ingress-controller-fake-certificate" rel="nofollow noreferrer">here</a>, <a href="https://github.com/kubernetes/ingress-nginx/issues/4674" rel="nofollow noreferrer">here</a> and <a href="https://github.com/kubernetes/ingress-nginx/issues/1984" rel="nofollow noreferrer">here</a>. </p>
<p>None of these posts will tell you exactly how to solve your problem, as the possible causes vary widely and depend on your certificate and how it was generated. </p>
<p><a href="https://github.com/kubernetes/ingress-nginx/issues/4674#issuecomment-558504185" rel="nofollow noreferrer">Here</a>, for example, it's reported that the problem was not in the certificate itself but in the ingress:</p>
<blockquote>
<p>I just realized that I was missing the host in the rule per se (not
sure if this is required, but it fixed the issues and now the cert.
being used is the one I'm providing and not the Fake Kubernetes one).
Example of my ingress:</p>
</blockquote>
<p>So, as I suggested in the comments, you reviewed the steps used to generate your certificate and discovered that adding the certificate's common name to the list of SANs and regenerating the self-signed certificate fixed the problem. </p>
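<p>For reference, a self-signed wildcard certificate with the common name also listed as a SAN can be generated roughly like this (the domain names are placeholders based on your ingress host, and <code>-addext</code> needs a reasonably recent OpenSSL):</p>
<pre><code>openssl req -x509 -nodes -newkey rsa:2048 -days 3650 \
  -keyout server.key -out server.crt \
  -subj "/CN=*.dps.mycompany.com" \
  -addext "subjectAltName=DNS:*.dps.mycompany.com,DNS:dps.mycompany.com"
</code></pre>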
| Mark Watney |
<p>Ask a question.
In the K8S environment, FluentBit outputs to Kafka, can establishes a connection and creates a topic, but generates data report error.</p>
<pre><code>[error] [output:kafka:kafka.0] fluent-bit#producer-1: [thrd:data39:9092/176]: data39:9092/176: Failed to resolve 'data39:9092': Name or service not known (after 197ms in state CONNECT)
</code></pre>
<p>I tried Fluent Bit 1.5 and 1.6 and had the same problem.
I only changed the Kafka broker and topic in the FluentBit configuration file, everything else is the default configuration.
I checked /etc/hosts and there is no problem.</p>
<p>The Kafka profile is set to:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://x.x.x.x:9092</p>
<p>Also, Fluent Bit output to Elasticsearch works normally.</p>
<p>Any help would be appreciated, thank you very much.</p>
| user3804623 | <p>Posting this answer as a community wiki as original poster of this question provided a solution to his issue in the comments.</p>
<p>Feel free to edit and improve this answer.</p>
<blockquote>
<p>it works. in fluent-bit-ds.yaml - mountPath: /etc/hosts name: hosts readOnly: true</p>
</blockquote>
<p>The changes most probably were made to the <code>fluent-bit-ds.yaml</code> from this site:</p>
<ul>
<li><em><a href="https://docs.fluentbit.io/manual/installation/kubernetes#fluent-bit-to-elasticsearch" rel="nofollow noreferrer">Docs.fluentbit.io: Manual: Installation: Kubernetes: Fluent-bit</a></em></li>
</ul>
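<p>Based on that comment, the relevant change to the Fluent Bit DaemonSet would look roughly like the sketch below - a <code>hostPath</code> mount of <code>/etc/hosts</code> into the container (the container and volume names are assumptions):</p>
<pre><code># fragment of fluent-bit-ds.yaml (spec.template.spec)
containers:
  - name: fluent-bit
    # ...existing image, ports and other volumeMounts...
    volumeMounts:
      - name: hosts
        mountPath: /etc/hosts
        readOnly: true
volumes:
  - name: hosts
    hostPath:
      path: /etc/hosts
</code></pre>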
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kafka.apache.org/" rel="nofollow noreferrer">Kafka.apache.org</a></em></li>
<li><em><a href="https://fluentbit.io/" rel="nofollow noreferrer">Fluentbit.io</a></em></li>
</ul>
| Dawid Kruk |
<p>I'm trying to add consul ingress to my project, and I'm using this GitHub repo as a doc for ui and ingress: <a href="https://github.com/hashicorp/consul-helm/blob/497ebbf3cf875b028006a0453d6232df50897d42/values.yaml#L610" rel="nofollow noreferrer">here</a> and as you can see unfortunately there is no ingress in doc, there is an ingressGateways which is not useful because doesn't create ingress inside Kubernetes(it can just expose URL to outside)</p>
<p>I have searched a lot, there are 2 possible options:</p>
<p>1: create extra deployment for ingress</p>
<p>2: create consul helm chart to add ingress deploy</p>
<p>(unfortunately I couldn't find a proper solution for this on the Internet)</p>
| sasan | <p>The <a href="https://github.com/hashicorp/consul-helm/blob/v0.25.0/values.yaml#L1205" rel="nofollow noreferrer"><code>ingressGateways</code></a> config in the Helm chart is for deploying a Consul <a href="https://www.consul.io/docs/connect/gateways/ingress-gateway" rel="nofollow noreferrer">ingress gateway</a> (powered by Envoy) for Consul service mesh. This is different from a Kubernetes Ingress.</p>
<p>Consul's ingress enables routing to applications running inside the service mesh, and is configured using an <a href="https://www.consul.io/docs/agent/config-entries/ingress-gateway" rel="nofollow noreferrer">ingress-gateway</a> configuration entry (or in the future using <a href="https://www.consul.io/docs/k8s/crds" rel="nofollow noreferrer">Consul CRDs</a>). It cannot route to endpoints that exist outside the service mesh, such as Consul's API/UI endpoints.</p>
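<p>For illustration only - the service name and port below are assumptions - an ingress-gateway configuration entry written in HCL looks roughly like this, and is applied with <code>consul config write ingress-gateway.hcl</code>:</p>
<pre><code>Kind = "ingress-gateway"
Name = "ingress-gateway"

Listeners = [
  {
    Port     = 8080
    Protocol = "http"
    Services = [
      {
        Name = "my-app"
      }
    ]
  }
]
</code></pre>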
<p>If you need a generic ingress that can route to applications outside the mesh, I recommend using a solution such as Ambassador, Traefik, or Gloo. All three of this also support integrations with Consul for service discovery, or service mesh.</p>
| Blake Covarrubias |
<p>I have 2 namespaces called <code>dev</code> and <code>stage</code>.
In both namespaces I have similar setups, including a service called frontend.</p>
<p>I wanted to set up an ingress for this. I set up ingress in both namespaces with the following config: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: dev.myapp.io
http:
paths:
- backend:
serviceName: frontend
servicePort: 80
</code></pre>
<p>In stage I just changed the host to <code>stage.myapp.io</code>. It is not working for one of the namespaces.
Is my approach correct? Or do I need to set up the ingress in another namespace (kube-system maybe) and point both paths in the same ingress? </p>
<p>PS: If I change the service names and keep them different, the 2 ingresses work just fine, but I want to keep the same service name in both namespaces, as it simplifies my other deployments.</p>
| Guru | <p>You need to specify the namespace in your Ingress metadata. With that, your yaml files should look like this: </p>
<p>Dev:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress-dev
namespace: dev
spec:
rules:
- host: dev.myapp.io
http:
paths:
- backend:
serviceName: frontend
servicePort: 80
</code></pre>
<p>Stage:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress-stage
namespace: stage
spec:
rules:
- host: stage.myapp.io
http:
paths:
- backend:
serviceName: frontend
servicePort: 80
</code></pre>
| Mark Watney |
<p>I am using a baremetal cluster of 1 master and 2 nodes on premise in my home lab with istio, metallb and calico.</p>
<p>I want to create a DNS server in kubernetes that translates IPs for the hosts on the LAN.</p>
<p>Is it possible to use the coreDNS already installed in k8s?</p>
| rbr86 | <p>Yes, it's possible but there are some points to consider when doing that. Most of them are described in the Stackoverflow answer below:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/55834721/how-to-expose-kubernetes-dns-externally">Stackoverflow.com: Questions: How to expose Kubernetes DNS externally</a></em></li>
</ul>
<p>For example: The DNS server would be resolving the queries that are internal to the Kubernetes cluster (like <code>nslookup kubernetes.default.svc.cluster.local</code>).</p>
<hr />
<p>I've included the example on how you can expose your <code>CoreDNS</code> to external sources and add a <code>Service</code> that would be pointing to some IP address</p>
<p>Steps:</p>
<ul>
<li>Modify the <code>CoreDNS</code> <code>Service</code> to be available outside.</li>
<li>Modify the <code>configMap</code> of your <code>CoreDNS</code> according to:
<ul>
<li><em><a href="https://coredns.io/plugins/k8s_external/" rel="nofollow noreferrer">CoreDNS.io: Plugins: K8s_external</a></em></li>
</ul>
</li>
<li>Create a <code>Service</code> that is pointing to external device.</li>
<li>Test</li>
</ul>
<hr />
<h3>Modify the <code>CoreDNS</code> <code>Service</code> to be available outside.</h3>
<p>As you are new to Kubernetes, you are probably aware of how <code>Services</code> work and which types can be made available outside. You will need to change your <code>CoreDNS</code> <code>Service</code> from <code>ClusterIP</code> to either <code>NodePort</code> or <code>LoadBalancer</code> (I'd reckon <code>LoadBalancer</code> would be a better idea considering that <code>metallb</code> is used and you will access the <code>DNS</code> server on port <code>53</code>)</p>
<ul>
<li><code>$ kubectl edit --namespace=kube-system service/coredns </code> (or <code>kube-dns</code>)</li>
</ul>
<blockquote>
<p>A side note!</p>
<p><code>CoreDNS</code> is using <code>TCP</code> and <code>UDP</code> simultaneously, which could be an issue when creating a <code>LoadBalancer</code>. Here you can find more information on it:</p>
<ul>
<li><a href="https://metallb.universe.tf/usage/" rel="nofollow noreferrer">Metallb.universe.tf: Usage</a> (at the bottom)</li>
</ul>
</blockquote>
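<p>The relevant part of that edit is just the Service type (a minimal sketch - the rest of the Service, including the port 53 definitions, stays as it is):</p>
<pre><code>spec:
  type: LoadBalancer   # was ClusterIP; NodePort is also an option
</code></pre>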
<hr />
<h3>Modify the <code>configMap</code> of your <code>CoreDNS</code></h3>
<p>If you would like to resolve domain like for example: <code>example.org</code> you will need to edit the <code>configMap</code> of <code>CoreDNS</code> in a following way:</p>
<ul>
<li><code>$ kubectl edit configmap --namespace=kube-system coredns</code></li>
</ul>
<p>Add the line to the <code>Corefile</code>:</p>
<pre><code> k8s_external example.org
</code></pre>
<blockquote>
<p>This plugin allows an additional zone to resolve the external IP address(es) of a Kubernetes service. This plugin is only useful if the kubernetes plugin is also loaded.</p>
<p>The plugin uses an external zone to resolve in-cluster IP addresses. It only handles queries for A, AAAA and SRV records; all others result in NODATA responses. To make it a proper DNS zone, it handles SOA and NS queries for the apex of the zone.</p>
<p>-- <em><a href="https://coredns.io/plugins/k8s_external/" rel="nofollow noreferrer">CoreDNS.io: Plugins: K8s_external</a></em></p>
</blockquote>
<hr />
<h3>Create a <code>Service</code> that is pointing to external device.</h3>
<p>Following the link I've included, you can now create a <code>Service</code> that will point to an IP address:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: test
namespace: default
spec:
clusterIP: None
externalIPs:
- 192.168.200.123
type: ClusterIP
</code></pre>
<hr />
<h3>Test</h3>
<p>I've used <code>minikube</code> with <code>--driver=docker</code> (with <code>NodePort</code>) but I'd reckon you can use the <code>ExternalIP</code> of your <code>LoadBalancer</code> to check it:</p>
<ul>
<li><code>dig @192.168.49.2 test.default.example.org -p 32261 +short</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>192.168.200.123
</code></pre>
<p>where:</p>
<ul>
<li><code>@192.168.49.2</code> - IP address of <code>minikube</code></li>
<li><code>test.default.example.org</code> - service-name.namespace.k8s_external_domain</li>
<li><code>-p 32261</code> - <code>NodePort</code> port</li>
<li><code>+short</code> - to limit the output</li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://linux.die.net/man/1/dig" rel="nofollow noreferrer">Linux.die.net: Man: Dig</a></em></li>
</ul>
| Dawid Kruk |
<p>In a consul-connect-service-mesh (using k8) how do you get to the consul-api itself?
For example to access the consul-kv.</p>
<p>I'm working through this <a href="https://www.consul.io/docs/k8s/connect" rel="nofollow noreferrer">tutorial</a>, and I'm wondering how
you can bind the consul (http) api in a service to localhost.</p>
<p>Do you have to configure the Helm Chart further?
I would have expected the consul-agent to always be an upstream service.</p>
<p>The only way I found to access the API is via the k8s service <code>consul-server</code>.</p>
<p>Environment:</p>
<ul>
<li>k8 (1.22.5, docker-desktop)</li>
<li>helm consul (0.42)</li>
<li>consul (1.11.3)</li>
<li>used helm-yaml</li>
</ul>
<pre><code>global:
name: consul
datacenter: dc1
server:
replicas: 1
securityContext:
runAsNonRoot: false
runAsGroup: 0
runAsUser: 0
fsGroup: 0
ui:
enabled: true
service:
type: 'NodePort'
connectInject:
enabled: true
controller:
enabled: true
</code></pre>
| Florian Hilfenhaus | <p>You can access the Consul API on the local agent by using the Kubernetes downward API to inject an environment variable in the pod with the IP address of the host. This is documented on Consul.io under <a href="https://www.consul.io/docs/k8s/installation/install#accessing-the-consul-http-api" rel="nofollow noreferrer">Installing Consul on Kubernetes: Accessing the Consul HTTP API</a>.</p>
<p>You will also need to exclude port 8500 (or 8501) from redirection using the <a href="https://www.consul.io/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports" rel="nofollow noreferrer"><code>consul.hashicorp.com/transparent-proxy-exclude-outbound-ports</code></a> label.</p>
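<p>Put together, the relevant parts of the pod spec could look roughly like this sketch (the application name and image are assumptions, and the values assume the plain HTTP API on port 8500 - adjust for HTTPS/8501):</p>
<pre><code>metadata:
  annotations:
    consul.hashicorp.com/connect-inject: "true"
    consul.hashicorp.com/transparent-proxy-exclude-outbound-ports: "8500"
spec:
  containers:
    - name: my-app
      image: my-app:latest        # assumed image
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: CONSUL_HTTP_ADDR
          value: "$(HOST_IP):8500"
</code></pre>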
| Blake Covarrubias |
<p>I am considering porting a legacy pipeline that builds and tests Docker/OCI images into GitLab CI/CD. I already have a GitLab Runner in a Kubernetes cluster and it's registered to a GitLab instance. Testing a particular image requires running certain commands inside (for running unit tests, etc.). Presumably this could be modeled by a job <code>my_test</code> like so:</p>
<pre><code>my_test:
stage: test
image: my_image_1
script:
- my_script.sh
</code></pre>
<p>However, these tests are not completely self-contained but also require the presence of a second container (a database, i.e.). At the outset, I can imagine one, perhaps suboptimal way for handling this (there would also have to be some logic for waiting until <code>my_image2</code> has started up and a way for <code>kubectl</code> to obtain sufficient credentials):</p>
<pre><code> before_script: kubectl create deployment my_deployment2 ...
after_script: kubectl delete deployment my_deployment2 ...
</code></pre>
<p>I am fairly new to GitLab CI/CD, so I am wondering: what is best practice for modeling a test like this one, i.e. situations where tests require orchestration of multiple containers? (Does this fit into the scope of a GitLab job, or should it rather be delegated to other software that <code>my_test</code> could talk to?)</p>
| rookie099 | <p>Your first look should be at <a href="https://docs.gitlab.com/ee/ci/services/" rel="nofollow noreferrer">Services</a>.</p>
<p>With services you can start a container running <code>MySQL</code> or <code>Postgres</code> and run tests which will connect to it.</p>
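<p>A minimal <code>.gitlab-ci.yml</code> sketch of that idea - the image names, variables and alias are assumptions based on your description, and with the Kubernetes executor the service container runs alongside the job container:</p>
<pre><code>my_test:
  stage: test
  image: my_image_1
  services:
    - name: postgres:12        # the second container, e.g. a database
      alias: db                # hostname the job can use to reach it
  variables:
    POSTGRES_DB: testdb
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: secret
  script:
    - my_script.sh             # connect to the database via host "db"
</code></pre>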
| Johann-Michael Thiebaut |
<p>OpenShift (and probably k8s, too) updates a deployment's existing environment variables and creates new ones when they were changed in the respective <code>DeploymentConfig</code> in a template file before applying it.</p>
<p>Is there a way to remove already existing environment variables if they are no longer specified in a template when you run <code>oc apply</code>?</p>
| Christian | <p>There is a way to achieve what you need and for that you need to <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="nofollow noreferrer">patch</a> your objects. You need to use the patch type <code>merge-patch+json</code> and as a patch you need to supply a complete/desired list of env vars.</p>
<p>As an example lets consider this deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mydeployment
labels:
app: sample
spec:
replicas: 2
selector:
matchLabels:
app: sample
template:
metadata:
labels:
app: sample
spec:
containers:
- name: envar-demo-container
image: gcr.io/google-samples/node-hello:1.0
env:
- name: VAR1
value: "Hello, I'm VAR1"
- name: VAR2
value: "Hey, VAR2 here. Don't kill me!"
</code></pre>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mydeployment-db84d9bcc-jg8cb 1/1 Running 0 28s
mydeployment-db84d9bcc-mnf4s 1/1 Running 0 28s
</code></pre>
<pre><code>$ kubectl exec -ti mydeployment-db84d9bcc-jg8cb -- env | grep VAR
VAR1=Hello, I'm VAR1
VAR2=Hey, VAR2 here. Don't kill me!
</code></pre>
<p>Now, to remove VAR2 we have to export our yaml deployment:</p>
<pre><code>$ kubectl get deployments mydeployment -o yaml --export > patch-file.yaml
</code></pre>
<p>Edit this file and remove VAR2 entry:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: sample
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: sample
spec:
containers:
- env:
- name: VAR1
value: Hello, I'm VAR1
image: gcr.io/google-samples/node-hello:1.0
imagePullPolicy: IfNotPresent
name: patch-demo-ctr
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
</code></pre>
<p>Now we need to patch it with the following command:</p>
<pre><code>$ kubectl patch deployments mydeployment --type merge --patch "$(cat patch-file.yaml)"
deployment.extensions/mydeployment patched
</code></pre>
<p>Great, If we check our pods we can see that we have 2 new pods and the old ones are being terminated:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mydeployment-8484d6887-dvdnc 1/1 Running 0 5s
mydeployment-8484d6887-xzkhb 1/1 Running 0 3s
mydeployment-db84d9bcc-jg8cb 1/1 Terminating 0 5m33s
mydeployment-db84d9bcc-mnf4s 1/1 Terminating 0 5m33s
</code></pre>
<p>Now, if we check the new pods, we can see they have only VAR1:</p>
<pre><code>$ kubectl exec -ti mydeployment-8484d6887-dvdnc -- env | grep VAR
VAR1=Hello, I'm VAR1
</code></pre>
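<p>As a lighter-weight alternative for dropping a single variable, <code>kubectl set env</code> with a trailing dash removes it and triggers the same rollout:</p>
<pre><code>$ kubectl set env deployment/mydeployment VAR2-
</code></pre>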
| Mark Watney |
<p>In a Hosted Rancher Kubernetes cluster, I have a service that exposes a websocket service (a Spring SockJS server).
This service is exposed to the outside thanks to an ingress rule:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myIngress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600s"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600s"
nginx.ingress.kubernetes.io/enable-access-log: "true"
spec:
rules:
- http:
paths:
- path: /app1/mySvc/
backend:
serviceName: mySvc
servicePort: 80
</code></pre>
<p>A <strong>web application</strong> connects to the websocket service through the nginx ingress and it works fine. The loaded JS script is:</p>
<pre class="lang-js prettyprint-override"><code> var socket = new SockJS('ws');
stompClient = Stomp.over(socket);
stompClient.connect({}, onConnected, onError);
</code></pre>
<p>On the contrary, the <strong>standalone clients</strong> (JS or Python) do not work, as they return a 400 HTTP error. </p>
<p>For example, here is the request sent by curl and the response from nginx:</p>
<pre class="lang-sh prettyprint-override"><code>curl --noproxy '*' --include \
--no-buffer \
-Lk \
--header "Sec-WebSocket-Key: l3ApADGCNFGSyFbo63yI1A==" \
--header "Sec-WebSocket-Version: 13" \
--header "Host: ingressHost" \
--header "Origin: ingressHost" \
--header "Connection: keep-alive, Upgrade" \
--header "Upgrade: websocket" \
--header "Sec-WebSocket-Extensions: permessage-deflate" \
--header "Sec-WebSocket-Protocol: v10.stomp, v11.stomp, v12.stomp" \
--header "Access-Control-Allow-Credentials: true" \
https://ingressHost/app1/mySvc/ws/websocket
HTTP/2 400
date: Wed, 20 Nov 2019 14:37:36 GMT
content-length: 34
vary: Origin
vary: Access-Control-Request-Method
vary: Access-Control-Request-Headers
access-control-allow-origin: ingressHost
access-control-allow-credentials: true
set-cookie: JSESSIONID=D0BC1540775544E34FFABA17D14C8898; Path=/; HttpOnly
strict-transport-security: max-age=15724800; includeSubDomains
Can "Upgrade" only to "WebSocket".
</code></pre>
<p>Why does it work with the browser and not standalone clients ?</p>
<p>Thanks</p>
| crepu | <p>The problem doesn't seem to be in the nginx Ingress. The presence of the JSESSIONID cookie most likely indicates that the Spring application gets the request and sends a response.</p>
<p>A quick search through Spring's code shows that <code>Can "Upgrade" only to "WebSocket".</code> is returned by <a href="https://github.com/spring-projects/spring-framework/blob/master/spring-websocket/src/main/java/org/springframework/web/socket/server/support/AbstractHandshakeHandler.java#L294-L300" rel="nofollow noreferrer">AbstractHandshakeHandler.java</a> when the <a href="https://github.com/spring-projects/spring-framework/blob/master/spring-websocket/src/main/java/org/springframework/web/socket/server/support/AbstractHandshakeHandler.java#L251-L254" rel="nofollow noreferrer"><code>Upgrade</code> header isn't equal to <code>WebSocket</code></a> (case-insensitive match).</p>
<p>I'd suggest double-checking that the <code>"Upgrade: websocket"</code> header is present when making the call with <code>curl</code>. </p>
<p>Also, this appears to be a <a href="https://stackoverflow.com/questions/38376316/handle-the-same-url-with-spring-mvc-requestmappinghandlermapping-and-spring-webs">similar problem</a> and may apply here as well if the application has several controllers.</p>
<p>And for what it's worth, after replacing <code>ingressHost</code> appropriately I tried the same <code>curl</code> command as in the question against <code>https://echo.websocket.org</code> and a local STOMP server I've implemented some time ago. It worked for both. </p>
<p>You may have already done this, but have you tried capturing the network traffic in the browser to see the request/response exchange, especially the one that returns HTTP <code>101 Switching Protocols</code>? Then try to exactly replicate the call that the browser makes and that is successful. For example, the STOMP client generates a session id and uses a queue/topic, which are put in the URL path in requests to the server (e.g. <code>/ws/329/dt1hvk2v/websocket</code>). The test request with <code>curl</code> doesn't seem to have them. </p>
| gears |
<p>I am looking to spin up a specific number of pods that are independent and not load balanced. (The intent is to use these to send and receive certain traffic to and from some external endpoint.) The way I am planning to do this is to create the pods explicitly (yaml snippet as below)</p>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: generator-agent-pod-1
labels:
app: generator-agent
version: v1
spec:
containers:
...
</code></pre>
<p>(In this, the name will be auto-generated as <code>generator-agent-pod-1, generator-agent-pod-2</code>, etc.)</p>
<p>I am then looking to create one service per pod: so essentially, there'll be a <code>generator-agent-service-1, generator-agent-service-2</code>, etc., and so I can then use the service to be able to reach the pod from outside.</p>
<p>I now have two questions:
1. In the service, how do I select a specific pod by name (instead of by labels)? something equivalent to:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: generator-agent-service-1
labels:
app: agent-service
spec:
type: NodePort
ports:
- port: 8085
protocol: TCP
selector:
metadata.name: generator-agent-pod-1
</code></pre>
<p>(This service does not get any endpoints, so the selector is incorrect, I suppose.)</p>
<ol start="2">
<li>Is there a better way to approach this problem (Kubernetes or otherwise)?</li>
</ol>
<p>Thanks!</p>
| Srinivas | <p>I think you are using a StatefulSet to control the Pods.
If so, you can use the label <code>statefulset.kubernetes.io/pod-name</code> to select a specific pod in a service.</p>
<p>For illustration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: generator-agent-service-1
labels:
app: agent-service
spec:
type: NodePort
ports:
- port: 8085
protocol: TCP
selector:
statefulset.kubernetes.io/pod-name: generator-agent-pod-1
</code></pre>
| DoroCoder |
<p>I am new to Kubernetes and I would like to try different CNI.</p>
<p>In my current Cluster, I am using Flannel</p>
<p>Now, I would like to use Calico but I cannot find a proper guide to clean up Flannel and install Calico.</p>
<p>Could you please point out the correct procedure?</p>
<p>Thanks</p>
| gaetano | <p>Calico provides a migration tool that performs a rolling update of the nodes in the cluster. At the end, you will have a fully-functional Calico cluster using VXLAN networking between pods.</p>
<p>From the <a href="https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/migration-from-flannel" rel="nofollow noreferrer">documentation</a> we have: </p>
<p><strong>Procedure</strong></p>
<p>1 - First, install Calico.</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/flannel-migration/calico.yaml
</code></pre>
<p>Then, install the migration controller to initiate the migration.</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/flannel-migration/migration-job.yaml
</code></pre>
<p>Once applied, you will see nodes begin to update one at a time.</p>
<p>2 - To monitor the migration, run the following command.</p>
<pre><code>kubectl get jobs -n kube-system flannel-migration
</code></pre>
<p>The migration controller may be rescheduled several times during the migration when the node hosting it is upgraded. The installation is complete when the output of the above command shows 1/1 completions. For example:</p>
<pre><code>NAME COMPLETIONS DURATION AGE
flannel-migration 1/1 2m59s 5m9s
</code></pre>
<p>3 - After completion, delete the migration controller with the following command.</p>
<pre><code>kubectl delete -f https://docs.projectcalico.org/v3.10/manifests/flannel-migration/migration-job.yaml
</code></pre>
<p>To know more about it: <strong><a href="https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/migration-from-flannel" rel="nofollow noreferrer">Migrating a cluster from flannel to Calico</a></strong></p>
<p>This article describes how migrate an existing Kubernetes cluster with flannel networking to use Calico networking.</p>
| Mark Watney |
<p>I am wondering if it is possible to have a byte array as kubernetes secret.
I created a byte array and a base64-encoded string as below</p>
<pre><code> SecureRandom random = new SecureRandom();
byte bytes[] = new byte[32];
random.nextBytes(bytes);
for (int i = 0; i < bytes.length; i++) {
System.out.print(bytes[i] + ",");
}
String token = Base64.getEncoder().withoutPadding().encodeToString(bytes);
</code></pre>
<p>Then I used the resulting string in a kubernetes secret. The secret gets created successfully.
Now I would like my Spring Boot application, which is running in kubernetes, to read and decode that value.
However, I get an IllegalArgumentException (Illegal base64 character).
When running the application locally and reading the same token from a properties file, it can be decoded.</p>
<p>So my question again: Is it possible to use a byte array as kubernetes secret?</p>
| Martin Baeumer | <p>The plain value is expected when creating a secret with <code>kubectl create secret generic</code>, whether using <code>--from-file</code> or <code>--from-literal</code> (as @fg78nc alluded to).</p>
<p>A base64-encoded value is required when <a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually" rel="nofollow noreferrer">Creating a Secret Manually</a> from a binary value.</p>
<p>If the secret's value is binary, I'd suggest mounting the secret as a volume and reading it from the file as a byte array - it will be base64-decoded in the file. </p>
<p>The secrets are base64-decoded automatically when getting the value from an environment variable created from the secret, from a file mounted as a volume, but not by <code>kubectl get secret</code> or when directly using the Kubernetes API (<code>GET /api/v1/namespaces/{namespace}/secrets/{name}</code>).</p>
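<p>A sketch of such a volume mount (the names and paths here are assumptions): the application can then read <code>/etc/secrets/token</code> as raw bytes, with the base64 decoding already done by Kubernetes:</p>
<pre><code>spec:
  containers:
    - name: app
      image: my-spring-boot-app:latest   # assumed image
      volumeMounts:
        - name: token-secret
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: token-secret
      secret:
        secretName: my-token             # assumed secret name; key "token" becomes a file
</code></pre>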
| gears |
<p>I have a scenario like:</p>
<ol>
<li>Have a single deployment containing two containers and have different ports like:</li>
</ol>
<pre><code> template: {
spec: {
containers: [
{
name: container1,
image: image1,
command: [...],
args: [...],
imagePullPolicy: IfNotPresent,
ports: [
{
name: port1,
containerPort: 80,
},
],
.............
},
{
name: container2,
image: image1,
command: [...],
args: [...],
imagePullPolicy: IfNotPresent,
ports: [
{
name: port2,
containerPort: 81,
},
],
------------
}
]
}
}
</code></pre>
<ol start="2">
<li>A service having multiple ports pointing to those containers like:</li>
</ol>
<pre><code>spec: {
type: ClusterIP,
ports: [
{
port: 7000,
targetPort: 80,
protocol: 'TCP',
name: port1,
},
{
port: 7001,
targetPort: 81,
protocol: 'TCP',
name: port2,
}
]
}
</code></pre>
<p>The problem I am facing is I can connect to the container having port 80 using service name and port 7000 but I can't connect to the container having port 81 using service name and port 7001. Did I miss anything here?
Also, note that both containers have identical images having different <strong>command</strong> and <strong>args</strong> for the internal logic.</p>
| Yashasvi Raj Pant | <p>You can use two services, or one service with two exposed ports.
You can try 2 services
with a deployment like this:</p>
<pre><code>spec:
containers:
- name: container1
    image: image1
ports:
- containerPort: 8080
- name: container2
image: image1
ports:
- containerPort: 8081
</code></pre>
<p>and the services :</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: container1
annotations:
version: v1.0
spec:
selector:
component: <deployment>
ports:
- name: container1
port: 8080
targetPort: 8080
type: ClusterIP
---
kind: Service
apiVersion: v1
metadata:
name: container2
annotations:
version: v1.0
spec:
selector:
component: <deployment>
ports:
- name: container2
port: 8080
targetPort: 8081
type: ClusterIP
</code></pre>
| ossama assaghir |
<p>I have a Jhipster application which I want to deploy to Kubernetes. I used the <code>jhipster kubernetes</code> command to create all the k8s objects and I provided a Docker Hub repository to push them to. The Docker Hub repository is a private one. </p>
<p>The deployment object looks like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: demodevices
namespace: demo
spec:
replicas: 1
selector:
matchLabels:
app: demodevices
version: 'v1'
template:
metadata:
labels:
app: demodevices
version: 'v1'
spec:
initContainers:
- name: init-ds
image: busybox:latest
command:
- '/bin/sh'
- '-c'
- |
while true
do
rt=$(nc -z -w 1 demodevices-postgresql 5432)
if [ $? -eq 0 ]; then
echo "DB is UP"
break
fi
echo "DB is not yet reachable;sleep for 10s before retry"
sleep 10
done
containers:
- name: demodevices-app
image: myRepo/demo:demodevices-1.0.0
env: ...
resources: ...
ports: ...
readinessProbe: ...
livenessProbe: ...
imagePullSecrets:
- name: regcred
</code></pre>
<p>Because I used a private Docker Hub repo, I added the <code>imagePullSecret</code>. The secret is created and deployed to k8s. </p>
<p>When applying the file, in the pods I see the following messages: </p>
<pre><code> Warning Failed <invalid> (x4 over <invalid>) kubelet, k8node1 Failed to pull image "busybox:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/library/busybox/manifests/latest: unauthorized: incorrect username or password
Warning Failed <invalid> (x4 over <invalid>) kubelet, k8node1 Error: ErrImagePull
Normal BackOff <invalid> (x6 over <invalid>) kubelet, k8node1 Back-off pulling image "busybox:latest"
Warning Failed <invalid> (x6 over <invalid>) kubelet, k8node1 Error: ImagePullBackOff
</code></pre>
<p>As I understand it, it tries to pull the busybox:latest image using the credentials for the private repository. The expected result is to pull busybox:latest without errors and to pull my custom image from my private repo. How do I fix the above issue?</p>
| florin | <p>This error is not caused by the use of an <code>imagePullSecret</code> as such - it indicates that the credentials stored in the secret are wrong. </p>
<p><a href="https://kubernetes.io/docs/concepts/containers/images/#creating-a-secret-with-a-docker-config" rel="nofollow noreferrer">Review</a> the process you used to create your secret; here is an example: </p>
<pre><code>kubectl create secret docker-registry anyname \
--docker-server=docker.io \
--docker-username=<username> \
--docker-password=<password> \
--docker-email=<email>
</code></pre>
<p>I have reproduced your case and got the same error when I created the secret with the wrong credentials. </p>
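<p>To double-check what actually ended up in the secret, you can decode it and verify the registry, username and password (the namespace and secret name are taken from your manifest):</p>
<pre><code>$ kubectl get secret regcred -n demo -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
</code></pre>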
| Mark Watney |
<p><strong><em>How do I configure the health check for a service that is automatically registered using the Consul catalog sync?</em></strong></p>
| Ankit Singh | <p>The health checks Kubernetes performs are called <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">"probes"</a>. There are three types of probes - liveness, readiness, and startup - and they are checks on the <em>application/process</em> running in a Container, not the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Service(s)</a> in front of the application's Pods. (One Pod can have one or more Containers; usually, as a best practice, a single application/process runs in a given Container)</p>
<p>Automatically syncing Kubernetes Services to Consul has no bearing on the probes executed against Containers running in Pods on the same Kubernetes cluster.</p>
<p>Kubernetes wouldn't check the health of Consul services synced to Kubernetes.</p>
<p>The answer to the question</p>
<blockquote>
<p>How to configure the health check for the service that is
automatically registered using the Consul sync Catalog.?</p>
</blockquote>
<p>is "It is not possible to configure Kubernetes probes for the Kubernetes Services synced with Consul or for Consul Services synced with Kubernetes. Kubernetes probes are configured for the application/process running in Container that is in a Kubernetes Pod".</p>
| gears |
<p>I created a deployment using a Kubernetes Deployment in an OpenShift cluster. I expected it to create the service for all the container ports like an OpenShift DeploymentConfig does,
but I did not find any service created by the Kubernetes Deployment.</p>
<p>Does a Kubernetes Deployment not create the service automatically like an OpenShift DeploymentConfig does?</p>
<p>My openshift version is 3.11</p>
| dsingh | <p>Neither <strong>Deployment</strong> nor <strong>DeploymentConfig</strong> creates the <strong>Service</strong> component in OpenShift. These components are used for replication control of the Pods.</p>
<p>A Service has to be configured separately, with its <strong>selector</strong> matching the labels defined in the Pod template of the Deployment or DeploymentConfig.</p>
<p>This link explains it in more detail:</p>
<p><a href="https://docs.openshift.com/container-platform/3.3/dev_guide/deployments/how_deployments_work.html#creating-a-deployment-configuration" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.3/dev_guide/deployments/how_deployments_work.html#creating-a-deployment-configuration</a></p>
| Wasim Ansari |
<p>I am using kubernetes dasboard in version: v1.10.1</p>
<p>When I go to "Roles" tab I can see a list of ClusterRoles and Roles.</p>
<p><a href="https://i.stack.imgur.com/xWxdX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xWxdX.png" alt="k8s dashboard"></a></p>
<p>I would like to see more details about a particular role from the list, but I do not see any "details" button. I want to see information about the role in the dashboard widget or even in yaml format. Am I missing something or this is not possible through dashboard? </p>
| fascynacja | <p>Unfortunately it's not possible to achieve what you described in Kubernetes Dashboard even on the most recent version. </p>
<p>To list all RoleBindings and ClusterRoleBindings (together with the service accounts they are granted to) on your cluster, you need to use the command line tool (kubectl): </p>
<pre><code>kubectl get rolebindings,clusterrolebindings --all-namespaces -o custom-columns='KIND:kind,NAMESPACE:metadata.namespace,NAME:metadata.name,SERVICE_ACCOUNTS:subjects[?(@.kind=="ServiceAccount")].name'
</code></pre>
<p>Then you can extract the YAML of a specific binding, as in this example: </p>
<pre><code>kubectl get clusterrolebindings prometheus -o yaml
</code></pre>
<p>Or you can just describe it:</p>
<pre><code>kubectl describe clusterrolebindings prometheus
</code></pre>
| Mark Watney |
<p>I have a k3s (lightweight k8s) cluster running on my Raspberry Pi. So, I am not using any cloud-hosted cluster but a bare metal one on my Raspberry Pi.</p>
<p>I have deployed a application with this manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
namespace: myapp
spec:
replicas: 3
selector:
matchLabels:
app: hello-world
template:
metadata:
labels:
app: hello-world
spec:
containers:
- name: hello-world
image: bashofmann/rancher-demo:1.0.0
imagePullPolicy: Always
resources:
requests:
cpu: 200m
ports:
- containerPort: 8080
name: web
protocol: TCP
</code></pre>
<p>I also created a service to forward traffic to the application pod. Its manifest is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: demo-app-svc
namespace: myapp
spec:
selector:
app: hello-world
ports:
- name: web
protocol: TCP
port: 31113
targetPort: 8080
</code></pre>
<p>Then, I created a Ingress for the routing rules:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myapp-ing
namespace: myapp
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: myapp.com
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: demo-app-svc
port:
number: 31113
</code></pre>
<p>I successfully deployed above application pod, service & Ingress to my k3s cluster. Like the manifests indicate, they are under namespace <code>myapp</code>.</p>
<p>The next thing I would like to do is to deploy the <strong>Kubernetes Nginx Ingress Controller</strong> in order to have the clients outside the cluster be able to access the deployed application.</p>
<p>So, I deployed it by :</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>The above command successfully deployed <strong>Ingress Controller</strong> under namespace <code>ingress-nginx</code> along with other objects as shown below with command <code>k get all -n ingress-nginx</code>:
<a href="https://i.stack.imgur.com/QZH7P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QZH7P.png" alt="enter image description here" /></a></p>
<p>As you can see above, the <code>LoadBalancer</code> type <code>service</code> external IP has the value <code><pending></code>. So, clients outside the cluster still cannot access the application pod.</p>
<p>Why is that & what am I missing when deploying the Nginx Ingress Controller on a bare metal machine? The goal is to have an external IP that can be used to access the application pod from outside the cluster; how can I achieve that?</p>
<p><strong>===== Update =====</strong></p>
<p>Based on the answer below from @Dawid Kruk , I decided to use the k3s default Traefik Ingress Controller.</p>
<p>So, I deleted all the deployed Nginx Ingress Controller resources by <code>k delete all --all -n ingress-nginx</code> .</p>
<p>Then, I checked the Traefik Ingress related <code>LoadBalancer</code> type service:
<a href="https://i.stack.imgur.com/gjrbN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gjrbN.png" alt="enter image description here" /></a></p>
<p>The <code>external IP</code> of that Traefik service is exactly my Raspberry PI's IP address!</p>
<p>So, added this IP to <code>/etc/hosts</code> to map it to the hostname defined in my Ingress object:</p>
<pre><code>192.168.10.203 myapp.com
</code></pre>
<p>I opened a browser & used the address <a href="http://myapp.com" rel="nofollow noreferrer">http://myapp.com</a>; with the routing rules defined in my <code>Ingress</code> object (see the manifest for my ingress above), I hoped I could see my deployed web application now, but I get <code>404 Page Not Found</code>. What am I missing now to access my deployed application?</p>
<p><strong>Another side question:</strong> I noticed that when I check the deployed <code>Ingress</code> object, its IP address is empty. I wonder whether I am supposed to see an IP address for this object once the Traefik Ingress Controller takes effect?
<a href="https://i.stack.imgur.com/3GDNq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3GDNq.png" alt="enter image description here" /></a></p>
<p><strong>Another issue:</strong> Now, when I re-deploy my ingress manifest by <code>k apply -f ingress.yaml</code>, I get error:</p>
<pre><code>Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
...
for: "ingress.yaml": error when patching "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
</code></pre>
<p>It looks like even though I decided to use the Traefik Ingress Controller, I still need to install the Nginx Ingress Controller. I am confused now, can anyone explain it?</p>
| user842225 | <p>I'm not a K3S expert but I think I found a piece of documentation that addresses your issue.</p>
<p>Take a look:</p>
<blockquote>
<h3>Service Load Balancer</h3>
<p>Any service load balancer (LB) can be used in your K3s cluster. By default, K3s provides a load balancer known as <a href="https://github.com/k3s-io/klipper-lb" rel="nofollow noreferrer">ServiceLB</a> (formerly Klipper Load Balancer) that uses available host ports.</p>
<p>Upstream Kubernetes allows Services of type LoadBalancer to be created, but doesn't include a default load balancer implementation, so these services will remain <code>pending</code> until one is installed. Many hosted services require a cloud provider such as Amazon EC2 or Microsoft Azure to offer an external load balancer implementation. By contrast, the K3s ServiceLB makes it possible to use LoadBalancer Services without a cloud provider or any additional configuration.</p>
<h3>How the Service LB Works</h3>
<p>The ServiceLB controller watches Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a> with the <code>spec.type</code> field set to <code>LoadBalancer</code>.</p>
<p>For each LoadBalancer Service, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> is created in the <code>kube-system</code> namespace. This DaemonSet in turn creates Pods with a <code>svc-</code> prefix, on each node. <strong>These Pods use iptables to forward traffic from the Pod's NodePort, to the Service's ClusterIP address and port.</strong></p>
<p>If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's <code>status.loadBalancer.ingress</code> address list. Otherwise, the node's internal IP is used.</p>
<p>If multiple LoadBalancer Services are created, a separate DaemonSet is created for each Service.</p>
<p>It is possible to expose multiple Services on the same node, as long as they use different ports.</p>
<p>If you try to create a LoadBalancer Service that listens on port 80, the ServiceLB will try to find a free host in the cluster for port 80. If no host with that port is available, the LB will remain Pending.</p>
<p>-- <em><a href="https://docs.k3s.io/networking#service-load-balancer" rel="nofollow noreferrer">Docs.k3s.io: Networking</a></em></p>
</blockquote>
<p>As a possible solution, I'd recommend to use <code>Traefik</code> as it's a default <code>Ingress</code> controller within <code>K3S</code>.</p>
<p>The <code>Pending</code> status on your <code>LoadBalancer</code> is most likely caused by another service used on that port (<code>Traefik</code>).</p>
<p>If you wish to still use <code>NGINX</code>, the same documentation page explains how you can disable <code>Traefik</code>.</p>
<hr />
<h3>UPDATE</h3>
<p>I'd be more careful to delete resources as you did. The following command:</p>
<ul>
<li><code>k delete all --all -n ingress-nginx</code></li>
</ul>
<p>Will not delete all of the resources created (cluster-scoped objects, such as the admission webhook configuration, are left behind). The better way, in my opinion, is to delete with the same manifest that you used to create the resources; instead of:</p>
<ul>
<li><code>kubectl create -f ...</code></li>
</ul>
<p>Use:</p>
<ul>
<li><code>kubectl delete -f ...</code></li>
</ul>
<p>I assume that you did not modify your <code>Ingress</code> definition, hence you receive the error (the leftover NGINX admission webhook is still registered even though its backing service is gone) and <code>kubectl get ingress</code> shows incorrect results.</p>
<p>What you will need to do:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
ingressClassName: nginx # <-- DELETE IT OR CHANGE TO "traefik"
</code></pre>
<p>Either delete or change should work as <code>traefik</code> is set to be a default <code>IngressClass</code> for this setup.</p>
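<p>For reference, a sketch of your <code>Ingress</code> adjusted for Traefik; it is your own manifest with <code>ingressClassName</code> changed and the NGINX-specific rewrite annotation dropped (that annotation is only understood by the NGINX controller):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ing
  namespace: myapp
spec:
  ingressClassName: traefik   # or remove this line to fall back to the default class
  rules:
  - host: myapp.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: demo-app-svc
            port:
              number: 31113
</code></pre>
<p>Regarding the webhook error: the <code>ValidatingWebhookConfiguration</code> created by the NGINX manifest (typically named <code>ingress-nginx-admission</code>) is cluster-scoped, so it survives a namespaced delete. Removing that leftover object, or deleting the whole NGINX manifest with <code>kubectl delete -f ...</code> as suggested above, should clear the webhook error.</p>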
| Dawid Kruk |
<p>I'm new to open distro for elasticsearch and trying to run it on the Kubernetes cluster. After deploying the cluster, I need to change the password for <code>admin</code> user.</p>
<p>I went through this post - <a href="https://discuss.opendistrocommunity.dev/t/default-password-reset/102" rel="nofollow noreferrer">default-password-reset</a></p>
<p>I came to know that, to change the password I need to do the following steps:</p>
<ul>
<li><code>exec</code> in one of the master nodes</li>
<li>generate a hash for the new password using <code>/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh</code> script</li>
<li>update <code>/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml</code> with the new hash</li>
<li>run <code>/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh</code> with parameters </li>
</ul>
<p>Questions:</p>
<ul>
<li>Is there any way to set those (via <code>env</code> or <code>elasticsearch.yml</code>) during bootstrapping the cluster?</li>
</ul>
| Kamol Hasan | <p>Exec into one of the Elasticsearch master containers, generate the hash, update <code>internal_users.yml</code>, and then run <code>securityadmin.sh</code> to apply the change:</p>
<pre><code>docker exec -ti ELASTIC_MASTER bash

/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
# enter the new password when prompted

yum install nano

# replace the generated hash with the new one
nano /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml

# exec this command for the change to take effect
sh /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert config/root-ca.pem -cert config/admin.pem -key config/admin-key.pem
</code></pre>
| Aref |
<p>From the Kubernetes' <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/validation/validation.go" rel="nofollow noreferrer">validation source code</a>, at least those resources are immutable after creation:</p>
<ul>
<li><em>Persistent Volumes</em></li>
<li><em>Storage Classes</em></li>
</ul>
<p>Why is that ?</p>
| Amine Zaine | <p>This is a core concept in Kubernetes. A few specs are immutable because changing them would affect the basic structure of the resource they are connected to. </p>
<p>For example, changing a Persistent Volume may impact the pods that are using that PV. Suppose you have a MySQL pod running on a PV and you change the PV in a way that makes all the data disappear. </p>
<p>In Kubernetes 1.18, Secrets and ConfigMaps can also be marked as immutable (an Alpha feature at the time, expected to be enabled by default in later releases). Check the GitHub Issue <a href="https://github.com/kubernetes/enhancements/issues/1412" rel="nofollow noreferrer">here</a>. </p>
<p><a href="https://blog.alcide.io/kubernetes-1.18-introduces-immutable-configmaps-and-secretes" rel="nofollow noreferrer">What is it good for?</a></p>
<blockquote>
<p>The most popular and the most convenient way of consuming Secrets and
ConfigMaps by Pods is consuming it as a file. However, any update to a
Secret or ConfigMap object is quickly (roughly within a minute)
reflected in updates of the file mounted for all Pods consuming them.
That means that a bad update (push) of Secret and/or ConfigMap can
very quickly break the entire application.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/1412-immutable-secrets-and-configmaps/README.md#motivation" rel="nofollow noreferrer">Here</a> you can read more about the motivation behind this decision.</p>
<blockquote>
<p>In this KEP, we are proposing to introduce an ability to specify that
contents of a particular Secret/ConfigMap should be immutable for its
whole lifetime. For those Secrets/ConfigMap, Kubelets will not be
trying to watch/poll for changes to updated mounts for their Pods.
Given there are a lot of users not really taking advantage of
automatic updates of Secrets/ConfigMaps due to consequences described
above, this will allow them to:</p>
<ul>
<li>protect themselves better for accidental bad updates that could cause outages of their applications</li>
<li>achieve better performance of their cluster thanks to significant reduction of load on apiserver</li>
</ul>
</blockquote>
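<p>Marking an object as immutable is opt-in per object via the <code>immutable</code> field; a minimal sketch (the name and data are placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config    # hypothetical name
data:
  app.properties: |
    mode=production
immutable: true           # once set, the data can no longer be changed; delete and recreate instead
</code></pre>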
| Mark Watney |
<p>My NodeJS microservice is deployed to k8s cluster.</p>
<p>I would like this microservice to access the k8s API server. For that, I guess I need to create a <code>ServiceAccount</code> for it. So I did this:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app-service-account
namespace: myapp-ns
</code></pre>
<p>Then, I also created a <code>ClusterRole</code> to define the permissions:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: myapp-cluster-role
namespace: myapp-ns
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
</code></pre>
<p>Finally, I created a <code>ClusterRoleBinding</code>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-app-role-binding
namespace: myapp-ns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: myapp-cluster-role
subjects:
kind: ServiceAccount
name: my-app-service-account
</code></pre>
<p>When I deploy them (I use Ansible to do the deployment), I get the following error:</p>
<pre><code>"error": 400, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"ClusterRoleBinding in version \\\\\"v1\\\\\" cannot be handled as a ClusterRoleBinding: json: cannot unmarshal object into Go struct field ClusterRoleBinding.subjects of type []v1.Subject\",\"reason\":\"BadRequest\",\"code\":400}\\n'",
</code></pre>
<p>Why this error? Where am I wrong?</p>
| user842225 | <p>I'd reckon the issue is with the resources, not with Ansible.</p>
<p>Take a look:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: myapp-cluster-role
namespace: myapp-ns # <-- NOT A NAMESPACED RESOURCE
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-app-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: myapp-cluster-role
subjects:
- kind: ServiceAccount # <-- Added (-)
name: my-app-service-account
namespace: myapp-ns # <-- MOVED FROM METADATA
</code></pre>
<p>To summarize:</p>
<ul>
<li><code>ClusterRole</code> is not a namespaced resource, hence you should not specify a namespace for it</li>
<li>You've missed a <code>-</code> in the <code>.subjects</code></li>
<li>You should move <code>.namespace</code> from <code>.metadata</code> to <code>.subjects</code></li>
</ul>
<p>More explanation on namespaced/non namespaced resources:</p>
<ul>
<li><code>kubectl api-resources </code></li>
</ul>
<pre class="lang-bash prettyprint-override"><code>NAME SHORTNAMES APIVERSION
roles rbac.authorization.k8s.io/v1 true Role
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
</code></pre>
<p>I encourage you to check on the following docs:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Access Authn Authz: RBAC</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#not-all-objects-are-in-a-namespace" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Overview: Working with objects: Namespaces: Not all objects are in a namespace</a></em></li>
</ul>
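<p>One more thing worth checking: for the microservice to actually use this account when talking to the API server, the Pod spec of its Deployment has to reference it, roughly like this (the deployment name, labels and image below are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical deployment name
  namespace: myapp-ns
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app-service-account   # this SA's token gets mounted into the Pod
      containers:
      - name: my-app
        image: my-app:latest   # placeholder image
</code></pre>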
| Dawid Kruk |
<p>I have a ruby on rails deployment and I want to use it in the frontend deployment so I created a service exposing port 3000 called "flicron-backend-service"</p>
<p>here is the description of the service</p>
<pre><code>kubectl describe svc flicron-backend-service
Name: flicron-backend-service
Namespace: default
Labels: io.kompose.service=flicron-backend-service
Annotations: kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.28.0 (c4137012e)
Selector: io.kompose.service=flicron-backend
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.107.112.244
IPs: 10.107.112.244
Port: 3000 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.244.0.144:3000
Session Affinity: None
Events: <none>
</code></pre>
<p>I am trying to use the service name but it does not get resolved.
I have tried, from inside minikube, to curl the backend service name but it did not work:</p>
<pre><code>curl flicron-backend-service:3000
curl: (6) Could not resolve host: flicron-backend-service
curl flicron-backend-service.default.svc.cluster.local:3000
curl: (6) Could not resolve host: flicron-backend-service.default.svc.cluster.local
</code></pre>
<p>but if I used the ip it works fine</p>
<pre><code>curl 10.107.112.244:3000
# some HTML
</code></pre>
<p>Here is my DNS configuration</p>
<pre><code>kubectl describe cm coredns -n kube-system
Name: coredns
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
Corefile:
----
.:53 {
log
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
hosts {
192.168.49.1 host.minikube.internal
fallthrough
}
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
BinaryData
====
Events: <none>
</code></pre>
<p>FrontEnd Service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: flicron-frontend-service
name: flicron-frontend-service
spec:
selector:
app: flicron-frontend
type: NodePort
ports:
- name: http
port: 4000
targetPort: 4000
nodePort: 32123
</code></pre>
<p>backend service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.28.0 (c4137012e)
creationTimestamp: null
labels:
io.kompose.service: flicron-backend-service
name: flicron-backend-service
spec:
# clusterIP: None
ports:
- name: "3000"
port: 3000
targetPort: 3000
selector:
io.kompose.service: flicron-backend # this maps this service to that deployment
</code></pre>
| Sergi | <blockquote>
<p>I have tried, from inside minikube, to curl the backend service name but it did not work</p>
</blockquote>
<p>If you explicitly meant that you've:</p>
<ul>
<li>connected to your minikube instance (i.e. <code>minikube ssh</code>)</li>
<li><code>curl</code> the service name directly from <code>minikube</code> instance.</li>
</ul>
<p>Then it should <strong>not work</strong>.</p>
<p>It will work from a Pod/container that is targeting internal Kubernetes DNS resolution.</p>
<hr />
<p>Let me explain:</p>
<p>DNS resolution within Kubernetes environment is handled by <code>coredns</code> residing in <code>kube-system</code> namespace. It is a containerized service that is hosted inside of Kubernetes. Information on how to connect to it is injected to Pods via Kubelet.</p>
<p>You can see it by:</p>
<ul>
<li><code>kubectl run -it basic-pod --image=nginx -- /bin/bash</code></li>
<li><code>cat /etc/resolv.conf</code></li>
</ul>
<pre class="lang-bash prettyprint-override"><code>nameserver 10.96.0.10 # <-- SERVICE KUBE-DNS IN KUBE-SYSTEM (CLUSTER-IP)
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p>Minikube itself (the host) is not configured to use this in-cluster <code>coredns</code> for name resolution, so service names will not resolve from there.</p>
<p>Try to contact your <code>Service</code> with an actual <code>Pod</code>:</p>
<ul>
<li><code>kubectl run -it basic-pod --image=nginx -- /bin/bash</code></li>
<li><code>apt update && apt install dnsutils -y</code> - <code>nginx</code> image used for simplicity</li>
<li><code>nslookup nginx</code> - there is a <code>Service</code> named <code>nginx</code> in my <code>minikube</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>root@basic-pod:/# nslookup nginx
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: nginx.default.svc.cluster.local
Address: 10.109.51.22
</code></pre>
<p>I encourage you to take a look on following documentation:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: DNS Pod Service</a></em></li>
</ul>
| Dawid Kruk |
<p>So I have a cluster with 3 CentOS 7 VMs (1 Master, 2 Worker) and 2 Windows Server 2019 worker nodes. I'm able to update the nodes on the CentOS VMs following this: <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/</a>. Now, I need to update the version on the 2019 nodes and I can't figure out how to do it correctly. The version on the 2019 VMs are 1.14.3 (kubectl, kubelet and kubeadm) and I was able to upgrade them to 1.14.10 with no issues, but the moment I jump to the next major version (any 1.15.x), the kubelet service running under Windows services gets paused and obviously the node versions still show 1.14.10. Is the only way to upgrade the nodes is to recreate it and re-add it to the existing cluster? I haven't been able to find anything online besides the initial setup guides.</p>
<p>Thanks!</p>
| NP2020 | <p>I ended up recreating the node, which was faster and now it works on Windows.</p>
| NP2020 |
<p>I have a node in Google Cloud Platform Kubernetes public cluster. When I make HTTP request from my application to external website, nginx in that website shows some IP address different than the IP address of my kubernetes cluster. I can't figure out where that IP address comes from. I'm not using NAT in GCP.</p>
| Beks | <p><strong>I will just add some official terminology to shed some light on <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview" rel="nofollow noreferrer">GKE networking</a> before providing an answer:</strong></p>
<p>Let's have a look at some <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#terminology_related_to_ip_addresses_in_kubernetes" rel="nofollow noreferrer">GKE networking terminology</a>:</p>
<blockquote>
<p>The Kubernetes networking model relies heavily on IP addresses. Services, Pods, containers, and nodes communicate using IP addresses and ports. Kubernetes provides different types of load balancing to direct traffic to the correct Pods. All of these mechanisms are described in more detail later in this topic. Keep the following terms in mind as you read:</p>
<p><strong>ClusterIP:</strong> The IP address assigned to a Service. In other documents, it may be called the "Cluster IP". This address is stable for the lifetime of the Service, as discussed in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#services" rel="nofollow noreferrer">Services</a> section in this topic.</p>
<p><strong>Pod IP:</strong> The IP address assigned to a given Pod. This is ephemeral, as discussed in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#pods" rel="nofollow noreferrer">Pods</a> section in this topic.</p>
<p><strong>Node IP:</strong> The IP address assigned to a given node.</p>
</blockquote>
<p>Additionally you may have a look at the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#introduction" rel="nofollow noreferrer">exposing your service</a> documentation which may give you even more insight.</p>
<p><strong>And to support the fact that <a href="https://stackoverflow.com/questions/68727769/gcp-cluster-ip-address-is-not-the-same-as-requests-remoteaddr#comment121465828_68727769">you got your node's IP</a></strong> - <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent" rel="nofollow noreferrer">GKE uses IP masquerading</a>:</p>
<blockquote>
<p>IP masquerading is a form of network address translation (NAT) used to perform many-to-one IP address translations, which allows multiple clients to access a destination using a single IP address. A GKE cluster uses IP masquerading so that destinations outside of the cluster only receive packets from node IP addresses instead of Pod IP addresses.</p>
</blockquote>
| Wojtek_B |
<p>I was playing with Kubernetes in Minikube. I could able to deploy spring boot sample application into Kubernetes. </p>
<p>I am exploring Kubernetes configMap. I could successfully run a spring boot application with a spring cloud starter and picking the property keys from config map. Till here I am successful.</p>
<p>The issue I am facing currently is configmap reload. </p>
<p>Here is my config map :</p>
<p><strong>ConfigMap.yaml</strong></p>
<pre><code> apiVersion: v1
kind: ConfigMap
metadata:
name: minikube-sample
namespace: default
data:
app.data.name: name
application.yml: |-
app:
data:
test: test
</code></pre>
<p><strong>bootstrap.yaml</strong></p>
<pre><code>management:
endpoint:
health:
enabled: true
info:
enabled: true
restart:
enabled: true
spring:
application:
name: minikube-sample
cloud:
kubernetes:
config:
enabled: true
name: ${spring.application.name}
namespace: default
reload:
enabled: true
</code></pre>
<p><strong>HomeController:</strong></p>
<pre><code>package com.minikube.sample.rest.controller;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.minikube.sample.properties.PropertiesConfig;
import lombok.Getter;
import lombok.Setter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Lookup;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* @author Gorantla, Eresh
* @created 06-12-2018
*/
@RestController
@RequestMapping("/home")
public class HomeResource {
@Autowired
PropertiesConfig config;
@GetMapping("/data")
public ResponseEntity<ResponseData> getData() {
ResponseData responseData = new ResponseData();
responseData.setId(1);
responseData.setName(config.getName());
responseData.setPlace("Hyderabad");
responseData.setValue(config.getTest());
return new ResponseEntity<>(responseData, HttpStatus.OK);
}
@Getter
@Setter
public class ResponseData {
private String name;
private Integer id;
private String place;
private String value;
}
}
</code></pre>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: minikube-sample
namespace: default
spec:
selector:
matchLabels:
app: minikube-sample
replicas: 1
template:
metadata:
labels:
app: minikube-sample
spec:
containers:
- name: minikube-sample
image: minikube-sample:latest
imagePullPolicy: Never
ports:
- containerPort: 8080
env:
- name: env.namespace
value: default
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: minikube-sample
</code></pre>
<p>I used @ConfigurationProperties to reload properties.</p>
<p><strong>Dependencies</strong></p>
<pre><code> <dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes</artifactId>
<version>1.1.0.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-config</artifactId>
<version>1.1.0.RELEASE</version>
</dependency>
</code></pre>
<p><strong>What I did ?</strong>
I have gone through spring cloud documentation. <strong>"The view role on the service account is required in order to listen for config map changes."</strong>
Then I created cluster view role through below command</p>
<pre><code>C:\Users\eresh.gorantla\apps\minikube-sample\src\main\fabric8 (master -> origin)
λ kubectl create clusterrolebinding minikube-sample --clusterrole=view --serviceaccount=default:minikube --namespace=default
clusterrolebinding.rbac.authorization.k8s.io/minikube-sample created
</code></pre>
<p>But when I update the configmap in kubernetes, the properties are not reloaded on the fly.
I suspect something wrong in clusterrolebinding.
Please provide your thoughts. Any help is appreciated. </p>
| Eresh | <p>The Deployment doesn't have <code>serviceAccountName</code> configured so it uses the <code>default</code> service account. The command in the question, however - <code>kubectl create clusterrolebinding ... --serviceaccount=default:minikube...</code> - is for an account named <code>minikube</code> in the <code>default</code> namespace.</p>
<p>Moreover, creating <code>clusterrolebinding</code> may be "too much" when <code>rolebinding</code> for the namespace would work.</p>
<p>With the Deployment being for the <code>default</code> namespace (<code>metadata.namespace: default</code>), this should create a proper <code>rolebinding</code> to grant read-only permission to the <code>default</code> account:</p>
<pre><code>kubectl create rolebinding default-sa-view \
--clusterrole=view \
--serviceaccount=default:default \
--namespace=default
</code></pre>
<p>For reference, see <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Using RBAC Authorization</a>. </p>
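<p>If you prefer to keep this declarative, the equivalent manifest for that command would look roughly like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-sa-view
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
</code></pre>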
| gears |
<p>I've set up a Kafka cluster on minikube with LinkedIn Cruise Control. I'm trying to enable the Cruise Control GUI by following the steps on the <a href="https://github.com/linkedin/cruise-control-ui" rel="nofollow noreferrer">Cruise Control UI GitHub page</a> but it won't work.</p>
<p>When I curl the address from within the pod it returns the HTML code of the page I'm trying to reach, but when I try in my web browser the page does not connect. I figured I would need to expose the pod through a Service, but that also doesn't work. I also tried to change the properties in cruisecontrol.properties, but nothing changed.</p>
<p>Pod's running:</p>
<pre><code>cruise-control-cbdd6bf54-prfts 1/1 Running 2 23h
kafka-0 1/1 Running 1 23h
kafka-1 1/1 Running 3 23h
kafka-2 1/1 Running 1 23h
pzoo-0 1/1 Running 0 23h
pzoo-1 1/1 Running 0 23h
pzoo-2 1/1 Running 1 23h
topic-cruise-control-metrics-fjdlw 0/1 Completed 0 23h
zoo-0 1/1 Running 1 23h
zoo-1 1/1 Running 1 23h
</code></pre>
<p>cruise-control pod</p>
<pre><code>Containers:
cruise-control:
Container ID: docker://b6d43bb8db047480374b19671a761013a2fba39a398215276ffb456a1d9a9f2d
Image: solsson/kafka-cruise-control@sha256:f48acf73d09e6cf56f15fd0b9057cad36b3cee20963a52d263732bf7e5c1aae1
Image ID: docker-pullable://solsson/kafka-cruise-control@sha256:f48acf73d09e6cf56f15fd0b9057cad36b3cee20963a52d263732bf7e5c1aae1
Port: 8090/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 21 Jul 2020 10:32:27 -0300
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 21 Jul 2020 10:31:47 -0300
Finished: Tue, 21 Jul 2020 10:31:59 -0300
Ready: True
Restart Count: 2
Requests:
cpu: 100m
memory: 512Mi
Readiness: tcp-socket :8090 delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/opt/cruise-control/config from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5kp9f (ro)
</code></pre>
<p>cruisecontrol.properties</p>
<pre><code>configurations for the webserver
# ================================
# HTTP listen port
webserver.http.port=8090
# HTTP listen address
webserver.http.address=0.0.0.0
# Whether CORS support is enabled for API or not
webserver.http.cors.enabled=false
# Value for Access-Control-Allow-Origin
webserver.http.cors.origin=http://localhost:8080/
# Value for Access-Control-Request-Method
webserver.http.cors.allowmethods=OPTIONS,GET,POST
# Headers that should be exposed to the Browser (Webapp)
# This is a special header that is used by the
# User Tasks subsystem and should be explicitly
# Enabled when CORS mode is used as part of the
# Admin Interface
webserver.http.cors.exposeheaders=User-Task-ID
# REST API default prefix
# (dont forget the ending *)
webserver.api.urlprefix=/kafkacruisecontrol/*
# Location where the Cruise Control frontend is deployed
webserver.ui.diskpath=./cruise-control-ui/dist/
# URL path prefix for UI
# (dont forget the ending *)
webserver.ui.urlprefix=/*
</code></pre>
<p>CC Service</p>
<pre><code>Name: cruise-control
Namespace: mindlabs
Labels: <none>
Annotations: Selector: app=cruise-control
Type: ClusterIP
IP: 10.98.201.106
Port: <unset> 8090/TCP
TargetPort: 8090/TCP
Endpoints: 172.18.0.14:8090
Session Affinity: None
Events: <none>
</code></pre>
<p>Thanks for the help!</p>
| Cayo Eduardo | <p>Turns out that I had to forward the local 8090 port to the service's 8090 port.
I solved it using: <code>kubectl port-forward svc/cruise-control 8090:8090</code></p>
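<p>Port-forwarding is fine for ad-hoc access; if you want something more permanent on minikube, a NodePort Service pointing at the same pods is an option. A sketch, assuming the <code>app: cruise-control</code> selector and <code>mindlabs</code> namespace shown in the describe output above (the NodePort value is arbitrary):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cruise-control-ui
  namespace: mindlabs
spec:
  type: NodePort
  selector:
    app: cruise-control
  ports:
  - port: 8090
    targetPort: 8090
    nodePort: 30090   # any free port in the default 30000-32767 NodePort range
</code></pre>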
| Cayo Eduardo |
<p>I am trying to bind my Google Service Account (GSA) to my Kubernetes Service Account (KSA) so I can connect to my Cloud SQL database from the Google Kubernetes Engine (GKE). I am currently using the follow guide provided in Google's documentation (<a href="https://cloud.google.com/sql/docs/sqlserver/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/sqlserver/connect-kubernetes-engine</a>).</p>
<p>Currently I have a cluster running on GKE named <code>MY_CLUSTER</code>, a GSA with the correct Cloud SQL permissions named <code>MY_GCP_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com</code>, and a KSA named <code>MY_K8S_SERVICE_ACCOUNT</code>. I am trying to bind the two accounts using the following command.</p>
<pre><code>gcloud iam service-accounts add-iam-policy-binding \
--member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/MY_K8S_SERVICE_ACCOUNT]" \
--role roles/iam.workloadIdentityUser \
MY_GCP_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com
</code></pre>
<p>However when I run the previous command I get the following error message.</p>
<pre><code>ERROR: Policy modification failed. For a binding with condition, run "gcloud alpha iam policies lint-condition" to identify issues in condition.
ERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) INVALID_ARGUMENT: Identity Pool does not exist (PROJECT_ID.svc.id.goog). Please check that you specified a valid resource name as returned in the `name` attribute in the configuration API.
</code></pre>
<p>Why am I getting this error when I try to bind my GSA to my KSA?</p>
| Riley Conrardy | <p>In order to bind your Google Service Account (GSA) to you Kubernetes Service Account (KSA) you need to enable Workload Identity on the cluster. This is explained in more details in Google's documentation (<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity</a>).</p>
<p>To enable Workload Identity on an existing cluster you can run.</p>
<pre><code>gcloud container clusters update MY_CLUSTER \
--workload-pool=PROJECT_ID.svc.id.goog
</code></pre>
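<p>Once Workload Identity is enabled and the IAM binding succeeds, the Kubernetes Service Account also needs to be annotated with the Google Service Account it should impersonate, roughly like this (using the same placeholders as in the question):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: MY_K8S_SERVICE_ACCOUNT
  namespace: K8S_NAMESPACE
  annotations:
    iam.gke.io/gcp-service-account: MY_GCP_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com
</code></pre>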
| Riley Conrardy |
<p>I have an issue at work with K8s Ingress and I will use fake examples here to illustrate my point.
Assume I have an app called Tweeta and my company is called ABC. My app currently sits on tweeta.abc.com.
But we want to migrate our app to app.abc.com/tweeta.</p>
<p>My current ingress in K8s is as belows:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: tweeta-ingress
spec:
rules:
- host: tweeta.abc.com
http:
paths:
- path: /
backend:
serviceName: tweeta-frontend
servicePort: 80
- path: /api
backend:
serviceName: tweeta-backend
servicePort: 80
</code></pre>
<p>For migration, I added a second ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: tweeta-ingress-v2
spec:
rules:
- host: app.abc.com
http:
paths:
- path: /tweeta
backend:
serviceName: tweeta-frontend
servicePort: 80
- path: /tweeta/api
backend:
serviceName: tweeta-backend
servicePort: 80
</code></pre>
<p>For sake of continuity, I would like to have 2 ingresses pointing to my services at the same time. When the new domain is ready and working, I would just need to tear down the old ingress.</p>
<p>However, I am not getting any luck with the new domain with this ingress. Is it because it is hosted on a path and the k8s ingress needs to host on root? Or is it a configuration I would need to do on the nginx side?</p>
| aijnij | <p>As far as I tried, I couldn't reproduce your problem. So I decided to describe how I tried to reproduce it, so you can follow the same steps and depending on where/if you fail, we can find what is causing the issue.</p>
<p>First of all, make sure you are using a <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">NGINX Ingress</a> as it's more powerful.</p>
<p>I installed my NGINX Ingress using Helm following these steps:</p>
<pre><code>$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo update
$ helm install nginx-ingress stable/nginx-ingress
</code></pre>
<p>For the deployment, we are going to use an example from <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">here</a>.</p>
<p>Deploy a hello, world app</p>
<ol>
<li><p>Create a Deployment using the following command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
</code></pre>
<p>Output:</p>
<pre class="lang-sh prettyprint-override"><code>deployment.apps/web created
</code></pre>
</li>
<li><p>Expose the Deployment:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl expose deployment web --type=NodePort --port=8080
</code></pre>
<p>Output:</p>
<pre class="lang-sh prettyprint-override"><code>service/web exposed
</code></pre>
</li>
</ol>
<p>Create Second Deployment</p>
<ol>
<li><p>Create a v2 Deployment using the following command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0
</code></pre>
<p>Output:</p>
<pre class="lang-sh prettyprint-override"><code>deployment.apps/web2 created
</code></pre>
</li>
<li><p>Expose the Deployment:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl expose deployment web2 --port=8080 --type=NodePort
</code></pre>
<p>Output:</p>
<pre class="lang-sh prettyprint-override"><code>service/web2 exposed
</code></pre>
</li>
</ol>
<p>It this point we have the Deployments and Services running:</p>
<pre><code>$ kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
web 1/1 1 1 24m
web2 1/1 1 1 22m
</code></pre>
<pre><code>$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d5h
nginx-ingress-controller LoadBalancer 10.111.183.151 <pending> 80:31974/TCP,443:32396/TCP 54m
nginx-ingress-default-backend ClusterIP 10.104.30.84 <none> 80/TCP 54m
web NodePort 10.102.38.233 <none> 8080:31887/TCP 24m
web2 NodePort 10.108.203.191 <none> 8080:32405/TCP 23m
</code></pre>
<p>For the ingress, we are going to use the one provided in the question but we have to change the backends:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: tweeta-ingress
spec:
rules:
- host: tweeta.abc.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 8080
- path: /api
backend:
serviceName: web2
servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: tweeta-ingress-v2
spec:
rules:
- host: app.abc.com
http:
paths:
- path: /tweeta
backend:
serviceName: web
servicePort: 8080
- path: /tweeta/api
backend:
serviceName: web2
servicePort: 8080
</code></pre>
<p>Now let's test our ingresses:</p>
<pre><code>$ curl tweeta.abc.com
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-j8bgk
$ curl tweeta.abc.com/api
Hello, world!
Version: 2.0.0
Hostname: web2-8474c56fd-lx55n
$ curl app.abc.com/tweeta
Hello, world!
Version: 1.0.0
Hostname: web-6785d44d5-j8bgk
$ curl app.abc.com/tweeta/api
Hello, world!
Version: 2.0.0
Hostname: web2-8474c56fd-lx55n
</code></pre>
<p>As can be seen, everything is working fine with no mods in your ingresses.</p>
| Mark Watney |
<ol>
<li><p>Once I register the CRD into the k8s cluster, I can use a .yaml to create it, without the operator running. Then what happens to these created resources?</p>
</li>
<li><p>I have seen the <code>Reconciler</code> of an operator, but it's more like an asynchronous status transfer. When we create a pod, we can directly get the pod IP from the create result. But I didn't find a place to write my <code>OnCreate</code> hook. (I just see some <code>validate</code> webhooks, but never a hook that is called when the creation request is made, defines how to create the resource, and returns the created resource info to the caller.)</p>
</li>
<li><p>If my use case is that, for one kind of resource, all creations arriving within a time window should be multiplexed onto only one pod, can you give me some advice?</p>
</li>
</ol>
| wymli | <p>The Kubernetes <code>crd</code>/<code>controller</code> life cycle is a big story; I will try to give a simple representation.</p>
<ol>
<li>After you register a new CRD and create a CR, the <code>kube-api-server</code> does not care whether a related <code>controller</code> exists or not. See the process:
<a href="https://i.stack.imgur.com/WZUQj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WZUQj.png" alt="enter image description here" /></a></li>
</ol>
<p>That means the resource (your CR) will be stored in etcd regardless of your <code>controller</code>.</p>
<ol start="2">
<li>OK, let's talk about your controller. Your controller will set up a <code>list/watch</code> (actually a long-lived HTTP connection) to the <code>api-server</code> and register <code>hook</code>s (what you asked about, right?) for the different events: <code>onCreate</code>, <code>onUpdate</code> and <code>onDelete</code>. In practice you will handle all events in your controller's <code>reconcile</code> (remember the responsibility of reconcile in Kubernetes: move the current state to the desired state). See the diagram:
<a href="https://i.stack.imgur.com/OvDqY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OvDqY.png" alt="enter image description here" /></a></li>
</ol>
<ol start="3">
<li>For the <code>list/watch</code> link in your <code>controller</code>, you need to set up a separate watch for each kind of resource. For example: if you care about events for <code>pod</code>s, you set up a <code>pod</code> <code>list/watch</code>; if you care about deployments, you set up a <code>deployment</code> <code>list/watch</code>, and so on.</li>
</ol>
| vincent pli |
<p>Hi, I am trying to deploy on my GKE cluster through Cloud Build. I am able to deploy, but every time I push a new image my cluster does not pick it up and deploys the pod with the old image only (nothing is changed). When I delete my pod and trigger the Cloud Build, it picks up the new image. I have also added imagePullPolicy: Always.
Below is my cloudbuild.yaml file.</p>
<pre><code>steps:
- id: 'build your instance'
  name: 'maven:3.6.0-jdk-8-slim'
  entrypoint: mvn
  args: ['clean','package','-Dmaven.test.skip=true']
- id: 'docker build'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/PID/test', '.']
- id: 'docker push'
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/PID/test']
- id: 'Deploy image to kubernetes'
  name: 'gcr.io/cloud-builders/gke-deploy'
  args:
  - run
  - --filename=./run/helloworld/src
  - --location=us-central1-c
  - --cluster=cluster-2
</code></pre>
<p>My pod manifest looks like this.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test
labels:
app: hello
spec:
containers:
- name: private-reg-containers
image: gcr.io/PID/test
imagePullPolicy: "Always"
</code></pre>
<p>Any help is appreciated.</p>
| Abhinav | <p>This is an expected behavior and you may be confusing the usage of <code>imagePullPolicy: "Always"</code>. This is well explained in this <a href="https://stackoverflow.com/a/45906651/12265927">answer</a>:</p>
<blockquote>
<p>Kubernetes is not watching for a new version of the image. The image pull policy specifies how to acquire the image to run the container. Always means it will try to pull a new version each time it's starting a container. To see the update you'd need to delete the Pod (not the Deployment) - the newly created Pod will run the new image.
<br />
<br />
There is no direct way to have Kubernetes automatically update running containers with new images. This would be part of a continuous delivery system (perhaps using kubectl set image with the new sha256sum or an image tag - but not latest).</p>
</blockquote>
<p>This is why, when you recreate the pods, they get the newest image. So the answer to your question is to explicitly tell K8s to get the newest image. In the example I share with you I use two tags: the classic <code>latest</code>, which is mostly used to share the image under a friendly name, and a tag using the <code>$BUILD_ID</code>, which is used to update the image in GKE. In this example I update the image for a deployment, so you only need to change it for updating a standalone pod, which should be your little "homework".</p>
<pre class="lang-yaml prettyprint-override"><code>steps:
#Building Image
- name: 'gcr.io/cloud-builders/docker'
id: build-loona
args:
- build
- --tag=${_LOONA}:$BUILD_ID
- --tag=${_LOONA}:latest
- .
dir: 'loona/'
waitFor: ['-']
#Pushing image (this pushes the image with both tags)
- name: 'gcr.io/cloud-builders/docker'
id: push-loona
args:
- push
- ${_LOONA}
waitFor:
- build-loona
#Deploying to GKE
- name: "gcr.io/cloud-builders/gke-deploy"
id: deploy-gke
args:
- run
- --filename=k8s/
- --location=${_COMPUTE_ZONE}
- --cluster=${_CLUSTER_NAME}
#Update Image
- name: 'gcr.io/cloud-builders/kubectl'
id: update-loona
args:
- set
- image
- deployment/loona-deployment
- loona=${_LOONA}:$BUILD_ID
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
waitFor:
- deploy-gke
substitutions:
_CLUSTER_NAME: my-cluster
_COMPUTE_ZONE: us-central1
_LOONA: gcr.io/${PROJECT_ID}/loona
</code></pre>
| Puteri |
<p>Docker Containers have cgroups and namespaces associated with them, whether they are running in a pod or vm or host machine.<br />
Similarly, does a Kubernetes Pod have namespaces and cgroups associated with it, or is it just the containers within the pod that have these (cgroup & namespace) associations? If they do, how can I find this info from the host?</p>
| samshers | <p>From the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod" rel="noreferrer">documentation</a> we can read:</p>
<p>"A Pod (as in a pod of whales or pea pod) is a group of one or more containers"</p>
<p>This makes us understand that for every pod we have one or more containers and a cgroup associated to it.</p>
<p>The following answer demonstrates that.</p>
<hr />
<p>Posting this answer as Community Wiki as it is a copy/paste from this <a href="https://stackoverflow.com/a/62727007/12153576">answer</a>.</p>
<h1>Cgroups</h1>
<p>Container in a pod share part of cgroup hierarchy but each container get's it's own cgroup. We can try this out and verify ourself.</p>
<ol>
<li>Start a multi container pod.</li>
</ol>
<pre><code># cat mc2.yaml
apiVersion: v1
kind: Pod
metadata:
name: two-containers
spec:
restartPolicy: Never
containers:
- name: container1
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
- name: container2
image: ubuntu
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
</code></pre>
<pre><code># kubectl apply -f mc2.yaml
pod/two-containers created
</code></pre>
<ol start="2">
<li>Find the process cgroups on the host machine</li>
</ol>
<pre><code># ps -ax | grep while | grep -v grep
19653 ? Ss 0:00 /bin/bash -c -- while true; do sleep 30; done;
19768 ? Ss 0:00 /bin/bash -c -- while true; do sleep 30; done;
</code></pre>
<pre><code># cat /proc/19653/cgroup
12:hugetlb:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
11:memory:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
10:perf_event:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
9:freezer:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
8:cpuset:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
7:net_cls,net_prio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
6:cpu,cpuacct:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
5:blkio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
4:pids:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
3:devices:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
2:rdma:/
1:name=systemd:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
0::/
</code></pre>
<pre><code># cat /proc/19768/cgroup
12:hugetlb:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
11:memory:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
10:perf_event:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
9:freezer:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
8:cpuset:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
7:net_cls,net_prio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
6:cpu,cpuacct:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
5:blkio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
4:pids:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
3:devices:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
2:rdma:/
1:name=systemd:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/e10fa18a63cc26de27f3f79f46631cd814efa3ef7c2f5ace4b84cf5abce89765
0::/
</code></pre>
<p>As you can see the containers in the pods share the cgroup hierarchy until <code>/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011</code> and then they get their own cgroup. (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes" rel="noreferrer">These containers are under <code>besteffort</code> cgroup because we have not specified the resource requests</a>)</p>
<p>You can also find the cgroups of the container by logging into the container and viewing /proc/self/cgroup file. (This may not work in recent versions of kubernetes if cgroup namespace is enabled)</p>
<pre><code># kubectl exec -it two-containers -c container2 bash
# root@two-containers:# cat /proc/self/cgroup
12:hugetlb:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
11:memory:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
10:perf_event:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
9:freezer:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
8:cpuset:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
7:net_cls,net_prio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
6:cpu,cpuacct:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
5:blkio:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
4:pids:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
3:devices:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
2:rdma:/
1:name=systemd:/kubepods/besteffort/poda9c80282-3f6b-4d5b-84d5-a137a6668011/ed89697807a981b82f6245ac3a13be232c1e13435d52bc3f53060d61babe1997
0::/
</code></pre>
<hr />
<h1>Namespaces</h1>
<p>Containers in a pod also share the network and IPC namespaces by default.</p>
<pre><code># cd /proc/19768/ns/
# /proc/19768/ns# ls -lrt
total 0
lrwxrwxrwx 1 root root 0 Jul 4 01:41 uts -> uts:[4026536153]
lrwxrwxrwx 1 root root 0 Jul 4 01:41 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Jul 4 01:41 pid_for_children -> pid:[4026536154]
lrwxrwxrwx 1 root root 0 Jul 4 01:41 pid -> pid:[4026536154]
lrwxrwxrwx 1 root root 0 Jul 4 01:41 net -> net:[4026536052]
lrwxrwxrwx 1 root root 0 Jul 4 01:41 mnt -> mnt:[4026536152]
lrwxrwxrwx 1 root root 0 Jul 4 01:41 ipc -> ipc:[4026536049]
lrwxrwxrwx 1 root root 0 Jul 4 01:41 cgroup -> cgroup:[4026531835]
</code></pre>
<pre><code># cd /proc/19653/ns
# /proc/19653/ns# ls -lrt
total 0
lrwxrwxrwx 1 root root 0 Jul 4 01:42 uts -> uts:[4026536150]
lrwxrwxrwx 1 root root 0 Jul 4 01:42 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Jul 4 01:42 pid_for_children -> pid:[4026536151]
lrwxrwxrwx 1 root root 0 Jul 4 01:42 pid -> pid:[4026536151]
lrwxrwxrwx 1 root root 0 Jul 4 01:42 net -> net:[4026536052]
lrwxrwxrwx 1 root root 0 Jul 4 01:42 mnt -> mnt:[4026536149]
lrwxrwxrwx 1 root root 0 Jul 4 01:42 ipc -> ipc:[4026536049]
lrwxrwxrwx 1 root root 0 Jul 4 01:42 cgroup -> cgroup:[4026531835]
</code></pre>
<p>As you can see, the containers share the network and IPC namespaces. You can also make the containers share the PID namespace using the <code>shareProcessNamespace</code> field in the pod spec (see the sketch after the link below).</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace</a></p>
<hr />
<blockquote>
<p>cgroup:[4026531835] is same for both the containers. Is this(cgroup namespace) different from the cgroups they (containers) are part of.</p>
</blockquote>
<p>cgroups limit the resources (CPU, memory, etc.) that a process (or group of processes) can use.</p>
<p>Namespaces isolate and limit the visibility a process (or group of processes) has over system resources like the network, process trees, etc. There are different namespace types, such as network and IPC. One of these is the cgroup namespace. Using a cgroup namespace you can hide other cgroups from a process (or group of processes).</p>
<p>A cgroup namespace virtualises the view of a process's cgroups. Currently, if you run <code>cat /proc/self/cgroup</code> from within the container, you can see the full cgroup hierarchy starting from the global cgroup root. This can be avoided using cgroup namespaces, available from <a href="https://github.com/kubernetes/enhancements/pull/1370" rel="noreferrer">kubernetes v1.19</a>. <a href="https://github.com/moby/moby/pull/38377" rel="noreferrer">Docker also supports this from version 20.10</a>. When a cgroup namespace is used while creating the container, you see the cgroup root as <code>/</code> inside the container instead of the global cgroup hierarchy.</p>
<p><a href="https://man7.org/linux/man-pages/man7/cgroup_namespaces.7.html" rel="noreferrer">https://man7.org/linux/man-pages/man7/cgroup_namespaces.7.html</a></p>
| Mark Watney |
<p>I installed argocd in my cluster and now want to get the kustomize-helm example app running. So I modified the Config Map, as described in the <a href="https://github.com/argoproj/argocd-example-apps/tree/master/plugins/kustomized-helm" rel="nofollow noreferrer">docs</a>, but I don't know how I can use this plugin in my application crd for the kustomized-helm example application. Up to now I have come up with this:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kustomize-helm
spec:
destination:
name: ''
namespace: kustomize-helm
server: 'https://kubernetes.default.svc'
source:
path:
- plugins/kustomized-helm
repoURL: 'https://github.com/argoproj/argocd-example-apps'
targetRevision: HEAD
plugin: kustoimized-helm
project: default
syncPolicy:
syncOptions:
- CreateNamespace=true
</code></pre>
<p>When applying the Application, I get an error saying that validation failed.
So how can I get this example application to work with an Application manifest?</p>
 | 8bit | <p>I have no idea how you got that definition of the <code>Application</code>, but the correct one should be:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kustomized-helm
namespace: argocd
spec:
destination:
namespace: default
server: https://kubernetes.default.svc
project: default
source:
path: plugins/kustomized-helm
plugin:
name: kustomized-helm
repoURL: https://github.com/argoproj/argocd-example-apps
</code></pre>
| vincent pli |
<p>I want to run kubectl and get all the secrets of type = X. Is this possible?</p>
<p>I.e if I want to get all secrets where type=tls</p>
<p>something like <code>kubectl get secrets --type=tls</code>?</p>
| Nate | <p>How about <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="noreferrer">field-selector</a>:</p>
<pre><code>$ kubectl get secrets --field-selector type=kubernetes.io/tls
</code></pre>
| Sean Z |
<p>I have a simple DeploymentConfig:</p>
<pre><code>apiVersion: v1
kind: DeploymentConfig
metadata:
name: my-dc
labels:
app: my
spec:
replicas: 3
template:
metadata:
labels:
app: my
spec:
containers:
- name: my
image: image-registry.openshift-image-registry.svc:5000/pc/rhel-atomic
livenessProbe:
exec:
command:
- echo
- "I'm alive!"
initialDelaySeconds: 10
readinessProbe:
exec:
command:
- echo
- "I'm healthy!"
initialDelaySeconds: 15
periodSeconds: 15
</code></pre>
<p>The <code>image-registry.openshift-image-registry.svc:5000/pc/rhel-atomic</code> image stream points to my own image that is simply:</p>
<pre><code>FROM registry.access.redhat.com/rhel7/rhel-atomic
</code></pre>
<p>When I do <code>oc create -f my-dc.yaml</code> and try to check what is going on, I see that my pods are crash-looping.</p>
<p>To debug it, I did a <code>oc status --suggest</code>. It suggests listing the container logs with <code>oc logs my-dc-1-z889c -c my</code>. However, there is no logs for any of the containers.</p>
<p>My <code>oc get events</code> does not help either. It just cycles through these messages:</p>
<pre><code><unknown> Normal Scheduled pod/my-dc-1-vnhmp Successfully assigned pc/my-dc-1-vnhmp to ip-10-0-128-37.ec2.internal
31m Normal Pulling pod/my-dc-1-vnhmp Pulling image "image-registry.openshift-image-registry.svc:5000/pc/rhel-atomic"
31m Normal Pulled pod/my-dc-1-vnhmp Successfully pulled image "image-registry.openshift-image-registry.svc:5000/pc/rhel-atomic"
31m Normal Created pod/my-dc-1-vnhmp Created container my
31m Normal Started pod/my-dc-1-vnhmp Started container my
27m Warning BackOff pod/my-dc-1-vnhmp Back-off restarting failed container
<unknown> Normal Scheduled pod/my-dc-1-z8jgb Successfully assigned pc/my-dc-1-z8jgb to ip-10-0-169-70.ec2.internal
31m Normal Pulling pod/my-dc-1-z8jgb Pulling image "image-registry.openshift-image-registry.svc:5000/pc/rhel-atomic"
31m Normal Pulled pod/my-dc-1-z8jgb Successfully pulled image "image-registry.openshift-image-registry.svc:5000/pc/rhel-atomic"
31m Normal Created pod/my-dc-1-z8jgb Created container my
31m Normal Started pod/my-dc-1-z8jgb Started container my
27m Warning BackOff pod/my-dc-1-z8jgb Back-off restarting failed container
</code></pre>
<p>How do I debug this? Why the containers crash?</p>
<p>I am using OpenShift Online.</p>
| foki | <p>It seems that the single container in the pod doesn't have any process running. So, the container is terminated right after it is started.</p>
<p>An option to keep the container running is to add these to the DeploymentConfig for the container:</p>
<pre><code> command:
- /bin/sh
stdin: true
</code></pre>
<p>Replace <code>/bin/sh</code> with a different shell (e.g. bash) or one at a different location, based on what's available in the image.</p>
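<p>Alternatively (a sketch, assuming the image's <code>sleep</code> accepts <code>infinity</code>; otherwise use a large number of seconds), a long-running command keeps the container up without an interactive shell:</p>
<pre><code>      command:
        - /bin/sh
        - -c
        - sleep infinity
</code></pre>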
| gears |
<p>I want to list a node's pods and pod statues, eg.</p>
<pre><code>Node A
Pod1 Status
Pod2 Status
Node B
Pod1 Status
Pod2 Status
</code></pre>
<p>Is there a <code>kubectl</code> command I can use for this?</p>
| angelokh | <p>Try this:<br />
<code>kubectl get pods -A --field-selector spec.nodeName=<node name> | awk '{print $2" "$4}'</code></p>
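<p>A variation that prints all nodes at once, grouped by node (hedged: <code>.status.phase</code> is the pod phase, which is close to but not identical to the usual STATUS column):</p>
<pre><code>kubectl get pods -A -o custom-columns=NODE:.spec.nodeName,NAMESPACE:.metadata.namespace,NAME:.metadata.name,STATUS:.status.phase --sort-by=.spec.nodeName
</code></pre>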
| vincent pli |
<p>I am trying to use Patch and Put API to modify the podspec, I am able to update container images version with both Patch and Put API.
But I am not able to modify the Env variables for pod, I want to update Env variables, Can you please help here. Attached is the image<a href="https://i.stack.imgur.com/jUP1D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jUP1D.png" alt="enter image description here" /></a></p>
 | Jayashree Madanala | <p>Patching a Pod may not change fields other than <code>spec.containers[*].image</code>, <code>spec.initContainers[*].image</code>, <code>spec.activeDeadlineSeconds</code> or <code>spec.tolerations</code> (only additions to existing tolerations).</p>
<p>Env variables are immutable for Pods because this information is set when the Pod gets created. So what you need is only achievable with a Deployment instead of a bare Pod.</p>
<p>When you update an env variable in a Deployment, all of its Pods are recreated so the change takes effect.</p>
<p>An easier method to set/change variables is to make use of <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-env-em-" rel="nofollow noreferrer">kubectl set env</a>.</p>
<pre><code>kubectl set env deployment/test LOG_LEVEL=ERROR
</code></pre>
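<p>Since the question mentions the Patch API: a hedged sketch of the same change done as a strategic-merge patch against the Deployment (<code>test</code> is a placeholder for both the Deployment and container name):</p>
<pre><code>kubectl patch deployment test --type=strategic -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"test","env":[{"name":"LOG_LEVEL","value":"ERROR"}]}]}}}}'
</code></pre>
<p>Because the patch touches the pod template, the Deployment rolls out new pods with the updated variable, just like <code>kubectl set env</code> does.</p>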
| Mark Watney |
<p>I have an EKS cluster running kubernetes 1.14. I deployed the Nginx controller on the cluster following these steps from the following <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">link</a>.</p>
<p>Here are the steps that I followed - </p>
<blockquote>
<p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml</a></p>
<p>kubectl apply -f
<a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l4.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l4.yaml</a></p>
<p>kubectl apply -f
<a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l4.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l4.yaml</a></p>
</blockquote>
<p>But I keep getting these errors intermittently in the ingress controller.</p>
<pre><code>2019/10/15 15:21:25 [error] 40#40: *243746 upstream timed out (110: Connection timed out) while connecting to upstream, client: 63.xxx.xx.xx, server: x.y.com, request: "HEAD / HTTP/1.1", upstream: "http://172.20.166.58:80/", host: "x.y.com"
</code></pre>
<p>And sometimes these - </p>
<pre><code>{"log":"2019/10/15 02:58:40 [error] 119#119: *2985 connect() failed (113: No route to host) while connecting to upstream, client: xx.1xx.81.1xx, server: a.b.com , request: \"OPTIONS /api/v1/xxxx/xxxx/xxx HTTP/2.0\", upstream: \"http://172.20.195.137:9050/api/xxx/xxx/xxxx/xxx\ ", host: \"a.b.com \", referrer: \"https://x.y.com/app/connections\"\n","stream":"stderr","time":"2019-10-15T02:58:40.565930449Z "}
</code></pre>
<p>I am using the native Amazon VPC CNI plugin for Kubernetes for networking - </p>
<blockquote>
<p>amazon-k8s-cni:v1.5.4</p>
</blockquote>
<p>I noticed that a couple of the 5 replicas of the nginx ingress controller pod were not able to talk to the backend application.
To check the connectivity between the nginx ingress controller pods and the backend applications, I sshed into an nginx ingress controller pod and curled the backend service, which timed out; but when I ssh into another backend pod and curl the same backend service, it returns a 200 status code. I temporarily fixed it by deleting the replicas that were not able to talk to the backend and letting them be recreated. This fixed the issue for a while, but after a few hours the same errors start showing up again.</p>
| Anshul Tripathi | <pre><code>amazon-k8s-cni:v1.5.4
</code></pre>
<p>Has known issues with DNS and pod to pod communication. It's recommended to revert back to</p>
<pre><code>amazon-k8s-cni:v1.5.3
</code></pre>
<p><a href="https://github.com/aws/amazon-vpc-cni-k8s/releases/tag/v1.5.4" rel="nofollow noreferrer">v1.5.4 Release Notes</a> </p>
<p>I had the same issues you're seeing and going back to v1.5.3 seemed to resolve it for me. I think they recently reverted the plugin back to v1.5.3 for when an eks cluster is launched anyways.</p>
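<p>A hedged sketch for checking and pinning the version (the ECR account/region below is the default for us-west-2; adjust it to your cluster's region):</p>
<pre><code># Show the CNI image currently used by the aws-node daemonset
kubectl describe daemonset aws-node -n kube-system | grep Image

# Pin it back to v1.5.3
kubectl set image daemonset/aws-node -n kube-system \
    aws-node=602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.5.3
</code></pre>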
| Lan Pham |
<p>Background: I have a springboot app that is containerized using docker and runs on a kubernetes cluster. It performs API calls, and relies on an SSLContext that requires loading a .jks truststore, like so:</p>
<pre class="lang-java prettyprint-override"><code>SSLContext sslcontext = SSLContexts.custom().loadTrustMaterial(ResourceUtils.getFile(keyStoreFilePath),
keyStorePassword.toCharArray(), new TrustSelfSignedStrategy()).build();
</code></pre>
<p>Note that the String keyStoreFilePath is currently injected as an environment/property variable at release time, and points to a location like /etc/ssl/keystore.jks on the host machine that runs a container. The disadvantage is that I have to resort to mounting this as a persistent volume in kubernetes for my containerized application to access it. </p>
<p>Instead, I decided to embed it into the application's classpath so that our operations team don't have to setting it up in all the host machines. But when I do, by specifying the keyStoreFilePath value like so: <code>classpath:security/keystore.jks</code>, it runs fine when I run the project in Eclipse/STS. But it fails with the error below inside the container:</p>
<p><code>class path resource [security/cacerts] cannot be resolved to absolute file path because it does not reside in the file system: jar:file:/app.jar!/BOOT-INF/classes!/security/cacerts","stackTrace":"org.springframework.util.ResourceUtils.getFile(ResourceUtils.java:217)
</code></p>
<p>Again, what is interesting is that the exact same thing runs just fine in Eclipse, but fails inside the container. Any pointers?</p>
<p><strong>Update:</strong> verified the keystore.jks file is < 1 MB in size.</p>
| code4kix | <p>The error message shows <code>file:/<some-path-in-the-container>/App.jar!/keystore.jks</code> - no "security" folder whereas the value passed in for <code>keyStoreFilePath</code> is <code>classpath:security/keystore.jks</code>?</p>
<p>It is unlikely that the JKS file is bigger than 1MB, which is the limit in Kubernetes for ConfigMaps and Secrets, so an option is to create a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secret</a> with the JKS file and mount a volume from the Secret - no need to use a persistent volume or <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a> volume.</p>
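<p>A minimal sketch of that approach (the names and mount path below are illustrative, not taken from the question):</p>
<pre><code>kubectl create secret generic app-truststore --from-file=keystore.jks=./keystore.jks
</code></pre>
<pre><code>spec:
  containers:
    - name: app
      image: my-spring-app:latest
      volumeMounts:
        - name: truststore
          mountPath: /etc/ssl/app
          readOnly: true
  volumes:
    - name: truststore
      secret:
        secretName: app-truststore
</code></pre>
<p>Then point <code>keyStoreFilePath</code> at <code>/etc/ssl/app/keystore.jks</code>.</p>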
<p>HTH</p>
| gears |
<p>I am able to access my django app deployment using LoadBalancer service type but I'm trying to switch to ClusterIP service type and ingress-nginx but I am getting 503 Service Temporarily Unavailable when I try to access the site via the host url. Describing the ingress also shows <code>error: endpoints "django-service" not found</code> and <code>error: endpoints "default-http-backend" not found</code>. What am I doing wrong?</p>
<p>This is my service and ingress yaml:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: django-service
spec:
type: ClusterIP
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: django-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
tls:
- hosts:
- django.example.com
rules:
- host: django.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: django-service
port:
number: 80
ingressClassName: nginx
</code></pre>
<p>kubectl get all</p>
<pre><code>$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/django-app-5bdd8ffff9-79xzj 1/1 Running 0 7m44s
pod/postgres-58fffbb5cc-247x9 1/1 Running 0 7m44s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/django-service ClusterIP 10.233.29.58 <none> 80/TCP 7m44s
service/pg-service ClusterIP 10.233.14.137 <none> 5432/TCP 7m44s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/django-app 1/1 1 1 7m44s
deployment.apps/postgres 1/1 1 1 7m44s
NAME DESIRED CURRENT READY AGE
replicaset.apps/django-app-5bdd8ffff9 1 1 1 7m44s
replicaset.apps/postgres-58fffbb5cc 1 1 1 7m44s
</code></pre>
<p>describe ingress</p>
<pre><code>$ kubectl describe ing django-ingress
Name: django-ingress
Labels: <none>
Namespace: django
Address: 10.10.30.50
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
SNI routes django.example.com
Rules:
Host Path Backends
---- ---- --------
django.example.com
/ django-service:80 (<error: endpoints "django-service" not found>)
Annotations: nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 5m28s (x2 over 6m5s) nginx-ingress-controller Scheduled for sync
Normal Sync 5m28s (x2 over 6m5s) nginx-ingress-controller Scheduled for sync
</code></pre>
 | bayman | <p>I think you forgot to link your Service to your Deployment: the Service needs a <code>selector</code> that matches the Deployment's pod labels.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: django-service
spec:
type: ClusterIP
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8000
selector:
app: your-deployment-name
</code></pre>
<p>Your label must be set in your deployment as well:</p>
<pre><code>spec:
selector:
matchLabels:
app: your-deployment-name
template:
metadata:
labels:
app: your-deployment-name
</code></pre>
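<p>To verify the fix (just a sanity check), the Service should now list endpoints and the pod labels should match the selector:</p>
<pre><code>kubectl get endpoints django-service
kubectl get pods --show-labels
</code></pre>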
| sbrienne |
<p>I have created an ingress resource in my Kubernetes cluster on google cloud. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gordion
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.global-static-ip-name: gordion-ingress
networking.gke.io/managed-certificates: gordion-certificate,gordion-certificate-backend
spec:
rules:
- host: backend.gordion.io
http:
paths:
- path: /
backend:
serviceName: backend
servicePort: 80
</code></pre>
<p>Everything works. However, I have not created any <code>ingress-controller</code>. <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">the official docs state</a> that it must have it.</p>
<blockquote>
<p>You must have an ingress controller to satisfy an Ingress. Only
creating an Ingress resource has no effect.</p>
</blockquote>
<p>So where is my ingress-controller if my routing actually works? how do I see its configuration?</p>
| LiranC | <p>In Google Kubernetes Engine (GKE), when you create an Ingress object, the built-in GKE ingress controller will take care of creating the appropriate HTTP(S) load balancer which conforms to your Ingress and its Service(s). For more information, have a look at this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">Google Cloud Document</a> on "HTTP(S) load balancing with Ingress".</p>
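<p>To answer the "how do I see its configuration" part (a sketch; the exact resource names are generated per cluster), you can inspect the Ingress and the load-balancer objects the controller created:</p>
<pre><code>kubectl describe ingress gordion
gcloud compute url-maps list
gcloud compute backend-services list
gcloud compute forwarding-rules list
</code></pre>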
| Vivak P |
<p>I've just created a new kubernetes cluster. The only thing I have done beyond set up the cluster is install Tiller using <code>helm init</code> and install kubernetes dashboard through <code>helm install stable/kubernetes-dashboard</code>.</p>
<p>The <code>helm install</code> command seems to be successful and <code>helm ls</code> outputs:</p>
<pre><code>NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
exhaling-ladybug 1 Thu Oct 24 16:56:49 2019 DEPLOYED kubernetes-dashboard-1.10.0 1.10.1 default
</code></pre>
<p>However after waiting a few minutes the deployment is still not ready. </p>
<p>Running <code>kubectl get pods</code> shows that the pod's status as <code>CrashLoopBackOff</code>.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
exhaling-ladybug-kubernetes-dashboard 0/1 CrashLoopBackOff 10 31m
</code></pre>
<p>The description for the pod shows the following events:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31m default-scheduler Successfully assigned default/exhaling-ladybug-kubernetes-dashboard to nodes-1
Normal Pulling 31m kubelet, nodes-1 Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
Normal Pulled 31m kubelet, nodes-1 Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
Normal Started 30m (x4 over 31m) kubelet, nodes-1 Started container kubernetes-dashboard
Normal Pulled 30m (x4 over 31m) kubelet, nodes-1 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
Normal Created 30m (x5 over 31m) kubelet, nodes-1 Created container kubernetes-dashboard
Warning BackOff 107s (x141 over 31m) kubelet, nodes-1 Back-off restarting failed container
</code></pre>
<p>And the logs show the following panic message</p>
<pre><code>panic: secrets is forbidden: User "system:serviceaccount:default:exhaling-ladybug-kubernetes-dashboard" cannot create resource "secrets" in API group "" in the namespace "kube-system"
</code></pre>
<p>Am I doing something wrong? Why is it trying to create a secret somewhere it cannot?</p>
<p>Is it possible to setup without giving the dashboard account cluster-admin permissions?</p>
 | Increasingly Idiotic | <p>By default I have used the <code>default</code> namespace; if yours is different, replace it with your own:</p>
<pre><code>kubectl create serviceaccount exhaling-ladybug-kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=default:exhaling-ladybug-kubernetes-dashboard
</code></pre>
| Lucas Serra |
<p>Below is my YAML for volume mounting:</p>
<pre><code>initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumes:
- name: data
hostPath:
path: /usr/share/elasticsearch/data
type: DirectoryOrCreate
</code></pre>
<p>Even after changing type to DirectoryOrCreate, it shows the error:</p>
<blockquote>
<p>MountVolume.SetUp failed for volume "data" : hostPath type check failed: /usr/share/elasticsearch/data is not a directory</p>
</blockquote>
<p>How can I fix this ??</p>
 | ashique | <p>You can add a <code>volumeMounts</code> section to your container:</p>
<pre><code>containers:
- name: increase-fd-ulimit
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
</code></pre>
<p>and add a <code>volumeClaimTemplates</code> section to your StatefulSet:</p>
<pre><code>spec:
volumeClaimTemplates:
- kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: data
namespace: your_namespace
annotations:
volume.beta.kubernetes.io/storage-class: default
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Gi
volumeMode: Filesystem
</code></pre>
| sbrienne |
<p>I'm new to GKE-Python. I would like to delete my GKE(Google Kubernetes Engine) cluster using a python script.
I found an API <strong>delete_cluster()</strong> from the <strong>google-cloud-container</strong> python library to delete the GKE cluster.
<a href="https://googleapis.dev/python/container/latest/index.html" rel="nofollow noreferrer">https://googleapis.dev/python/container/latest/index.html</a></p>
<p>But I'm not sure how to use that API by passing the required parameters in python. Can anyone explain me with an example?</p>
<p>Or else If there is any other way to delete the GKE cluster in python?</p>
<p>Thanks in advance.</p>
| Pepper | <p>First you'd need to configure the Python Client for Google Kubernetes Engine as explained on <a href="https://googleapis.dev/python/container/latest/index.html#installation" rel="nofollow noreferrer">this section</a> of the link you shared. Basically, set up a <a href="https://docs.python.org/3.7/library/venv.html" rel="nofollow noreferrer">virtual environment</a> and install the library with <code>pip install google-cloud-container</code>.</p>
<p>If you are running the script within an environment such as the <a href="https://cloud.google.com/shell/docs" rel="nofollow noreferrer">Cloud Shell</a> with an user that has enough access to manage the GKE resources (with at least the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/iam#container.clusterAdmin" rel="nofollow noreferrer">Kubernetes Engine Cluster Admin</a> permission assigned) the client library will handle the necessary authentication from the script automatically and the following script will most likely work:</p>
<pre><code>from google.cloud import container_v1
project_id = "YOUR-PROJECT-NAME" #Change me.
zone = "ZONE-OF-THE-CLUSTER" #Change me.
cluster_id = "NAME-OF-THE-CLUSTER" #Change me.
name = "projects/"+project_id+"/locations/"+zone+"/clusters/"+cluster_id
client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name=name)
print(response)
</code></pre>
<p>Notice that as per the <a href="https://googleapis.dev/python/container/latest/gapic/v1/api.html#google.cloud.container_v1.ClusterManagerClient.delete_cluster" rel="nofollow noreferrer">delete_cluster</a> method documentation you only need to pass the <code>name</code> parameter. If by some reason you are just provided the credentials (generally in the form of a JSON file) of a service account that has enough permissions to delete the cluster you'd need to modify the client for the script and use the <a href="https://googleapis.dev/python/container/latest/gapic/v1/api.html#google.cloud.container_v1.ClusterManagerClient.delete_cluster" rel="nofollow noreferrer">credentials</a> parameter to get the client correctly authenticated in a similar fashion to:</p>
<pre><code>...
client = container_v1.ClusterManagerClient(credentials=credentials)
...
</code></pre>
<p>Where the <code>credentials</code> variable is built from the JSON key file (including its path if it's not located in the folder where the script is running) of the service account with enough permissions that was provided, for example as sketched below.</p>
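<p>A hedged sketch of that variant, assuming the key file is named <code>key.json</code> and sits next to the script (note the client expects a credentials object built from the file rather than the raw path):</p>
<pre><code>from google.cloud import container_v1
from google.oauth2 import service_account

# Build credentials from the provided service account key file
credentials = service_account.Credentials.from_service_account_file("key.json")
client = container_v1.ClusterManagerClient(credentials=credentials)
</code></pre>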
<p>Finally notice that the <code>response</code> variable that is returned by the <code>delete_cluster</code> method is of the <a href="https://googleapis.dev/python/container/latest/gapic/v1/types.html#google.cloud.container_v1.types.Operation" rel="nofollow noreferrer">Operations class</a> which can serve to monitor a long running operation in a similar fashion as to how it is explained <a href="https://cloud.google.com/dialogflow/docs/how/long-running-operations#get" rel="nofollow noreferrer">here</a> with the <code>self_link</code> attribute corresponding to the long running operation.</p>
<p>After running the script you could use a curl command in a similar fashion to:</p>
<pre><code>curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
  https://container.googleapis.com/v1/projects/[PROJECT-NUMBER]/zones/[ZONE-WHERE-THE-CLUSTER-WAS-LOCATED]/operations/operation-[OPERATION-NUMBER]
</code></pre>
<p>by checking the <code>status</code> field (which could be in RUNNING state while it is happening) of the response to that curl command. Or you could also use the <a href="https://requests.readthedocs.io/en/master/" rel="nofollow noreferrer">requests</a> library or any equivalent to automate this checking procedure of the long running operation within your script.</p>
| Daniel Ocando |
<p>I configure all of the following configurations but the request_per_second does not appear when I type the command</p>
<blockquote>
<p>kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1</p>
</blockquote>
<p>In the node.js that should be monitored I installed prom-client, I tested the /metrics and it's working very well and the metric "resquest_count" is the object it returns</p>
<p>Here are the important parts of that node code</p>
<pre><code>(...)
const counter = new client.Counter({
name: 'request_count',
help: 'The total number of processed requests'
});
(...)
router.get('/metrics', async (req, res) => {
res.set('Content-Type', client.register.contentType)
res.end(await client.register.metrics())
})
</code></pre>
<p>This is my service monitor configuration</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: un1qnx-validation-service-monitor-node
namespace: default
labels:
app: node-request-persistence
release: prometheus
spec:
selector:
matchLabels:
app: node-request-persistence
endpoints:
- interval: 5s
path: /metrics
port: "80"
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
namespaceSelector:
matchNames:
- un1qnx-aks-development
</code></pre>
<p>This the node-request-persistence configuration</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: node-request-persistence
namespace: un1qnx-aks-development
name: node-request-persistence
spec:
selector:
matchLabels:
app: node-request-persistence
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "80"
labels:
app: node-request-persistence
spec:
containers:
- name: node-request-persistence
image: node-request-persistence
imagePullPolicy: Always # IfNotPresent
resources:
requests:
memory: "200Mi" # Gi
cpu: "100m"
limits:
memory: "400Mi"
cpu: "500m"
ports:
- name: node-port
containerPort: 80
</code></pre>
<p>This is the prometheus adapter</p>
<pre><code>prometheus:
url: http://prometheus-server.default.svc.cluster.local
port: 9090
rules:
custom:
- seriesQuery: 'request_count{namespace!="", pod!=""}'
resources:
overrides:
namespace: {resource: "namespace"}
pod: {resource: "pod"}
name:
as: "request_per_second"
metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))"
</code></pre>
<p>This is the hpa</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: un1qnx-validation-service-hpa-angle
namespace: un1qnx-aks-development
spec:
minReplicas: 1
maxReplicas: 10
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: un1qnx-validation-service-angle
metrics:
- type: Pods
pods:
metric:
name: request_per_second
target:
type: AverageValue
averageValue: "5"
</code></pre>
<p>The command</p>
<blockquote>
<p>kubectl get hpa -n un1qnx-aks-development</p>
</blockquote>
<p>results in "unknown/5"</p>
<p>Also, the command</p>
<blockquote>
<p>kubectl get --raw "http://prometheus-server.default.svc.cluster.local:9090/api/v1/series"</p>
</blockquote>
<p>Results in</p>
<blockquote>
<p>Error from server (NotFound): the server could not find the requested resource</p>
</blockquote>
<p>I think it should return some value about the collected metrics... I think that the problem is from the service monitor, but I am new to this</p>
<p>As you noticed, I am trying to scale a deployment based on another deployment's pods; I don't know if there is a problem there.</p>
<p>I appreciate an answer, because this is for my thesis</p>
<p>kubernetes - version 1.19.9</p>
<p>Prometheus - chart prometheus-14.2.1 app version 2.26.0</p>
<p>Prometheus Adapter - chart 2.14.2 app version 0.8.4</p>
<p>And all were installed using Helm.</p>
 | BragaMann | <p>After some time I found the problems and changed the following.</p>
<p>I changed the port on the Prometheus adapter, the time window in the query, and the label names in the resource overrides. To know which label names to use in the overrides, you need to port-forward to the Prometheus server and check the labels on the targets page for the app you are monitoring (see the command below).</p>
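<p>For example (a sketch assuming the service name <code>prometheus-server</code> in the <code>default</code> namespace, as used in the adapter config below):</p>
<pre><code>kubectl port-forward svc/prometheus-server 9090:80 -n default
# then open http://localhost:9090/targets and note the exact label names
</code></pre>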
<pre><code>prometheus:
url: http://prometheus-server.default.svc.cluster.local
port: 80
rules:
custom:
- seriesQuery: 'request_count{kubernetes_namespace!="", kubernetes_pod_name!=""}'
resources:
overrides:
kubernetes_namespace: {resource: "namespace"}
kubernetes_pod_name: {resource: "pod"}
name:
matches: "request_count"
as: "request_count"
metricsQuery: "round(avg(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>))"
</code></pre>
<p>I also added annotations on the deployment yaml</p>
<pre><code>spec:
selector:
matchLabels:
app: node-request-persistence
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "80"
labels:
</code></pre>
| BragaMann |
<p>Input:
GCP, Kubernetes, java 11 spring boot 2 application</p>
<p>Container is started with memory limit 1.6GB. Java application is limiting memory as well -XX:MaxRAMPercentage=80.0. Under a "heavy" (not really) load - about 1 http request per 100 ms during about 4 hours application is killed by OOMKiller. Internal diagnostic tools is showing that memory is far from limit: </p>
<p><a href="https://i.stack.imgur.com/3w75M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3w75M.png" alt="enter image description here"></a></p>
<p>However GCP tools is showing the following:</p>
<p><a href="https://i.stack.imgur.com/MlzcM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MlzcM.png" alt="enter image description here"></a></p>
<p>There is a suspicion that GCP is measuring something else. The pod contains only the Java app (plus a jaeger agent). The odd thing is that after a restart GCP shows almost maximum memory usage right away, instead of the slow growth you would expect from a memory leak.</p>
<p><strong>EDIT:</strong></p>
<p>Docker file:</p>
<pre><code>FROM adoptopenjdk/openjdk11:x86_64-ubuntu-jdk-11.0.3_7-slim
VOLUME /tmp
VOLUME /javamelody
RUN apt-get update && apt-get install procps wget -y
RUN mkdir /opt/cdbg && wget -qO- https://storage.googleapis.com/cloud-debugger/compute-java/debian-wheezy/cdbg_java_agent_gce.tar.gz | tar xvz -C /opt/cdbg
RUN apt-get install fontconfig ttf-dejavu -y
ARG JAR_FILE
ARG VERSION
ARG MODULENAME
ENV TAG=$VERSION
ENV MODULE=$MODULENAME
COPY target/${JAR_FILE} app.jar
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD java -agentpath:/opt/cdbg/cdbg_java_agent.so \
-Dcom.google.cdbg.module=${MODULE} \
-Dcom.google.cdbg.version=${TAG} \
-Djava.security.egd=file:/dev/./urandom \
-XX:MaxRAMPercentage=80.0 \
-XX:+CrashOnOutOfMemoryError \
-XX:ErrorFile=tmp/hs_err_pid%p.log \
-XX:NativeMemoryTracking=detail \
-XX:+UnlockDiagnosticVMOptions \
-XX:+PrintNMTStatistics \
-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath=tmp/ \
-jar /app.jar
</code></pre>
<p>and run it with Kubernetes (extra details are ommited):</p>
<pre><code>apiVersion: apps/v1
spec:
replicas: {{ .Values.replicas }}
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 50%
maxUnavailable: 0
template:
spec:
initContainers:
bla-bla
containers:
lifecycle:
preStop:
exec:
command: [
# Gracefully shutdown java
"pkill", "java"
]
resources:
limits:
cpu: 1600
memory: 1300
requests:
cpu: 1600
memory: 1300
</code></pre>
<p><strong>UPDATE</strong>
according top command memory limit is also far from limit however CPU utilization became more then 100% before container is OOMKilled. Is it possible that Kubernetes kills container that is trying to get more CPU then allowed?</p>
<pre><code>Tasks: 5 total, 1 running, 4 sleeping, 0 stopped, 0 zombie
%Cpu(s): 34.1 us, 2.0 sy, 0.0 ni, 63.4 id, 0.0 wa, 0.0 hi, 0.5 si, 0.0 st
KiB Mem : 7656868 total, 1038708 free, 2837764 used, 3780396 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 4599760 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6 root 20 0 5172744 761664 30928 S 115.3 9.9 21:11.24 java
1 root 20 0 4632 820 748 S 0.0 0.0 0:00.02 sh
103 root 20 0 4632 796 720 S 0.0 0.0 0:00.00 sh
108 root 20 0 38276 3660 3164 R 0.0 0.0 0:00.95 top
112 root 20 0 4632 788 716 S 0.0 0.0 0:00.00 sh
command terminated with exit code 137
</code></pre>
<p><strong>UPDATE2</strong></p>
<pre><code># pmap -x 7
7: java -agentpath:/opt/cdbg/cdbg_java_agent.so -Dcom.google.cdbg.module=engine-app -Dcom.google.cdbg.version= -Djava.security.egd=file:/dev/./urandom -XX:MaxRAMPercentage=80.0 -XX:+CrashOnOutOfMemoryError -XX:ErrorFile=tmp/hs_err_pid%p.log -XX:NativeMemoryTracking=detail -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=tmp/ -jar /app.jar
Address Kbytes RSS Dirty Mode Mapping
0000000000400000 4 4 0 r-x-- java
0000000000400000 0 0 0 r-x-- java
0000000000600000 4 4 4 r---- java
0000000000600000 0 0 0 r---- java
0000000000601000 4 4 4 rw--- java
0000000000601000 0 0 0 rw--- java
00000000006d5000 4900 4708 4708 rw--- [ anon ]
00000000006d5000 0 0 0 rw--- [ anon ]
00000000b0000000 86144 83136 83136 rw--- [ anon ]
00000000b0000000 0 0 0 rw--- [ anon ]
00000000b5420000 350720 0 0 ----- [ anon ]
00000000b5420000 0 0 0 ----- [ anon ]
00000000caaa0000 171944 148928 148928 rw--- [ anon ]
00000000caaa0000 0 0 0 rw--- [ anon ]
00000000d528a000 701912 0 0 ----- [ anon ]
00000000d528a000 0 0 0 ----- [ anon ]
0000000100000000 23552 23356 23356 rw--- [ anon ]
0000000100000000 0 0 0 rw--- [ anon ]
0000000101700000 1025024 0 0 ----- [ anon ]
0000000101700000 0 0 0 ----- [ anon ]
00007f447c000000 39076 10660 10660 rw--- [ anon ]
00007f447c000000 0 0 0 rw--- [ anon ]
00007f447e629000 26460 0 0 ----- [ anon ]
00007f447e629000 0 0 0 ----- [ anon ]
00007f4481c8f000 1280 1164 1164 rw--- [ anon ]
00007f4481c8f000 0 0 0 rw--- [ anon ]
00007f4481dcf000 784 0 0 ----- [ anon ]
00007f4481dcf000 0 0 0 ----- [ anon ]
00007f4481e93000 1012 12 12 rw--- [ anon ]
00007f4481e93000 0 0 0 rw--- [ anon ]
00007f4481f90000 16 0 0 ----- [ anon ]
...
00007ffcfcd48000 8 4 0 r-x-- [ anon ]
00007ffcfcd48000 0 0 0 r-x-- [ anon ]
ffffffffff600000 4 0 0 r-x-- [ anon ]
ffffffffff600000 0 0 0 r-x-- [ anon ]
---------------- ------- ------- -------
total kB 5220936 772448 739852
</code></pre>
<p>This pmap was called shortly before the container was OOMKilled. 5 GB? Why is top not showing this? I am also not sure how to interpret the pmap output.</p>
| Dmitrii Borovoi | <p>Per the log file, there are more than 10,000 started threads. That's <em>a lot</em> even if we don't look at the less that 2 CPUs/cores reserved for the container (limits.cpu = request.cpu = 1600 millicores).</p>
<p>Each thread, and its stack, is allocated in memory separate from the heap. It is quite possible that the large number of started threads is the cause for the OOM problem.</p>
<p>The JVM is started with the Native Memory Tracking related options (<code>-XX:NativeMemoryTracking=detail, -XX:+UnlockDiagnosticVMOptions, -XX:+PrintNMTStatistics)</code> that could help to see the memory usage, including what's consumed by those threads. <a href="https://docs.oracle.com/en/java/javase/11/vm/native-memory-tracking.html#GUID-56C3FD2E-E227-4902-B361-3EEE3492D70D" rel="nofollow noreferrer">This doc</a> could be a starting point for Java 11.</p>
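<p>For example (a sketch that assumes <code>jcmd</code> is available in the image and that the JVM runs as PID 6 inside the pod, as the top output above suggests):</p>
<pre><code>kubectl exec -it <pod-name> -- jcmd 6 VM.native_memory summary
</code></pre>
<p>The <code>Thread</code> line of that summary shows how much memory the thread stacks reserve and commit.</p>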
<p>In any case, it would be highly recommended to <em>not</em> have that many threads started. E.g. use a pool, start and stop them when not needed anymore...</p>
| gears |
<p>I am trying to submit a Pyspark job on ADLS Gen2 to Azure-Kubernetes-Services (AKS) and get the following exception:</p>
<pre><code>Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2595)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3269)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3301)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.spark.deploy.DependencyUtils$.resolveGlobPath(DependencyUtils.scala:191)
at org.apache.spark.deploy.DependencyUtils$.$anonfun$resolveGlobPaths$2(DependencyUtils.scala:147)
at org.apache.spark.deploy.DependencyUtils$.$anonfun$resolveGlobPaths$2$adapted(DependencyUtils.scala:145)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at org.apache.spark.deploy.DependencyUtils$.resolveGlobPaths(DependencyUtils.scala:145)
at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$6(SparkSubmit.scala:365)
at scala.Option.map(Option.scala:230)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:365)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2499)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2593)
... 27 more
</code></pre>
<p>My spark-submit looks like this:</p>
<pre><code>$SPARK_HOME/bin/spark-submit \
--master k8s://https://XXX \
--deploy-mode cluster \
--name spark-pi \
--conf spark.kubernetes.file.upload.path=file:///tmp \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.container.image=XXX \
--conf spark.hadoop.fs.azure.account.auth.type.XXX.dfs.core.windows.net=SharedKey \
--conf spark.hadoop.fs.azure.account.key.XXX.dfs.core.windows.net=XXX \
--py-files abfss://[email protected]/py-files/ml_pipeline-0.0.1-py3.8.egg \
abfss://[email protected]/py-files/main_kubernetes.py
</code></pre>
<p>The job runs just fine on my VM and also loads data from ADLS Gen2 without problems.
In this post <a href="https://stackoverflow.com/questions/66421569/java-lang-classnotfoundexception-class-org-apache-hadoop-fs-azurebfs-secureazur">java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found</a> it is recommended to download the package and add it to the spark/jars folder. But I don't know where to download it and why it has to be included in the first place, if it works fine locally.</p>
<p>EDIT:
So I managed to include the jars in the Docker container. And if I ssh into that container and run the Job it works fine and loads the files from the ADLS.
But if I submit the job to Kubernetes it throws the same exception as before.
Please, can someone help?</p>
<p>Spark 3.1.1, Python 3.8.5, Ubuntu 18.04</p>
| Lorenz | <p>So I managed to fix my problem. It is definitely a workaround but it works.</p>
<p>I modified the PySpark Docker container by changing the entrypoint to:</p>
<pre><code>ENTRYPOINT [ "/opt/entrypoint.sh" ]
</code></pre>
<p>Now I was able to run the container without it exiting immediately:</p>
<pre><code>docker run -td <docker_image_id>
</code></pre>
<p>And could ssh into it:</p>
<pre><code>docker exec -it <docker_container_id> /bin/bash
</code></pre>
<p>At this point I could submit the spark job inside the container with the --package flag:</p>
<pre><code>$SPARK_HOME/bin/spark-submit \
--master local[*] \
--deploy-mode client \
--name spark-python \
--packages org.apache.hadoop:hadoop-azure:3.2.0 \
--conf spark.hadoop.fs.azure.account.auth.type.user.dfs.core.windows.net=SharedKey \
--conf spark.hadoop.fs.azure.account.key.user.dfs.core.windows.net=xxx \
--files "abfss://[email protected]/config.yml" \
--py-files "abfss://[email protected]/jobs.zip" \
"abfss://[email protected]/main.py"
</code></pre>
<p>Spark then downloaded the required dependencies and saved them under /root/.ivy2 in the container and executed the job successfully.</p>
<p>I copied the whole folder from the container onto the host machine:</p>
<pre><code>sudo docker cp <docker_container_id>:/root/.ivy2/ /opt/spark/.ivy2/
</code></pre>
<p>And modified the Dockerfile again to copy the folder into the image:</p>
<pre><code>COPY .ivy2 /root/.ivy2
</code></pre>
<p>Finally I could submit the job to Kubernetes with this newly built image and everything runs as expected.</p>
| Lorenz |
<p>The database and the server are not connected.
Attempting to deploy in Kubernetes environment.</p>
<p>This is the Deployment and Service of MongoDB and of the Golang HTTP server:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- image: royroyee/backend:0.8
name: backend
ports:
- containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
name: backend-service
labels:
run: backend-service
spec:
ports:
- port: 9001
targetPort: 9001
protocol: TCP
selector:
app: backend
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
labels:
app: mongo
spec:
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo-db
ports:
- containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
name: mongo-service
labels:
run: mongo-service
spec:
ports:
- port: 27017
targetPort: 27017
protocol: TCP
selector:
app: mongo
</code></pre>
<p>and my golang code ...
mongodb session</p>
<pre><code>func getSession() *mgo.Session {
s, err := mgo.Dial("mongodb://mongo-service:27017/mongo-db")
</code></pre>
<p>Please let me know.</p>
<p>also I tried something like this.</p>
<pre><code>// mongodb://mongo-service:27017/backend
// mongodb://mongo-service:27017/mongo-db
// mongodb://mongo-service:27017
</code></pre>
| Younghwan Kim | <p>To connect MongoDB with Golang in a Kubernetes environment, you need to follow these steps:</p>
<p>Deploy MongoDB as a statefulset or a deployment in your Kubernetes cluster.</p>
<p>Create a Service for MongoDB to access the deployed pods from your Golang application.</p>
<p>In your Golang application, use the official MongoDB Go driver to establish a connection to the MongoDB service by specifying the service name and port.</p>
<p>Verify the connection by running a simple test that inserts and retrieves data from the MongoDB database.</p>
<p>Finally, package the Golang application as a Docker image and deploy it as a deployment in the same Kubernetes cluster.</p>
<p>Here is a sample Go code to connect to MongoDB:</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"context"
"fmt"
"log"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
func main() {
// Set client options
clientOptions := options.Client().ApplyURI("mongodb://mongodb-service:27017")
// Connect to MongoDB
client, err := mongo.Connect(context.TODO(), clientOptions)
if err != nil {
log.Fatal(err)
}
// Check the connection
err = client.Ping(context.TODO(), nil)
if err != nil {
log.Fatal(err)
}
fmt.Println("Connected to MongoDB!")
}
</code></pre>
<p>Here's a sample YAML file for deploying MongoDB as a StatefulSet and a Go application as a Deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongodb
spec:
serviceName: mongodb-service
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo:4.4
ports:
- containerPort: 27017
volumeMounts:
- name: mongodb-data
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: mongodb-data
annotations:
volume.beta.kubernetes.io/storage-class: standard
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
selector:
app: mongodb
ports:
- name: mongodb
port: 27017
targetPort: 27017
clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-app
spec:
replicas: 1
selector:
matchLabels:
app: go-app
template:
metadata:
labels:
app: go-app
spec:
containers:
- name: go-app
image: <your-go-app-image>
ports:
- containerPort: 8080
</code></pre>
<p>Note: You will need to replace <code>your-go-app-image</code> with the actual Docker image of your Go application.</p>
| Amirhossein Dolatkhah |